100+ datasets found
  1. BigBasket Descriptive Analysis

    • kaggle.com
    zip
    Updated Apr 29, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Juan Pablo Pinillos (2023). BigBasket Descriptive Analysis [Dataset]. https://www.kaggle.com/datasets/juanpablopinillos/bigbasket-descriptive-analysis
    Explore at:
    zip(14644654 bytes)Available download formats
    Dataset updated
    Apr 29, 2023
    Authors
    Juan Pablo Pinillos
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Big Basket is an online grocery and food delivery platform based in India. It allows users to order a variety of products including fresh produce, packaged foods, household essentials, and more, which are then delivered to their doorstep.

    I'm pleased to present this project focused on the analysis of sales, ratings, and pricing discounts of BigBasket. This storytelling will conduct a comprehensive analysis of BigBasket's sales data to identify key trends and patterns, as well as provide insights into customer behavior and preferences. My aim is to help the store optimize its performance and enhance its competitive edge in the online retail market.

    Source & License

    • kaggle.com & bigbasket.com
    • CC BY-NC-SA 4.0

    How to reach me: jppinillos2002@gmail.com

  2. Google Data Analytics Capstone Project

    • kaggle.com
    zip
    Updated Nov 13, 2021
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    NANCY CHAUHAN (2021). Google Data Analytics Capstone Project [Dataset]. https://www.kaggle.com/datasets/nancychauhan199/google-case-study-pdf
    Explore at:
    zip(284279 bytes)Available download formats
    Dataset updated
    Nov 13, 2021
    Authors
    NANCY CHAUHAN
    Description

    Case Study: How Does a Bike-Share Navigate Speedy Success?¶

    Introduction

    Welcome to the Cyclistic bike-share analysis case study! In this case study, you will perform many real-world tasks of a junior data analyst. You will work for a fictional company, Cyclistic, and meet different characters and team members. In order to answer the key business questions, you will follow the steps of the data analysis process: ask, prepare, process, analyze, share, and act. Along the way, the Case Study Roadmap tables — including guiding questions and key tasks — will help you stay on the right path. By the end of this lesson, you will have a portfolio-ready case study. Download the packet and reference the details of this case study anytime. Then, when you begin your job hunt, your case study will be a tangible way to demonstrate your knowledge and skills to potential employers.

    Scenario

    You are a junior data analyst working in the marketing analyst team at Cyclistic, a bike-share company in Chicago. The director of marketing believes the company’s future success depends on maximizing the number of annual memberships. Therefore, your team wants to understand how casual riders and annual members use Cyclistic bikes differently. From these insights, your team will design a new marketing strategy to convert casual riders into annual members. But first, Cyclistic executives must approve your recommendations, so they must be backed up with compelling data insights and professional data visualizations. Characters and teams ● Cyclistic: A bike-share program that features more than 5,800 bicycles and 600 docking stations. Cyclistic sets itself apart by also offering reclining bikes, hand tricycles, and cargo bikes, making bike-share more inclusive to people with disabilities and riders who can’t use a standard two-wheeled bike. The majority of riders opt for traditional bikes; about 8% of riders use the assistive options. Cyclistic users are more likely to ride for leisure, but about 30% use them to commute to work each day. ● Lily Moreno: The director of marketing and your manager. Moreno is responsible for the development of campaigns and initiatives to promote the bike-share program. These may include email, social media, and other channels. ● Cyclistic marketing analytics team: A team of data analysts who are responsible for collecting, analyzing, and reporting data that helps guide Cyclistic marketing strategy. You joined this team six months ago and have been busy learning about Cyclistic’s mission and business goals — as well as how you, as a junior data analyst, can help Cyclistic achieve them. ● Cyclistic executive team: The notoriously detail-oriented executive team will decide whether to approve the recommended marketing program.

    About the company

    In 2016, Cyclistic launched a successful bike-share offering. Since then, the program has grown to a fleet of 5,824 bicycles that are geotracked and locked into a network of 692 stations across Chicago. The bikes can be unlocked from one station and returned to any other station in the system anytime. Until now, Cyclistic’s marketing strategy relied on building general awareness and appealing to broad consumer segments. One approach that helped make these things possible was the flexibility of its pricing plans: single-ride passes, full-day passes, and annual memberships. Customers who purchase single-ride or full-day passes are referred to as casual riders. Customers who purchase annual memberships are Cyclistic members. Cyclistic’s finance analysts have concluded that annual members are much more profitable than casual riders. Although the pricing flexibility helps Cyclistic attract more customers, Moreno believes that maximizing the number of annual members will be key to future growth. Rather than creating a marketing campaign that targets all-new customers, Moreno believes there is a very good chance to convert casual riders into members. She notes that casual riders are already aware of the Cyclistic program and have chosen Cyclistic for their mobility needs. Moreno has set a clear goal: Design marketing strategies aimed at converting casual riders into annual members. In order to do that, however, the marketing analyst team needs to better understand how annual members and casual riders differ, why casual riders would buy a membership, and how digital media could affect their marketing tactics. Moreno and her team are interested in analyzing the Cyclistic historical bike trip data to identify trends

    Three questions will guide the future marketing program:

    How do annual members and casual riders use Cyclistic bikes differently? Why would casual riders buy Cyclistic annual memberships? How can Cyclistic use digital media to influence casual riders to become members? Moreno has assigned you the first question to answer: How do annual members and casual rid...

  3. Z

    Assessing the impact of hints in learning formal specification: Research...

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    Updated Jan 29, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Macedo, Nuno; Cunha, Alcino; Campos, José Creissac; Sousa, Emanuel; Margolis, Iara (2024). Assessing the impact of hints in learning formal specification: Research artifact [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10450608
    Explore at:
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    Centro de Computação Gráfica
    INESC TEC
    Authors
    Macedo, Nuno; Cunha, Alcino; Campos, José Creissac; Sousa, Emanuel; Margolis, Iara
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study to investigate the impact of different types of automated hints while learning a formal specification language, both in terms of immediate performance and learning retention, but also in the emotional response of the students. This research artifact provides all the material required to replicate this study (except for the proprietary questionnaires passed to assess the emotional response and user experience), as well as the collected data and data analysis scripts used for the discussion in the paper.

    Dataset

    The artifact contains the resources described below.

    Experiment resources

    The resources needed for replicating the experiment, namely in directory experiment:

    alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was passed in Portuguese due to the population of the experiment.

    alloy_sheet_en.pdf: a version the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment translated into English.

    docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.

    api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.

    Experiment data

    The task database used in our application of the experiment, namely in directory data/experiment:

    Model.json, Instance.json, and Link.json: JSON files with to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.

    identifiers.txt: the list of all (104) available participant identifiers that can participate in the experiment.

    Collected data

    Data collected in the application of the experiment as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was validated by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared the shape of JSON and CSV files with a header row, namely in directory data/results:

    data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).

    data_socio.csv: data collected from socio-demographic questionnaire in the 1st session of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclosure, and other, respectively), and average academic grade (GRADE, from 0 to 20, NA denotes preference to not disclosure).

    data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);

    detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, from 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and in the detailed discussion in Section 6 of the paper.

    data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).

    participants.txt: the list of participant identifiers that have registered for the experiment.

    Analysis scripts

    The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:

    analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.

    requirements.r: An R script to install the required libraries for the analysis script.

    normalize_task.r: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.

    normalize_emo.r: A Python script to compute the aggregate emotional response in the CSV format required by the analysis script from the detailed emotional response data in the CSV format of data_emo.csv.

    Dockerfile: Docker script to automate the analysis script from the collected data.

    Setup

    To replicate the experiment and the analysis of the results, only Docker is required.

    If you wish to manually replicate the experiment and collect your own data, you'll need to install:

    A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.

    If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:

    Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.

    R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.

    Usage

    Experiment replication

    This section describes how to replicate our user study experiment, and collect data about how different hints impact the performance of participants.

    To launch the Alloy4Fun platform populated with tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch, wait for the "Started your app" message to show.

    cd experimentdocker-compose up

    This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.

    In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would just access http://localhost:3000/0CAN. The next task is available after a correct submission to the current task or when a time-out occurs (5mins). Each participant was assigned to a different treatment group, so depending on the permalink different kinds of hints are provided. Below are 4 permalinks, each for each hint group:

    Group N (no hints): http://localhost:3000/0CAN

    Group L (error locations): http://localhost:3000/CA0L

    Group E (counter-example): http://localhost:3000/350E

    Group D (error description): http://localhost:3000/27AD

    In the 2nd session, likewise the 1st session, each permalink gave access to 12 sequential tasks, and the next task is available after a correct submission or a time-out (5mins). The permalink is constructed by prepending the participant's identifier with P-. So participant 0CAN would just access http://localhost:3000/P-0CAN. In the 2nd sessions all participants were expected to solve the tasks without any hints provided, so the permalinks from different groups are undifferentiated.

    Before the 1st session the participants should answer the socio-demographic questionnaire, that should ask the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.

    Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons licence (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the diferent emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier, and for the attachment with each of the depicted 14 emotions, expressed in a 5-point Likert scale.

    After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user unique identifier and answers for the standard 4 questions in a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric ranging from 0 to 100 score from the answers, please see the original paper:

    Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.

    Analysis of other applications of the experiment

    This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.

    The analysis script expects data in 4 CSV files,

  4. H

    High Performance Data Analytics (HPDA) Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Feb 7, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Pro Market Reports (2025). High Performance Data Analytics (HPDA) Market Report [Dataset]. https://www.promarketreports.com/reports/high-performance-data-analytics-hpda-market-9182
    Explore at:
    doc, ppt, pdfAvailable download formats
    Dataset updated
    Feb 7, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policyhttps://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The size of the High Performance Data Analytics (HPDA) Market was valued at USD 44.58 Billion in 2023 and is projected to reach USD 148.62 Billion by 2032, with an expected CAGR of 18.77% during the forecast period.

  5. H

    High Performance Computing (HPC) and High Performance Data Analytics (HPDA)...

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Jun 5, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Market Research Forecast (2025). High Performance Computing (HPC) and High Performance Data Analytics (HPDA) Market Report [Dataset]. https://www.marketresearchforecast.com/reports/high-performance-computing-hpc-and-high-performance-data-analytics-hpda-market-1785
    Explore at:
    ppt, doc, pdfAvailable download formats
    Dataset updated
    Jun 5, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policyhttps://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The High Performance Computing (HPC) and High Performance Data Analytics (HPDA) Marketsize was valued at USD 46.01 USD billion in 2023 and is projected to reach USD 84.65 USD billion by 2032, exhibiting a CAGR of 9.1 % during the forecast period. Recent developments include: December 2023: Lenovo, a company offering computer hardware, software, and services, extended the HPC system “LISE” at the Zuse Institute Berlin (ZIB). This expansion would provide researchers at the institute with high computing power required to execute data-intensive applications. The major focus of this expansion is to enhance the energy efficiency of “LISE”. , August 2023: atNorth, a data center services company, announced the acquisition of Gompute, the HPC cloud platform offering Cloud HPC services, as well as on-premises and hybrid cloud solutions. Under the terms of the agreement, atNorth would add Gompute’s data center to its portfolio., July 2023: HCL Technologies Limited, a consulting and information technology services firm, extended its collaboration with Microsoft Corporation to provide HPC solutions, such as advanced analytics, ML, core infrastructure, and simulations, for clients across numerous sectors., June 2023: Leostream, a cloud-based desktop provider, launched new features designed to enhance HPC workloads on AWS EC2. The company develops zero-trust architecture around HPC workloads to deliver cost-effective and secure resources to users on virtual machines., November 2022: Intel Corporation, a global technology company, launched the latest advanced processors for HPC, artificial intelligence (AI), and supercomputing. These processors include data center version GPUs and 4th Gen Xeon Scalable CPUs.. Key drivers for this market are: Technological Advancements Coupled with Robust Government Investments to Fuel Market Growth. Potential restraints include: High Cost and Skill Gap to Restrain Industry Expansion. Notable trends are: Comprehensive Benefits Provided by Hybrid Cloud HPC Solutions to Aid Industry Expansion .

  6. f

    Data from: HOW TO PERFORM A META-ANALYSIS: A PRACTICAL STEP-BY-STEP GUIDE...

    • datasetcatalog.nlm.nih.gov
    • scielo.figshare.com
    Updated May 27, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Helito, Camilo Partezani; Gonçalves, Romeu Krause; de Lima, Lana Lacerda; Clazzer, Renata; de Lima, Diego Ariel; de Camargo, Olavo Pires (2022). HOW TO PERFORM A META-ANALYSIS: A PRACTICAL STEP-BY-STEP GUIDE USING R SOFTWARE AND RSTUDIO [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000403452
    Explore at:
    Dataset updated
    May 27, 2022
    Authors
    Helito, Camilo Partezani; Gonçalves, Romeu Krause; de Lima, Lana Lacerda; Clazzer, Renata; de Lima, Diego Ariel; de Camargo, Olavo Pires
    Description

    ABSTRACT Meta-analysis is an adequate statistical technique to combine results from different studies, and its use has been growing in the medical field. Thus, not only knowing how to interpret meta-analysis, but also knowing how to perform one, is fundamental today. Therefore, the objective of this article is to present the basic concepts and serve as a guide for conducting a meta-analysis using R and RStudio software. For this, the reader has access to the basic commands in the R and RStudio software, necessary for conducting a meta-analysis. The advantage of R is that it is a free software. For a better understanding of the commands, two examples were presented in a practical way, in addition to revising some basic concepts of this statistical technique. It is assumed that the data necessary for the meta-analysis has already been collected, that is, the description of methodologies for systematic review is not a discussed subject. Finally, it is worth remembering that there are many other techniques used in meta-analyses that were not addressed in this work. However, with the two examples used, the article already enables the reader to proceed with good and robust meta-analyses. Level of Evidence V, Expert Opinion.

  7. Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2...

    • zenodo.org
    application/gzip
    Updated Jul 15, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi; João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi (2024). Replication Package: Unboxing Default Argument Breaking Changes in 1 + 2 Data Science Libraries in Python [Dataset]. http://doi.org/10.5281/zenodo.11584961
    Explore at:
    application/gzipAvailable download formats
    Dataset updated
    Jul 15, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi; João Eduardo Montandon; Luciana Lourdes Silva; Cristiano Politowski; Daniel Prates; Arthur Bonifácio; Ghizlane El Boussaidi
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Replication Package

    This repository contains data and source files needed to replicate our work described in the paper "Unboxing Default Argument Breaking Changes in Scikit Learn".

    Requirements

    We recommend the following requirements to replicate our study:

    1. Internet access
    2. At least 100GB of space
    3. Docker installed
    4. Git installed

    Package Structure

    We relied on Docker containers to provide a working environment that is easier to replicate. Specifically, we configure the following containers:

    • data-analysis, an R-based Container we used to run our data analysis.
    • data-collection, a Python Container we used to collect Scikit's default arguments and detect them in client applications.
    • database, a Postgres Container we used to store clients' data, obtainer from Grotov et al.
    • storage, a directory used to store the data processed in data-analysis and data-collection. This directory is shared in both containers.
    • docker-compose.yml, the Docker file that configures all containers used in the package.

    In the remainder of this document, we describe how to set up each container properly.

    Using VSCode to Setup the Package

    We selected VSCode as the IDE of choice because its extensions allow us to implement our scripts directly inside the containers. In this package, we provide configuration parameters for both data-analysis and data-collection containers. This way you can directly access and run each container inside it without any specific configuration.

    You first need to set up the containers

    $ cd /replication/package/folder
    $ docker-compose build
    $ docker-compose up
    # Wait docker creating and running all containers
    

    Then, you can open them in Visual Studio Code:

    1. Open VSCode in project root folder
    2. Access the command palette and select "Dev Container: Reopen in Container"
      1. Select either Data Collection or Data Analysis.
    3. Start working

    If you want/need a more customized organization, the remainder of this file describes it in detail.

    Longest Road: Manual Package Setup

    Database Setup

    The database container will automatically restore the dump in dump_matroskin.tar in its first launch. To set up and run the container, you should:

    Build an image:

    $ cd ./database
    $ docker build --tag 'dabc-database' .
    $ docker image ls
    REPOSITORY  TAG    IMAGE ID    CREATED     SIZE
    dabc-database latest  b6f8af99c90d  50 minutes ago  18.5GB
    

    Create and enter inside the container:

    $ docker run -it --name dabc-database-1 dabc-database
    $ docker exec -it dabc-database-1 /bin/bash
    root# psql -U postgres -h localhost -d jupyter-notebooks
    jupyter-notebooks=# \dt
           List of relations
     Schema |    Name    | Type | Owner
    --------+-------------------+-------+-------
     public | Cell       | table | root
     public | Code_cell     | table | root
     public | Md_cell      | table | root
     public | Notebook     | table | root
     public | Notebook_features | table | root
     public | Notebook_metadata | table | root
     public | repository    | table | root
    

    If you got the tables list as above, your database is properly setup.

    It is important to mention that this database is extended from the one provided by Grotov et al.. Basically, we added three columns in the table Notebook_features (API_functions_calls, defined_functions_calls, andother_functions_calls) containing the function calls performed by each client in the database.

    Data Collection Setup

    This container is responsible for collecting the data to answer our research questions. It has the following structure:

    • dabcs.py, extract DABCs from Scikit Learn source code, and export them to a CSV file.
    • dabcs-clients.py, extract function calls from clients and export them to a CSV file. We rely on a modified version of Matroskin to leverage the function calls. You can find the tool's source code in the `matroskin`` directory.
    • Makefile, commands to set up and run both dabcs.py and dabcs-clients.py
    • matroskin, the directory containing the modified version of matroskin tool. We extended the library to collect the function calls performed on the client notebooks of Grotov's dataset.
    • storage, a docker volume where the data-collection should save the exported data. This data will be used later in Data Analysis.
    • requirements.txt, Python dependencies adopted in this module.

    Note that the container will automatically configure this module for you, e.g., install dependencies, configure matroskin, download scikit learn source code, etc. For this, you must run the following commands:

    $ cd ./data-collection
    $ docker build --tag "data-collection" .
    $ docker run -it -d --name data-collection-1 -v $(pwd)/:/data-collection -v $(pwd)/../storage/:/data-collection/storage/ data-collection
    $ docker exec -it data-collection-1 /bin/bash
    $ ls
    Dockerfile Makefile config.yml dabcs-clients.py dabcs.py matroskin storage requirements.txt utils.py
    

    If you see project files, it means the container is configured accordingly.

    Data Analysis Setup

    We use this container to conduct the analysis over the data produced by the Data Collection container. It has the following structure:

    • dependencies.R, an R script containing the dependencies used in our data analysis.
    • data-analysis.Rmd, the R notebook we used to perform our data analysis
    • datasets, a docker volume pointing to the storage directory.

    Execute the following commands to run this container:

    $ cd ./data-analysis
    $ docker build --tag "data-analysis" .
    $ docker run -it -d --name data-analysis-1 -v $(pwd)/:/data-analysis -v $(pwd)/../storage/:/data-collection/datasets/ data-analysis
    $ docker exec -it data-analysis-1 /bin/bash
    $ ls
    data-analysis.Rmd datasets dependencies.R Dockerfile figures Makefile
    

    If you see project files, it means the container is configured accordingly.

    A note on storage shared folder

    As mentioned, the storage folder is mounted as a volume and shared between data-collection and data-analysis containers. We compressed the content of this folder due to space constraints. Therefore, before starting working on Data Collection or Data Analysis, make sure you extracted the compressed files. You can do this by running the Makefile inside storage folder.

    $ make unzip # extract files
    $ ls
    clients-dabcs.csv clients-validation.csv dabcs.csv Makefile scikit-learn-versions.csv versions.csv
    $ make zip # compress files
    $ ls
    csv-files.tar.gz Makefile
  8. Orange dataset table

    • figshare.com
    xlsx
    Updated Mar 4, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Rui Simões (2022). Orange dataset table [Dataset]. http://doi.org/10.6084/m9.figshare.19146410.v1
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Mar 4, 2022
    Dataset provided by
    figshare
    Figsharehttp://figshare.com/
    Authors
    Rui Simões
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The complete dataset used in the analysis comprises 36 samples, each described by 11 numeric features and 1 target. The attributes considered were caspase 3/7 activity, Mitotracker red CMXRos area and intensity (3 h and 24 h incubations with both compounds), Mitosox oxidation (3 h incubation with the referred compounds) and oxidation rate, DCFDA fluorescence (3 h and 24 h incubations with either compound) and oxidation rate, and DQ BSA hydrolysis. The target of each instance corresponds to one of the 9 possible classes (4 samples per class): Control, 6.25, 12.5, 25 and 50 µM for 6-OHDA and 0.03, 0.06, 0.125 and 0.25 µM for rotenone. The dataset is balanced, it does not contain any missing values and data was standardized across features. The small number of samples prevented a full and strong statistical analysis of the results. Nevertheless, it allowed the identification of relevant hidden patterns and trends.

    Exploratory data analysis, information gain, hierarchical clustering, and supervised predictive modeling were performed using Orange Data Mining version 3.25.1 [41]. Hierarchical clustering was performed using the Euclidean distance metric and weighted linkage. Cluster maps were plotted to relate the features with higher mutual information (in rows) with instances (in columns), with the color of each cell representing the normalized level of a particular feature in a specific instance. The information is grouped both in rows and in columns by a two-way hierarchical clustering method using the Euclidean distances and average linkage. Stratified cross-validation was used to train the supervised decision tree. A set of preliminary empirical experiments were performed to choose the best parameters for each algorithm, and we verified that, within moderate variations, there were no significant changes in the outcome. The following settings were adopted for the decision tree algorithm: minimum number of samples in leaves: 2; minimum number of samples required to split an internal node: 5; stop splitting when majority reaches: 95%; criterion: gain ratio. The performance of the supervised model was assessed using accuracy, precision, recall, F-measure and area under the ROC curve (AUC) metrics.

  9. S

    Sports Analytics Service Software Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 24, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Market Research Forecast (2025). Sports Analytics Service Software Report [Dataset]. https://www.marketresearchforecast.com/reports/sports-analytics-service-software-54878
    Explore at:
    pdf, doc, pptAvailable download formats
    Dataset updated
    Mar 24, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policyhttps://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global sports analytics service software market is experiencing robust growth, driven by the increasing adoption of data-driven strategies by sports teams, leagues, and broadcasters. The market's expansion is fueled by several key factors: the rising popularity of sports globally, the need for enhanced performance analysis, the proliferation of wearable technology generating vast amounts of data, and the increasing sophistication of analytical tools. The market is segmented by operating system (Android, iOS, Windows, and others) and application (basketball, football, and others), reflecting the diverse needs of different sports and technological preferences. North America currently holds a significant market share, owing to the established sports infrastructure and the early adoption of advanced analytics in professional leagues. However, regions like Asia-Pacific are showing promising growth potential due to the expanding sports industry and increasing investment in sports technology. The competitive landscape is characterized by a mix of established players and emerging technology companies, each offering unique solutions tailored to specific sports and analytical needs. Challenges include the high cost of implementation, data security concerns, and the need for skilled personnel to interpret and utilize the complex data generated by these systems. Despite these hurdles, the long-term outlook for the sports analytics service software market remains positive, with a projected CAGR indicating sustained growth over the forecast period. The forecast period (2025-2033) anticipates continuous growth, primarily driven by technological advancements and the increasing integration of analytics into sports decision-making at all levels. Advancements in artificial intelligence (AI) and machine learning (ML) are expected to further enhance the capabilities of these software solutions, providing more accurate predictions and actionable insights. The market's evolution will also be shaped by the ongoing development of new data sources and the increasing focus on personalized athlete development strategies. Furthermore, the growing interest in esports and the corresponding demand for performance analysis in virtual sports are also likely to contribute significantly to market growth. Competition will intensify, leading to innovation in software features, pricing strategies, and strategic partnerships to capture larger market shares. The global reach of major sports leagues will influence the expansion into new geographical markets, particularly in developing economies where the sports industry is rapidly expanding.

  10. m

    Data for "Direct and indirect Rod and Frame effect: A virtual reality study"...

    • data.mendeley.com
    Updated Feb 12, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Michał Adamski (2025). Data for "Direct and indirect Rod and Frame effect: A virtual reality study" [Dataset]. http://doi.org/10.17632/pcf2n8b4rd.1
    Explore at:
    Dataset updated
    Feb 12, 2025
    Authors
    Michał Adamski
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the raw experimental data and supplementary materials for the "Asymmetry Effects in Virtual Reality Rod and Frame Test". The materials included are:

    •  Raw Experimental Data: older.csv and young.csv
    •  Mathematica Notebooks: a collection of Mathematica notebooks used for data analysis and visualization. These notebooks provide scripts for processing the experimental data, performing statistical analyses, and generating the figures used in the project.
    •  Unity Package: a Unity package featuring a sample scene related to the project. The scene was built using Unity’s Universal Rendering Pipeline (URP). To utilize this package, ensure that URP is enabled in your Unity project. Instructions for enabling URP can be found in the Unity URP Documentation.
    

    Requirements:

    •  For Data Files: software capable of opening CSV files (e.g., Microsoft Excel, Google Sheets, or any programming language that can read CSV formats).
    •  For Mathematica Notebooks: Wolfram Mathematica software to run and modify the notebooks.
    •  For Unity Package: Unity Editor version compatible with URP (2019.3 or later recommended). URP must be installed and enabled in your Unity project.
    

    Usage Notes:

    •  The dataset facilitates comparative studies between different age groups based on the collected variables.
    •  Users can modify the Mathematica notebooks to perform additional analyses.
    •  The Unity scene serves as a reference to the project setup and can be expanded or integrated into larger projects.
    

    Citation: Please cite this dataset when using it in your research or publications.

  11. G

    Rocket Engine Test Data Analytics Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Growth Market Reports (2025). Rocket Engine Test Data Analytics Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/rocket-engine-test-data-analytics-market
    Explore at:
    pptx, csv, pdfAvailable download formats
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Rocket Engine Test Data Analytics Market Outlook



    According to our latest research, the global rocket engine test data analytics market size in 2024 stands at USD 1.42 billion. The market is experiencing robust expansion, with a compounded annual growth rate (CAGR) of 12.8% from 2025 to 2033. By 2033, the market is forecasted to reach a value of USD 4.19 billion. This growth is primarily fueled by the increasing demand for advanced data analytics to enhance the reliability, safety, and performance of rocket engines, as well as the rising frequency of space missions and test launches across both governmental and commercial sectors.




    One of the key factors propelling the growth of the rocket engine test data analytics market is the rapid technological advancement in data acquisition and processing systems. Modern rocket engine tests generate colossal volumes of data, encompassing parameters such as thrust, temperature, vibration, and fuel flow. The integration of sophisticated analytics platforms enables stakeholders to derive actionable insights from this data, facilitating real-time monitoring, anomaly detection, and root-cause analysis. This technological leap not only shortens development cycles but also significantly reduces the risk of catastrophic failures, making it indispensable for organizations aiming to maintain a competitive edge in the aerospace and defense sector.




    Another significant growth driver is the escalating investment in space exploration and commercial spaceflight activities. Both government agencies like NASA and ESA, as well as private players such as SpaceX and Blue Origin, are conducting more frequent and complex test campaigns. These organizations increasingly rely on data analytics to validate engine designs, optimize test procedures, and ensure compliance with stringent safety standards. The advent of reusable rocket technology further amplifies the need for predictive maintenance and performance analytics, as understanding wear and tear across multiple launches becomes critical to mission success and cost efficiency.




    The convergence of artificial intelligence (AI) and machine learning (ML) with rocket engine test data analytics is also catalyzing market expansion. Advanced algorithms are now capable of identifying subtle patterns and correlations within vast datasets, enabling predictive maintenance and early fault detection with unprecedented accuracy. This capability is particularly valuable for commercial space companies and research institutes seeking to maximize engine uptime and minimize unplanned downtimes. Moreover, the growing adoption of cloud-based analytics platforms is democratizing access to high-performance computing resources, allowing smaller organizations and emerging space nations to participate in the market and drive further innovation.




    From a regional perspective, North America continues to dominate the rocket engine test data analytics market, accounting for over 43% of the global revenue in 2024. This leadership is attributed to the presence of major aerospace companies, robust government funding, and a vibrant ecosystem of technology providers. However, Asia Pacific is emerging as the fastest-growing region, with countries like China and India ramping up their space programs and investing heavily in indigenous rocket engine development and testing infrastructure. Europe also remains a significant market, driven by collaborative initiatives and strong research capabilities. The Middle East & Africa and Latin America, while still nascent, are expected to witness steady growth as regional space ambitions intensify.





    Component Analysis



    The component segment of the rocket engine test data analytics market is categorized into software, hardware, and services. The software component is witnessing the highest growth, driven by the increasing demand for advanced analytics platforms capable of handling large-scale, high-velocity data streams generated during engine tests. These so

  12. Summary of references for enterprise development.

    • plos.figshare.com
    xls
    Updated May 20, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Yaxuan Zhao (2024). Summary of references for enterprise development. [Dataset]. http://doi.org/10.1371/journal.pone.0303297.t001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    May 20, 2024
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Yaxuan Zhao
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The planning of human resources and the management of enterprises consider the organization’s size, the amount of effort put into operations, and the level of productivity. Inefficient allocation of resources in organizations due to skill-task misalignment lowers production and operational efficiency. This study addresses organizations’ poor resource allocation and use, which reduces productivity and the efficiency of operations, and inefficiency may adversely impact company production and finances. This research aims to develop and assess a Placement-Assisted Resource Management Scheme (PRMS) to improve resource allocation and usage and businesses’ operational efficiency and productivity. PRMS uses expertise, business requirements, and processes that are driven by data to match resources with activities that align with their capabilities and require them to perform promptly. The proposed system PRMS outperforms existing approaches on various performance metrics at two distinct levels of operations and operating levels, with a success rate of 0.9328% and 0.9302%, minimal swapping ratios of 12.052% and 11.658%, smaller resource mitigation ratios of 4.098% and 4.815%, mean decision times of 5.414s and 4.976s, and data analysis counts of 6387 and 6335 Success and data analysis increase by 9.98% and 8.2%, respectively, with the proposed strategy. This technique cuts the switching ratio, resource mitigation, and decision time by 6.52%, 13.84%, and 8.49%. The study concluded that PRMS is a solid, productivity-focused corporate improvement method that optimizes the allocation of resources and meets business needs.

  13. t

    Interpreting Cohort Data - The 3 Key Analyses for Measuring Your Startup’s...

    • tomtunguz.com
    Updated Sep 17, 2013
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Tomasz Tunguz (2013). Interpreting Cohort Data - The 3 Key Analyses for Measuring Your Startup’s Product Performance - Data Analysis [Dataset]. https://tomtunguz.com/interpreting-cohort-data/
    Explore at:
    Dataset updated
    Sep 17, 2013
    Dataset provided by
    Theory Ventures
    Authors
    Tomasz Tunguz
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Learn 3 essential cohort analysis methods to measure product performance: onboarding, longitudinal & cohort trends. Data-driven insights for SaaS startups.

  14. Cloud Analytics Market Analysis North America, Europe, APAC, Middle East and...

    • technavio.com
    pdf
    Updated Jul 22, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Technavio (2024). Cloud Analytics Market Analysis North America, Europe, APAC, Middle East and Africa, South America - US, China, UK, Germany, Japan - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/cloud-analytics-market-industry-analysis
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-noticehttps://www.technavio.com/content/privacy-notice

    Time period covered
    2024 - 2028
    Description

    Snapshot img

    Cloud Analytics Market Size 2024-2028

    The cloud analytics market size is forecast to increase by USD 74.08 billion at a CAGR of 24.4% between 2023 and 2028.

    The market is experiencing significant growth due to several key trends. The adoption of hybrid and multi-cloud setups is on the rise, as these configurations enhance data connectivity and flexibility. Another trend driving market growth is the increasing use of cloud security applications to safeguard sensitive data.
    However, concerns regarding confidential data security and privacy remain a challenge for market growth. Organizations must ensure robust security measures are in place to mitigate risks and maintain trust with their customers. Overall, the market is poised for continued expansion as businesses seek to leverage the benefits of cloud technologies for data processing and data analytics.
    

    What will be the Size of the Cloud Analytics Market During the Forecast Period?

    Request Free Sample

    The market is experiencing significant growth due to the increasing volume of data generated by businesses and the demand for advanced analytics solutions. Cloud-based analytics enables organizations to process and analyze large datasets from various data sources, including unstructured data, in real-time. This is crucial for businesses looking to make data-driven decisions and gain valuable insights to optimize their operations and meet customer requirements. Key industries such as sales and marketing, customer service, and finance are adopting cloud analytics to improve key performance indicators and gain a competitive edge. Both Small and Medium-sized Enterprises (SMEs) and large enterprises are embracing cloud analytics, with solutions available on private, public, and multi-cloud platforms.
    Big data technology, such as machine learning and artificial intelligence, are integral to cloud analytics, enabling advanced data analytics and business intelligence. Cloud analytics provides businesses with the flexibility to store and process data In the cloud, reducing the need for expensive on-premises data storage and computation. Hybrid environments are also gaining popularity, allowing businesses to leverage the benefits of both private and public clouds. Overall, the market is poised for continued growth as businesses increasingly rely on data-driven insights to inform their decision-making processes.
    

    How is this Cloud Analytics Industry segmented and which is the largest segment?

    The cloud analytics industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2024-2028, as well as historical data from 2017-2022 for the following segments.

    Solution
    
      Hosted data warehouse solutions
      Cloud BI tools
      Complex event processing
      Others
    
    
    Deployment
    
      Public cloud
      Hybrid cloud
      Private cloud
    
    
    Geography
    
      North America
    
        US
    
    
      Europe
    
        Germany
        UK
    
    
      APAC
    
        China
        Japan
    
    
      Middle East and Africa
    
    
    
      South America
    

    By Solution Insights

    The hosted data warehouse solutions segment is estimated to witness significant growth during the forecast period.
    

    Hosted data warehouses enable organizations to centralize and analyze large datasets from multiple sources, facilitating advanced analytics solutions and real-time insights. By utilizing cloud-based infrastructure, businesses can reduce operational costs through eliminating licensing expenses, hardware investments, and maintenance fees. Additionally, cloud solutions offer network security measures, such as Software Defined Networking and Network integration, ensuring data protection. Cloud analytics caters to diverse industries, including SMEs and large enterprises, addressing requirements for sales and marketing, customer service, and key performance indicators. Advanced analytics capabilities, including predictive analytics, automated decision making, and fraud prevention, are essential for data-driven decision making and business optimization.

    Furthermore, cloud platforms provide access to specialized talent, big data technology, and AI, enhancing customer experiences and digital business opportunities. Data connectivity and data processing in real-time are crucial for network agility and application performance. Hosted data warehouses offer computational power and storage capabilities, ensuring efficient data utilization and enterprise information management. Cloud service providers offer various cloud environments, including private, public, multi-cloud, and hybrid, catering to diverse business needs. Compliance and security concerns are addressed through cybersecurity frameworks and data security measures, ensuring data breaches and thefts are minimized.

    Get a glance at the Cloud Analytics Industry report of share of various segments Request Free Sample

    The Hosted data warehouse solutions s

  15. d

    Data from: Granite Springs Valley, Nevada Play Fairway Analysis - Well data...

    • catalog.data.gov
    • data.openei.org
    • +3more
    Updated Jan 20, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Nevada Bureau of Mines and Geology (2025). Granite Springs Valley, Nevada Play Fairway Analysis - Well data and Temperature Survey [Dataset]. https://catalog.data.gov/dataset/granite-springs-valley-nevada-play-fairway-analysis-well-data-and-temperature-survey-af905
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    Nevada Bureau of Mines and Geology
    Area covered
    Granite Springs Valley, Nevada
    Description

    This data is associated with the Nevada Play Fairway project and includes excel files containing raw 2-meter temperature data and corrections. GIS shapefiles and layer files contain ing location and attribute information for the data are included. Well data includes both deep and shallow TG holes, GIS shapefiles and layer files.

  16. Data_Sheet_1_NeuroDecodeR: a package for neural decoding in R.docx

    • frontiersin.figshare.com
    docx
    Updated Jan 3, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Ethan M. Meyers (2024). Data_Sheet_1_NeuroDecodeR: a package for neural decoding in R.docx [Dataset]. http://doi.org/10.3389/fninf.2023.1275903.s001
    Explore at:
    docxAvailable download formats
    Dataset updated
    Jan 3, 2024
    Dataset provided by
    Frontiers Mediahttp://www.frontiersin.org/
    Authors
    Ethan M. Meyers
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Neural decoding is a powerful method to analyze neural activity. However, the code needed to run a decoding analysis can be complex, which can present a barrier to using the method. In this paper we introduce a package that makes it easy to perform decoding analyses in the R programing language. We describe how the package is designed in a modular fashion which allows researchers to easily implement a range of different analyses. We also discuss how to format data to be able to use the package, and we give two examples of how to use the package to analyze real data. We believe that this package, combined with the rich data analysis ecosystem in R, will make it significantly easier for researchers to create reproducible decoding analyses, which should help increase the pace of neuroscience discoveries.

  17. Data Science Tweets

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    zip
    Updated May 14, 2024
    Cite
    Jesus Rogel-Salazar (2024). Data Science Tweets [Dataset]. http://doi.org/10.6084/m9.figshare.2062551.v1
    Explore at:
    zipAvailable download formats
    Dataset updated
    May 14, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jesus Rogel-Salazar
    License

    GNU General Public License v3.0: https://www.gnu.org/licenses/gpl-3.0.html

    Description

    Quantum Tunnel Tweets

    The data set contains tweets sourced from @quantum_tunnel and @dt_science as a demo for classifying text using Naive Bayes. The demo is detailed in the book Data Science and Analytics with Python by Dr J Rogel-Salazar.

    Data contents:

    • Train_QuantumTunnel_Tweets.csv: Labelled tweets for text related to "Data Science", with three features:
      • DataScience: [0/1] indicating whether the text is about "Data Science" or not.
      • Date: Date when the tweet was published.
      • Tweet: Text of the tweet.
    • Test_QuantumTunnel_Tweets.csv: Testing data with twitter utterances without labels:
      • id: A unique identifier for tweets.
      • Date: Date when the tweet was published.
      • Tweet: Text of the tweet.

    For further information, please get in touch with Dr J Rogel-Salazar.
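    As a rough sketch of the kind of Naive Bayes text classification this dataset supports (the file and column names come from the description above; the pipeline details are an assumption, not the book's exact code):

    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    
    # Load the labelled training tweets
    train = pd.read_csv('Train_QuantumTunnel_Tweets.csv')
    
    # Bag-of-words features from the tweet text
    vectorizer = CountVectorizer()
    X_train = vectorizer.fit_transform(train['Tweet'])
    
    # Fit a multinomial Naive Bayes classifier on the 0/1 labels
    clf = MultinomialNB().fit(X_train, train['DataScience'])
    
    # Predict labels for the unlabelled test tweets
    test = pd.read_csv('Test_QuantumTunnel_Tweets.csv')
    predictions = clf.predict(vectorizer.transform(test['Tweet']))
    print(predictions[:10])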

  18. Google Analytics data of an E-commerce Company

    • kaggle.com
    zip
    Updated Oct 19, 2024
    Cite
    fehu.zone (2024). Google Analytics data of an E-commerce Company [Dataset]. https://www.kaggle.com/datasets/fehu94/google-analytics-data-of-an-e-commerce-company
    Explore at:
    zip(3156 bytes)Available download formats
    Dataset updated
    Oct 19, 2024
    Authors
    fehu.zone
    Description

    📊 Dataset Title: Daily Active Users Dataset

    📝 Description

    This dataset provides detailed insights into daily active users (DAU) of a platform or service, captured over a defined period of time. The dataset includes information such as the number of active users per day, allowing data analysts and business intelligence teams to track usage trends, monitor platform engagement, and identify patterns in user activity over time.

    The data is ideal for performing time series analysis, statistical analysis, and trend forecasting. You can utilize this dataset to measure the success of platform initiatives, evaluate user behavior, or predict future trends in engagement. It is also suitable for training machine learning models that focus on user activity prediction or anomaly detection.

    📂 Dataset Structure

    The dataset is structured in a simple and easy-to-use format, containing the following columns:

    • Date: The date on which the data was recorded, formatted as YYYYMMDD.
    • Number of Active Users: The number of users who were active on the platform on the corresponding date.

    Each row in the dataset represents a unique date and its corresponding number of active users. This allows for time-based analysis, such as calculating the moving average of active users, detecting seasonality, or spotting sudden spikes or drops in engagement.

    🧐 Key Use Cases

    This dataset can be used for a wide range of purposes, including:

    1. Time Series Analysis: Analyze trends and seasonality of user engagement.
    2. Trend Detection: Discover peaks and valleys in user activity.
    3. Anomaly Detection: Use statistical methods or machine learning algorithms to detect anomalies in user behavior.
    4. Forecasting User Growth: Build forecasting models to predict future platform usage.
    5. Seasonality Insights: Identify patterns like increased activity on weekends or holidays.

    📈 Potential Analysis

    Here are some specific analyses you can perform using this dataset:

    • Moving Average and Smoothing: Calculate the moving average over a 7-day or 30-day period.
    • Correlation with External Factors: Correlate daily active users with other datasets.
    • Statistical Hypothesis Testing: Perform t-tests or ANOVA to determine significant differences in user activity.
    • Machine Learning for Prediction: Train machine learning models to predict user engagement.

    🚀 Getting Started

    To get started with this dataset, you can load it into your preferred analysis tool. Here's how to do it using Python's pandas library:

    import pandas as pd
    
    # Load the dataset
    data = pd.read_csv('path_to_dataset.csv')
    
    # Parse the Date column (stored as YYYYMMDD) into proper datetimes
    data['Date'] = pd.to_datetime(data['Date'], format='%Y%m%d')
    
    # Display the first few rows
    print(data.head())
    
    # Summary statistics for the active-user counts
    print(data.describe())
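    Building on this, here is a minimal sketch of the 7-day moving average mentioned under Potential Analysis; the column names follow the dataset structure described above:

    # Index by date and compute a 7-day moving average of active users
    daily = data.sort_values('Date').set_index('Date')
    daily['MA_7'] = daily['Number of Active Users'].rolling(window=7).mean()
    print(daily[['Number of Active Users', 'MA_7']].tail())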
    
  19. Hawaii Play Fairway Analysis: Deformation Data, Hawaii Island

    • catalog.data.gov
    • data.openei.org
    • +2 more
    Updated Jan 20, 2025
    + more versions
    Cite
    University of Hawaii (2025). Hawaii Play Fairway Analysis: Deformation Data, Hawaii Island [Dataset]. https://catalog.data.gov/dataset/hawaii-play-fairway-analysis-deformation-data-hawaii-island-14860
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    University of Hawaii
    Area covered
    Island of Hawai'i, Hawaii
    Description

    GPS-derived horizontal velocities on Hawaii Island, provided by James Foster of the Pacific GPS Facility.

  20. Toward a Reproducible Research Data Repository

    • data.depositar.io
    mp4, pdf
    Updated Jan 26, 2024
    Cite
    depositar (2024). Toward a Reproducible Research Data Repository [Dataset]. https://data.depositar.io/dataset/reproducible-research-data-repository
    Explore at:
    pdf(627064), mp4(22141307), pdf(2586248), pdf(212638)Available download formats
    Dataset updated
    Jan 26, 2024
    Dataset provided by
    depositar
    Description

    Collected in this dataset are the slide set and abstract for a presentation, "Toward a Reproducible Research Data Repository", given by the depositar team at the International Symposium on Data Science 2023 (DSWS 2023), hosted by the Science Council of Japan in Tokyo on December 13-15, 2023. The conference was organized by the Joint Support-Center for Data Science Research (DS), Research Organization of Information and Systems (ROIS), and the Committee of International Collaborations on Data Science, Science Council of Japan. The conference programme is also included as a reference.

    Title

    Toward a Reproducible Research Data Repository

    Author(s)

    Cheng-Jen Lee, Chia-Hsun Ally Wang, Ming-Syuan Ho, and Tyng-Ruey Chuang

    Affiliation of presenter

    Institute of Information Science, Academia Sinica, Taiwan

    Summary of Abstract

    The depositar (https://data.depositar.io/) is a research data repository at Academia Sinica (Taiwan) open to researchers worldwide for the deposit, discovery, and reuse of datasets. The depositar software itself is open source and builds on top of CKAN. CKAN, an open source project initiated by the Open Knowledge Foundation and sustained by an active user community, is a leading data management system for building data hubs and portals. In addition to CKAN's out-of-the-box features such as the JSON data API and in-browser preview of uploaded data, we have added several features to the depositar, including sourcing dataset keywords from Wikidata, a citation snippet for datasets, in-browser Shapefile preview, and a persistent identifier system based on ARK (Archival Resource Keys).

    At the same time, the depositar team faces an increasing demand for interactive computing (e.g. Jupyter Notebook), which facilitates not just data analysis but also the replication and demonstration of scientific studies. Recently, we have provided a JupyterHub service (a multi-tenancy JupyterLab) to some of the depositar's users. However, it still requires users to first download the data files (or copy the URLs of the files) from the depositar, then upload the data files (or paste the URLs) into Jupyter notebooks for analysis. Furthermore, a JupyterHub deployed on a single server is limited by its processing power, which may lower the service level to the users.

    To address the above issues, we are integrating BinderHub into the depositar. BinderHub (https://binderhub.readthedocs.io/) is a Kubernetes-based service that allows users to create interactive computing environments from code repositories. Once the integration is completed, users will be able to launch Jupyter Notebooks to perform data analysis and visualization without leaving the depositar, by clicking the BinderHub buttons on the datasets. In this presentation, we will first make a brief introduction to the depositar and BinderHub along with their relationship; then we will share our experiences in incorporating interactive computation into a data repository. We shall also evaluate the possibility of integrating the depositar with other automation frameworks (e.g. the Snakemake workflow management system) in order to enable users to reproduce data analyses.
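    Since the depositar builds on CKAN, its datasets are reachable through CKAN's standard JSON action API. A minimal sketch of querying it (the dataset id below is a hypothetical placeholder; the requests library is assumed):

    import requests
    
    # Query the depositar's CKAN action API for a dataset's metadata
    # (the dataset id is a hypothetical placeholder)
    resp = requests.get(
        'https://data.depositar.io/api/3/action/package_show',
        params={'id': 'some-dataset-id'},
    )
    resp.raise_for_status()
    dataset = resp.json()['result']
    
    # List the downloadable resources attached to the dataset
    for res in dataset['resources']:
        print(res['name'], res['url'])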

    Keywords

    BinderHub, CKAN, Data Repositories, Interactive Computing, Reproducible Research
