100+ datasets found
  1. Getting Started with Excel

    • explore.openaire.eu
    Updated Jul 1, 2021
    Cite
    Dr Jianzhou Zhao (2021). Getting Started with Excel [Dataset]. http://doi.org/10.5281/zenodo.6423544
    Explore at:
    Dataset updated
    Jul 1, 2021
    Authors
    Dr Jianzhou Zhao
    Description

    About this webinar: We rarely receive research data in an appropriate form. Often data is messy. Sometimes it is incomplete. And sometimes there’s too much of it. Frequently, it has errors. This webinar targets beginners and presents a quick demonstration of using the most widespread data wrangling tool, Microsoft Excel, to sort, filter, copy, protect, transform, aggregate, summarise, and visualise research data.

    Webinar Topics: introduction to the Microsoft Excel user interface; interpreting data using sorting, filtering, and conditional formatting; summarising data using functions; analysing data using pivot tables; manipulating and visualising data; handy tips to speed up your work.

    Licence: Copyright © 2021 Intersect Australia Ltd. All rights reserved.

  2. Data Science Jobs Analysis

    • kaggle.com
    Updated Feb 8, 2023
    Cite
    Niyal Thakkar (2023). Data Science Jobs Analysis [Dataset]. https://www.kaggle.com/datasets/niyalthakkar/data-science-jobs-analysis
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Feb 8, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Niyal Thakkar
    Description

    Data science is the domain of study that deals with vast volumes of data using modern tools and techniques to find unseen patterns, derive meaningful information, and make business decisions. Data science uses complex machine learning algorithms to build predictive models.

    The data used for analysis can come from many different sources and be presented in various formats. Data science is an essential part of many industries today, given the massive amounts of data that are produced, and is one of the most debated topics in IT circles.

  3. Assessing the impact of hints in learning formal specification: Research...

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Jan 29, 2024
    Cite
    Macedo, Nuno (2024). Assessing the impact of hints in learning formal specification: Research artifact [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10450608
    Explore at:
    Dataset updated
    Jan 29, 2024
    Dataset provided by
    Sousa, Emanuel
    Campos, José Creissac
    Margolis, Iara
    Macedo, Nuno
    Cunha, Alcino
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    This artifact accompanies the SEET@ICSE article "Assessing the impact of hints in learning formal specification", which reports on a user study investigating the impact of different types of automated hints while learning a formal specification language, both in terms of immediate performance and learning retention, and in terms of the students' emotional response. This research artifact provides all the material required to replicate the study (except for the proprietary questionnaires used to assess the emotional response and user experience), as well as the collected data and the data analysis scripts used for the discussion in the paper.

    Dataset

    The artifact contains the resources described below.

    Experiment resources

    The resources needed for replicating the experiment, namely in directory experiment:

    alloy_sheet_pt.pdf: the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment. The sheet was provided in Portuguese due to the population of the experiment.

    alloy_sheet_en.pdf: a version of the 1-page Alloy sheet that participants had access to during the 2 sessions of the experiment, translated into English.

    docker-compose.yml: a Docker Compose configuration file to launch Alloy4Fun populated with the tasks in directory data/experiment for the 2 sessions of the experiment.

    api and meteor: directories with source files for building and launching the Alloy4Fun platform for the study.

    Experiment data

    The task database used in our application of the experiment, namely in directory data/experiment:

    Model.json, Instance.json, and Link.json: JSON files used to populate Alloy4Fun with the tasks for the 2 sessions of the experiment.

    identifiers.txt: the list of all 104 available identifiers for participants in the experiment.

    Collected data

    Data collected in the application of the experiment as a simple one-factor randomised experiment in 2 sessions involving 85 undergraduate students majoring in CSE. The experiment was validated by the Ethics Committee for Research in Social and Human Sciences of the Ethics Council of the University of Minho, where the experiment took place. Data is shared in the form of JSON and CSV files with a header row, namely in directory data/results:

    data_sessions.json: data collected from task-solving in the 2 sessions of the experiment, used to calculate variables productivity (PROD1 and PROD2, between 0 and 12 solved tasks) and efficiency (EFF1 and EFF2, between 0 and 1).

    data_socio.csv: data collected from socio-demographic questionnaire in the 1st session of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    socio-demographic information: participant's age (AGE), sex (SEX, 1 through 4 for female, male, prefer not to disclose, and other, respectively), and average academic grade (GRADE, from 0 to 20, where NA denotes a preference not to disclose).

    data_emo.csv: detailed data collected from the emotional questionnaire in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID) and the assigned treatment (column HINT, either N, L, E or D);

    detailed emotional response data: the differential in the 5-point Likert scale for each of the 14 measured emotions in the 2 sessions, ranging from -5 to -1 if decreased, 0 if maintained, from 1 to 5 if increased, or NA denoting failure to submit the questionnaire. Half of the emotions are positive (Admiration1 and Admiration2, Desire1 and Desire2, Hope1 and Hope2, Fascination1 and Fascination2, Joy1 and Joy2, Satisfaction1 and Satisfaction2, and Pride1 and Pride2), and half are negative (Anger1 and Anger2, Boredom1 and Boredom2, Contempt1 and Contempt2, Disgust1 and Disgust2, Fear1 and Fear2, Sadness1 and Sadness2, and Shame1 and Shame2). This detailed data was used to compute the aggregate data in data_emo_aggregate.csv and in the detailed discussion in Section 6 of the paper.

    data_umux.csv: data collected from the user experience questionnaires in the 2 sessions of the experiment, namely:

    participant identification: participant's unique identifier (ID);

    user experience data: summarised user experience data from the UMUX surveys (UMUX1 and UMUX2, as a usability metric ranging from 0 to 100).

    participants.txt: the list of participant identifiers that have registered for the experiment.

    Analysis scripts

    The analysis scripts required to replicate the analysis of the results of the experiment as reported in the paper, namely in directory analysis:

    analysis.r: An R script to analyse the data in the provided CSV files; each performed analysis is documented within the file itself.

    requirements.r: An R script to install the required libraries for the analysis script.

    normalize_task.r: A Python script to normalize the task JSON data from file data_sessions.json into the CSV format required by the analysis script.

    normalize_emo.r: A Python script to compute the aggregate emotional response from the detailed emotional response data in the CSV format of data_emo.csv, producing the CSV format required by the analysis script (an illustrative sketch of this aggregation follows this list).

    Dockerfile: a Docker script to automate running the analysis script on the collected data.
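
    For intuition, the aggregation performed by normalize_emo can be approximated in a few lines of pandas. This is an illustrative sketch only: the input columns (ID, HINT, and the per-emotion differentials such as Joy1/Joy2) follow the data_emo.csv layout described above, but the mean-of-differentials aggregation and the output column names are assumptions; the authoritative computation is the artifact's own script.

    import pandas as pd

    # Emotion columns in data_emo.csv, suffixed 1/2 for the two sessions.
    POSITIVE = ["Admiration", "Desire", "Hope", "Fascination",
                "Joy", "Satisfaction", "Pride"]
    NEGATIVE = ["Anger", "Boredom", "Contempt", "Disgust",
                "Fear", "Sadness", "Shame"]

    emo = pd.read_csv("data/results/data_emo.csv")

    for s in ("1", "2"):
        # Mean differential across the positive and negative emotions per session.
        emo[f"PosAgg{s}"] = emo[[e + s for e in POSITIVE]].mean(axis=1)
        emo[f"NegAgg{s}"] = emo[[e + s for e in NEGATIVE]].mean(axis=1)

    # Aggregate emotional response per treatment group (N, L, E, or D).
    print(emo.groupby("HINT")[["PosAgg1", "NegAgg1", "PosAgg2", "NegAgg2"]].mean())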

    Setup

    To replicate the experiment and the analysis of the results, only Docker is required.

    If you wish to manually replicate the experiment and collect your own data, you'll need to install:

    A modified version of the Alloy4Fun platform, which is built in the Meteor web framework. This version of Alloy4Fun is publicly available in branch study of its repository at https://github.com/haslab/Alloy4Fun/tree/study.

    If you wish to manually replicate the analysis of the data collected in our experiment, you'll need to install:

    Python to manipulate the JSON data collected in the experiment. Python is freely available for download at https://www.python.org/downloads/, with distributions for most platforms.

    R software for the analysis scripts. R is freely available for download at https://cran.r-project.org/mirrors.html, with binary distributions available for Windows, Linux and Mac.

    Usage

    Experiment replication

    This section describes how to replicate our user study experiment, and collect data about how different hints impact the performance of participants.

    To launch the Alloy4Fun platform populated with tasks for each session, just run the following commands from the root directory of the artifact. The Meteor server may take a few minutes to launch, wait for the "Started your app" message to show.

    cd experiment
    docker-compose up

    This will launch Alloy4Fun at http://localhost:3000. The tasks are accessed through permalinks assigned to each participant. The experiment allows for up to 104 participants, and the list of available identifiers is given in file identifiers.txt. The group of each participant is determined by the last character of the identifier, either N, L, E or D. The task database can be consulted in directory data/experiment, in Alloy4Fun JSON files.

    In the 1st session, each participant was given one permalink that gives access to 12 sequential tasks. The permalink is simply the participant's identifier, so participant 0CAN would just access http://localhost:3000/0CAN. The next task is available after a correct submission to the current task or when a time-out occurs (5mins). Each participant was assigned to a different treatment group, so depending on the permalink different kinds of hints are provided. Below are 4 permalinks, one for each hint group:

    Group N (no hints): http://localhost:3000/0CAN

    Group L (error locations): http://localhost:3000/CA0L

    Group E (counter-example): http://localhost:3000/350E

    Group D (error description): http://localhost:3000/27AD

    In the 2nd session, as in the 1st, each permalink gave access to 12 sequential tasks, and the next task is available after a correct submission or a time-out (5mins). The permalink is constructed by prepending the participant's identifier with P-, so participant 0CAN would just access http://localhost:3000/P-0CAN. In the 2nd session all participants were expected to solve the tasks without any hints provided, so the permalinks from different groups are undifferentiated.
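
    To illustrate the scheme, the permalinks and treatment groups for all participants can be derived from identifiers.txt with a few lines of Python (a minimal sketch; it assumes the file lists one identifier per line):

    # Build session permalinks and treatment groups from identifiers.txt.
    BASE_URL = "http://localhost:3000"
    GROUPS = {"N": "no hints", "L": "error locations",
              "E": "counter-example", "D": "error description"}

    with open("identifiers.txt") as f:
        identifiers = [line.strip() for line in f if line.strip()]

    for pid in identifiers:
        group = GROUPS[pid[-1]]           # the group is the last character
        session1 = f"{BASE_URL}/{pid}"    # 1st session permalink
        session2 = f"{BASE_URL}/P-{pid}"  # 2nd session: identifier prefixed with P-
        print(pid, group, session1, session2)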

    Before the 1st session the participants should answer the socio-demographic questionnaire, which should ask for the following information: unique identifier, age, sex, familiarity with the Alloy language, and average academic grade.

    Before and after both sessions the participants should answer the standard PrEmo 2 questionnaire. PrEmo 2 is published under an Attribution-NonCommercial-NoDerivatives 4.0 International Creative Commons licence (CC BY-NC-ND 4.0). This means that you are free to use the tool for non-commercial purposes as long as you give appropriate credit, provide a link to the license, and do not modify the original material. The original material, namely the depictions of the different emotions, can be downloaded from https://diopd.org/premo/. The questionnaire should ask for the unique user identifier and for the attachment to each of the 14 depicted emotions, expressed on a 5-point Likert scale.

    After both sessions the participants should also answer the standard UMUX questionnaire. This questionnaire can be used freely, and should ask for the user's unique identifier and answers to the standard 4 questions on a 7-point Likert scale. For information about the questions, how to implement the questionnaire, and how to compute the usability metric (ranging from 0 to 100) from the answers, please see the original paper:

    Kraig Finstad. 2010. The usability metric for user experience. Interacting with computers 22, 5 (2010), 323–327.

    Analysis of other applications of the experiment

    This section describes how to replicate the analysis of the data collected in an application of the experiment described in Experiment replication.

    The analysis script expects data in 4 CSV files,

  4. Replication data for: Big Data: New Tricks for Econometrics

    • openicpsr.org
    Updated May 1, 2014
    Cite
    Hal R. Varian (2014). Replication data for: Big Data: New Tricks for Econometrics [Dataset]. http://doi.org/10.3886/E113925V1
    Explore at:
    Dataset updated
    May 1, 2014
    Dataset provided by
    American Economic Association
    Authors
    Hal R. Varian
    Time period covered
    May 1, 2014
    Description

    Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.

  5. Global Exploratory Data Analysis (EDA) Tools Market Revenue Forecasts...

    • statsndata.org
    excel, pdf
    Updated May 2025
    Cite
    Stats N Data (2025). Global Exploratory Data Analysis (EDA) Tools Market Revenue Forecasts 2025-2032 [Dataset]. https://www.statsndata.org/report/exploratory-data-analysis-eda-tools-market-313301
    Explore at:
    Available download formats: excel, pdf
    Dataset updated
    May 2025
    Dataset authored and provided by
    Stats N Data
    License

    https://www.statsndata.org/how-to-order

    Area covered
    Global
    Description

    Exploratory Data Analysis (EDA) Tools play a pivotal role in the modern data-driven landscape, transforming raw data into actionable insights. As businesses increasingly recognize the value of data in informing decisions, the market for EDA tools has witnessed substantial growth, driven by the rapid expansion of dat

  6. Data_Sheet_1_Advanced large language models and visualization tools for data...

    • frontiersin.figshare.com
    txt
    Updated Aug 8, 2024
    Cite
    Jorge Valverde-Rebaza; Aram González; Octavio Navarro-Hinojosa; Julieta Noguez (2024). Data_Sheet_1_Advanced large language models and visualization tools for data analytics learning.csv [Dataset]. http://doi.org/10.3389/feduc.2024.1418006.s001
    Explore at:
    Available download formats: txt
    Dataset updated
    Aug 8, 2024
    Dataset provided by
    Frontiers
    Authors
    Jorge Valverde-Rebaza; Aram González; Octavio Navarro-Hinojosa; Julieta Noguez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: In recent years, numerous AI tools have been employed to equip learners with diverse technical skills such as coding, data analysis, and other competencies related to the computational sciences. However, the desired outcomes have not been consistently achieved. This study analyzes the perspectives of students and professionals from non-computational fields on the use of generative AI tools, augmented with visualization support, to tackle data analytics projects. The focus is on promoting the development of coding skills and fostering a deep understanding of the solutions generated. Consequently, our research seeks to introduce innovative approaches for incorporating visualization and generative AI tools into educational practices.

    Methods: This article examines how learners perform, and what their perspectives are, when using traditional tools vs. LLM-based tools to acquire data analytics skills. To explore this, we conducted a case study with a cohort of 59 participants, students and professionals without computational thinking skills, who developed a data analytics project in the context of a short Data Analytics session. Our case study focused on examining the participants' performance using traditional programming tools, ChatGPT, and LIDA with GPT as an advanced generative AI tool.

    Results: The results show the transformative potential of approaches based on integrating advanced generative AI tools like GPT with specialized frameworks such as LIDA. The higher levels of participant preference indicate the superiority of these approaches over traditional development methods. Our findings also suggest that the learning curves for the different approaches vary significantly, as learners encountered technical difficulties in developing the project and interpreting the results. The integration of LIDA with GPT can significantly enhance the learning of advanced skills, especially those related to data analytics. We aim to establish this study as a foundation for the methodical adoption of generative AI tools in educational settings, paving the way for more effective and comprehensive training in these critical areas.

    Discussion: It is important to highlight that when using general-purpose generative AI tools such as ChatGPT, users must be aware of the data analytics process and take responsibility for filtering out potential errors or incompleteness in the requirements of a data analytics project. These deficiencies can be mitigated by using more advanced tools specialized in supporting data analytics tasks, such as LIDA with GPT; however, users still need advanced programming knowledge to properly configure this connection via API. There is a significant opportunity for generative AI tools to improve their performance, providing accurate, complete, and convincing results for data analytics projects, thereby increasing user confidence in adopting these technologies. We hope this work underscores the opportunities and needs for integrating advanced LLMs into educational practices, particularly in developing computational thinking skills.
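
    For readers unfamiliar with the tooling, the LIDA-with-GPT pipeline referenced in the study is typically driven through a few high-level calls. The following is a minimal sketch based on the lida Python package's published examples; the input filename is hypothetical, exact parameter names may differ across versions, and an OpenAI API key is assumed to be configured in the environment.

    from lida import Manager, llm

    lida = Manager(text_gen=llm("openai"))        # GPT-backed text generation

    summary = lida.summarize("project_data.csv")  # hypothetical input dataset
    goals = lida.goals(summary, n=3)              # propose analysis goals

    # Generate visualization code and a chart for the first suggested goal.
    charts = lida.visualize(summary=summary, goal=goals[0], library="matplotlib")
    print(charts[0])                              # inspect the first generated chart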

  7. Data Mining Tools Market Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Feb 3, 2025
    Cite
    Market Research Forecast (2025). Data Mining Tools Market Report [Dataset]. https://www.marketresearchforecast.com/reports/data-mining-tools-market-1722
    Explore at:
    Available download formats: pdf, ppt, doc
    Dataset updated
    Feb 3, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Data Mining Tools Market size was valued at USD 1.01 billion in 2023 and is projected to reach USD 1.99 billion by 2032, exhibiting a CAGR of 10.2% during the forecast period. The growing adoption of data-driven decision-making and the increasing need for business intelligence are major factors driving market growth. Data mining refers to filtering, sorting, and classifying data from larger datasets to reveal subtle patterns and relationships, which helps enterprises identify and solve complex business problems through data analysis. Data mining software tools and techniques allow organizations to foresee future market trends and make business-critical decisions at crucial times. Data mining is an essential component of data science that employs advanced data analytics to derive insightful information from large volumes of data. Businesses rely heavily on data mining to undertake analytics initiatives in the organizational setup. The analyzed data sourced from data mining is used for varied analytics and business intelligence (BI) applications, which consider real-time data analysis along with some historical pieces of information.

    Recent developments include: May 2023 – WiMi Hologram Cloud Inc. introduced a new data interaction system developed by combining neural network technology and data mining; using real-time interaction, the system can offer reliable and safe information transmission. May 2023 – U.S. Data Mining Group, Inc., a bitcoin mining site operator, announced a hosting contract to deploy 150,000 bitcoin miners in partnership with major companies such as TeslaWatt, Sphere 3D, Marathon Digital, and more; the company offers industry turn-key solutions for curtailment, accounting, and customer relations. April 2023 – One Biosciences, an artificial intelligence and single-cell biotech analytics firm, launched a single-cell data mining algorithm called 'MAYA', intended to detect therapeutic vulnerabilities in cancer patients. May 2022 – Europe-based Solarisbank, a banking-as-a-service provider, announced its partnership with Snowflake to boost its cloud data strategy; using the advanced cloud infrastructure, the company can enhance data mining efficiency and strengthen its banking position.

    Key drivers for this market are: Increasing Focus on Customer Satisfaction to Drive Market Growth. Potential restraints include: Requirement of Skilled Technical Resources Likely to Hamper Market Growth. Notable trends are: Incorporation of Data Mining and Machine Learning Solutions to Propel Market Growth.

  8. 2025 Green Card Report for International Studies With Emphasis In...

    • myvisajobs.com
    Updated Jan 16, 2025
    Cite
    MyVisaJobs (2025). 2025 Green Card Report for International Studies With Emphasis In Quantitative Spatial Data Analysis [Dataset]. https://www.myvisajobs.com/reports/green-card/major/international-studies-with-emphasis-in-quantitative-spatial-data-analysis
    Explore at:
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    MyVisaJobs.com
    Authors
    MyVisaJobs
    License

    https://www.myvisajobs.com/terms-of-service/

    Variables measured
    Major, Salary, Petitions Filed
    Description

    A dataset that explores Green Card sponsorship trends, salary data, and employer insights for international studies with emphasis in quantitative spatial data analysis in the U.S.

  9. US Clinical Trials Market Analysis - Size and Forecast 2025-2029

    • technavio.com
    Updated Feb 15, 2025
    Cite
    Technavio (2025). US Clinical Trials Market Analysis - Size and Forecast 2025-2029 [Dataset]. https://www.technavio.com/report/us-clinical-trials-market-analysis
    Explore at:
    Dataset updated
    Feb 15, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    United States
    Description


    US Clinical Trials Market Size 2025-2029

    The US clinical trials market size is forecast to increase by USD 6.5 billion, at a CAGR of 5.3%, between 2024 and 2029.

    The Clinical Trials Market in the US is witnessing significant growth, driven by the increasing number of clinical trials for drugs and advancements in technology and scientific research. The rise in clinical trials is attributed to the development of new therapies and treatments across various therapeutic areas, leading to a surge in demand for clinical trial services. However, this market faces challenges, including the escalating costs of clinical trials. The complexity and intricacy of clinical trials have resulted in increased expenses, making it essential for market participants to optimize their resources and processes. Another challenge is the regulatory landscape, which is constantly evolving, necessitating clinical trial sponsors to stay updated and adapt to new regulations to ensure compliance. To capitalize on market opportunities and navigate challenges effectively, companies must focus on implementing innovative solutions, improving operational efficiency, and maintaining regulatory compliance.

    What will be the size of the US Clinical Trials Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
    Request Free Sample

    The clinical trials market in the US is characterized by continuous advancements in drug development, driven by the integration of technology and data-driven insights. In vitro studies and preclinical research pave the way for efficacy trials in neurological, cancer, cardiovascular, and other therapeutic areas. Big data analytics plays a pivotal role in drug metabolism studies, enabling the optimization of Phase III trials through precision medicine and biomarker discovery. Wearable devices and mobile health (mHealth) facilitate real-time monitoring in clinical pharmacology, while cloud computing streamlines clinical trial software and dose-finding studies. Gene therapy and regenerative medicine are gaining traction in orphan drug development, with animal studies and target validation shaping the landscape. Safety trials in Phase I and II are complemented by digital health solutions, while Phase IV trials ensure long-term safety monitoring. Drug interactions and Phase I trials are addressed through device development and clinical trial software, respectively. Overall, the US clinical trials market is dynamic, with innovation at the forefront of drug development, from first-in-human studies to Phase IV trials.

    How is this market segmented?

    The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2025-2029, as well as historical data from 2019-2023, for the following segments.
    Type: Phase III, Phase I, Phase II, Phase IV
    Service Type: Interventional studies, Observational studies, Expanded access studies
    Indication: Oncology, CNS, Autoimmune/inflammation, Others
    Geography: North America (US)

    By Type Insights

    The Phase III segment is estimated to witness significant growth during the forecast period.

    The clinical trials market in the US is characterized by the involvement of various entities in the intricate process of testing new drugs and medical treatments for public use. In the final phase of this process, the Phase III clinical trials play a pivotal role in assessing the safety and efficacy of investigational treatments on a larger population. This data-intensive stage is crucial for determining the potential benefits and risks before regulatory approval.

    Personalized medicine and adaptive designs have become integral to clinical trials, enabling customized treatment plans and flexible trial designs. Medical device companies and diagnostic firms collaborate to integrate devices and diagnostics into clinical trials, enhancing data collection and analysis. Data privacy and security are paramount, with stringent regulations ensuring patient data confidentiality and integrity.

    Pharmaceutical companies invest heavily in clinical trials, collaborating with academic research centers, biotechnology firms, and venture capitalists to share resources and expertise. Informed consent, ethical considerations, and regulatory submissions are critical components of the clinical trial process. Machine learning and artificial intelligence are increasingly used for data analysis, clinical trial optimization, and patient recruitment. Government funding and patient advocacy also play significant roles in advancing clinical trials. Real-world evidence and observational studies provide valuable insights into the effectiveness and safety of treatments in diverse populations. Biomarker analysis and interim analysis help monitor treatment progress and adjust trial designs accordingly.

  10. Preventive Maintenance for Marine Engines

    • kaggle.com
    Updated Feb 13, 2025
    Cite
    Fijabi J. Adekunle (2025). Preventive Maintenance for Marine Engines [Dataset]. https://www.kaggle.com/datasets/jeleeladekunlefijabi/preventive-maintenance-for-marine-engines
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Feb 13, 2025
    Dataset provided by
    Kaggle
    Authors
    Fijabi J. Adekunle
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Preventive Maintenance for Marine Engines: Data-Driven Insights

    Introduction:

    Marine engine failures can lead to costly downtime, safety risks and operational inefficiencies. This project leverages machine learning to predict maintenance needs, helping ship operators prevent unexpected breakdowns. Using a simulated dataset, we analyze key engine parameters and develop predictive models to classify maintenance status into three categories: Normal, Requires Maintenance, and Critical.

    Overview This project explores preventive maintenance strategies for marine engines by analyzing operational data and applying machine learning techniques.

    Key steps include:
    1. Data Simulation: creating a realistic dataset with engine performance metrics.
    2. Exploratory Data Analysis (EDA): understanding trends and patterns in engine behavior.
    3. Model Training & Evaluation: comparing machine learning models (Decision Tree, Random Forest, XGBoost) to predict maintenance needs.
    4. Hyperparameter Tuning: using GridSearchCV to optimize model performance (see the sketch below).
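
    As a condensed sketch of steps 3 and 4, model comparison and tuning with GridSearchCV might look like the following in scikit-learn. The file and column names here are hypothetical stand-ins; adapt them to the simulated dataset's actual schema.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV, train_test_split

    df = pd.read_csv("marine_engine_data.csv")  # simulated engine data (hypothetical name)
    X = df[["engine_temp", "vibration_level", "oil_pressure", "rpm"]]
    y = df["maintenance_status"]                # Normal / Requires Maintenance / Critical

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=42)

    # Tune a Random Forest with cross-validated grid search.
    grid = GridSearchCV(
        RandomForestClassifier(random_state=42),
        param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
        cv=5,
    )
    grid.fit(X_train, y_train)
    print(grid.best_params_, grid.score(X_test, y_test))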

    Tools Used
    1. Python: data processing, analysis, and modeling
    2. Pandas & NumPy: data manipulation
    3. Scikit-Learn & XGBoost: machine learning model training
    4. Matplotlib & Seaborn: data visualization

    Skills Demonstrated
    ✔ Data Simulation & Preprocessing
    ✔ Exploratory Data Analysis (EDA)
    ✔ Feature Engineering & Encoding
    ✔ Supervised Machine Learning (Classification)
    ✔ Model Evaluation & Hyperparameter Tuning

    Key Insights & Findings
    📌 Engine Temperature & Vibration Level: strong indicators of potential failures.
    📌 Random Forest vs. XGBoost: after hyperparameter tuning, both models achieved comparable performance, with Random Forest performing slightly better.
    📌 Maintenance Status Distribution: a balanced dataset ensures unbiased model training.
    📌 Failure Modes: the most common issues were Mechanical Wear & Oil Leakage, aligning with real-world engine failure trends.

    Challenges Faced
    🚧 Simulating Realistic Data: ensuring the dataset reflects real-world marine engine behavior was a key challenge.
    🚧 Model Performance: accuracy was limited (~35%) due to the complexity of failure prediction.
    🚧 Feature Selection: identifying the most impactful features required extensive analysis.

    Call to Action
    🔍 Explore the Dataset & Notebook: try running different models and tweaking hyperparameters.
    📊 Extend the Analysis: incorporate additional sensor data or alternative machine learning techniques.
    🚀 Real-World Application: this approach can be adapted for industrial machinery, aircraft engines, and power plants.

  11. Climate Data Analysis Market - In-Depth Analysis by Size

    • imrmarketreports.com
    Updated Sep 22, 2023
    Cite
    Swati Kalagate; Akshay Patil; Vishal Kumbhar (2023). Climate Data Analysis Market - In-Depth Analysis by Size [Dataset]. https://www.imrmarketreports.com/reports/climate-data-analysis-market
    Explore at:
    Dataset updated
    Sep 22, 2023
    Authors
    Swati Kalagate; Akshay Patil; Vishal Kumbhar
    License

    https://www.imrmarketreports.com/privacy-policy/

    Description

    The Climate Data Analysis Market report provides a comprehensive analysis of the factors responsible for market growth and of the factors that could play an important part in the growth of the marketplace over the forecast period. The report offers a thorough study of market revenue share, production, and pricing, and gives an overview of segmentation by region, with details of the revenue and sales pertaining to each market.

  12. Data for "Direct and indirect Rod and Frame effect: A virtual reality study"...

    • data.mendeley.com
    Updated Feb 12, 2025
    Cite
    Michał Adamski (2025). Data for "Direct and indirect Rod and Frame effect: A virtual reality study" [Dataset]. http://doi.org/10.17632/pcf2n8b4rd.1
    Explore at:
    Dataset updated
    Feb 12, 2025
    Authors
    Michał Adamski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the raw experimental data and supplementary materials for the "Asymmetry Effects in Virtual Reality Rod and Frame Test". The materials included are:

    •  Raw Experimental Data: older.csv and young.csv
    •  Mathematica Notebooks: a collection of Mathematica notebooks used for data analysis and visualization. These notebooks provide scripts for processing the experimental data, performing statistical analyses, and generating the figures used in the project.
    •  Unity Package: a Unity package featuring a sample scene related to the project. The scene was built using Unity's Universal Render Pipeline (URP). To utilize this package, ensure that URP is enabled in your Unity project. Instructions for enabling URP can be found in the Unity URP documentation.
    

    Requirements:

    •  For Data Files: software capable of opening CSV files (e.g., Microsoft Excel, Google Sheets, or any programming language that can read CSV formats); a minimal loading sketch follows these requirements.
    •  For Mathematica Notebooks: Wolfram Mathematica software to run and modify the notebooks.
    •  For Unity Package: Unity Editor version compatible with URP (2019.3 or later recommended). URP must be installed and enabled in your Unity project.
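
    As an example of the first requirement, the two data files can be loaded and compared with pandas (a minimal sketch; the column names are not documented here, so inspect the frame after loading):

    import pandas as pd

    older = pd.read_csv("older.csv")
    young = pd.read_csv("young.csv")

    # Tag each group and combine for comparative analysis across age groups.
    older["group"] = "older"
    young["group"] = "young"
    df = pd.concat([older, young], ignore_index=True)

    print(df.groupby("group").describe())  # quick per-group summary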
    

    Usage Notes:

    •  The dataset facilitates comparative studies between different age groups based on the collected variables.
    •  Users can modify the Mathematica notebooks to perform additional analyses.
    •  The Unity scene serves as a reference to the project setup and can be expanded or integrated into larger projects.
    

    Citation: Please cite this dataset when using it in your research or publications.

  13. Clinical Genomic Data Analysis Market Report | Global Forecast From 2025 To...

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Clinical Genomic Data Analysis Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-clinical-genomic-data-analysis-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Clinical Genomic Data Analysis Market Outlook



    The global clinical genomic data analysis market size was valued at USD 1.5 billion in 2023 and is projected to reach USD 6.3 billion by 2032, growing at a compound annual growth rate (CAGR) of 17.2% during the forecast period. This market growth is driven by the increasing adoption of genomic sequencing technologies, advancements in bioinformatics, and the rising prevalence of chronic diseases that necessitate personalized medicine and targeted therapies.



    A major growth factor for the clinical genomic data analysis market is the exponential increase in the volume of genomic data being generated. With the cost of sequencing dropping and the speed of sequencing increasing, more genomic data is being produced than ever before. This abundance of data requires sophisticated analysis tools and software to interpret and derive meaningful insights, driving the demand for advanced genomic data analysis solutions. Additionally, the integration of artificial intelligence and machine learning algorithms in genomics is further enhancing the capabilities of these analysis tools, enabling more accurate and faster data interpretation.



    Another significant factor contributing to market growth is the rising incidence of genetic disorders and cancers, which necessitates comprehensive genomic analysis for accurate diagnosis and personalized treatment plans. Personalized medicine, which tailors medical treatment to the individual characteristics of each patient, relies heavily on the insights gained from genomic data analysis. As the understanding of the genetic basis of diseases deepens, the demand for clinical genomic data analysis is expected to surge, further propelling market growth.



    The integration of NGS Informatics and Clinical Genomics is revolutionizing the field of personalized medicine. By leveraging next-generation sequencing (NGS) technologies, researchers and clinicians can now analyze vast amounts of genomic data with unprecedented speed and accuracy. This integration enables the identification of genetic variants that may contribute to disease, allowing for more precise diagnosis and the development of targeted therapies. As the capabilities of NGS technologies continue to expand, the role of informatics in managing and interpreting this data becomes increasingly critical. The seamless integration of NGS Informatics and Clinical Genomics is paving the way for more effective and personalized healthcare solutions, ultimately improving patient outcomes.



    Government initiatives and funding in genomics research also play a crucial role in the expansion of the clinical genomic data analysis market. Many governments around the world are investing heavily in genomic research projects and infrastructure to advance medical research and improve public health outcomes. For instance, initiatives like the 100,000 Genomes Project in the UK and the All of Us Research Program in the US underscore the importance of genomics in understanding human health and disease, thereby boosting the demand for genomic data analysis tools and services.



    Regional outlook reveals significant growth opportunities in emerging markets, particularly in the Asia Pacific region. Countries like China, India, and Japan are witnessing rapid advancements in healthcare infrastructure and increasing investments in genomics research. Additionally, favorable government policies and the presence of a large patient pool make this region a lucrative market for clinical genomic data analysis. North America continues to dominate the market due to high healthcare spending, advanced research facilities, and the early adoption of new technologies. Europe also shows steady growth with significant contributions from countries like the UK, Germany, and France.



    Component Analysis



    The component segment of the clinical genomic data analysis market is divided into software and services. The software segment encompasses various bioinformatics tools and platforms used for genomic data analysis. These tools are essential for the effective management, storage, and interpretation of the massive amounts of genomic data generated. The growing complexity of genomic data necessitates the use of robust software solutions that can handle large datasets and provide accurate insights. As a result, the software segment is expected to witness significant growth during the forecast period.



    The services segment includes

  14. Easing into Excellent Excel Practices Learning Series / Série...

    • borealisdata.ca
    • search.dataone.org
    Updated Nov 15, 2023
    Cite
    Julie Marcoux (2023). Easing into Excellent Excel Practices Learning Series / Série d'apprentissages en route vers des excellentes pratiques Excel [Dataset]. http://doi.org/10.5683/SP3/WZYO1F
    Explore at:
    Croissant: a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Nov 15, 2023
    Dataset provided by
    Borealis
    Authors
    Julie Marcoux
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    With a step-by-step approach, learn to prepare Excel files, data worksheets, and individual data columns for data analysis; practice conditional formatting and creating pivot tables/charts; go over basic principles of Research Data Management as they might apply to an Excel project. Avec une approche étape par étape, apprenez à préparer pour l’analyse des données des fichiers Excel, des feuilles de calcul de données et des colonnes de données individuelles; pratiquez la mise en forme conditionnelle et la création de tableaux croisés dynamiques ou de graphiques; passez en revue les principes de base de la gestion des données de recherche tels qu’ils pourraient s’appliquer à un projet Excel.
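
    The learning series itself uses Excel; for readers scripting the same aggregation step, the pivot-table workflow has a direct pandas analogue. This is a minimal sketch with made-up sample data, not material from the series.

    import pandas as pd

    # Pandas analogue of an Excel pivot table: aggregate one measure across two keys.
    df = pd.DataFrame({
        "project":  ["A", "A", "B", "B", "B"],
        "quarter":  ["Q1", "Q2", "Q1", "Q1", "Q2"],
        "spending": [100, 150, 80, 120, 60],
    })

    pivot = df.pivot_table(values="spending", index="project",
                           columns="quarter", aggfunc="sum", fill_value=0)
    print(pivot)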

  15. Job Postings Dataset for Labour Market Research and Insights

    • datarade.ai
    Updated Sep 20, 2023
    Cite
    Oxylabs (2023). Job Postings Dataset for Labour Market Research and Insights [Dataset]. https://datarade.ai/data-products/job-postings-dataset-for-labour-market-research-and-insights-oxylabs
    Explore at:
    Available download formats: .json, .xml, .csv, .xls
    Dataset updated
    Sep 20, 2023
    Dataset authored and provided by
    Oxylabs
    Area covered
    Anguilla, British Indian Ocean Territory, Kyrgyzstan, Tajikistan, Luxembourg, Jamaica, Togo, Sierra Leone, Switzerland, Zambia
    Description

    Introducing Job Posting Datasets: Uncover labor market insights!

    Elevate your recruitment strategies, forecast future labor industry trends, and unearth investment opportunities with Job Posting Datasets.

    Job Posting Datasets Source:

    1. Indeed: Access datasets from Indeed, a leading employment website known for its comprehensive job listings.

    2. Glassdoor: Receive ready-to-use employee reviews, salary ranges, and job openings from Glassdoor.

    3. StackShare: Access StackShare datasets to make data-driven technology decisions.

    Job Posting Datasets provide meticulously acquired and parsed data, freeing you to focus on analysis. You'll receive clean, structured, ready-to-use job posting data, including job titles, company names, seniority levels, industries, locations, salaries, and employment types.

    Choose your preferred dataset delivery options for convenience:

    Receive datasets in various formats, including CSV, JSON, and more.
    Opt for storage solutions such as AWS S3, Google Cloud Storage, and more.
    Customize data delivery frequencies, whether one-time or per your agreed schedule.

    Why Choose Oxylabs Job Posting Datasets:

    1. Fresh and accurate data: Access clean and structured job posting datasets collected by our seasoned web scraping professionals, enabling you to dive into analysis.

    2. Time and resource savings: Focus on data analysis and your core business objectives while we efficiently handle the data extraction process cost-effectively.

    3. Customized solutions: Tailor our approach to your business needs, ensuring your goals are met.

    4. Legal compliance: Partner with a trusted leader in ethical data collection. Oxylabs is a founding member of the Ethical Web Data Collection Initiative, aligning with GDPR and CCPA best practices.

    Pricing Options:

    Standard Datasets: Choose from various ready-to-use datasets with standardized data schemas, priced from $1,000/month.

    Custom Datasets: Tailor datasets from any public web domain to your unique business needs. Contact our sales team for custom pricing.

    Experience a seamless journey with Oxylabs:

    • Understanding your data needs: We work closely to understand your business nature and daily operations, defining your unique data requirements.
    • Developing a customized solution: Our experts create a custom framework to extract public data using our in-house web scraping infrastructure.
    • Delivering data sample: We provide a sample for your feedback on data quality and the entire delivery process.
    • Continuous data delivery: We continuously collect public data and deliver custom datasets per the agreed frequency.

    Effortlessly access fresh job posting data with Oxylabs Job Posting Datasets.

  16. Global Data Element Market Research Report: By Data Source (Relational...

    • wiseguyreports.com
    Updated Jul 23, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Data Element Market Research Report: By Data Source (Relational Databases, NoSQL Databases, Big Data Platforms, Cloud-based Data Warehouses), By Type (Structured Data, Unstructured Data, Semi-Structured Data), By Format (XML, JSON, CSV, Parquet), By Purpose (Data Analysis, Machine Learning, Data Visualization, Data Governance), By Deployment Model (On-premises, Cloud-based, Hybrid) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/data-element-market
    Explore at:
    Dataset updated
    Jul 23, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 7, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 7.6 (USD Billion)
    MARKET SIZE 2024: 8.66 (USD Billion)
    MARKET SIZE 2032: 24.7 (USD Billion)
    SEGMENTS COVERED: Data Source, Type, Format, Purpose, Deployment Model, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: AI-driven data element management; data privacy and regulations; cloud-based data element platforms; data sharing and collaboration; increasing demand for real-time data
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Informatica, Micro Focus, IBM, SAS, Denodo, Oracle, TIBCO, Talend, SAP
    MARKET FORECAST PERIOD: 2024 - 2032
    KEY MARKET OPPORTUNITIES: 1. Adoption of AI and ML; 2. Growing demand for data analytics; 3. Increasing cloud adoption; 4. Data privacy and security concerns; 5. Integration with emerging technologies
    COMPOUND ANNUAL GROWTH RATE (CAGR): 13.99% (2024 - 2032)
  17. Big Data Analysis Platform Market Report | Global Forecast From 2025 To 2033...

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    Cite
    Dataintelo (2025). Big Data Analysis Platform Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-big-data-analysis-platform-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Big Data Analysis Platform Market Outlook



    The global market size for Big Data Analysis Platforms is projected to grow from USD 35.5 billion in 2023 to an impressive USD 110.7 billion by 2032, reflecting a CAGR of 13.5%. This substantial growth can be attributed to the increasing adoption of data-driven decision-making processes across various industries, the rapid proliferation of IoT devices, and the ever-growing volumes of data generated globally.



    One of the primary growth factors for the Big Data Analysis Platform market is the escalating need for businesses to derive actionable insights from complex and voluminous datasets. With the advent of technologies such as artificial intelligence and machine learning, organizations are increasingly leveraging big data analytics to enhance their operational efficiency, customer experience, and competitiveness. The ability to process vast amounts of data quickly and accurately is proving to be a game-changer, enabling businesses to make more informed decisions, predict market trends, and optimize their supply chains.



    Another significant driver is the rise of digital transformation initiatives across various sectors. Companies are increasingly adopting digital technologies to improve their business processes and meet changing customer expectations. Big Data Analysis Platforms are central to these initiatives, providing the necessary tools to analyze and interpret data from diverse sources, including social media, customer transactions, and sensor data. This trend is particularly pronounced in sectors such as retail, healthcare, and BFSI (banking, financial services, and insurance), where data analytics is crucial for personalizing customer experiences, managing risks, and improving operational efficiencies.



    Moreover, the growing adoption of cloud computing is significantly influencing the market. Cloud-based Big Data Analysis Platforms offer several advantages over traditional on-premises solutions, including scalability, flexibility, and cost-effectiveness. Businesses of all sizes are increasingly turning to cloud-based analytics solutions to handle their data processing needs. The ability to scale up or down based on demand, coupled with reduced infrastructure costs, makes cloud-based solutions particularly appealing to small and medium-sized enterprises (SMEs) that may not have the resources to invest in extensive on-premises infrastructure.



    Data Science and Machine-Learning Platforms play a pivotal role in the evolution of Big Data Analysis Platforms. These platforms provide the necessary tools and frameworks for processing and analyzing vast datasets, enabling organizations to uncover hidden patterns and insights. By integrating data science techniques with machine learning algorithms, businesses can automate the analysis process, leading to more accurate predictions and efficient decision-making. This integration is particularly beneficial in sectors such as finance and healthcare, where the ability to quickly analyze complex data can lead to significant competitive advantages. As the demand for data-driven insights continues to grow, the role of data science and machine-learning platforms in enhancing big data analytics capabilities is becoming increasingly critical.



    From a regional perspective, North America currently holds the largest market share, driven by the presence of major technology companies, high adoption rates of advanced technologies, and substantial investments in data analytics infrastructure. Europe and the Asia Pacific regions are also experiencing significant growth, fueled by increasing digitalization efforts and the rising importance of data analytics in business strategy. The Asia Pacific region, in particular, is expected to witness the highest CAGR during the forecast period, propelled by rapid economic growth, a burgeoning middle class, and increasing internet and smartphone penetration.



    Component Analysis



    The Big Data Analysis Platform market can be broadly categorized into three components: Software, Hardware, and Services. The software segment includes analytics software, data management software, and visualization tools, which are crucial for analyzing and interpreting large datasets. This segment is expected to dominate the market due to the continuous advancements in analytics software and the increasing need for sophisticated data analysis tools. Analytics software enables organizations to process and analyze data from multiple sources,

  18. Data from: Inflect: Optimizing Computational Workflows for Thermal Proteome...

    • acs.figshare.com
    xlsx
    Updated Jun 7, 2023
    Cite
    Neil A. McCracken; Sarah A. Peck Justice; Aruna B. Wijeratne; Amber L. Mosley (2023). Inflect: Optimizing Computational Workflows for Thermal Proteome Profiling Data Analysis [Dataset]. http://doi.org/10.1021/acs.jproteome.0c00872.s002
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 7, 2023
    Dataset provided by
    ACS Publications
    Authors
    Neil A. McCracken; Sarah A. Peck Justice; Aruna B. Wijeratne; Amber L. Mosley
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The CETSA and Thermal Proteome Profiling (TPP) analytical methods are invaluable for the study of protein–ligand interactions and protein stability in a cellular context. These tools have increasingly been leveraged in work ranging from understanding signaling paradigms to drug discovery. Consequently, there is an important need to optimize the data analysis pipeline that is used to calculate protein melt temperatures (Tm) and relative melt shifts from proteomics abundance data. Here, we report a user-friendly analysis of the melt shift calculation workflow where we describe the impact of each individual calculation step on the final output list of stabilized and destabilized proteins. This report also includes a description of how key steps in the analysis workflow quantitatively impact the list of stabilized/destabilized proteins from an experiment. We applied our findings to develop a more optimized analysis workflow that illustrates the dramatic sensitivity of chosen calculation steps on the final list of reported proteins of interest in a study and have made the R based program Inflect available for research community use through the CRAN repository [McCracken, N. Inflect: Melt Curve Fitting and Melt Shift Analysis. R package version 1.0.3, 2021]. The Inflect outputs include melt curves for each protein which passes filtering criteria in addition to a data matrix which is directly compatible with downstream packages such as UpsetR for replicate comparisons and identification of biologically relevant changes. Overall, this work provides an essential resource for scientists as they analyze data from TPP and CETSA experiments and implement their own analysis pipelines geared toward specific applications.

  19. Data from: Secondary Data Analysis of the Socio-Economic Panel Study and the...

    • beta.ukdataservice.ac.uk
    • datacatalogue.cessda.eu
    Updated 2021
    Cite
    Laura Langner (2021). Secondary Data Analysis of the Socio-Economic Panel Study and the Cross-National Equivalent File, 2016-2020 [Dataset]. http://doi.org/10.5255/ukda-sn-854591
    Explore at:
    Dataset updated
    2021
    Dataset provided by
    UK Data Service (https://ukdataservice.ac.uk/)
    DataCite (https://www.datacite.org/)
    Authors
    Laura Langner
    Description

    The data comprises three of the Cross-National Equivalent Files: the Panel Study of Income Dynamics (1970-2013); the German Socio-Economic Panel Study (1984-2015); and the UKHLS (2009-2014) together with the British Household Panel Study (1991-2009). The following variables were extracted: personal identifier (x11101LL), household identifier (x11102), survey year (year), sex (d11101LL), marital status (d11104), income (i11110), employment status (e11101), hours worked (e11101), education (d11108/9), partner identifier (d11105), household size (d11106) and number of children (d11107). The data came in a harmonized form from the data providers. For the papers on Germany, in addition to the variables described above, life satisfaction, work hour flexibility, caregiving, housework hours, widowhood status and carer ID were further extracted from the original German Socio-Economic Panel Study.
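
    As an illustration, the extraction step described above could be reproduced with pandas once the harmonized files are exported to a tabular format (a hypothetical sketch: the filename and CSV export are assumptions, while the variable codes are those listed in the description, with d11108/9 split into its two codes):

    import pandas as pd

    # Harmonized CNEF variable codes listed in the description.
    COLS = ["x11101LL", "x11102", "year", "d11101LL", "d11104", "i11110",
            "e11101", "d11108", "d11109", "d11105", "d11106", "d11107"]

    cnef = pd.read_csv("cnef_panel.csv", usecols=COLS)  # hypothetical filename
    print(cnef.head())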

  20. Model-predicted compositional mean and total discretionary time at follow-up...

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Louise Foley; Dorothea Dumuid; Andrew J. Atkin; Katrien Wijndaele; David Ogilvie; Timothy Olds (2023). Model-predicted compositional mean and total discretionary time at follow-up in the longitudinal sample (n = 4,323). [Dataset]. http://doi.org/10.1371/journal.pone.0216650.t007
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Louise Foley; Dorothea Dumuid; Andrew J. Atkin; Katrien Wijndaele; David Ogilvie; Timothy Olds
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Model-predicted compositional mean and total discretionary time at follow-up in the longitudinal sample (n = 4,323).
