44 datasets found
  1. UC_vs_US Statistic Analysis.xlsx

    • figshare.com
    xlsx
    Updated Jul 9, 2020
    Cite
    F. (Fabiano) Dalpiaz (2020). UC_vs_US Statistic Analysis.xlsx [Dataset]. http://doi.org/10.23644/uu.12631628.v1
    Available download formats: xlsx
    Dataset updated
    Jul 9, 2020
    Dataset provided by
    Utrecht University
    Authors
    F. (Fabiano) Dalpiaz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes the following columns:
    A. a sequential student ID
    B. an ID that defines a random group label and the notation
    C. the notation used: User Stories or Use Cases
    D. the case they were assigned to: IFA, Sim, or Hos
    E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
    F. a categorical representation of the grade (L/M/H), where H is 80 or above, M is at least 65 and below 80, and L is below 65
    G. the total number of classes in the student's conceptual model
    H. the total number of relationships in the student's conceptual model
    I. the total number of classes in the expert's conceptual model
    J. the total number of relationships in the expert's conceptual model
    K-O. the total number of encountered situations of alignment, wrong representation, system-oriented, omitted, and missing (see tagging scheme below)
    P. the researchers' judgement of how well the student explained the derivation process: well explained (a systematic mapping that can easily be reproduced), partially explained (a vague indication of the mapping), or not present
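    As an illustration of this layout, here is a minimal pandas sketch that loads the sheet and attaches readable column names. The sheet name follows the description above; the header labels in the actual workbook are unknown, so the names below are assumptions.

    ```python
    # Minimal sketch: load Sheet 1 and attach readable names to the columns
    # described above. The letters A..P follow the description; the workbook's
    # real headers may differ.
    import pandas as pd

    raw = pd.read_excel("UC_vs_US Statistic Analysis.xlsx", sheet_name="Raw-Data")

    column_names = [
        "student_id",             # A. sequential student ID
        "group_id",               # B. random group label + notation
        "notation",               # C. User Stories or Use Cases
        "case",                   # D. IFA, Sim, or Hos
        "exam_grade",             # E. points out of 100 (may be empty)
        "grade_category",         # F. L/M/H
        "student_classes",        # G.
        "student_relationships",  # H.
        "expert_classes",         # I.
        "expert_relationships",   # J.
        "AL", "WR", "SO", "OM", "MI",  # K-O. tagging counts
        "process_explanation",    # P. well/partially/not present
    ]
    raw = raw.rename(columns=dict(zip(raw.columns, column_names)))
    print(raw.head())
    ```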

    Tagging scheme:
    Aligned (AL) - a concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
    Wrongly represented (WR) - a class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
    System-oriented (SO) - a class in CM-Stud that denotes a technical implementation aspect, e.g., access control; classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
    Omitted (OM) - a class in CM-Expert that does not appear in any way in CM-Stud;
    Missing (MI) - a class in CM-Stud that does not appear in any way in CM-Expert.

    All the calculations and information provided in the following sheets originate from that raw data.

    Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.

    Sheet 3 (Size-Ratio): The number of classes in the student model divided by the number of classes in the expert model (the size ratio). We provide box plots to allow a visual comparison of the shape, central value, and variability of the distribution for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
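    A short sketch of the Sheet 3 computation under the same assumed column names: the size ratio is the student-model count divided by the expert-model count, compared across groups with box plots.

    ```python
    # Continuing the sketch above: size ratios and a box plot per group.
    import matplotlib.pyplot as plt

    raw["size_ratio_classes"] = raw["student_classes"] / raw["expert_classes"]
    raw["size_ratio_relationships"] = (raw["student_relationships"]
                                       / raw["expert_relationships"])

    raw.boxplot(column="size_ratio_classes", by="notation")
    plt.ylabel("student classes / expert classes")
    plt.show()
    ```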

    Sheet 4 (Overall): Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model; it is calculated by dividing the number of aligned concepts (AL) by the sum of the aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model; it is calculated by dividing the sum of the aligned concepts (AL) and wrong representations (WR) by the sum of the aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with diverging stacked bar charts that illustrate correctness and completeness.
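    The two definitions, written out as code and continuing the sketch above (the AL/WR/SO/OM column names are assumptions):

    ```python
    # Sheet 4 formulas exactly as described:
    #   correctness  = AL / (AL + OM + SO + WR)
    #   completeness = (AL + WR) / (AL + WR + OM)
    raw["correctness"] = raw["AL"] / (raw["AL"] + raw["OM"] + raw["SO"] + raw["WR"])
    raw["completeness"] = (raw["AL"] + raw["WR"]) / (raw["AL"] + raw["WR"] + raw["OM"])
    ```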

    For Sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided; Hedges' g was calculated with an online tool: https://www.psychometrica.de/effect_size.html (a sketch of an equivalent computation follows the sheet list below). The independent and moderating variables can be found as follows:

    Sheet 5 (By-Notation): Model correctness and model completeness are compared by notation - UC, US.

    Sheet 6 (By-Case): Model correctness and model completeness are compared by case - SIM, HOS, IFA.

    Sheet 7 (By-Process): Model correctness and model completeness are compared by how well the derivation process was explained - well explained, partially explained, not present.

    Sheet 8 (By-Grade): Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
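    The per-sheet statistics can be reproduced outside Excel. The authors used an online calculator for Hedges' g; the sketch below implements the standard pooled-SD formula with the small-sample correction, and the notation labels are assumptions.

    ```python
    # t-test plus Hedges' g for a two-group comparison (e.g., Sheet 5).
    import numpy as np
    from scipy import stats

    def hedges_g(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        n1, n2 = len(a), len(b)
        pooled_sd = np.sqrt(((n1 - 1) * a.std(ddof=1) ** 2 +
                             (n2 - 1) * b.std(ddof=1) ** 2) / (n1 + n2 - 2))
        d = (a.mean() - b.mean()) / pooled_sd   # Cohen's d
        j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction
        return d * j

    # The exact labels stored in the notation column are assumptions.
    uc = raw.loc[raw["notation"] == "Use Cases", "correctness"].dropna()
    us = raw.loc[raw["notation"] == "User Stories", "correctness"].dropna()
    t, p = stats.ttest_ind(uc, us)
    print(f"t = {t:.3f}, p = {p:.4f}, g = {hedges_g(uc, us):.3f}")
    ```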

  2. Data from: Automating Knowledge: A Case Study of Library Automation in of...

    • data.mendeley.com
    Updated Mar 10, 2025
    Cite
    RUCHI SINHA (2025). Automating Knowledge: A Case Study of Library Automation in of College Libraries of Dadra and Nagar Haveli [Dataset]. http://doi.org/10.17632/h2c2w5sgbx.1
    Dataset updated
    Mar 10, 2025
    Authors
    RUCHI SINHA
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Dadra and Nagar Haveli
    Description

    Research Design: Mixed-methods approach, combining quantitative and qualitative methods.
    Data Collection:
    • Survey questionnaire (Google Forms) with 500 respondents from 10 college libraries.
    • In-depth interviews with 20 librarians and library administrators.
    • Observational studies in 5 college libraries.
    Data Analysis:
    • Descriptive statistics (mean, median, mode, standard deviation).
    • Inferential statistics (t-tests, ANOVA).
    • Thematic analysis for qualitative data.
    Instruments and Software: Google Forms, Microsoft Excel, SPSS, NVivo.
    Protocols:
    • Survey protocol: pilot-tested with a small group.
    • Interview protocol: used an interview guide.
    Workflows: Data cleaning and validation.
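    The study ran these tests in Excel and SPSS; as a rough Python equivalent (file and column names here are hypothetical), the descriptive and inferential steps look like this:

    ```python
    # Descriptive and inferential statistics on a hypothetical survey export.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("survey_responses.csv")  # hypothetical Google Forms export

    # Descriptive statistics: mean, median, standard deviation (and mode)
    print(df["satisfaction"].agg(["mean", "median", "std"]))
    print(df["satisfaction"].mode())

    # Inferential statistics: t-test between two libraries, ANOVA across all
    a = df.loc[df["library"] == "A", "satisfaction"]
    b = df.loc[df["library"] == "B", "satisfaction"]
    print(stats.ttest_ind(a, b))
    print(stats.f_oneway(*[g["satisfaction"].values
                           for _, g in df.groupby("library")]))
    ```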

  3. Case Study Cyclistic Bike_share

    • kaggle.com
    Updated Dec 13, 2022
    Cite
    Chris Palmer (2022). Case Study Cyclistic Bike_share [Dataset]. https://www.kaggle.com/datasets/chriscpalmer/casestudy
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Dec 13, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Chris Palmer
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The dataset you see was my project for a case study provided by Google via their Google Analytics course on Coursera. This was a case study option to help a proxy bike share company convert casual riders to annual members. Google provided the data through a partnership with the City of Chicago's public bike share program, Divvy, and the data has been made available by Motivate International Inc. My task was to explore the following question:

    How do annual members and casual riders use Cyclistic (fictional company) bikes differently?

    A changelog, Excel data for the 12 months of 2021, pivot charts, and my presentation (as if I were presenting in front of stakeholders) are provided to show the skills acquired for my certification as a data analyst.

    For transparency and to give credit to the provider of the original raw data:

    Motivate International Inc. provided the data for this case study under this license

    12 months of trip data used for cleaning, analysis, and identifying trends (dataset is public use and used for purposes of this case study to answer the business question).

  4. Repeated Measures data files

    • auckland.figshare.com
    zip
    Updated Nov 9, 2020
    Cite
    Gavin T. L. Brown (2020). Repeated Measures data files [Dataset]. http://doi.org/10.17608/k6.auckland.13211120.v1
    Available download formats: zip
    Dataset updated
    Nov 9, 2020
    Dataset provided by
    The University of Auckland
    Authors
    Gavin T. L. Brown
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This zip file contains data files for 3 activities described in the accompanying PPT slides:
    1. An Excel spreadsheet for analysing gain scores in a 2-group, 2-times data array. This activity requires access to https://campbellcollaboration.org/research-resources/effect-size-calculator.html to calculate effect size.
    2. An AMOS path model and SPSS data set for an autoregressive, bivariate path model with cross-lagging. This activity is related to the following article: Brown, G. T. L., & Marshall, J. C. (2012). The impact of training students how to write introductions for academic essays: An exploratory, longitudinal study. Assessment & Evaluation in Higher Education, 37(6), 653-670. doi:10.1080/02602938.2011.563277
    3. An AMOS latent curve model and SPSS data set for a 3-time latent factor model with an interaction mixed model that uses GPA as a predictor of the LCM start and slope or change factors. This activity makes use of data reported previously and a published data analysis case: Peterson, E. R., Brown, G. T. L., & Jun, M. C. (2015). Achievement emotions in higher education: A diary study exploring emotions across an assessment event. Contemporary Educational Psychology, 42, 82-96. doi:10.1016/j.cedpsych.2015.05.002 and Brown, G. T. L., & Peterson, E. R. (2018). Evaluating repeated diary study responses: Latent curve modeling. In SAGE Research Methods Cases Part 2. Retrieved from http://methods.sagepub.com/case/evaluating-repeated-diary-study-responses-latent-curve-modeling doi:10.4135/9781526431592

  5. Bellabeat Case Study

    • kaggle.com
    Updated Feb 8, 2022
    Cite
    Brian Schuman (2022). Bellabeat Case Study [Dataset]. https://www.kaggle.com/datasets/brianschuman/bellabeat-case-study
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Feb 8, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Brian Schuman
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    About the company: Urška Sršen and Sando Mur founded Bellabeat, a high-tech company that manufactures health-focused smart products. Sršen used her background as an artist to develop beautifully designed technology that informs and inspires women around the world. Collecting data on activity, sleep, stress, and reproductive health has allowed Bellabeat to empower women with knowledge about their own health and habits. Since it was founded in 2013, Bellabeat has grown rapidly and quickly positioned itself as a tech-driven wellness company for women.

    Characters:
    • Urška Sršen: Bellabeat’s cofounder and Chief Creative Officer
    • Sando Mur: Mathematician and Bellabeat’s cofounder; key member of the Bellabeat executive team
    • Bellabeat marketing analytics team: A team of data analysts responsible for collecting, analyzing, and reporting data that helps guide Bellabeat’s marketing strategy. You joined this team six months ago and have been busy learning about Bellabeat’s mission and business goals, as well as how you, as a junior data analyst, can help Bellabeat achieve them.

    Content

    I cleaned and analyzed the data in Google Sheets, imported it as an Excel file, and created visualizations in Tableau Public. I found trends between Calories, Total Steps, Total Distance, and Sedentary Minutes. All of this is explained in my Word document case study.

    Acknowledgements

    I used data from "Fitbit Fitness Tracker Data" by the user Mobius.

    Feel free to comment on any mistakes I made or things you would have done differently. This is my first case study and my first time analyzing data. Any feedback will be gladly appreciated.

  6. Enhanced Pizza Sales Data (2024–2025)

    • kaggle.com
    Updated May 12, 2025
    Cite
    akshay gaikwad (2025). Enhanced Pizza Sales Data (2024–2025) [Dataset]. https://www.kaggle.com/datasets/akshaygaikwad448/pizza-delivery-data-with-enhanced-features
    Available download formats: Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    May 12, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    akshay gaikwad
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This is a realistic and structured pizza sales dataset covering the time span from **2024 to 2025**. Whether you're a beginner in data science, a student working on a machine learning project, or an experienced analyst looking to test out time series forecasting and dashboard building, this dataset is for you.

    📁 What’s Inside? The dataset contains rich details from a pizza business including:

    ✅ Order Dates & Times ✅ Pizza Names & Categories (Veg, Non-Veg, Classic, Gourmet, etc.) ✅ Sizes (Small, Medium, Large, XL) ✅ Prices ✅ Order Quantities ✅ Customer Preferences & Trends

    It is neatly organized in Excel format and easy to use with tools like Python (Pandas), Power BI, Excel, or Tableau.

    💡 **Why Use This Dataset?** This dataset is ideal for:

    📈 Sales Analysis & Reporting 🧠 Machine Learning Models (demand forecasting, recommendations) 📅 Time Series Forecasting 📊 Data Visualization Projects 🍽️ Customer Behavior Analysis 🛒 Market Basket Analysis 📦 Inventory Management Simulations

    🧠 Perfect For: Data Science Beginners & Learners, BI Developers & Dashboard Designers, MBA Students (Marketing, Retail, Operations), Hackathons & Case Study Competitions

    pizza, sales data, excel dataset, retail analysis, data visualization, business intelligence, forecasting, time series, customer insights, machine learning, pandas, beginner friendly

  7. Data for: A Prioritization-based Analysis of Open Data Portals: The Case...

    • data.mendeley.com
    Updated Oct 16, 2018
    Cite
    Di Wang (2018). Data for: A Prioritization-based Analysis of Open Data Portals: The Case study of Chinese Local Governments [Dataset]. http://doi.org/10.17632/ykdbpdmspy.1
    Dataset updated
    Oct 16, 2018
    Authors
    Di Wang
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
    China
    Description

    We have used the Analytic Hierarchy Process (AHP) to derive the priorities of all the factors in the evaluation framework for open government data (OGD) portals. The results of the AHP process are shown in the uploaded PDF file. We have collected 2635 open government datasets of 15 different subject categories (local statistics, health, education, cultural activity, transportation, map, public safety, policies and legislation, weather, environment quality, registration, credit records, international trade, budget and spend, and government bid) from 9 OGD portals in China (Beijing, Zhejiang, Shanghai, Guangdong, Guizhou, Sichuan, Xinjiang, Hong Kong and Taiwan). These datasets were used for the evaluation of these portals in our study. The records of the quality and open access of these datasets can be found in the uploaded Excel file.
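    For readers unfamiliar with AHP, here is a minimal sketch of how priority weights are derived from a pairwise comparison matrix via the principal eigenvector, with a consistency check; the matrix values below are illustrative only, not the study's judgements (those are in the uploaded PDF).

    ```python
    # AHP priority derivation: principal eigenvector + consistency ratio.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],   # hypothetical pairwise judgements
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                 # priority vector

    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)     # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    print("weights:", weights, "CR:", ci / ri)  # CR < 0.1 is acceptable
    ```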

  8. Spatial Analysis of Bicycle Safety in urban transport using GIS: A Case...

    • data.mendeley.com
    Updated May 14, 2025
    Cite
    Kinga Romanczukiewicz (2025). Spatial Analysis of Bicycle Safety in urban transport using GIS: A Case Study of Wrocław [Dataset]. http://doi.org/10.17632/5r3vxrhtb9.1
    Dataset updated
    May 14, 2025
    Authors
    Kinga Romanczukiewicz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Wrocław
    Description

    The raw data used to prepare a research paper. Data description:
    • 2024_detail_incidents - data on traffic incidents in 2024 in Wroclaw, shp format
    • cycling density, Excel format
    • traffic density on roads, Excel format
    • załącznik 4.1 formularz pomiaru ruchu drogowego (Appendix 4.1, traffic measurement form), Excel format
    • załącznik 5.1 formularz pomiaru ruchu drogowego (Appendix 5.1, traffic measurement form), Excel format
    Data sources:
    https://bip.um.wroc.pl/artykul/565/70659/kompleksowe-badania-ruchu-we-wroclawiu-i-otoczeniu-kbr-2024
    https://geoportal.wroclaw.pl/en/resources/?zasob=trasy_rowerowe

  9. Replication Package for "Leveraging Large Language Models for Preliminary...

    • zenodo.org
    Updated Feb 17, 2024
    Cite
    Anonymous; Anonymous (2024). Replication Package for "Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study" [Dataset]. http://doi.org/10.5281/zenodo.10501336
    Dataset updated
    Feb 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous; Anonymous
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This replication package includes the raw data, questionnaire answers, and a Python notebook needed for reproducing the results detailed in the paper titled "Leveraging Large Language Models for Preliminary Security Risk Analysis: A Mission-Critical Case Study."

    Repository Structure

    1. Scenarios: Contains an Excel file encompassing all 141 scenarios collected (in Italian).
    2. Training and Validation Messages: Includes the jsonl files necessary for fine-tuning the model.
    3. Testing Messages and Ground Truth: Contains the messages utilized for testing the models.
    4. Results: Contains the responses from the 7 human experts and the outputs of both the base model and the fine-tuned one.

    Replication Process

    To replicate the results of our study, open the provided Python Notebook in Google Colab and follow the instructions to seamlessly reproduce the results.

    Instructions for Use

    To utilize this replication package, refer to the steps outlined in the notebook file.

    Remarks

    If you encounter any issues or have any questions, please reach out to the authors of the paper. We will be glad to assist you!

  10. COVID-19 Case Surveillance Public Use Data

    • data.cdc.gov
    • paperswithcode.com
    • +5 more
    Updated Jul 9, 2024
    Cite
    CDC Data, Analytics and Visualization Task Force (2024). COVID-19 Case Surveillance Public Use Data [Dataset]. https://data.cdc.gov/Case-Surveillance/COVID-19-Case-Surveillance-Public-Use-Data/vbim-akqf
    Available download formats: application/rdfxml, tsv, csv, json, xml, application/rssxml
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Centers for Disease Control and Prevention (http://www.cdc.gov/)
    Authors
    CDC Data, Analytics and Visualization Task Force
    License

    U.S. Government Works: https://www.usa.gov/government-works

    Description

    Note: Reporting of new COVID-19 Case Surveillance data will be discontinued July 1, 2024, to align with the process of removing SARS-CoV-2 infections (COVID-19 cases) from the list of nationally notifiable diseases. Although these data will continue to be publicly available, the dataset will no longer be updated.

    Authorizations to collect certain public health data expired at the end of the U.S. public health emergency declaration on May 11, 2023. The following jurisdictions discontinued COVID-19 case notifications to CDC: Iowa (11/8/21), Kansas (5/12/23), Kentucky (1/1/24), Louisiana (10/31/23), New Hampshire (5/23/23), and Oklahoma (5/2/23). Please note that these jurisdictions will not routinely send new case data after the dates indicated. As of 7/13/23, case notifications from Oregon will only include pediatric cases resulting in death.

    This case surveillance public use dataset has 12 elements for all COVID-19 cases shared with CDC and includes demographics, any exposure history, disease severity indicators and outcomes, presence of any underlying medical conditions and risk behaviors, and no geographic data.

    CDC has three COVID-19 case surveillance datasets:

    The following apply to all three datasets:

    Overview

    The COVID-19 case surveillance database includes individual-level data reported to U.S. states and autonomous reporting entities, including New York City and the District of Columbia (D.C.), as well as U.S. territories and affiliates. On April 5, 2020, COVID-19 was added to the Nationally Notifiable Condition List and classified as “immediately notifiable, urgent (within 24 hours)” by a Council of State and Territorial Epidemiologists (CSTE) Interim Position Statement (Interim-20-ID-01). CSTE updated the position statement on August 5, 2020, to clarify the interpretation of antigen detection tests and serologic test results within the case classification (Interim-20-ID-02). The statement also recommended that all states and territories enact laws to make COVID-19 reportable in their jurisdiction, and that jurisdictions conducting surveillance should submit case notifications to CDC. COVID-19 case surveillance data are collected by jurisdictions and reported voluntarily to CDC.

    For more information: NNDSS Supports the COVID-19 Response | CDC.

    The deidentified data in the “COVID-19 Case Surveillance Public Use Data” include demographic characteristics, any exposure history, disease severity indicators and outcomes, clinical data, laboratory diagnostic test results, and presence of any underlying medical conditions and risk behaviors. All data elements can be found on the COVID-19 case report form located at www.cdc.gov/coronavirus/2019-ncov/downloads/pui-form.pdf.

    COVID-19 Case Reports

    COVID-19 case reports have been routinely submitted using nationally standardized case reporting forms. On April 5, 2020, CSTE released an Interim Position Statement with national surveillance case definitions for COVID-19 included. Current versions of these case definitions are available here: https://ndc.services.cdc.gov/case-definitions/coronavirus-disease-2019-2021/.

    All cases reported on or after were requested to be shared by public health departments to CDC using the standardized case definitions for laboratory-confirmed or probable cases. On May 5, 2020, the standardized case reporting form was revised. Case reporting using this new form is ongoing among U.S. states and territories.

    Data are Considered Provisional

    • The COVID-19 case surveillance data are dynamic; case reports can be modified at any time by the jurisdictions sharing COVID-19 data with CDC. CDC may update prior cases shared with CDC based on any updated information from jurisdictions. For instance, as new information is gathered about previously reported cases, health departments provide updated data to CDC. As more information and data become available, analyses might find changes in surveillance data and trends during a previously reported time window. Data may also be shared late with CDC due to the volume of COVID-19 cases.
    • Annual finalized data: To create the final NNDSS data used in the annual tables, CDC works carefully with the reporting jurisdictions to reconcile the data received during the year until each state or territorial epidemiologist confirms that the data from their area are correct.
    • Access Addressing Gaps in Public Health Reporting of Race and Ethnicity for COVID-19, a report from the Council of State and Territorial Epidemiologists, to better understand the challenges in completing race and ethnicity data for COVID-19 and recommendations for improvement.

    Data Limitations

    To learn more about the limitations in using case surveillance data, visit FAQ: COVID-19 Data and Surveillance.

    Data Quality Assurance Procedures

    CDC’s Case Surveillance Section routinely performs data quality assurance procedures (i.e., ongoing corrections and logic checks to address data errors). To date, the following data cleaning steps have been implemented (a code sketch of comparable logic follows the list):

    • Questions that have been left unanswered (blank) on the case report form are reclassified to a Missing value, if applicable to the question. For example, in the question “Was the individual hospitalized?” where the possible answer choices include “Yes,” “No,” or “Unknown,” the blank value is recoded to Missing because the case report form did not include a response to the question.
    • Logic checks are performed for date data. If an illogical date has been provided, CDC reviews the data with the reporting jurisdiction. For example, if a symptom onset date in the future is reported to CDC, this value is set to null until the reporting jurisdiction updates the date appropriately.
    • Additional data quality processing to recode free text data is ongoing. Data on symptoms, race and ethnicity, and healthcare worker status have been prioritized.
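    A pandas sketch of comparable logic for the first two steps; this is not CDC's actual pipeline, and the file and column names are hypothetical.

    ```python
    # 1) Recode blanks to "Missing"; 2) null out impossible (future) dates.
    import pandas as pd

    cases = pd.read_csv("case_reports.csv", parse_dates=["onset_dt"])  # hypothetical

    # Unanswered (blank) categorical fields become "Missing"
    cases["hosp_yn"] = cases["hosp_yn"].fillna("Missing")

    # Logic check: a symptom onset date in the future is set to null
    today = pd.Timestamp.today().normalize()
    cases.loc[cases["onset_dt"] > today, "onset_dt"] = pd.NaT
    ```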

    Data Suppression

    To prevent release of data that could be used to identify people, data cells are suppressed for low frequency (<5) records and indirect identifiers (e.g., date of first positive specimen). Suppression includes rare combinations of demographic characteristics (sex, age group, race/ethnicity). Suppressed values are re-coded to the NA answer option; records with data suppression are never removed.
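    Continuing the sketch above, the low-frequency rule might look as follows on a demographic frequency table (hypothetical column names; the real procedure also suppresses indirect identifiers):

    ```python
    # Re-code cells with frequency < 5 to NA instead of dropping the records.
    import pandas as pd

    counts = (cases.groupby(["sex", "age_group", "race_ethnicity"])
                   .size()
                   .reset_index(name="n"))
    counts["n"] = counts["n"].astype("Int64")   # nullable integer dtype
    counts.loc[counts["n"] < 5, "n"] = pd.NA    # suppress rare combinations
    ```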

    For questions, please contact Ask SRRG (eocevent394@cdc.gov).

    Additional COVID-19 Data

    COVID-19 data are available to the public as summary or aggregate count files, including total counts of cases and deaths by state and by county. These

  11. Marlies Schillings - PhD project data for study 4

    • dataverse.nl
    7z, docx, xlsx
    Updated Mar 28, 2022
    Cite
    Marlies Schillings (2022). Marlies Schillings - PhD project data for study 4 [Dataset]. http://doi.org/10.34894/OAJPEF
    Available download formats: docx(15205), docx(15211), docx(17253), 7z(363295), xlsx(18304), 7z(400153476)
    Dataset updated
    Mar 28, 2022
    Dataset provided by
    DataverseNL
    Authors
    Marlies Schillings
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Title: Face-to-face Peer Dialogue: Students Talking about Feedback (submitted March 2021)

    A short description of the study set-up: 35 second-year university students were split into 12 groups. Students wrote a scientific report and gave written peer feedback. This was followed by face-to-face peer dialogue on the feedback without teacher facilitation. Dialogues were coded and analysed at the utterance level.

    Analysis: For data analysis, we used the coding scheme by Visschers-Pleijers et al. (2006), which focuses on the analysis of verbal interactions in tutorial groups. To assess the dialogue, the verbal interactions in the discourses were scored at the utterance level as ‘Learning-oriented interaction’, ‘Procedural interaction’ or ‘Irrelevant interaction’ (Visschers-Pleijers et al. 2006). The learning-oriented interactions were further subdivided into five subcategories: Opening statement, Question (open, critical or verification question), Cumulative reasoning (elaboration, offering suggestion, confirmation or intention to improve), Disagreement (counter argument, doubt, disagreement or no intention to improve) and Lessons learned (an adapted version of the coding scheme used by Visschers-Pleijers et al. 2006). The first and second authors and a research assistant coded the first four transcripts and discussed their codes in three rounds until they reached consensus. See Appendix A for a description of the coding scheme. After reaching consensus on the coding, the first author and the research assistant individually coded four new transcripts. For these four transcripts, interrater reliability analysis was performed using percent agreement according to Gisev, Bell, and Chen (2013). The percent agreement between the first author and the research assistant ranged from 80 to 92. The first author then coded the remaining eight transcripts individually. Eventually, all transcripts were analysed according to the first author’s classification. For each single group session, the codes for each (sub)category of verbal interaction were counted and percentages were calculated for the number of utterances. The median (Mdn) and interquartile range (IQR) of the percentage of utterances for each (sub)category of code were computed per coding category for all groups together.

    Explanation of all the instruments used in the data collection (including phrasing of items in surveys): This was a discourse analysis (see final coding scheme: separate file).

    Explanation of the data files (what data is stored in what file?):
    • Final coding scheme (in Word).
    • Audiotapes (in MP3) and transcripts of 12 groups (in Word).
    • Data study 4 (in Excel).
    • Resulting data in table (in Word).

    In case of quantitative data (meaning and ranges or codings of all columns):
    • Data study 4 (in Excel): numbers and percentages of interactions.
    • Resulting data (Table in Word): per group (n=12) in percentages and medians.

    In case of qualitative data (description of the structure of the data files): Not applicable.
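    Percent agreement here is simply the share of utterances to which both coders assigned the same code; a minimal sketch with hypothetical utterance codes:

    ```python
    # Interrater percent agreement at the utterance level.
    def percent_agreement(codes_a, codes_b):
        assert len(codes_a) == len(codes_b)
        matches = sum(a == b for a, b in zip(codes_a, codes_b))
        return 100 * matches / len(codes_a)

    coder1 = ["CR", "Q", "CR", "D", "P"]   # hypothetical codes, coder 1
    coder2 = ["CR", "Q", "D",  "D", "P"]   # hypothetical codes, coder 2
    print(f"{percent_agreement(coder1, coder2):.0f}% agreement")  # -> 80%
    ```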

  12. Data from: An analysis of why rehabilitation and balancing programs for...

    • beta.hydroshare.org
    • hydroshare.org
    zip
    Updated Mar 20, 2023
    Cite
    Masouemeh Hashemi; Hamed MazandaraniZadeh; Mahdi Zarghami; Betelhem W. Demeke; Razieh Taraghi (2023). An analysis of why rehabilitation and balancing programs for aquifers do not meet water organizations' targets (a case study of the Qazvin aquifer in Iran) [Dataset]. http://doi.org/10.4211/hs.879debb0bc524817a150355c359f435f
    Available download formats: zip (74.7 KB)
    Dataset updated
    Mar 20, 2023
    Dataset provided by
    HydroShare
    Authors
    Masouemeh Hashemi; Hamed MazandaraniZadeh; Mahdi Zarghami; Betelhem W. Demeke; Razieh Taraghi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Iran experiences insufficient precipitation, and groundwater resources are used in excess, resulting in negative water balances in many plains. As a result, the Ministry of Energy began implementing the Groundwater Rehabilitation and Balancing Plan (GRBP) in 2006 to replenish the aquifers. The plan includes measures such as Blocking Illegal Wells (BIW), Equipping Wells with Volumetric Meters (EWVM), Increasing Patrol and Control (IPC) and inspection of the degree of exploitation of groundwater using wells, etc. In this descriptive-analytic study, the researchers examined the level of social agreement between farmers and experts on the effectiveness of the Ministry of Energy's policies for the GRBP and assessed the farmers' response to droughts. The data were collected using questionnaires designed on a Likert scale and analyzed in the R programming language using the t-test, independent-sample t-test, and Friedman test. The Excel file contains data from two different questionnaires collected from farmers and experts. An analysis of the data has been published in the paper "Hashemi, M., Zadeh, H. M., Zarghami, M., Demeke, B. W., & Delgarm, R. T. (2023). An analysis of why rehabilitation and balancing programs for aquifers do not meet water organizations' targets (a case study of the Qazvin aquifer in Iran). Agricultural Water Management, 281, 108258." Other studies may use the data only if they cite it.

  13. Learning opportunity for Euclidean geometry in Further Education and...

    • researchdata.up.ac.za
    xlsx
    Updated Jul 1, 2025
    Cite
    Tinevimbo Zhou (2025). Learning opportunity for Euclidean geometry in Further Education and Training mathematics textbooks [Dataset]. http://doi.org/10.25403/UPresearchdata.29424047.v1
    Available download formats: xlsx
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    University of Pretoria
    Authors
    Tinevimbo Zhou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objective: This study investigates the Euclidean geometry learning opportunities presented in Further Education and Training (FET) mathematics textbooks. Specifically, it examines the alignment of textbook content with the Curriculum and Assessment Policy Statement (CAPS) curriculum, the levels of geometric thinking promoted, representational forms, contextual features, and expected responses.

    Methodology: The research analyzed three FET mathematics textbook series to identify strengths and weaknesses in their Euclidean geometry content. The study adopted an interpretivist paradigm, a qualitative research approach, and a case study research design. Purposive sampling was used to select textbooks currently used for teaching. Textbook analysis served as the data collection method, and deductive content analysis as the data analysis strategy. Interrater reliability, measured as the percentage of agreement between three coders, was used to preserve the quality of data coding and reporting (Belur et al., 2021).

    Data collection: The study employed several textbook analysis instruments designed within its framework: a content coverage instrument, a mathematical activity instrument, a geometric thinking levels instrument, a representation forms instrument, a contextual features instrument, and an answer forms instrument.

    Content coverage instrument: A checklist, developed from the CAPS guideline document for Grades 10-12, listed all topics and subtopics of Euclidean geometry in the grade 10-12 curriculum and assessed whether each content item was covered in the respective textbooks at the corresponding grade level. The aim was a comprehensive assessment of the range of content knowledge that students are required to acquire at each school level (Grades 10-12), using a rubric that provided space to indicate whether a subtopic was covered (tick) or not covered (-). The instrument divided the Euclidean geometry content into three categories (Grade 10, Grade 11, and Grade 12), as stipulated in the CAPS Mathematics guideline document for FET-level mathematics, and was used to determine the extent to which the selected textbooks align with that document. To bolster the objectivity of the results, all checklist items were quantified using dichotomous (yes/no) responses, summarised by scoring rubrics to justify different responses.

    Mathematical activity instrument: A form, designed as a rubric based on Gracin's (2018) mathematical activity framework, was developed to record the nature of the mathematical activities in both worked examples and exercise tasks in each textbook. The rubric breaks each geometry task down into four categories of mathematical activity: representation and modelling (R), calculation and operation (C), interpretation (I), and argumentation and reasoning (A). The form serves as a classification template, categorising tasks according to the competence they demand of students; Table 4.5 presents exemplary geometric tasks, categorised by skill, alongside the corresponding evaluation indicators used to assess mathematical proficiency.

    Representation forms instrument: A rubric was used to capture the type of representation used to present the geometry ideas in each textbook series (see section 3.3), with a designated space for each task. The rubric was divided into four sections: pure mathematics, verbal, visual, and combined forms of problem presentation.

    Data analysis: This study used a qualitative deductive content analysis (QDCA) approach. In a deductive content analysis, the codes and categories are drawn from theoretical considerations rather than from the text itself (Islam & Asadullah, 2018; Pertiwi & Wahidin, 2020). The researcher created nine Excel files, each with a four-column table in which every column represents a category of mathematical activity: Representation (R), Calculation (C), Interpretation (I), and Argumentation (A). Based on Gracin's (2018) framework, the researcher and two scorers read every worked example and exercise task in each textbook, extracted the mathematical activity required to complete the task successfully, and recorded it in the corresponding Excel file. If a task required more than one activity, the one dominantly required by the task author was recorded.

    To examine the geometric thinking embedded in textbook tasks, a rubric was used to categorise tasks by geometric thinking level, from Level 0 to Level 4. For instance, tasks requiring students to define properties of a geometric figure were classified as informal deduction, whereas tasks demanding formal proofs were coded as formal deduction. Worked examples and exercise tasks were reviewed to identify the embedded level of geometric thinking, the levels present in the Euclidean geometry tasks were recorded in Excel tables, and their frequencies were calculated; the predominant levels in each textbook series were then analysed in depth.

    Each task was also classified by answer form following Zhu and Fan (2006), coded as either an open-ended or a closed-ended problem (Figure 4.13), and by representation form: pure mathematical (R1), verbal (R2), visual (R3), or combined (R4). An Excel table recorded the analysis of the representation forms.

    To investigate the contextual features of tasks, an Excel sheet was used to score the type of context in each problem, again following Zhu and Fan (2006): application problems (C1) are tasks presented in real-life situations, illustrating practical applications of mathematical concepts, while non-application problems (C2) lack context and concentrate solely on mathematical procedures and calculations. Tasks presented in situations mirroring real-life scenarios were coded as application tasks, and tasks lacking context as non-application tasks. The coded data were counted and the frequencies recorded in tables in Microsoft Excel, as depicted in Figure 4.13.

    Finally, the study used the CAPS Mathematics guidelines as the foundation for an opportunity-to-learn (OTL) analytical tool to classify the mathematical content. The CAPS analytical tool encompasses the content areas that students should master in all grades. Using a rubric for each textbook series, the researchers conducted a thorough review of each textbook task, utilising the CAPS Mathematics document as a benchmark.

  14. Coral restoration database – Dataset from Bostrom-Einarsson et al 2019 (NESP...

    • researchdata.edu.au
    bin
    Updated 2019
    Cite
    Bostrom-Einarsson, Lisa, Dr.; Ceccarelli, Daniela, Dr.; Cook, Nathan, Mr.; Hein, Margaux, Dr.; Smith, Adam, Dr.; McLeod, Ian M, Dr. (2019). Coral restoration database – Dataset from Bostrom-Einarsson et al 2019 (NESP TWQ 4.3, JCU) [Dataset]. https://researchdata.edu.au/coral-restoration-database-43-jcu/1425277
    Available download formats: bin
    Dataset updated
    2019
    Dataset provided by
    eAtlas
    Authors
    Bostrom-Einarsson, Lisa, Dr.; Ceccarelli, Daniela, Dr.; Cook, Nathan, Mr.; Hein, Margaux, Dr.; Smith, Adam, Dr.; McLeod, Ian M, Dr.
    License

    Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2017 - Jan 31, 2019
    Description

    This dataset consists of a review of case studies and descriptions of coral restoration methods from four sources: 1) the primary literature (i.e. published peer-reviewed scientific literature), 2) grey literature (e.g. scientific reports and technical summaries from experts in the field), 3) online descriptions (e.g. blogs and online videos describing projects), and 4) an online survey targeting restoration practitioners (doi:10.5061/dryad.p6r3816).

    Included are only those case studies which actively conducted coral restoration (i.e. at least one stage of scleractinian coral life-history was involved). This excludes indirect coral restoration projects, such as disturbance mitigation (e.g. predator removal, disease control etc.) and passive restoration interventions (e.g. enforcement of control against dynamite fishing or water quality improvement). It also excludes many artificial reefs, in particular if the aim was fisheries enhancement (i.e. fish aggregation devices), and if corals were not included in the method. To the best of our abilities, duplication of case studies was avoided across the four separate sources, so that each case in the review and database represents a separate project.

    This dataset is currently under embargo until the review manuscript is published.

    Methods: More than 40 separate categories of data were recorded from each case study and entered into a database. These included data on (1) the information source, (2) the case study particulars (e.g. location, duration, spatial scale, objectives, etc.), (3) specific details about the methods, (4) coral details (e.g. genus, species, morphology), (5) monitoring details, and (6) the outcomes and conclusions.

    Primary literature: Multiple search engines were used to achieve the most complete coverage of the scientific literature. First, the scientific literature was searched using Google Scholar with the keywords “coral* + restoration”. Because the field (and therefore the search results) is dominated by transplantation studies, separate searches were then conducted for other common techniques using “coral* + restoration + [technique name]”. This search was further complemented by using the same keywords in ISI Web of Knowledge (search yield n=738). Studies that fulfilled our criteria for active coral restoration, described above, were then manually selected (final yield n=221). In cases where a single paper describes several different projects or methods, these were split into separate case studies. Finally, prior reviews of coral restoration were consulted to obtain case studies from their reference lists.

    Grey literature: While many reports appeared in the Google Scholar literature searches, a search of The Nature Conservancy (TNC) database of reports for North American coastal restoration projects (http://projects.tnc.org/coastal/) was also conducted. This was supplemented with reports listed in the reference lists of other papers, reports and reviews, and found during the online searches (n=30).

    Online records: Small-scale projects conducted without substantial input from researchers, academics, non-governmental organisations (NGO) or coral reef managers often do not result in formal written accounts of methods. To access this information, we conducted online searches of YouTube, Facebook and Google, using the search term “Coral restoration”. The information provided in videos, blog posts and websites describing further projects (n=48) was also used. Due to the unverified nature of such accounts, the data collected from these online-only records was limited compared to peer-reviewed literature and surveys. At a minimum, the location, the methods used, and reported outcomes or lessons learned were included in this review.

    Online survey: To access information from projects not published elsewhere, an online survey targeting restoration practitioners was designed. The survey consisted of 25 questions querying restoration practitioners regarding projects they had undertaken under JCU human ethics H7218 (following the Australian National Statement on Ethical Conduct in Human Research, 2007). These data (n=63) are included in all calculations within this review, but are not publicly available to preserve the anonymity of participants. Although we encouraged participants to fill out a separate survey for each case study, it is possible that participants included multiple separate projects in a single survey, which may reduce the real number of case studies reported.

    Data analysis: Percentages, counts and other quantifications from the database refer to the total number of case studies with data in that category. Case studies where data were lacking for the category in question, or lack appropriate detail (e.g. reporting ‘mixed’ for coral genera) are not included in calculations. Many categories allowed multiple answers (e.g. coral species); these were split into separate records for calculations (e.g. coral species n). For this reason, absolute numbers may exceed the number of case studies in the database. However, percentages reflect the proportion of case studies in each category. We used the seven objectives outlined in [1] to classify the objective of each case study, with an additional two categories (‘scientific research’ and ‘ecological engineering’). We used Tableau to visualise and analyse the database (Desktop Professional Edition, version 10.5, Tableau Software). The data have been made available following the FAIR Guiding Principles for scientific data management and stewardship [2]. Data are available from the Dryad Digital Repository (https://doi.org/10.5061/dryad.p6r3816) and can be visually explored at: https://public.tableau.com/views/CoralRestorationDatabase-Visualisation/Coralrestorationmethods?:embed=y&:display_count=yes&publish=yes&:showVizHome=no#1.
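    A pandas sketch of this counting convention (file and column names are hypothetical): multi-answer cells are split into separate records before counting, and percentages are taken over the case studies with data in that category.

    ```python
    # Split multi-answer cells into one record per answer, then take shares.
    import pandas as pd

    db = pd.read_excel("coral_restoration_database.xlsx")  # hypothetical file

    genera = (db["coral_genus"]
                .dropna()           # cases lacking data are excluded
                .str.split(";")     # several answers in one cell
                .explode()          # one record per answer
                .str.strip())
    print(genera.value_counts(normalize=True).mul(100).round(1))
    ```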

    Limitations: While our expanded search enabled us to avoid the bias from the more limited published literature, we acknowledge that using sources that have not undergone rigorous peer-review potentially introduces another bias. Many government reports undergo an informal peer-review; however, survey results and online descriptions may present a subjective account of restoration outcomes. To reduce subjective assessment of case studies, we opted not to interpret results or survey answers, instead only recording what was explicitly stated in each document [3, 4].

    Defining restoration: In this review, active restoration methods are methods which reintroduce coral (e.g. coral fragment transplantation, or larval enhancement) or augment coral assemblages (e.g. substrate stabilisation, or algal removal), for the purposes of restoring the reef ecosystem. In the published literature and elsewhere, there are many terms that describe the same intervention. For clarity, we provide the terms we have used in the review, their definitions and alternative terms (see references). Passive restoration methods such as predator removal (e.g. crown-of-thorns starfish and Drupella control) have been excluded, unless they were conducted in conjunction with active restoration (e.g. macroalgal removal combined with transplantation).

    Format: The data is supplied as an Excel file with three separate tabs for 1) peer-reviewed literature, 2) grey literature, and 3) a description of the objectives from Hein et al. 2017. Survey responses have been excluded to preserve the anonymity of the respondents.

    This dataset is a database that underpins a 2018 report and 2019 published review of coral restoration methods from around the world. - Bostrom-Einarsson L, Ceccarelli D, Babcock R.C., Bayraktarov E, Cook N, Harrison P, Hein M, Shaver E, Smith A, Stewart-Sinclair P.J, Vardi T, McLeod I.M. 2018 - Coral restoration in a changing world - A global synthesis of methods and techniques, report to the National Environmental Science Program. Reef and Rainforest Research Centre Ltd, Cairns (63pp.). - Review manuscript is currently under review.

    Data Dictionary: The Data Dictionary is embedded in the Excel spreadsheet. Comments are included in the column titles to aid interpretation, and/or refer to additional information tabs. For more information on each column, open the red triangle located at the top right of the cell.

    References:
    1. Hein MY, Willis BL, Beeden R, Birtles A. The need for broader ecological and socioeconomic tools to evaluate the effectiveness of coral restoration programs. Restoration Ecology. Wiley/Blackwell; 2017;25:873-883. doi:10.1111/rec.12580
    2. Wilkinson MD, Dumontier M, Aalbersberg IJ, Appleton G, Axton M, Baak A, et al. The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data. Nature Publishing Group; 2016;3:160018. doi:10.1038/sdata.2016.18
    3. Miller RL, Marsh H, Cottrell A, Hamann M. Protecting Migratory Species in the Australian Marine Environment: A Cross-Jurisdictional Analysis of Policy and Management Plans. Front Mar Sci. Frontiers; 2018;5:211. doi:10.3389/fmars.2018.00229
    4. Ortega-Argueta A, Baxter G, Hockings M. Compliance of Australian threatened species recovery plans with legislative requirements. Journal of Environmental Management. Elsevier; 2011;92:2054-2060.

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\2018-2021-NESP-TWQ-4\4.3_Best-practice-coral-restoration

  15. Data from: Dataset: Critically examining the knowledge base required to...

    • fdr.uni-hamburg.de
    xlsx
    Updated Apr 29, 2019
    Cite
    Ignacio A. Catalán; Dominik Auch; Pauline Kamermans; Beatriz Morales‐Nin; Natalie V. Angelopoulos; Patricia Reglero; Tina Sandersfeld; Myron A. Peck (2019). Dataset: Critically examining the knowledge base required to mechanistically project climate impacts: A case study of Europe's fish and shellfish [Dataset]. http://doi.org/10.25592/uhhfdm.117
    Available download formats: xlsx
    Dataset updated
    Apr 29, 2019
    Dataset provided by
    Hull International Fisheries Institute, School of Environmental Sciences, University of Hull, Hull, UK
    Centre Oceanogràfic de les Balears, IEO, Palma, Balearic Islands, Spain
    Institute of Marine Ecosystem and Fisheries Science (IMF), Center for Earth System Research and Sustainability (CEN), University of Hamburg, Hamburg, Germany
    Wageningen Marine Research (WMR), Wageningen University and Research, Yerseke, The Netherlands
    Mediterranean Institute for Advanced Studies (IMEDEA, CSIC‐UIB), Esporles, Balearic Islands, Spain
    Authors
    Ignacio A. Catalán; Dominik Auch; Pauline Kamermans; Beatriz Morales‐Nin; Natalie V. Angelopoulos; Patricia Reglero; Tina Sandersfeld; Myron A. Peck
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    The dataset (Excel) corresponds to the data used to generate the gap analysis in the published paper "Critically examining the knowledge base required to mechanistically project climate impacts: A case study of Europe's fish and shellfish" (DOI: 10.1111/faf.12359).

    It contains 245 cases and 14 variables. The explanation of the variables is contained in the paper.
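As a quick sanity check against the stated dimensions, the file can be loaded with pandas; the filename below is a hypothetical stand-in for the downloaded file:

```python
import pandas as pd

# Hypothetical filename; substitute the downloaded dataset file.
df = pd.read_excel("ceres_gap_analysis.xlsx")

print(df.shape)            # expected: (245, 14) per the description above
print(df.columns.tolist()) # the variables, explained in the paper
```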

    Funding information: the project CERES (Climate change and European Aquatic RESources) leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 678193, in H2020‐EU.3.2 ‐ SOCIETAL CHALLENGES ‐ Food security, sustainable agriculture and forestry, marine, maritime and inland water research, and the bioeconomy.

  16. r

    Co-Creation of New Knowledge: Increasing the Research Capacity of Third...

    • researchdata.edu.au
    Updated Mar 26, 2020
    Cite
    Sarah Wayland; Myfanwy Maple; Tania Pearce (2020). Co-Creation of New Knowledge: Increasing the Research Capacity of Third Sector Organisations (TSO) - Dataset [Dataset]. http://doi.org/10.25952/SPGB-5A73
    Explore at:
    Dataset updated
    Mar 26, 2020
    Dataset provided by
    University of New England, Australia
    University of New England
    Authors
    Sarah Wayland; Myfanwy Maple; Tania Pearce
    Description

    The files contained in the zip folder relate to the raw data collected for the content analysis of papers using co-creation-related terminology. Each file has been labelled using a year/month/date/filename format and includes both Excel (.xls) and Word (.doc) documents.

    Dataset 1: The files range from results of academic searches and analysis of those records using NVivo and Excel, to coding sheets completed by blind reviewers, tables and figures developed to help define the co-creation of new knowledge framework, and a PRISMA flow chart demonstrating the record screening process. These data were used as evidence to formulate a proposed definition of co-creation of new knowledge.

    Dataset 2: This data set represents raw data for a systematic review of multi-sectoral collaborations in the field of mental health and suicide prevention. It contains files relating to the screening of full-text records, the quality assessment of included records, and a development log of tables and figures used in the analysis of the data.

    Dataset 3: This data set comprises research materials relating to the analysis of qualitative data for a study of social policy. It contains audio recordings and tables.

    Dataset 4: This data set comprises research materials relating to the analysis of qualitative data for an applied case study. It contains audio recordings, transcripts, coding frameworks, and tables.

  17. f

    Additional file 1 of Evidencing the impact of cancer trials: insights from...

    • springernature.figshare.com
    xlsx
    Updated May 30, 2023
    Cite
    Catherine R. Hanna; Lauren P. Gatting; Kathleen Anne Boyd; Kathryn A. Robb; Rob J. Jones (2023). Additional file 1 of Evidencing the impact of cancer trials: insights from the 2014 UK Research Excellence Framework [Dataset]. http://doi.org/10.6084/m9.figshare.12440696.v1
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Catherine R. Hanna; Lauren P. Gatting; Kathleen Anne Boyd; Kathryn A. Robb; Rob J. Jones
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United Kingdom
    Description

    Additional file 1: Supplementary material 1: Excel spreadsheet.

  18. f

    Risk factors associated with 30-day in-hospital stroke case fatality in...

    • plos.figshare.com
    xls
    Updated Jan 19, 2024
    Cite
    Martin Ackah; Louise Ameyaw; Richard Appiah; David Owiredu; Hosea Boakye; Webster Donaldy; Comos Yarfi; Ulric S. Abonie (2024). Risk factors associated with 30-day in-hospital stroke case fatality in Sub-Saharan Africa. [Dataset]. http://doi.org/10.1371/journal.pgph.0002769.t003
    Explore at:
    xls
    Available download formats
    Dataset updated
    Jan 19, 2024
    Dataset provided by
    PLOS Global Public Health
    Authors
    Martin Ackah; Louise Ameyaw; Richard Appiah; David Owiredu; Hosea Boakye; Webster Donaldy; Comos Yarfi; Ulric S. Abonie
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sub-Saharan Africa
    Description

    Risk factors associated with 30-day in-hospital stroke case fatality in Sub-Saharan Africa.

  19. h

    Supporting data for "Reducing Embodied Carbon of High-rise Modular...

    • datahub.hku.hk
    zip
    Updated Mar 27, 2025
    Cite
    Yang Zhang (2025). Supporting data for "Reducing Embodied Carbon of High-rise Modular Residential Buildings by Systematic Structural and Material Optimization" [Dataset]. http://doi.org/10.25442/hku.22337104.v1
    Explore at:
    zip
    Available download formats
    Dataset updated
    Mar 27, 2025
    Dataset provided by
    HKU Data Repository
    Authors
    Yang Zhang
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    These files contain example datasets supporting the PhD thesis "Reducing Embodied Carbon of High-rise Modular Residential Buildings by Systematic Structural and Material Optimization".

    The folder "Chapter 4" presents part of the calculated embodied carbon (EC) results and datasets for the selected concrete MiC high-rise residential building project. The results were calculated in SimaPro, based on the collected building project data and the software's embedded Ecoinvent database. EC results for the building components and for the typical floor area, from six EC sources, are shown in the file.

    The folder "Chapter 5" presents part of the collected datasets showing case study results on the use of low-carbon structural design (LCSD) measures in buildings. Data from the identified case study articles are presented for each system boundary defined for analysing LCSD. These data were used as the foundation for the corresponding analysis in the thesis.

    The folder "Chapter 6" contains an Excel file with the initial structural analysis results of one design scenario of the MiC building case. These results were obtained from the ETABS software based on the structural design data of the building case, and were used for the EC reduction analysis of LCSD measures in the thesis. The EC reduction potential of various LCSD measures can be analysed from these results together with the data of other design scenarios in the same format. The EC results and corresponding design variables of the 200 feasible design scenarios under Layout Scenario L1 of the MiC building case are summarised in the Origin project file (C45 L1.opju).

  20. 4

    Data for Chapter 3 of PhD thesis: Bridging Technology and Society (BTS)...

    • data.4tu.nl
    zip
    Updated Jun 4, 2025
    Cite
    Sivaramakrishnan Chandrasekaran; Patricia Osseweijer; John Posada (2025). Data for Chapter 3 of PhD thesis: Bridging Technology and Society (BTS) Towards context-specific, inclusive, and sustainable design of bio-based value chains for marine biofuels [Dataset]. http://doi.org/10.4121/c9dace5b-64d8-4ee5-889f-2e556e9d8791.v1
    Explore at:
    zip
    Available download formats
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    4TU.ResearchData
    Authors
    Sivaramakrishnan Chandrasekaran; Patricia Osseweijer; John Posada
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2024
    Dataset funded by
    Dutch Research Council (NWO)
    Description

    This dataset belongs to the PhD thesis of Sivaramakrishnan Chandrasekaran titled "Bridging Technology and Society (BTS)- Towards context-specific, inclusive, and sustainable design of bio-based value chains for marine biofuels".

    Specifically, the dataset belongs to Chapter 3 titled "Agrarian Biohubs for drop-in marine biofuels: A techno-economic and environmental assessment for Spain, Colombia, and Namibia using field residues".


    Authors: Sivaramakrishnan Chandrasekaran, Patricia Osseweijer, and John Posada

    Corresponding authors: Sivaramakrishnan Chandrasekaran and Patricia Osseweijer

    Contact information: S.Chandrasekaran@tudelft.nl and P.Osseweijer@tudelft.nl


    This dataset contains data collected from simulations carried out as part of Sivaramakrishnan's PhD project between 2021 and 2024.


    The dataset comprises:

    Aspen Plus simulations for the three case studies (Spain, Colombia, and Namibia)

    Excel sheets for mass balances, energy balances, and the techno-economic and environmental assessments

    All data processing and analysis steps are described in detail in the Methods section of the publication.

    The data is grouped into two zip files:

    i) Aspen Plus simulation files, named after the case study location and processing capacity

    ii) Excel sheets for mass balances, energy balances, and the techno-economic and environmental assessments, also named after the case study location and processing capacity


Cite
F. (Fabiano) Dalpiaz (2020). UC_vs_US Statistic Analysis.xlsx [Dataset]. http://doi.org/10.23644/uu.12631628.v1

UC_vs_US Statistic Analysis.xlsx

Explore at:
xlsx
Available download formats
Dataset updated
Jul 9, 2020
Dataset provided by
Utrecht University
Authors
F. (Fabiano) Dalpiaz
License

Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description

Sheet 1 (Raw-Data): The raw data of the study is provided, presenting the tagging results for the measures described in the paper. For each subject, it includes multiple columns:

A. a sequential student ID
B. an ID that defines a random group label and the notation
C. the notation used: user stories or use cases
D. the case they were assigned to: IFA, Sim, or Hos
E. the subject's exam grade (total points out of 100); empty cells mean that the subject did not take the first exam
F. a categorical representation of the grade as L/M/H, where H is greater than or equal to 80, M is between 65 (included) and 80 (excluded), and L otherwise (see the sketch after this list)
G. the total number of classes in the student's conceptual model
H. the total number of relationships in the student's conceptual model
I. the total number of classes in the expert's conceptual model
J. the total number of relationships in the expert's conceptual model
K-O. the total number of encountered situations of alignment, wrong representation, system orientation, omission, and missing classes (see the tagging scheme below)
P. the researchers' judgement on how well the derivation process was explained by the student: well explained (a systematic mapping that can be easily reproduced), partially explained (vague indication of the mapping), or not present
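The banding in column F is a simple threshold rule; a minimal illustrative sketch in Python (the function name is hypothetical):

```python
def grade_category(points):
    """Map an exam grade (0-100) to the L/M/H category used in column F."""
    if points >= 80:
        return "H"
    if points >= 65:
        return "M"
    return "L"
```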

Tagging scheme:

Aligned (AL) - A concept is represented as a class in both models, either with the same name or using synonyms or clearly linkable names;
Wrongly represented (WR) - A class in the domain expert model is incorrectly represented in the student model, either (i) via an attribute, method, or relationship rather than a class, or (ii) using a generic term (e.g., "user" instead of "urban planner");
System-oriented (SO) - A class in CM-Stud that denotes a technical implementation aspect, e.g., access control. Classes that represent a legacy system or the system under design (portal, simulator) are legitimate;
Omitted (OM) - A class in CM-Expert that does not appear in any way in CM-Stud;
Missing (MI) - A class in CM-Stud that does not appear in any way in CM-Expert.

All the calculations and information provided in the following sheets originate from that raw data.

Sheet 2 (Descriptive-Stats): Shows a summary of statistics from the data collection, including the number of subjects per case, per notation, per process derivation rigor category, and per exam grade category.

Sheet 3 (Size-Ratio): The number of classes within the student model divided by the number of classes within the expert model is calculated (describing the size ratio). We provide box plots to allow a visual comparison of the shape of the distribution, its central value, and its variability for each group (by case, notation, process, and exam grade). The primary focus in this study is on the number of classes; however, we also provide the size ratio for the number of relationships between the student and expert models.
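The size ratio and its grouped box plots are straightforward to reproduce; a sketch with pandas and matplotlib, where the column names are hypothetical stand-ins for columns G and I of Sheet 1:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Column names are hypothetical stand-ins for columns G (student) and I (expert).
df = pd.read_excel("UC_vs_US Statistic Analysis.xlsx", sheet_name="Raw-Data")
df["size_ratio"] = df["classes_student"] / df["classes_expert"]

# One box plot per group, e.g. by notation (column C); swap "notation" for
# case, process, or grade category to reproduce the other groupings.
df.boxplot(column="size_ratio", by="notation")
plt.show()
```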

Sheet 4 (Overall): Provides an overview of all subjects regarding the encountered situations, completeness, and correctness. Correctness is defined as the ratio of classes in a student model that are fully aligned with the classes in the corresponding expert model; it is calculated by dividing the number of aligned concepts (AL) by the sum of the aligned concepts (AL), omitted concepts (OM), system-oriented concepts (SO), and wrong representations (WR). Completeness, on the other hand, is defined as the ratio of classes in a student model that are correctly or incorrectly represented over the number of classes in the expert model; it is calculated by dividing the sum of aligned concepts (AL) and wrong representations (WR) by the sum of the aligned concepts (AL), wrong representations (WR), and omitted concepts (OM). The overview is complemented with general diverging stacked bar charts that illustrate correctness and completeness.
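Both ratios reduce to simple functions of the per-subject tag counts (columns K-O); a minimal sketch:

```python
def correctness(al, wr, so, om):
    """AL / (AL + OM + SO + WR): fully aligned classes over all tagged classes."""
    return al / (al + om + so + wr)

def completeness(al, wr, om):
    """(AL + WR) / (AL + WR + OM): expert classes covered, correctly or not."""
    return (al + wr) / (al + wr + om)

# Example: 12 aligned, 3 wrong, 2 system-oriented, 5 omitted.
print(correctness(12, 3, 2, 5))  # 12 / 22 ~ 0.545
print(completeness(12, 3, 5))    # 15 / 20 = 0.75
```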

For Sheet 4, as well as for the following four sheets, diverging stacked bar charts are provided to visualize the effect of each of the independent and mediated variables. The charts are based on the relative numbers of encountered situations for each student. In addition, a "Buffer" is calculated which solely serves the purpose of constructing the diverging stacked bar charts in Excel. Finally, at the bottom of each sheet, the significance (t-test) and effect size (Hedges' g) for both completeness and correctness are provided; Hedges' g was calculated with an online tool (https://www.psychometrica.de/effect_size.html; see the sketch after the sheet list below). The independent and moderating variables can be found as follows:

Sheet 5 (By-Notation): Model correctness and model completeness are compared by notation - UC, US.

Sheet 6 (By-Case): Model correctness and model completeness are compared by case - SIM, HOS, IFA.

Sheet 7 (By-Process): Model correctness and model completeness are compared by how well the derivation process is explained - well explained, partially explained, not present.

Sheet 8 (By-Grade): Model correctness and model completeness are compared by exam grade, converted to the categorical values High, Medium, and Low.
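As noted above, Hedges' g is reported from an online calculator; the statistic can also be reproduced locally. A sketch of the standard formula (pooled standard deviation with a small-sample bias correction), not necessarily the tool's exact implementation:

```python
import math

def hedges_g(x, y):
    """Hedges' g for two independent samples: bias-corrected Cohen's d."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)  # sample variance, ddof=1
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / sp
    # Small-sample correction factor J = 1 - 3 / (4*(nx + ny) - 9)
    return d * (1 - 3 / (4 * (nx + ny) - 9))
```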
