100+ datasets found
  1. Data sets for a quantitative dye tracer test conducted at the Savoy...

    • s.cnmilf.com
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Data sets for a quantitative dye tracer test conducted at the Savoy Experimental Watershed, November 13-December 2, 2017, Savoy, Arkansas [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/data-sets-for-a-quantitative-dye-tracer-test-conducted-at-the-savoy-experimental-watershed
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Savoy
    Description

    These are the data sets in machine readable files from a quantitative dye tracer test conducted at Langle Spring November 13-December 2, 2017 as part of the USGS training class, GW2227 Advanced Field Methods in Karst Terrains, held at the Savoy Experimental Watershed, Savoy, Arkansas. Langle Spring is NWIS site 71948218, latitude 36.11896886, longitude -94.34548871. One pound of Rhodamine WT dye was injected into a sinking stream at latitude 36.116772, longitude -94.341883 (NAD83) on November 13, 2017 at 22:50. The data sets include original fluorimeter data logger files from Langle and Copperhead Springs, laboratory spectrofluorometer files from standards and grab samples, and processed input and output files from the breakthrough curve analysis program Qtracer2 (Field, USEPA, 2002, EPA/600/R-02/001).
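
    The breakthrough-curve analysis performed by Qtracer2 essentially reduces to moment calculations on a concentration-versus-time record. As a rough illustration only (not the USGS workflow or the Qtracer2 algorithm itself), the Python sketch below computes a time-to-peak and a concentration-weighted mean travel time from a fabricated dye concentration series; the variable names, units, and sampling interval are assumptions.

        # Minimal sketch of breakthrough-curve summary statistics, loosely analogous
        # to what Qtracer2 derives from fluorimeter logger data. The series below is
        # fabricated; column layout and units are assumptions, not the USGS files.
        import numpy as np

        t = np.arange(0.0, 120.0, 0.25)  # hours since dye injection, 15-minute steps
        c = 12.0 * np.exp(-((t - 30.0) ** 2) / (2 * 8.0 ** 2))  # fake breakthrough pulse (ug/L)

        time_to_peak = t[np.argmax(c)]                # arrival time of the concentration peak
        mean_travel_time = np.sum(t * c) / np.sum(c)  # concentration-weighted mean (uniform steps)

        print(f"time to peak: {time_to_peak:.1f} h")
        print(f"mean travel time: {mean_travel_time:.1f} h")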

  2. Replication Data for: A Three-Year Mixed Methods Study of Undergraduates’...

    • dataone.org
    • dataverse.azure.uit.no
    • +1more
    Updated Oct 9, 2024
    + more versions
    Cite
    Nierenberg, Ellen (2024). Replication Data for: A Three-Year Mixed Methods Study of Undergraduates’ Information Literacy Development: Knowing, Doing, and Feeling [Dataset]. http://doi.org/10.18710/SK0R1N
    Explore at:
    Dataset updated
    Oct 9, 2024
    Dataset provided by
    DataverseNO
    Authors
    Nierenberg, Ellen
    Time period covered
    Aug 8, 2019 - Jun 10, 2022
    Description

    This data set contains the replication data and supplements for the article "Knowing, Doing, and Feeling: A three-year, mixed-methods study of undergraduates’ information literacy development." The survey data is from two samples:

    • a cross-sectional sample (different students at the same point in time)
    • a longitudinal sample (the same students at different points in time)

    Surveys were distributed via Qualtrics during the students' first and sixth semesters. Quantitative and qualitative data were collected and used to describe students' IL development over 3 years. Statistics from the quantitative data were analyzed in SPSS. The qualitative data were coded and analyzed thematically in NVivo. The qualitative, textual data are from semi-structured interviews with sixth-semester students in psychology at UiT, both focus groups and individual interviews. All data were collected as part of the contact author's PhD research on information literacy (IL) at UiT.

    The following files are included in this data set:

    1. A README file which explains the quantitative data files (2 file formats: .txt, .pdf).
    2. The consent form for participants (in Norwegian) (2 file formats: .txt, .pdf).
    3. Six data files with survey results from UiT psychology undergraduate students for the cross-sectional (n=209) and longitudinal (n=56) samples, in 3 formats (.dat, .csv, .sav). The data was collected in Qualtrics from fall 2019 to fall 2022.
    4. The interview guide for 3 focus group interviews (file format: .txt).
    5. Interview guides for 7 individual interviews - first round (n=4) and second round (n=3) (file format: .txt).
    6. The 21-item IL test (Tromsø Information Literacy Test = TILT), in English and Norwegian. TILT is used for assessing students' knowledge of three aspects of IL: evaluating sources, using sources, and seeking information. The test is multiple choice, with four alternative answers for each item. This test is a "KNOW-measure," intended to measure what students know about information literacy (2 file formats: .txt, .pdf).
    7. Survey questions related to interest - specifically students' interest in being or becoming information literate - in 3 parts (all in English and Norwegian): a) information and questions about the 4 phases of interest; b) an interest questionnaire with 26 items in 7 subscales (Tromsø Interest Questionnaire - TRIQ); c) survey questions about IL and interest, need, and intent (2 file formats: .txt, .pdf).
    8. Information about the assignment-based measures used to measure what students do in practice when evaluating and using sources. Students were evaluated with these measures in their first and sixth semesters (2 file formats: .txt, .pdf).
    9. The Norwegian Centre for Research Data's (NSD) 2019 assessment of the notification form for personal data for the PhD research project, in Norwegian (format: .pdf).

  3. Open Data Training Workshop: Case Studies in Open Data for Qualitative and...

    • borealisdata.ca
    • search.dataone.org
    Updated Apr 18, 2023
    Cite
    Srinivas Murthy; Maggie Woo Kinshella; Jessica Trawin; Teresa Johnson; Niranjan Kissoon; Matthew Wiens; Gina Ogilvie; Gurm Dhugga; J Mark Ansermino (2023). Open Data Training Workshop: Case Studies in Open Data for Qualitative and Quantitative Clinical Research [Dataset]. http://doi.org/10.5683/SP3/BNNAE7
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 18, 2023
    Dataset provided by
    Borealis
    Authors
    Srinivas Murthy; Maggie Woo Kinshella; Jessica Trawin; Teresa Johnson; Niranjan Kissoon; Matthew Wiens; Gina Ogilvie; Gurm Dhugga; J Mark Ansermino
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Dataset funded by
    Digital Research Alliance of Canada
    Description

    Objective(s): Momentum for open access to research is growing. Funding agencies and publishers are increasingly requiring researchers to make their data and research outputs open and publicly available. However, clinical researchers struggle to find real-world examples of Open Data sharing. The aim of this 1-hour virtual workshop is to provide real-world examples of Open Data sharing for both qualitative and quantitative data. Specifically, participants will learn: 1. Primary challenges and successes when sharing quantitative and qualitative clinical research data. 2. Platforms available for open data sharing. 3. Ways to troubleshoot data sharing and publish from open data.

    Workshop Agenda:

    • “Data sharing during the COVID-19 pandemic” - Speaker: Srinivas Murthy, Clinical Associate Professor, Department of Pediatrics, Faculty of Medicine, University of British Columbia; Investigator, BC Children's Hospital.
    • “Our experience with Open Data for the 'Integrating a neonatal healthcare package for Malawi' project” - Speaker: Maggie Woo Kinshella, Global Health Research Coordinator, Department of Obstetrics and Gynaecology, BC Children’s and Women’s Hospital and University of British Columbia.

    This workshop draws on work supported by the Digital Research Alliance of Canada.

    Data Description: Presentation slides, workshop video, and workshop communication:

    • Srinivas Murthy: "Data sharing during the COVID-19 pandemic" presentation and accompanying PowerPoint slides.
    • Maggie Woo Kinshella: "Our experience with Open Data for the 'Integrating a neonatal healthcare package for Malawi' project" presentation and accompanying PowerPoint slides.

    This workshop was developed as part of Dr. Ansermino's Data Champions Pilot Project supported by the Digital Research Alliance of Canada.

    NOTE for restricted files: If you are not yet a CoLab member, please complete our membership application survey to gain access to restricted files within 2 business days. Some files may remain restricted to CoLab members. These files are deemed more sensitive by the file owner and are meant to be shared on a case-by-case basis. Please contact the CoLab coordinator on this page under "collaborate with the pediatric sepsis colab."

  4. Table_1_Raw Data Visualization for Common Factorial Designs Using SPSS: A...

    • frontiersin.figshare.com
    xlsx
    Updated Jun 15, 2023
    Cite
    Florian Loffing (2023). Table_1_Raw Data Visualization for Common Factorial Designs Using SPSS: A Syntax Collection and Tutorial.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2022.808469.s002
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    Frontiers
    Authors
    Florian Loffing
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Transparency in data visualization is an essential ingredient for scientific communication. The traditional approach of visualizing continuous quantitative data solely in the form of summary statistics (i.e., measures of central tendency and dispersion) has repeatedly been criticized for not revealing the underlying raw data distribution. Remarkably, however, systematic and easy-to-use solutions for raw data visualization using the most commonly reported statistical software package for data analysis, IBM SPSS Statistics, are missing. Here, a comprehensive collection of more than 100 SPSS syntax files and an SPSS dataset template are presented and made freely available that allow the creation of transparent graphs for one-sample designs, for one- and two-factorial between-subject designs, for selected one- and two-factorial within-subject designs as well as for selected two-factorial mixed designs and, with some creativity, even beyond (e.g., three-factorial mixed designs). Depending on graph type (e.g., pure dot plot, box plot, and line plot), raw data can be displayed along with standard measures of central tendency (arithmetic mean and median) and dispersion (95% CI and SD). The free-to-use syntax can also be modified to match individual needs. A variety of example applications of the syntax are illustrated in a tutorial-like fashion along with fictitious datasets accompanying this contribution. The syntax collection will hopefully provide researchers, students, teachers, and others working with SPSS with a valuable tool to move towards more transparency in data visualization.
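
    The syntax collection itself targets IBM SPSS, but the kind of graph it describes (raw data points overlaid with a measure of central tendency and a 95% CI) can be approximated in a few lines of Python for readers without SPSS. This is a minimal sketch with fabricated group data, not part of the published syntax collection.

        # Illustrative Python analogue of the raw-data-plus-summary plots described
        # above: a jittered dot plot per group with mean and ~95% CI overlaid.
        # Data are fabricated; group names are placeholders.
        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        groups = {"Control": rng.normal(50, 10, 30), "Treatment": rng.normal(58, 10, 30)}

        fig, ax = plt.subplots()
        for i, (name, values) in enumerate(groups.items()):
            jitter = rng.uniform(-0.1, 0.1, values.size)
            ax.plot(np.full(values.size, i) + jitter, values, "o", alpha=0.4)   # raw data
            mean = values.mean()
            ci = 1.96 * values.std(ddof=1) / np.sqrt(values.size)               # ~95% CI of the mean
            ax.errorbar(i, mean, yerr=ci, fmt="s", color="black", capsize=4)    # mean with CI bars

        ax.set_xticks(range(len(groups)))
        ax.set_xticklabels(list(groups.keys()))
        ax.set_ylabel("Outcome (arbitrary units)")
        plt.show()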

  5. Wiki-Quantities and Wiki-Measurements: Datasets of Quantities and their...

    • zenodo.org
    bin, zip
    Updated Feb 12, 2025
    Cite
    Jan Göpfert; Patrick Kuckertz; Jann M. Weinand; Detlef Stolten (2025). Wiki-Quantities and Wiki-Measurements: Datasets of Quantities and their Measurement Context from Wikipedia [Dataset]. http://doi.org/10.5281/zenodo.14858280
    Explore at:
    Available download formats: zip, bin
    Dataset updated
    Feb 12, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jan Göpfert; Patrick Kuckertz; Jann M. Weinand; Detlef Stolten
    Description

    The task of measurement extraction is typically approached in a pipeline manner, where 1) quantities are identified before 2) their individual measurement context is extracted (see our review paper). To support the development and evaluation of systems for measurement extraction, we present two large datasets that correspond to the two tasks:

    • Wiki-Quantities, a dataset for identifying quantities, and
    • Wiki-Measurements, a dataset for extracting measurement context for given quantities.

    The datasets are heuristically generated from Wikipedia articles and Wikidata facts. For a detailed description of the datasets, please refer to the upcoming corresponding paper:

    Wiki-Quantities and Wiki-Measurements: Datasets of Quantities and their Measurement Context from Wikipedia. 2025. Jan Göpfert, Patrick Kuckertz, Jann M. Weinand, and Detlef Stolten.

    Versions

    The datasets are released in different versions:

    • Processing level: the pre-processed versions can be used directly for training and evaluating models, while the raw versions can be used to create custom pre-processed versions or for other purposes. Wiki-Quantities is pre-processed for IOB sequence labeling, while Wiki-Measurements is pre-processed for SQuAD-style generative question answering.
    • Filtering level:
      • Wiki-Quantities is available in a raw, large, small, and tiny version: The raw version is the original version, which includes all the samples originally obtained. In the large version, all duplicates and near duplicates present in the raw version are removed. The small and tiny versions are subsets of the large version which are additionally filtered to balance the data with respect to units, properties, and topics.
      • Wiki-Measurements is available in a large, small, large_strict, small_strict, small_context, and large_strict_context version: The large version contains all examples minus a few duplicates. The small version is a subset of the large version with very similar examples removed. In the context versions, additional sentences are added around the annotated sentence. In the strict versions, the quantitative facts are more strictly aligned with the text.
    • Quality: all data has been automatically annotated using heuristics. In contrast to the silver data, the gold data has been manually curated.

    Format

    The datasets are stored in JSON format. The pre-processed versions are formatted for direct use for IOB sequence labeling or SQuAD-style generative question answering in NLP frameworks such as Huggingface Transformers. In the not pre-processed versions of the datasets, annotations are visualized using emojis to facilitate curation. For example:

    • Wiki-Quantities (only quantities annotated):
      • "In a 🍏100-gram🍏 reference amount, almonds supply 🍏579 kilocalories🍏 of food energy."
      • "Extreme heat waves can raise readings to around and slightly above 🍏38 °C🍏, and arctic blasts can drop lows to 🍏−23 °C to 0 °F🍏."
      • "This sail added another 🍏0.5 kn🍏."
    • Wiki-Measurements (measurement context for a single quantity; qualifiers and quantity modifiers are only sparsely annotated):
      • "The 🔭French national census🔭 of 📆2018📆 estimated the 🍊population🍊 of 🌶️Metz🌶️ to be 🍐116,581🍐, while the population of Metz metropolitan area was about 368,000."
      • "The 🍊surface temperature🍊 of 🌶️Triton🌶️ was 🔭recorded by Voyager 2🔭 as 🍐-235🍐 🍓°C🍓 (-391 °F)."
      • "🙋The Babylonians🙋 were able to find that the 🍊value🍊 of 🌶️pi🌶️ was ☎️slightly greater than☎️ 🍐3🍐, by simply 🔭making a big circle and then sticking a piece of rope onto the circumference and the diameter, taking note of their distances, and then dividing the circumference by the diameter🔭."

    The mapping of annotation types to emojis is as follows:

    • Basic quantitative statement:
      • Entity: 🌶️
      • Property: 🍊
      • Quantity: 🍏
      • Value: 🍐
      • Unit: 🍓
      • Quantity modifier: ☎️
    • Qualifier:
      • Temporal scope: 📆
      • Start time: ⏱️
      • End time: ⏰️
      • Location: 📍
      • Reference: 🙋
      • Determination method: 🔭
      • Criterion used: 📏
      • Applies to part: 🦵
      • Scope: 🔎
      • Some qualifier: 🛁

    Note that for each version of Wiki-Measurements sample IDs are randomly assigned. Therefore, they are not consistent, e.g., between silver small and silver large. The proportions of train, dev, and test sets are unusual because Wiki-Quantities and Wiki-Measurements are intended to be used in conjunction with other non-heuristically generated data.
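
    To make the relationship between the emoji-annotated raw form and the pre-processed IOB form concrete, the Python sketch below strips the 🍏 quantity markers from a raw Wiki-Quantities sentence and emits token-level IOB tags. Whitespace tokenization and the B-QUANT/I-QUANT label names are assumptions for illustration; the released pre-processed files may tokenize and label differently.

        # Sketch: convert an emoji-annotated Wiki-Quantities sentence into IOB tags.
        # Whitespace tokenization and the label names are assumptions; the released
        # pre-processed files may differ.
        def emoji_to_iob(text, marker="\U0001F34F"):  # U+1F34F is the green-apple quantity marker
            pairs = []
            for i, segment in enumerate(text.split(marker)):  # segments alternate outside/inside
                inside = i % 2 == 1
                for j, token in enumerate(segment.split()):
                    pairs.append((token, ("B-QUANT" if j == 0 else "I-QUANT") if inside else "O"))
            return pairs

        sentence = ("In a \U0001F34F100-gram\U0001F34F reference amount, almonds supply "
                    "\U0001F34F579 kilocalories\U0001F34F of food energy.")
        for token, tag in emoji_to_iob(sentence):
            print(f"{token}\t{tag}")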

    Evaluation

    The evaluation directories contain the manually validated random samples used for evaluation. The evaluation is based on the large versions of the datasets. Manual validation of 100 samples each of Wiki-Quantities and Wiki-Measurements showed that 100% of the Wiki-Quantities samples and 94% (or 84% if strictly scored) of the Wiki-Measurements samples were correct.

    License

    In accordance with Wikipedia's and Wikidata's licensing terms, the datasets are released under the CC BY-SA 4.0 license, except for Wikidata facts in ./Wiki-Measurements/raw/additional_data.json, which are released under the CC0 1.0 license (the texts are still CC BY-SA 4.0).

    About Us

    We are the Institute of Climate and Energy Systems (ICE) - Jülich Systems Analysis belonging to the Forschungszentrum Jülich. Our interdisciplinary department's research is focusing on energy-related process and systems analyses. Data searches and system simulations are used to determine energy and mass balances, as well as to evaluate performance, emissions and costs of energy systems. The results are used for performing comparative assessment studies between the various systems. Our current priorities include the development of energy strategies, in accordance with the German Federal Government’s greenhouse gas reduction targets, by designing new infrastructures for sustainable and secure energy supply chains and by conducting cost analysis studies for integrating new technologies into future energy market frameworks.

    Acknowledgements

    The authors would like to thank the German Federal Government, the German State Governments, and the Joint Science Conference (GWK) for their funding and support as part of the NFDI4Ing consortium. Funded by the German Research Foundation (DFG) – project number: 442146713. Furthermore, this work was supported by the Helmholtz Association under the program "Energy System Design".

  6. Predictive Validity Data Set

    • figshare.com
    txt
    Updated Dec 18, 2022
    Cite
    Antonio Abeyta (2022). Predictive Validity Data Set [Dataset]. http://doi.org/10.6084/m9.figshare.17030021.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Dec 18, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Antonio Abeyta
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Verbal and Quantitative Reasoning GRE scores and percentiles were collected by querying the student database for the appropriate information. Any student records that were missing data such as GRE scores or grade point average were removed from the study before the data were analyzed. The GRE scores of entering doctoral students from 2007-2012 were collected and analyzed. A total of 528 student records were reviewed. Ninety-six records were removed from the data because of a lack of GRE scores. Thirty-nine of these records belonged to MD/PhD applicants who were not required to take the GRE to be reviewed for admission. Fifty-seven more records were removed because they did not have an admissions committee score in the database. After 2011, the GRE’s scoring system was changed from a scale of 200-800 points per section to 130-170 points per section. As a result, 12 more records were removed because their scores were representative of the new scoring system and therefore could not be compared to the older scores based on raw score. After removal of these records from our analyses, a total of 420 student records remained, which included students that were currently enrolled, left the doctoral program without a degree, or left the doctoral program with an MS degree. To maintain consistency in the participants, we removed 100 additional records so that our analyses only considered students that had graduated with a doctoral degree. In addition, thirty-nine admissions scores were identified as outliers by statistical analysis software and removed for a final data set of 286 (see Outliers below).

    Outliers: We used the automated ROUT method included in the PRISM software to test the data for the presence of outliers which could skew our data. The false discovery rate for outlier detection (Q) was set to 1%. After removing the 96 students without a GRE score, 432 students were reviewed for the presence of outliers. ROUT detected 39 outliers that were removed before statistical analysis was performed.

    Sample: See detailed description in the Participants section. Linear regression analysis was used to examine potential trends between GRE scores, GRE percentiles, normalized admissions scores or GPA and outcomes between selected student groups. The D’Agostino & Pearson omnibus and Shapiro-Wilk normality tests were used to test for normality regarding outcomes in the sample. The Pearson correlation coefficient was calculated to determine the relationship between GRE scores, GRE percentiles, admissions scores or GPA (undergraduate and graduate) and time to degree. Candidacy exam results were divided into students who either passed or failed the exam. A Mann-Whitney test was then used to test for statistically significant differences between mean GRE scores, percentiles, and undergraduate GPA and candidacy exam results. Other variables were also observed such as gender, race, ethnicity, and citizenship status within the samples.

    Predictive Metrics: The input variables used in this study were GPA and scores and percentiles of applicants on both the Quantitative and Verbal Reasoning GRE sections. GRE scores and percentiles were examined to normalize variances that could occur between tests.

    Performance Metrics: The output variables used in the statistical analyses of each data set were either the amount of time it took for each student to earn their doctoral degree, or the student’s candidacy examination result.
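
    As a rough illustration of the two kinds of tests named above (Pearson correlation for continuous outcomes, Mann-Whitney for pass/fail comparisons), the Python sketch below runs both on fabricated GRE-style data. It is not the study's PRISM analysis, and the variable names and numbers are invented.

        # Sketch of the analyses described above on fabricated data: Pearson correlation
        # (GRE score vs. time to degree) and a Mann-Whitney test (GRE scores of students
        # who passed vs. failed the candidacy exam). Not the study's PRISM workflow.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        gre_quant = rng.normal(650, 60, 286)                                    # old 200-800 scale
        time_to_degree = 6.0 - 0.002 * (gre_quant - 650) + rng.normal(0, 0.7, 286)
        passed_exam = rng.random(286) < 0.8                                     # candidacy exam outcome

        r, p_corr = stats.pearsonr(gre_quant, time_to_degree)
        u, p_mw = stats.mannwhitneyu(gre_quant[passed_exam], gre_quant[~passed_exam])

        print(f"Pearson r = {r:.2f} (p = {p_corr:.3g})")
        print(f"Mann-Whitney U = {u:.0f} (p = {p_mw:.3g})")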

  7. Bangladesh - Handbook on Impact Evaluation: Quantitative Methods and...

    • wbwaterdata.org
    Updated Mar 16, 2020
    + more versions
    Cite
    (2020). Bangladesh - Handbook on Impact Evaluation: Quantitative Methods and Practices - Exercises 2009 - Dataset - waterdata [Dataset]. https://wbwaterdata.org/dataset/bangladesh-handbook-impact-evaluation-quantitative-methods-and-practices-exercises-2009
    Explore at:
    Dataset updated
    Mar 16, 2020
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Bangladesh
    Description

    This exercise dataset was created for researchers interested in learning how to use the models described in the "Handbook on Impact Evaluation: Quantitative Methods and Practices" by S. Khandker, G. Koolwal and H. Samad, World Bank, October 2009 (permanent URL http://go.worldbank.org/FE8098BI60). Public programs are designed to reach certain goals and beneficiaries. Methods to understand whether such programs actually work, as well as the level and nature of impacts on intended beneficiaries, are main themes of this book. Has the Grameen Bank, for example, succeeded in lowering consumption poverty among the rural poor in Bangladesh? Can conditional cash transfer programs in Mexico and Latin America improve health and schooling outcomes for poor women and children? Does a new road actually raise welfare in a remote area in Tanzania, or is it a "highway to nowhere?" This handbook reviews quantitative methods and models of impact evaluation. It begins by reviewing the basic issues pertaining to an evaluation of an intervention to reach certain targets and goals. It then focuses on the experimental design of an impact evaluation, highlighting its strengths and shortcomings, followed by discussions on various non-experimental methods. The authors also cover methods to shed light on the nature and mechanisms by which different participants are benefiting from the program. The handbook provides STATA exercises in the context of evaluating major microcredit programs in Bangladesh, such as the Grameen Bank. This dataset provides both the related Stata data files and the Stata programs.
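
    Non-experimental methods of the kind the handbook discusses include the double difference (difference-in-differences). As a minimal Python sketch of that idea on fabricated data (not drawn from the Bangladesh exercise files), the program impact is estimated as the change over time among participants minus the change among non-participants:

        # Minimal double-difference (difference-in-differences) sketch on fabricated data.
        # Illustrates the idea only; the handbook's exercises implement such methods in
        # Stata on the actual Bangladesh microcredit survey files.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 1000
        treated = rng.random(n) < 0.5                                  # program participants
        baseline = rng.normal(100, 15, n)                              # outcome before the program
        followup = baseline + 5 + 8 * treated + rng.normal(0, 5, n)    # simulated true impact = 8

        did = (followup[treated].mean() - baseline[treated].mean()) - (
            followup[~treated].mean() - baseline[~treated].mean())
        print(f"double-difference estimate of program impact: {did:.2f}")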

  8. Standardization in Quantitative Imaging: A Multi-center Comparison of...

    • cancerimagingarchive.net
    • stage.cancerimagingarchive.net
    n/a, nifti and zip +1
    Updated Jun 9, 2020
    Cite
    The Cancer Imaging Archive, Standardization in Quantitative Imaging: A Multi-center Comparison of Radiomic Feature Values [Dataset]. http://doi.org/10.7937/tcia.2020.9era-gg29
    Explore at:
    Available download formats: xlsx, n/a, nifti and zip
    Dataset updated
    Jun 9, 2020
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Jun 9, 2020
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    This dataset was used by the NCI's Quantitative Imaging Network (QIN) PET-CT Subgroup for their project titled: Multi-center Comparison of Radiomic Features from Different Software Packages on Digital Reference Objects and Patient Datasets. The purpose of this project was to assess the agreement among radiomic features when computed by several groups by using different software packages under very tightly controlled conditions, which included common image data sets and standardized feature definitions. The image datasets (and Volumes of Interest – VOIs) provided here are the same ones used in that project and reported in the publication listed below (ISSN 2379-1381 https://doi.org/10.18383/j.tom.2019.00031). In addition, we have provided detailed information about the software packages used (Table 1 in that publication) as well as the individual feature value results for each image dataset and each software package that was used to create the summary tables (Tables 2, 3 and 4) in that publication. For that project, nine common quantitative imaging features were selected for comparison including features that describe morphology, intensity, shape, and texture and that are described in detail in the International Biomarker Standardisation Initiative (IBSI, https://arxiv.org/abs/1612.07003 and publication (Zwanenburg A. Vallières M, et al, The Image Biomarker Standardization Initiative: Standardized Quantitative Radiomics for High-Throughput Image-based Phenotyping. Radiology. 2020 May;295(2):328-338. doi: https://doi.org/10.1148/radiol.2020191145). There are three datasets provided – two image datasets and one dataset consisting of four excel spreadsheets containing feature values.

    1. The first image dataset is a set of three Digital Reference Objects (DROs) used in the project, which are: (a) a sphere with uniform intensity, (b) a sphere with intensity variation (c) a nonspherical (but mathematically defined) object with uniform intensity. These DROs were created by the team at Stanford University and are described in (Jaggi A, Mattonen SA, McNitt-Gray M, Napel S. Stanford DRO Toolkit: digital reference objects for standardization of radiomic features. Tomography. 2019;6:–.) and are a subset of the DROs described in DRO Toolkit. Each DRO is represented in both DICOM and NIfTI format and the VOI was provided in each format as well (DICOM Segmentation Object (DSO) as well as NIfTI segmentation boundary).
    2. The second image dataset is the set of 10 patient CT scans, originating from the LIDC-IDRI dataset, that were used in the QIN multi-site collection of Lung CT data with Nodule Segmentations project ( https://doi.org/10.7937/K9/TCIA.2015.1BUVFJR7 ). In that QIN study, a single lesion from each case was identified for analysis and then nine VOIs were generated using three repeat runs of three segmentation algorithms (one from each of three academic institutions) on each lesion. To eliminate one source of variability in our project, only one of the VOIs previously created for each lesion was identified and all sites used that same VOI definition. The specific VOI chosen for each lesion was the first run of the first algorithm (algorithm 1, run 1). DICOM images were provided for each dataset and the VOI was provided in both DICOM Segmentation Object (DSO) and NIfTI segmentation formats.
    3. The third dataset is a collection of four excel spreadsheets, each of which contains detailed information corresponding to each of the four tables in the publication. For example, the raw feature values and the summary tables for Tables 2,3 and 4 reported in the publication cited (https://doi.org/10.18383/j.tom.2019.00031). These tables are:
    • Software Package details: This table contains detailed information about the software packages used in the study (and listed in Table 1 in the publication), including version number and any parameters specified in the calculation of the features reported.
    • DRO results: This contains the original feature values obtained for each software package for each DRO as well as the table summarizing results across software packages (Table 2 in the publication).
    • Patient Dataset results: This contains the original feature values for each software package for each patient dataset (1 lesion per case) as well as the table summarizing results across software packages and patient datasets (Table 3 in the publication).
    • Harmonized GLCM Entropy Results: This contains the values for the “Harmonized” GLCM Entropy feature for each patient dataset and each software package as well as the summary across software packages (Table 4 in the publication).
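
    One of the harmonized features compared across packages is GLCM entropy. The Python sketch below computes a grey-level co-occurrence matrix and its entropy for a tiny synthetic image, purely to illustrate the quantity being compared; real IBSI-conformant implementations additionally handle grey-level binning, multiple directions, and aggregation, all omitted here.

        # Sketch: grey-level co-occurrence matrix (GLCM) entropy for a tiny synthetic
        # image, illustrating the kind of texture feature compared in this project.
        # Not an IBSI-conformant implementation (no binning, single direction only).
        import numpy as np

        image = np.array([[0, 0, 1, 1],
                          [0, 0, 1, 1],
                          [0, 2, 2, 2],
                          [2, 2, 3, 3]])
        levels = 4

        # Co-occurrence counts for horizontally adjacent pixel pairs (offset (0, 1)).
        glcm = np.zeros((levels, levels))
        for row in image:
            for a, b in zip(row[:-1], row[1:]):
                glcm[a, b] += 1

        p = glcm / glcm.sum()                                # joint probabilities
        entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # GLCM entropy in bits
        print(f"GLCM entropy: {entropy:.3f}")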

  9. MINUTE-ChIP example data

    • figshare.scilifelab.se
    txt
    Updated Jan 15, 2025
    + more versions
    Cite
    Carmen Navarro Luzon; Simon Elsässer (2025). MINUTE-ChIP example data [Dataset]. http://doi.org/10.17044/scilifelab.25348405.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    Karolinska Institutet
    Authors
    Carmen Navarro Luzon; Simon Elsässer
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    This collection contains an example MINUTE-ChIP dataset to run the minute pipeline on, provided as supporting material to help users understand the results of a MINUTE-ChIP experiment from raw data to a primary analysis that yields the relevant files for downstream analysis along with summarized QC indicators. Example primary non-demultiplexed FASTQ files provided here were used to generate GSM5493452-GSM5493463 (H3K27m3) and GSM5823907-GSM5823918 (Input), deposited on GEO with the minute pipeline all together under series GSE181241. For more information about MINUTE-ChIP, you can check the publication relevant to this dataset: Kumar, Banushree, et al. "Polycomb repressive complex 2 shields naïve human pluripotent cells from trophectoderm differentiation." Nature Cell Biology 24.6 (2022): 845-857. If you want more information about the minute pipeline, there is a public bioRxiv preprint, a GitHub repository, and official documentation.

  10. Assessment and Improvement of Statistical Tools for Comparative Proteomics...

    • acs.figshare.com
    txt
    Updated Jun 1, 2023
    Cite
    Veit Schwämmle; Ileana Rodríguez León; Ole Nørregaard Jensen (2023). Assessment and Improvement of Statistical Tools for Comparative Proteomics Analysis of Sparse Data Sets with Few Experimental Replicates [Dataset]. http://doi.org/10.1021/pr400045u.s003
    Explore at:
    Available download formats: txt
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    ACS Publications
    Authors
    Veit Schwämmle; Ileana Rodríguez León; Ole Nørregaard Jensen
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
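
    The rank product statistic described above amounts to ranking features within each replicate and combining the ranks across replicates. The Python sketch below computes a per-feature rank product (as a geometric mean of available ranks) while simply ignoring missing values; it is a rough analogue of the approach, not the authors' R script, which also estimates significance.

        # Sketch: rank product across replicate columns with missing values ignored.
        # Rough analogue of the improved method described above; significance
        # estimation (as in the authors' R script) is omitted.
        import numpy as np

        # rows = features, columns = replicate log-ratios; NaN marks missing values
        data = np.array([[ 2.1,  1.8, np.nan],
                         [ 0.1, -0.2,  0.3],
                         [-1.5, np.nan, -1.1],
                         [ 0.4,  0.2,  0.1]])

        ranks = np.full(data.shape, np.nan)
        for j in range(data.shape[1]):
            col = data[:, j]
            ok = ~np.isnan(col)
            order = np.argsort(-col[ok])          # rank 1 = strongest up-regulation
            r = np.empty(ok.sum())
            r[order] = np.arange(1, ok.sum() + 1)
            ranks[ok, j] = r

        # geometric mean of available ranks per feature (smaller = more consistent change)
        rank_product = np.exp(np.nanmean(np.log(ranks), axis=1))
        print(rank_product)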

  11. Handbook on Impact Evaluation: Quantitative Methods and Practices -...

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +1more
    Updated Nov 20, 2013
    + more versions
    Cite
    S. Khandker, G. Koolwal and H. Samad (2013). Handbook on Impact Evaluation: Quantitative Methods and Practices - Exercises 2009 - Bangladesh [Dataset]. https://microdata.worldbank.org/index.php/catalog/436
    Explore at:
    Dataset updated
    Nov 20, 2013
    Dataset authored and provided by
    S. Khandker, G. Koolwal and H. Samad
    Time period covered
    2009
    Area covered
    Bangladesh
    Description

    Abstract

    This exercise dataset was created for researchers interested in learning how to use the models described in the "Handbook on Impact Evaluation: Quantitative Methods and Practices" by S. Khandker, G. Koolwal and H. Samad, World Bank, October 2009 (permanent URL http://go.worldbank.org/FE8098BI60).

    Public programs are designed to reach certain goals and beneficiaries. Methods to understand whether such programs actually work, as well as the level and nature of impacts on intended beneficiaries, are main themes of this book. Has the Grameen Bank, for example, succeeded in lowering consumption poverty among the rural poor in Bangladesh? Can conditional cash transfer programs in Mexico and Latin America improve health and schooling outcomes for poor women and children? Does a new road actually raise welfare in a remote area in Tanzania, or is it a "highway to nowhere?"

    This handbook reviews quantitative methods and models of impact evaluation. It begins by reviewing the basic issues pertaining to an evaluation of an intervention to reach certain targets and goals. It then focuses on the experimental design of an impact evaluation, highlighting its strengths and shortcomings, followed by discussions on various non-experimental methods. The authors also cover methods to shed light on the nature and mechanisms by which different participants are benefiting from the program.

    The handbook provides STATA exercises in the context of evaluating major microcredit programs in Bangladesh, such as the Grameen Bank. This dataset provides both the related Stata data files and the Stata programs.

  12. Replication Data for: The Choice of Aspect in the Russian Modal Construction...

    • search.dataone.org
    • dataverse.no
    Updated Jan 5, 2024
    Cite
    Bernasconi, Beatrice (2024). Replication Data for: The Choice of Aspect in the Russian Modal Construction with prixodit'sja/prijtis' [Dataset]. http://doi.org/10.18710/KR5RRK
    Explore at:
    Dataset updated
    Jan 5, 2024
    Dataset provided by
    DataverseNO
    Authors
    Bernasconi, Beatrice
    Time period covered
    Jan 1, 1950 - Jan 1, 2020
    Description

    This dataset includes all the data files that were used for the studies in my Master's thesis: "The Choice of Aspect in the Russian Modal Construction with prixodit'sja/prijtis'". The data files are numbered so that they are shown in the same order as they are presented in the thesis. They include the database and the code used for the statistical analysis. Their contents are described in the ReadMe files. The core of the work is a quantitative and empirical study on the choice of aspect by Russian native speakers in the modal construction prixodit’sja/prijtis’ + inf. The hypothesis is that in the modal construction prixodit’sja/prijtis’ + inf the aspect of the infinitive is not fully determined by grammatical context but, to some extent, open to construal.

    A preliminary analysis was carried out on data gathered from the Russian National Corpus (www.ruscorpora.ru). Four hundred and forty-seven examples with the verb prijtis' were annotated manually for several factors and a statistical test (CART) was run. Results demonstrated that no grammatical factor plays a big role in the use of one aspect rather than the other. Data for this study can be consulted in the files from 01 to 03 and include a ReadMe file, the database in .csv format and the code used for the statistical test.

    An experiment with native speakers was then carried out. A hundred and ten native speakers of Russian were surveyed and asked to evaluate the acceptability of the infinitive in examples with prixodit’sja/prijtis’ delat’/sdelat’ šag/vid/vybor. The survey presented seventeen examples from the Russian National Corpus that were submitted two times: the first time with the same aspect as in the original version, the second time with the other aspect. Participants had to evaluate each case by choosing among “Impossible”, “Acceptable” and “Excellent” ratings. They were also allowed to give their opinion about the difference between aspects in each example. A logistic regression with mixed effects was run on the answers. Data for this study can be consulted in the files from 04 to 010 and include a ReadMe file, the text and the answers of the questionnaire, the database in .csv, .txt and pdf formats, and the code used for the statistical test.

    Results showed that prijtis’ often admits both aspects in the infinitive, while prixodit’sja is more restrictive and prefers the imperfective. Overall, “Acceptable” and “Excellent” responses were higher than “Impossible” responses for both aspects, even when the aspect evaluated didn’t match the original. Personal opinions showed that the choice of aspect often depends on the meaning the speaker wants to convey. Only in a very few cases was the grammatical context considered to be a constraint on the choice.
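
    The first study fits a classification tree (CART) to manually annotated grammatical factors to see how well they predict the chosen aspect. The Python sketch below reproduces the shape of that analysis with scikit-learn on fabricated annotations; the thesis itself used R, and the factor names and coding here are invented for illustration.

        # Sketch: a CART-style classification tree predicting the aspect of the
        # infinitive from annotated grammatical factors, in the spirit of the study
        # described above. The thesis used R; factors and data below are fabricated.
        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(7)
        n = 447
        X = np.column_stack([
            rng.integers(0, 2, n),   # hypothetical factor: negation present
            rng.integers(0, 3, n),   # hypothetical factor: tense of prijtis'
            rng.integers(0, 2, n),   # hypothetical factor: habitual context
        ])
        y = rng.choice(["perfective", "imperfective"], size=n)   # annotated aspect

        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print("training accuracy:", tree.score(X, y))  # stays near chance for random labels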

  13. Data from: Tutorials in Analytical Chemistry

    • res1catalogd-o-tdatad-o-tgov.vcapture.xyz
    • catalog.data.gov
    • +1more
    Updated Jul 29, 2022
    + more versions
    Cite
    National Institute of Standards and Technology (2022). Tutorials in Analytical Chemistry [Dataset]. https://res1catalogd-o-tdatad-o-tgov.vcapture.xyz/dataset/tutorials-in-analytical-chemistry-f3c10
    Explore at:
    Dataset updated
    Jul 29, 2022
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    The data consists of video files and associated abstracts for training and education in chemical metrology. The videos are focused on laboratory operations for quantitative analysis of complex matrix samples. Topics include theory and practice of liquid chromatography, sample extraction and processing, and aspects of quantitative analysis.

  14. Data from: An Interactive R-Based Custom Quantification Program for...

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    Updated Apr 21, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). Data from: An Interactive R-Based Custom Quantification Program for Quantitative Analysis of Triacylglycerols in Bovine Milk [Dataset]. https://catalog.data.gov/dataset/byrdwell-bovine-milk-dataset-061021-dfef7
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service
    Description

    [Note: Title updated 2024-04-23] Liquid chromatography-mass spectrometry (LC-MS) experiment data files for bovine milk lipid extracts, standards, and blanks, in mzML format. For use with "An Open-Source R-Based Workflow for Qualitative and Quantitative Lipidomics of Bovine Milk". Chromatography used a fast (10 minute) non-aqueous reversed-phase UHPLC separation. MS analysis was performed on a ThermoScientific QExactive Orbitrap high-resolution, accurate-mass mass spectrometer operated in electrospray ionization (ESI) mode.

    Resources in this dataset:

    • Resource Title: Bovine Milk Data acquired 06/10/21
    • File Name: ByrdwellData_Milk_061021.zip
    • Resource Description: Sequence of runs containing 10 Blanks, 30 Standards (6 Levels x 5 replicates), and 48 Bovine Milk extracts, as follows: 2 Cows, 3 feeding periods, 2 days (samples) per feeding period, 4 replicates for 24 samples per cow x 2 cows. 88 runs (separate data files) altogether. All files originally in proprietary .RAW format converted to .mzML. Data obtained on ThermoScientific QExactive Orbitrap high-resolution, accurate-mass mass spectrometer.

  15. Datasets for Evaluation of Multimodal Image Registration

    • data.niaid.nih.gov
    • zenodo.org
    Updated Oct 10, 2021
    Cite
    Öfverstedt, Johan (2021). Datasets for Evaluation of Multimodal Image Registration [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4587902
    Explore at:
    Dataset updated
    Oct 10, 2021
    Dataset provided by
    Lu, Jiahao
    Sladoje, Nataša
    Lindblad, Joakim
    Öfverstedt, Johan
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Aerial data

    The Aerial dataset is divided into 3 sub-groups by IDs: {7, 9, 20, 3, 15, 18}, {10, 1, 13, 4, 11, 6, 16}, {14, 8, 17, 5, 19, 12, 2}. Since the images vary in size, each image is subdivided into the maximal number of equal-sized non-overlapping regions such that each region can contain exactly one 300x300 px image patch. Then one 300x300 px image patch is extracted from the centre of each region. This 3-fold grouping followed by splitting means that each evaluation fold contains 72 test samples.

    Modality A: Near-Infrared (NIR)

    Modality B: three colour channels (in B-G-R order)

    Cytological data

    The Cytological data contains images from 3 different cell lines; all images from one cell line are treated as one fold in 3-fold cross-validation. Each image in the dataset is subdivided from 600x600 px into 2x2 patches of size 300x300 px, so that there are 420 test samples in each evaluation fold.

    Modality A: Fluorescence Images

    Modality B: Quantitative Phase Images (QPI)

    Histological dataset

    For the Histological data, to avoid making registration too easy by relying on the circular border of the TMA cores, the evaluation images were created by cutting 834x834 px patches from the centres of the original 134 TMA image pairs.

    Modality A: Second Harmonic Generation (SHG)

    Modality B: Bright-Field (BF)

    The evaluation set created from the above three publicly available datasets consists of images that have undergone 4 levels of (rigid) transformations of increasing displacement. The level of transformation is determined by the size of the rotation angle θ and the displacements tx and ty, detailed in this table. Each image sample is transformed exactly once at each transformation level so that all levels have the same number of samples.

    In total, it contains 864 image pairs created from the aerial dataset, 5040 image pairs created from the cytological dataset, and 536 image pairs created from the histological dataset. Each image pair consists of a reference patch (IRef) and its corresponding initial transformed patch (IInit) in both modalities, along with the ground-truth transformation parameters to recover it.

    Scripts to calculate the registration performance and to plot the overall results can be found in https://github.com/MIDA-group/MultiRegEval, and instructions to generate more evaluation data with different settings can be found in https://github.com/MIDA-group/MultiRegEval/tree/master/Datasets#instructions-for-customising-evaluation-data.

    Metadata

    In the *.zip files, each row in {Zurich,Balvan}_patches/fold[1-3]/patch_tlevel[1-4]/info_test.csv or Eliceiri_patches/patch_tlevel[1-4]/info_test.csv provides the following information about an image pair:

    • Filename: identifier (ID) of the image pair
    • X1_Ref: x-coordinate of the upper-left corner of reference patch IRef
    • Y1_Ref: y-coordinate of the upper-left corner of reference patch IRef
    • X2_Ref: x-coordinate of the lower-left corner of reference patch IRef
    • Y2_Ref: y-coordinate of the lower-left corner of reference patch IRef
    • X3_Ref: x-coordinate of the lower-right corner of reference patch IRef
    • Y3_Ref: y-coordinate of the lower-right corner of reference patch IRef
    • X4_Ref: x-coordinate of the upper-right corner of reference patch IRef
    • Y4_Ref: y-coordinate of the upper-right corner of reference patch IRef
    • X1_Trans: x-coordinate of the upper-left corner of transformed patch IInit
    • Y1_Trans: y-coordinate of the upper-left corner of transformed patch IInit
    • X2_Trans: x-coordinate of the lower-left corner of transformed patch IInit
    • Y2_Trans: y-coordinate of the lower-left corner of transformed patch IInit
    • X3_Trans: x-coordinate of the lower-right corner of transformed patch IInit
    • Y3_Trans: y-coordinate of the lower-right corner of transformed patch IInit
    • X4_Trans: x-coordinate of the upper-right corner of transformed patch IInit
    • Y4_Trans: y-coordinate of the upper-right corner of transformed patch IInit
    • Displacement: mean Euclidean distance between reference corner points and transformed corner points
    • RelativeDisplacement: the ratio of displacement to the width/height of the image patch
    • Tx: randomly generated translation in the x-direction to synthesise the transformed patch IInit
    • Ty: randomly generated translation in the y-direction to synthesise the transformed patch IInit
    • AngleDegree: randomly generated rotation in degrees to synthesise the transformed patch IInit
    • AngleRad: randomly generated rotation in radians to synthesise the transformed patch IInit
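
    Given one row of info_test.csv, the Displacement field is just the mean Euclidean distance between the four reference corners and the four transformed corners, and RelativeDisplacement divides that by the patch size. A minimal Python check of those definitions, with made-up corner coordinates, is shown below.

        # Sketch: recompute Displacement and RelativeDisplacement from the corner
        # coordinates of one info_test.csv row. Coordinates are made up; 300 px is
        # the patch size used for the aerial and cytological patches.
        import numpy as np

        ref = np.array([[100, 100], [100, 400], [400, 400], [400, 100]])    # (Xi_Ref, Yi_Ref)
        trans = np.array([[112,  95], [108, 396], [411, 402], [409, 103]])  # (Xi_Trans, Yi_Trans)

        displacement = np.linalg.norm(ref - trans, axis=1).mean()
        relative_displacement = displacement / 300.0
        print(f"Displacement: {displacement:.2f} px (relative: {relative_displacement:.3f})")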

    Naming convention

    Aerial Data

    zh{ID}_{iRow}_{iCol}_{ReferenceOrTransformed}.png

    Example: zh5_03_02_R.png indicates the Reference patch of the 3rd row and 2nd column cut from the image with ID zh5.

    Cytological data

    {{cellline}_{treatment}_{fieldofview}_{iFrame}}_{iRow}_{iCol}_{ReferenceOrTransformed}.png

    Example: PNT1A_do_1_f15_02_01_T.png indicates the Transformed patch of the 2nd row and 1st column cut from the image with ID PNT1A_do_1_f15.

    Histological data

    {ID}_{ReferenceOrTransformed}.tif

    Example: 1B_A4_T.tif indicates the Transformed patch cut from the image with ID 1B_A4.

    This dataset was originally produced by the authors of Is Image-to-Image Translation the Panacea for Multimodal Image Registration? A Comparative Study.

  16. Example datasets for testing the GWASTic software

    • zenodo.org
    zip
    Updated Sep 5, 2024
    + more versions
    Cite
    Stefanie Lueck (2024). Example datasets for testing the GWASTic software [Dataset]. http://doi.org/10.5281/zenodo.13695229
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 5, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Stefanie Lueck
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The first dataset, called barley_set, contains barley data to validate the peak associations of two-row barley genes previously identified in Milner, Jost, and Taketa (2019). To replicate the Genome-Wide Association Study (GWAS) results, please use the pre-filtered and formatted genotypic file ‘WGS300_005_0020.bed’. The corresponding phenotype data for row type, pre-formatted for direct use with the GWAStic software, is available in ‘bridge_row_type_GWAS.txt’. To reproduce the genomic prediction experiments, please use the same files: ‘WGS300_005_0020.bed’ for the genotypic data and ‘bridge_row_type_GP.txt’ for the phenotypes. The file ‘validation_set.txt’ contains a set of 30 genotypes that have been excluded from the training data for use as a validation set.

    Additionally, a minimalistic dataset called small_dataset is provided to facilitate quick testing of the GWAStic software. This dataset includes:

    • ‘example.vcf.gz’ to test the VCF to BED conversion.
    • ‘example.bed’, a filtered genotypic file ready for use.
    • ‘pheno_gwas.csv’ as a phenotypic file for GWAS.
    • ‘pheno_gp.csv’ as a phenotypic file for genomic prediction.

    We generated two synthetic datasets using PLINK software, one with binary and one with quantitative phenotypes. Each synthetic dataset contains 2,000 samples—1,000 cases and 1,000 controls—with a total of 90,010 SNPs. For the dataset with binary phenotypes (called synthetic_binary), the SNPs were categorized into three groups: nullA, nullB, and nullC, each containing 30,000 SNPs not associated with the disease phenotype. Additionally, we included 5 SNPs labeled diseaseA and 5 labeled diseaseB, designed to mimic disease-associated loci. The diseaseA SNPs had allele frequencies between 0.1 and 0.2, with a relative risk of 2.5 under a multiplicative model, while the diseaseB SNPs had allele frequencies between 0.2 and 0.25, with a relative risk of 3.0. The remaining SNPs had a relative risk of 1.0, indicating no effect.

    For the dataset with quantitative phenotypes (called synthetic_qt), we followed a similar structure. The SNPs were again divided into nullA, nullB, and nullC categories, with 30,000 SNPs each. We also included 5 SNPs labeled qtlA and 5 labeled qtlB, representing quantitative trait loci. The qtlA SNPs had allele frequencies from 0.1 to 0.2, with an effect size of 0.02, while qtlB SNPs had allele frequencies from 0.2 to 0.25, with an effect size of 0.03. These effect sizes indicate the SNPs' impact on the quantitative trait variance.
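
    The quoted effect sizes describe how much each causal SNP shifts the simulated quantitative trait. The Python sketch below imitates that construction at toy scale (genotypes coded 0/1/2, a handful of causal SNPs with fixed additive effects, plus noise); it only illustrates the idea and is not the PLINK simulation actually used to build synthetic_qt.

        # Sketch: simulate a quantitative phenotype from SNP genotypes with fixed
        # additive effect sizes, mimicking the structure of the synthetic_qt dataset
        # at toy scale. Not the actual PLINK simulation; all numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        n_samples, n_null, n_qtl = 2000, 100, 10          # toy counts, not 90,010 SNPs
        freqs = rng.uniform(0.1, 0.25, n_null + n_qtl)    # allele frequency per SNP
        genotypes = rng.binomial(2, freqs, size=(n_samples, n_null + n_qtl))  # 0/1/2 coding

        effects = np.zeros(n_null + n_qtl)
        effects[-n_qtl:] = np.r_[np.full(5, 0.02), np.full(5, 0.03)]  # qtlA and qtlB effects

        phenotype = genotypes @ effects + rng.normal(0, 1, n_samples)  # additive model + noise
        print("phenotype mean/sd:", phenotype.mean().round(2), phenotype.std().round(2))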

  17. Data from: Supplementary Materials: A primer on gathering and analysing...

    • data.niaid.nih.gov
    • researchportal.scu.edu.au
    • +1more
    Updated Jan 27, 2021
    Cite
    Kieran Balloo (2021). Supplementary Materials: A primer on gathering and analysing multi-level quantitative evidence for differential student outcomes in higher education [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4115263
    Explore at:
    Dataset updated
    Jan 27, 2021
    Dataset provided by
    Kieran Balloo
    Naomi E. Winstone
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Example data sets, syntax files and macros for the tutorials in: Balloo, K., & Winstone, N. E. (2021). A primer on gathering and analysing multi-level quantitative evidence for differential student outcomes in higher education. Frontline Learning Research. https://doi.org/10.14786/flr.v9i2.675

    The data for all examples are fictional, and have only been designed to simulate the possible behaviour of institutional data for the purposes of demonstrating the analytical approaches in the primer. No inferences or conclusions should be drawn from the findings of these examples, because the results are not real.

    We anticipate that readers can use the example data sets as templates and substitute in their own data.

  18. Data from: De identified data set.csv

    • figshare.com
    txt
    Updated Nov 27, 2023
    Cite
    Anne-Marie Russell; Malik Althobiani (2023). De identified data set.csv [Dataset]. http://doi.org/10.6084/m9.figshare.24569851.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Nov 27, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Anne-Marie Russell; Malik Althobiani
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A team of expert clinicians and patient partners interested in self-management approaches designed a 48-question cross-sectional electronic survey specifically targeted at individuals diagnosed with ILD. The survey was open for participation between September 2021 and December 2022, and responses were collected anonymously. Data were analysed descriptively for quantitative aspects and through thematic analysis for qualitative input. Results: 104 patients accessed the survey and 89/104 (86%) reported a diagnosis of lung fibrosis, including 46/89 (52%) idiopathic pulmonary fibrosis (IPF), with 57/89 (64%) of participants diagnosed >3 years and 59/89 (66%) female. 52/65 (80%) were in the UK; 33/65 (51%) reported severe breathlessness (Medical Research Council (MRC) grade 3-4) and 32/65 (49%) disclosed co-morbid arthritis or joint problems. Of these, 18/83 (22%) used a hand-held spirometer, with only 6/17 (35%) advised on how to interpret the readings. Pulse oximetry devices were the most frequently used device, by 35/71 (49%), and 20/64 (31%) measured their saturations more than once daily. 29/63 (46%) of respondents reported home-monitoring brought reassurance; of these, for 25/63 (40%) a feeling of control. 10/57 (18%) felt it had a negative effect, citing fluctuating readings as causing stress and ‘paranoia’. Nurse specialists were the most frequent source of help, 24/63 (38%).

  19. Replication Data for: Knowing and doing: The development of information...

    • search.dataone.org
    • dataverse.no
    • +1more
    Updated Jul 29, 2024
    Cite
    Nierenberg, Ellen; Låg, Torstein; Dahl, Tove I. (2024). Replication Data for: Knowing and doing: The development of information literacy measures to assess knowledge and practice [Dataset]. http://doi.org/10.18710/L60VDI
    Explore at:
    Dataset updated
    Jul 29, 2024
    Dataset provided by
    DataverseNO
    Authors
    Nierenberg, Ellen; Låg, Torstein; Dahl, Tove I.
    Time period covered
    Jan 1, 2019 - Jun 30, 2020
    Description

    This data set contains the replication data for the article "Knowing and doing: The development of information literacy measures to assess knowledge and practice." This article was published in the Journal of Information Literacy, in June 2021. The data was collected as part of the contact author's PhD research on information literacy (IL). One goal of this study is to assess students' levels of IL using three measures:

    1) a 21-item IL test for assessing students' knowledge of three aspects of IL: evaluating sources, using sources, and seeking information. The test is multiple choice, with four alternative answers for each item. This test is a "KNOW-measure," intended to measure what students know.
    2) a source-evaluation measure to assess students' abilities to critically evaluate information sources in practice. This is a "DO-measure," intended to measure what students do in practice, in actual assignments.
    3) a source-use measure to assess students' abilities to use sources correctly when writing. This is a "DO-measure," intended to measure what students do in practice, in actual assignments.

    The data set contains survey results from 626 Norwegian and international students at three levels of higher education: bachelor, master's and PhD. The data was collected in Qualtrics from fall 2019 to spring 2020. In addition to the data set and this README file, two other files are available here:

    1) test questions in the survey, including answer alternatives (IL_knowledge_tests.txt)
    2) details of the assignment-based measures for assessing source evaluation and source use (Assignment_based_measures_assessing_IL_skills.txt)

    Publication abstract: This study touches upon three major themes in the field of information literacy (IL): the assessment of IL, the association between IL knowledge and skills, and the dimensionality of the IL construct. Three quantitative measures were developed and tested with several samples of university students to assess knowledge and skills for core facets of IL. These measures are freely available, applicable across disciplines, and easy to administer. Results indicate they are likely to be reliable and support valid interpretations. By measuring both knowledge and practice, the tools indicated low to moderate correlations between what students know about IL, and what they actually do when evaluating and using sources in authentic, graded assignments. The study is unique in using actual coursework to compare knowing and doing regarding students’ evaluation and use of sources. It provides one of the most thorough documentations of the development and testing of IL assessment measures to date. Results also urge us to ask whether the source-focused components of IL – information seeking, source evaluation and source use – can be considered unidimensional constructs or sets of disparate and more loosely related components, and findings support their heterogeneity.

  20. Sea lamprey quantitative environmental DNA surveillance - Data Release

    • s.cnmilf.com
    • data.usgs.gov
    • +2more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Sea lamprey quantitative environmental DNA surveillance - Data Release [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/sea-lamprey-quantitative-environmental-dna-surveillance-data-release
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The data set consists of six separate csv files. Four of these contain the quantity of DNA copy numbers and the fluor used to analyze the DNA quantities collected from water samples from four separate portions of the study (adult SL field, adult SL lab, larval SL field, larval SL lab), each of which needs to be in its own csv file. Also included are a csv with adult SL trapping data, a csv with larval SL shocking data, and a csv with the volume that was filtered for our DNA extractions.
