100+ datasets found
  1. MIPS Data Validation Criteria

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). MIPS Data Validation Criteria [Dataset]. https://www.johnsnowlabs.com/marketplace/mips-data-validation-criteria/
    Explore at:
    Available download formats: csv
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Time period covered
    2017 - 2020
    Area covered
    United States
    Description

    This dataset includes the MIPS Data Validation Criteria. The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) replaces a patchwork collection of programs with a single system in which providers can be rewarded for better care. Providers will be able to practice as they always have, but they may receive higher Medicare payments based on their performance.

  2. Data from: Development and validation of HBV surveillance models using big data and machine learning

    • tandf.figshare.com
    docx
    Updated Dec 3, 2024
    Cite
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong (2024). Development and validation of HBV surveillance models using big data and machine learning [Dataset]. http://doi.org/10.6084/m9.figshare.25201473.v1
    Explore at:
    Available download formats: docx
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The construction of a robust healthcare information system is fundamental to enhancing countries’ capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China’s rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization’s (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital’s Hospital Information System were structured using state-of-the-art Natural Language Processing techniques. Several ML methods were used to develop HBV risk assessment models. The performance of the ML models was then interpreted using Shapley values (SHAP) and validated using cohort data randomly divided at a ratio of 2:1 within a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV using patients’ physical complaints and parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness-of-fit test p-value > 0.05). Suspected case detection models for HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in the primary care setting in China. This suspected case detection model can facilitate early identification and treatment of HBV in primary care, contributing towards the achievement of the WHO’s HBV elimination goals.

  3. Data from: Synthetic Smart Card Data for the Analysis of Temporal and Spatial Patterns

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Paul Bouman (2020). Synthetic Smart Card Data for the Analysis of Temporal and Spatial Patterns [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_776718
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset authored and provided by
    Paul Bouman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a synthetic smart card data set that can be used to test pattern detection methods for the extraction of temporal and spatial data. The data set is tab-separated, based on a stylized travel pattern description for the city of Utrecht in the Netherlands, and was developed and used in Chapter 6 of the PhD thesis of Paul Bouman.

    This dataset contains the following files:

    journeys.tsv : the actual data set of synthetic smart card data

    utrecht.xml : the activity pattern definition that was used to randomly generate the synthetic smart card data

    validate.ref : a file derived from the activity pattern definition that can be used for validation purposes. It specifies which activity types occur at each location in the smart card data set.
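The journeys file is plain tab-separated text, so it can be parsed with Python's standard csv module. A minimal sketch follows; the column names are illustrative assumptions, not taken from the real journeys.tsv, so check the file's header row before relying on them.

```python
import csv
import io

# A tiny inline sample in the tab-separated layout the dataset describes.
# Column names are hypothetical; the real journeys.tsv header may differ.
sample = (
    "card_id\tcheck_in_stop\tcheck_out_stop\tcheck_in_time\tcheck_out_time\n"
    "1001\tUtrecht Centraal\tOvervecht\t08:02\t08:15\n"
    "1002\tVaartsche Rijn\tUtrecht Centraal\t08:05\t08:11\n"
)

# DictReader keys each journey record by the header fields.
reader = csv.DictReader(io.StringIO(sample), delimiter="\t")
journeys = list(reader)
print(len(journeys))  # 2
```

To read the actual file, replace the `io.StringIO(sample)` wrapper with `open("journeys.tsv", newline="")`.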

  4. Model validation data from 2018 to 2020

    • catalogue.data.govt.nz
    Updated Feb 1, 2001
    + more versions
    Cite
    (2001). Model validation data from 2018 to 2020 - Dataset - data.govt.nz - discover and use data [Dataset]. https://catalogue.data.govt.nz/dataset/oai-figshare-com-article-12278786
    Explore at:
    Dataset updated
    Feb 1, 2001
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was used to validate the global kelp biome distribution model. These data were downloaded from the GBIF online database and cleaned to maintain the highest georeference accuracy. The MaxEnt probability value of each record is given in the last column.

  5. Data pipeline Validation And Load Testing using Multiple JSON Files

    • data.niaid.nih.gov
    Updated Mar 26, 2021
    Cite
    Afsana Khan (2021). Data pipeline Validation And Load Testing using Multiple JSON Files [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4636789
    Explore at:
    Dataset updated
    Mar 26, 2021
    Dataset provided by
    Pelle Jakovits
    Mainak Adhikari
    Afsana Khan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets were used to validate and test the data pipeline deployment following the RADON approach. The dataset contains temperature and humidity sensor readings of a particular day, which are synthetically generated using a data generator and are stored as JSON files to validate and test (performance/load testing) the data pipeline components.
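The description above outlines synthetically generated sensor readings stored as JSON. A minimal sketch of such a generator follows, assuming hypothetical field names and value ranges (the actual schema of the RADON data generator is not documented here):

```python
import datetime
import json
import random

# Generate one day of hourly temperature/humidity readings.
# Field names and value ranges are assumptions for illustration only.
random.seed(42)  # reproducible synthetic data
start = datetime.datetime(2021, 3, 26)
readings = [
    {
        "timestamp": (start + datetime.timedelta(hours=h)).isoformat(),
        "temperature_c": round(random.uniform(15.0, 30.0), 2),
        "humidity_pct": round(random.uniform(30.0, 90.0), 2),
    }
    for h in range(24)
]

payload = json.dumps(readings)  # would be written out as a .json file
print(len(readings))  # 24
```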

  6. Rainbow training and validation data

    • dataverse.harvard.edu
    Updated Nov 26, 2022
    Cite
    Kimberly Carlson (2022). Rainbow training and validation data [Dataset]. http://doi.org/10.7910/DVN/YTRMGN
    Explore at:
    Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Nov 26, 2022
    Dataset provided by
    Harvard Dataverse
    Authors
    Kimberly Carlson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset includes the date and time, latitude (“lat”), longitude (“lon”), sun angle (“sun_angle”, in degrees), rainbow presence (TRUE = rainbow, FALSE = no rainbow), cloud cover (“cloud_cover”, proportion), and liquid precipitation (“liquid_precip”, kg m⁻² s⁻¹) for each record used to train and/or validate the models.

  7. Structure tensor validation

    • catalogue.data.govt.nz
    Updated Feb 1, 2001
    Cite
    (2001). Structure tensor validation - Dataset - data.govt.nz - discover and use data [Dataset]. https://catalogue.data.govt.nz/dataset/oai-figshare-com-article-25216145
    Explore at:
    Dataset updated
    Feb 1, 2001
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    General information: This item contains test data to validate the structure tensor algorithms and a supplemental paper describing how the data were generated and used.

    Contents: The test_data.zip archive contains 101 slices of a cylinder (701x701 pixels) with two artificially created fibre orientations. The outer fibres are oriented longitudinally and the inner fibres circumferentially, similar to those found in the rat uterus. The SupplementaryMaterials_rat_uterus_texture_validation.pdf file is a short supplemental paper describing the generation of the test data and the results after processing with the structure tensor code.

  8. Gulf of Maine - Control Points Used to Validate the Accuracies of the Interpolated Water Density Rasters

    • s.cnmilf.com
    • datasets.ai
    • +2more
    Updated Oct 18, 2024
    + more versions
    Cite
    (Point of Contact, Custodian) (2024). Gulf of Maine - Control Points Used to Validate the Accuracies of the Interpolated Water Density Rasters [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/gulf-of-maine-control-points-used-to-validate-the-accuracies-of-the-interpolated-water-density-1
    Explore at:
    Dataset updated
    Oct 18, 2024
    Dataset provided by
    (Point of Contact, Custodian)
    Area covered
    Gulf of Maine, Maine
    Description

    This feature dataset contains the control points used to validate the accuracies of the interpolated water density rasters for the Gulf of Maine. These control points were selected randomly from the water density data points using Hawth's Create Random Selection Tool. Twenty-five percent of each seasonal bin (for each year and at each depth) was randomly selected and set aside for validation. For example, if there were 1,000 water density data points for fall (September, October, November) 2003 at 0 meters, then 250 of those points were randomly selected, removed and set aside to assess the accuracy of the interpolated surface. The naming convention of the validation point feature class includes the year (or years), the season, and the depth (in meters) it was selected from. For example, the name ValidationPoints_1997_2004_Fall_0m indicates that the point feature class was randomly selected from water density points at 0 meters in the fall between 1997 and 2004. The seasons were defined using the same months as the remote sensing data: Fall = September, October, November; Winter = December, January, February; Spring = March, April, May; and Summer = June, July, August.
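The random 25% hold-out described above can be sketched in a few lines; this is an illustrative Python version, not the Hawth's Tools implementation actually used:

```python
import random

def split_validation(points, fraction=0.25, seed=0):
    """Randomly set aside `fraction` of the points for validation."""
    rng = random.Random(seed)
    points = list(points)
    k = round(len(points) * fraction)
    validation = rng.sample(points, k)  # sample without replacement
    held_out = set(validation)
    training = [p for p in points if p not in held_out]
    return training, validation

# Mimic the fall 2003, 0 m example: 1,000 points -> 750 training, 250 validation.
train, valid = split_validation(range(1000))
print(len(train), len(valid))  # 750 250
```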

  9. Data set of "Smart" Brace Validation Study

    • borealisdata.ca
    • search.dataone.org
    Updated Nov 29, 2023
    Cite
    Vincent Nguyen; William Gage (2023). Data set of "Smart" Brace Validation Study [Dataset]. http://doi.org/10.5683/SP3/MGQYBR
    Explore at:
    Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    Borealis
    Authors
    Vincent Nguyen; William Gage
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This data set was collected to validate a 'smart' knee brace, with an IMU embedded at the thigh and shank, against a reference motion capture system (Vicon). There were 10 participants in total, and each participant came into the lab for two sessions on separate days. In each session, participants completed three trials of 2-minute treadmill walking at their preferred walking speed, three trials of 15 squats to parallel, three trials of 10 sit-to-stands on a chair at about knee level, three trials of 15 total alternating lunges, and three trials of 2-minute treadmill jogging at their preferred speed, in that order. Some participants completed 10 squats and 10 lunges in their first session but 15 in the second (a .txt file is included for each participant's session to specify). This dataset only contains the IMU data.

  10. Refining and validating change requests from a crowd to derive requirements [data]

    • heidata.uni-heidelberg.de
    docx, pdf, xlsx
    Updated Nov 14, 2024
    + more versions
    Cite
    Leon Radeck; Barbara Paech; Leon Radeck; Barbara Paech (2024). Refining and validating change requests from a crowd to derive requirements [data] [Dataset]. http://doi.org/10.11588/DATA/N1T5T8
    Explore at:
    Available download formats: docx (21014), xlsx (40387), pdf (10485117), xlsx (137970)
    Dataset updated
    Nov 14, 2024
    Dataset provided by
    heiDATA
    Authors
    Leon Radeck; Barbara Paech; Leon Radeck; Barbara Paech
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Carl Zeiss Foundation
    Description

    [Context/Motivation] Integrating user feedback into software development enhances system acceptance, decreases the likelihood of project failure and strengthens customer loyalty. Moreover, user feedback plays an important role in software evolution, because it can be the basis for deriving requirements. [Problems] However, to be able to derive requirements from feedback, the feedback must contain actionable change requests, that is, contain detailed information regarding a change to the application. Furthermore, requirements engineers must know how many users support the change request. [Principal ideas] To address these challenges, we propose an approach that uses structured questions to transform non-actionable change requests into actionable ones and validates the change requests to assess their support among users. We evaluate the approach in the large-scale research project SMART-AGE with over 200 older adults, aged 67 and older. [Contribution] We contribute a set of templates for our questions and our process, and we evaluate the approach’s feasibility, effectiveness and user satisfaction, with very positive outcomes.

  11. DEA Analysis Ready Data Phase 1 Validation Project: Data Summary

    • ecat.ga.gov.au
    Updated Mar 24, 2021
    Cite
    (2021). DEA Analysis Ready Data Phase 1 Validation Project : Data Summary [Dataset]. https://ecat.ga.gov.au/geonetwork/srv/search?keyword=Sentinel%202
    Explore at:
    Dataset updated
    Mar 24, 2021
    Description

    This report describes the results of an extended national field spectroscopy campaign designed to validate the Landsat 8 and Sentinel 2 Analysis Ready Data (ARD) surface reflectance (SR) products generated by Digital Earth Australia. Field spectral data from 55 overpass-coincident field campaigns have been processed to match the ARD surface reflectances. The results suggest the Landsat 8 SR is validated to within 10%, the Sentinel 2A SR to within 6.5% and the Sentinel 2B SR to within 6.8%. Overall, combined Sentinel 2A and 2B are validated to within 6.6%, and the SR for all three ARD products is validated to within 7.7%.

  12. GPS Validation Mark (DOT-031)

    • catalogue.data.wa.gov.au
    Updated Jul 30, 2020
    + more versions
    Cite
    (2020). GPS Validation Mark (DOT-031) - Datasets - data.wa.gov.au [Dataset]. https://catalogue.data.wa.gov.au/dataset/gps-validation-mark-dot-031
    Explore at:
    Dataset updated
    Jul 30, 2020
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Western Australia
    Description

    Global Positioning System (GPS) satellite navigation validation marks are unique visible markers located at a number of public boat ramps and associated jetties, which mariners or owners of portable GPS units can use to validate their position and map datum settings.

  13. Data from: RM3 Wave Tank Validation Model

    • catalog.data.gov
    • mhkdr.openei.org
    • +2more
    Updated Jan 20, 2025
    Cite
    National Renewable Energy Laboratory (2025). RM3 Wave Tank Validation Model [Dataset]. https://catalog.data.gov/dataset/rm3-wave-tank-validation-model-ee6aa
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Description

    An approximately 1/75th-scale point absorber wave energy converter was built to validate the testing systems of a 16,000-gallon single-paddle wave tank. The model was built based on the RM3 design and incorporated a linear position sensor, a force transducer, and wetness detection sensors. The data set also includes motion tracking data of the device's two bodies acquired from four Qualisys cameras. The tank wave spectrum is measured by four ultrasonic water height sensors.

  14. Validation data for a coupled water-heat-salt multi-field transport model

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 3, 2024
    Cite
    Validation data for a coupled water-heat-salt multi-field transport model [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10452990
    Explore at:
    Dataset updated
    Jan 3, 2024
    Dataset authored and provided by
    Ruan, Dongmei
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These data were utilized to validate the accuracy of the newly developed multi-field coupling model. Water and salt transport experiments under freezing conditions were carried out in the laboratory to determine the mass salt content and volumetric water content at different heights of the soil column after freezing.

  15. Data pipeline Validation And Load Testing using Multiple CSV Files

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Mar 26, 2021
    Cite
    Afsana Khan (2021). Data pipeline Validation And Load Testing using Multiple CSV Files [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4636797
    Explore at:
    Dataset updated
    Mar 26, 2021
    Dataset provided by
    Pelle Jakovits
    Mainak Adhikari
    Afsana Khan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets were used to validate and test the data pipeline deployment following the RADON approach. The dataset has a CSV file that contains around 32,000 Twitter tweets. 100 CSV files were created from the single CSV file, each containing 320 tweets. Those 100 CSV files are used to validate and test (performance/load testing) the data pipeline components.
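Splitting one large CSV into 100 fixed-size chunks of 320 rows each comes down to simple slicing. The dataset's own tooling is not specified, so the sketch below is only an assumed approach with placeholder rows:

```python
def chunk_rows(rows, chunk_size):
    """Yield consecutive fixed-size slices of a row list."""
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]

# Placeholder rows standing in for the ~32,000 tweets in the source CSV.
rows = [[f"tweet_{n}"] for n in range(32000)]
chunks = list(chunk_rows(rows, 320))  # each chunk would become one CSV file
print(len(chunks), len(chunks[0]))  # 100 320
```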

  16. Validation Data UT-SAFT Resolution

    • data.mendeley.com
    Updated Oct 7, 2024
    Cite
    Hubert Mooshofer (2024). Validation Data UT-SAFT Resolution [Dataset]. http://doi.org/10.17632/w9rsywyd43.1
    Explore at:
    Dataset updated
    Oct 7, 2024
    Authors
    Hubert Mooshofer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Experimental data obtained from the inspection of steel disks, used to validate the UT-SAFT resolution formulas in the referenced paper 'UT-SAFT resolution'. The purpose is to show that the indication size of small test reflectors matches well with the resolution formulas derived in that paper.

  17. Manual cross-validation data for the article: "Comparison of bibliographic data sources: Implications for the robustness of university rankings"

    • data.niaid.nih.gov
    • zenodo.org
    Updated Nov 24, 2022
    Cite
    Alkim Ozaygen (2022). Manual cross-validation data for the article: "Comparison of bibliographic data sources: Implications for the robustness of university rankings" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3379702
    Explore at:
    Dataset updated
    Nov 24, 2022
    Dataset provided by
    Chun-Kai (Karl) Huang
    Lucy Montgomery
    Alkim Ozaygen
    Katie Wilson
    Cameron Neylon
    Chloe Brookes-Kenworthy
    Richard Hosking
    Description

    These are sets of data collected from the manual cross-validation of DOIs (and related research outputs) that are sampled from Web of Science (WoS), Scopus and Microsoft Academic (MSA). For each of the 15 universities, we initially collect all DOIs indexed by each of the three bibliographic sources. Subsequently, we randomly sample 40, 30 and 30 DOIs from sets of DOIs that are exclusively indexed by WoS, Scopus and MSA, respectively, for each university. A manual cross-validation process is then followed to validate certain characteristics across the data sources. This cross-validation process was carried out by a data wrangler, on a part-time basis over a few months, for which online data was accessed from 18 December 2018 to 20 May 2019.
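The per-university sampling step (40, 30 and 30 DOIs from the exclusively indexed sets of WoS, Scopus and MSA) can be sketched as below; the DOI lists are fabricated placeholders, not data from the study:

```python
import random

rng = random.Random(1)

# Hypothetical pools of exclusively indexed DOIs for one university.
exclusive = {
    "WoS": [f"10.1000/wos.{i}" for i in range(500)],
    "Scopus": [f"10.1000/scopus.{i}" for i in range(400)],
    "MSA": [f"10.1000/msa.{i}" for i in range(600)],
}
sample_sizes = {"WoS": 40, "Scopus": 30, "MSA": 30}

# Randomly sample the per-source quota for manual cross-validation.
samples = {src: rng.sample(dois, sample_sizes[src])
           for src, dois in exclusive.items()}
print(sorted(len(s) for s in samples.values()))  # [30, 30, 40]
```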

  18. 2019 Eastern Iowa Topographic Lidar Validation – USGS Field Survey Data

    • data.usgs.gov
    • catalog.data.gov
    + more versions
    Cite
    Jeffrey Irwin; Jeffrey Danielson; Terry Robbins; Travis Kropuenske; Aparajithan Sampath; Minsu Kim; Seonkyung Park, 2019 Eastern Iowa Topographic Lidar Validation – USGS Field Survey Data [Dataset]. http://doi.org/10.5066/P9DI0G64
    Explore at:
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Authors
    Jeffrey Irwin; Jeffrey Danielson; Terry Robbins; Travis Kropuenske; Aparajithan Sampath; Minsu Kim; Seonkyung Park
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Oct 25, 2020 - Oct 31, 2020
    Area covered
    Iowa
    Description

    U.S. Geological Survey (USGS) scientists conducted field data collection efforts between October 25th and 31st, 2020 at several sites in eastern Iowa using high-accuracy surveying technologies. The work was initiated as an effort to validate commercially acquired topographic light detection and ranging (lidar) data that was collected between December 7th, 2019 and November 19th, 2020 using wide-area mapping lidar systems for the USGS 3D Elevation Program (3DEP). The goal was to compare and validate the airborne lidar data against topographic, structural, and infrastructural data collected through more traditional means (e.g., Global Navigation Satellite System (GNSS) surveying). Evaluating these data will provide valuable information on the performance of wide-area topographic lidar mapping capabilities that are becoming more widely used in 3DEP. The airborne lidar was collected to support the U.S. Department of Agriculture (USDA) Natural Resources Conservation Service (NRCS) Hig ...

  19. Data from: Using survey questions to measure preferences: Lessons from an experimental validation in Kenya

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Sep 2, 2020
    Cite
    Michal Bauer; Julie Chytilova; Edward Miguel (2020). Using survey questions to measure preferences: Lessons from an experimental validation in Kenya [Dataset]. http://doi.org/10.7910/DVN/M1ITQ1
    Explore at:
    Croissant (a machine-learning dataset format; see mlcommons.org/croissant)
    Dataset updated
    Sep 2, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Michal Bauer; Julie Chytilova; Edward Miguel
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Can a short survey instrument reliably measure a range of fundamental economic preferences across diverse settings? We focus on survey questions that systematically predict behavior in incentivized experimental tasks among German university students (Becker et al. 2016) and were implemented among representative samples across the globe (Falk et al. 2018). This paper presents results of an experimental validation conducted among low-income individuals in Nairobi, Kenya. We find that quantitative survey measures – hypothetical versions of experimental tasks – of time preference, attitude to risk and altruism are good predictors of choices in incentivized experiments, suggesting these measures are broadly experimentally valid. At the same time, we find that qualitative questions – self-assessments – do not correlate with the experimental measures of preferences in the Kenyan sample. Thus, caution is needed before treating self-assessments as proxies of preferences in new contexts.

  20. High-Rate Volumetric Particle Tracking Microscopy (HR-VPTM) validation data

    • catalog.data.gov
    Updated Feb 23, 2023
    Cite
    National Institute of Standards and Technology (2023). High-Rate Volumetric Particle Tracking Microscopy (HR-VPTM) validation data [Dataset]. https://catalog.data.gov/dataset/high-rate-volumetric-particle-tracking-microscopy-hr-vptm-validation-data
    Explore at:
    Dataset updated
    Feb 23, 2023
    Dataset provided by
    National Institute of Standards and Technology
    Description

    To verify and validate the HR-VPTM technique, both synthetic images and gels with embedded particles undergoing controlled deformations were used to compare known and reconstructed deformations at assorted strain rates and frame rates. We simulated the light-field representation of particles undergoing motion with ray tracing and investigated the sensitivity of the measurement technique to the synthetic noise floor and various motion fields. In experiments, a custom-built device deformed a hydrogel specimen in nominally simple shear at applied strain rates of approximately 2 1/s, while light-field images were collected at approximately 500 frames per second. Files and formats include .tif images (raw data, input), .mat (reconstructed images, tracking results), .txt, .csv, and .yaml (all metadata). See also the data on MINDS@UW (https://minds.wisconsin.edu/handle/1793/83031), the accompanying paper in Experimental Mechanics (https://doi.org/10.1007/s11340-022-00885-z), and the complete code package released by collaborators at UW-Madison (https://github.com/francklab/HR-VPTM).
