Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The construction of a robust healthcare information system is fundamental to enhancing countries’ capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China’s rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization’s (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Hospital Information System of the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital were structured using state-of-the-art natural language processing (NLP) techniques. Several ML methods were used to develop HBV risk assessment models. Model performance was then interpreted using Shapley additive explanations (SHAP) values and validated using cohort data randomly divided at a ratio of 2:1 within a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV based on patients’ physical complaints and clinical parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness-of-fit test p-value > 0.05). Suspected case detection models for HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in the primary care setting in China. This study has developed a suspected case detection model for HBV that can facilitate early identification and treatment of HBV in the primary care setting in China, contributing towards the achievement of the WHO’s HBV elimination goals. We used state-of-the-art natural language processing techniques to structure the data records, supporting the development of a robust healthcare information system that enhances the surveillance and control of HBV in China.
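As an illustration of the evaluation scheme the abstract describes (a 2:1 derivation/validation split, five-fold cross-validation, and AUC-based discrimination), a minimal sketch in Python with scikit-learn might look like the following. The synthetic features and the single gradient-boosting classifier are assumptions for illustration; the study itself used six modelling methods on structured clinical records.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for the structured complaint/parameter records;
# 27,392 analysed cases as reported, features and class balance assumed.
X, y = make_classification(n_samples=27392, n_features=20,
                           weights=[0.9], random_state=0)

# 2:1 split into derivation and validation cohorts
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=1 / 3, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0)

# Five-fold cross-validation on the derivation cohort
cv_auc = cross_val_score(model, X_dev, y_dev, cv=5, scoring="roc_auc")
print("5-fold CV AUC:", cv_auc.mean())

# Final fit and held-out discrimination, analogous to the reported AUC = 0.78
model.fit(X_dev, y_dev)
print("Validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
```

SHAP-based interpretation, as mentioned in the abstract, could then be layered on top with the shap package (e.g. shap.Explainer(model)).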
Validation to ensure data and identity integrity. DAS will also ensure security compliant standards are met.
This dataset includes the MIPS Data Validation Criteria. The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) streamlines a patchwork collection of programs into a single system in which providers can be rewarded for better care. Providers will be able to practice as they always have, but they may receive higher Medicare payments based on their performance.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a synthetic smart card data set that can be used to test pattern detection methods for the extraction of temporal and spatial data. The data set is tab-separated and based on a stylized travel pattern description for the city of Utrecht in the Netherlands; it was developed and used in Chapter 6 of the PhD thesis of Paul Bouman.
This dataset contains the following files (a minimal loading sketch follows the list):
journeys.tsv : the actual data set of synthetic smart card data
utrecht.xml : the activity pattern definition that was used to randomly generate the synthetic smart card data
validate.ref : a file derived from the activity pattern definition that can be used for validation purposes. It specifies which activity types occur at each location in the smart card data set.
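For example, journeys.tsv could be loaded as follows. This is a minimal sketch that assumes the file has a header row; the actual schema is documented in the thesis rather than here.

```python
import csv

# Minimal loader for the tab-separated journeys file. Assumes a header
# row; see Chapter 6 of the thesis for the actual column definitions.
with open("journeys.tsv", newline="") as f:
    journeys = list(csv.DictReader(f, delimiter="\t"))

print(f"Loaded {len(journeys)} journeys")
```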
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was used to validate the global kelp biome distribution model. The data were downloaded from the GBIF online database and cleaned to maintain the highest georeference accuracy. The MaxEnt probability value of each record is given in the last column.
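For illustration, the records could be read and filtered on the MaxEnt column along these lines; the file name, delimiter, and the 0.5 cutoff are assumptions, not part of the dataset description.

```python
import pandas as pd

# Read the cleaned GBIF occurrence records and count rows the MaxEnt
# model scores above a chosen cutoff. File name and threshold assumed.
records = pd.read_csv("kelp_gbif_records.csv")
maxent = records.iloc[:, -1]  # MaxEnt probability is the last column
print(f"{(maxent >= 0.5).sum()} of {len(records)} records score >= 0.5")
```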
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets were used to validate and test the data pipeline deployment following the RADON approach. The dataset contains temperature and humidity sensor readings of a particular day, which are synthetically generated using a data generator and are stored as JSON files to validate and test (performance/load testing) the data pipeline components.
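A data generator of this kind might look like the following sketch; the field names, value ranges, and one-minute sampling interval are assumptions, not the actual RADON generator.

```python
import json
import random
from datetime import datetime, timedelta

# Generate one day of synthetic temperature/humidity readings as JSON
# files, one per minute. Field names and value ranges are illustrative.
start = datetime(2021, 1, 1)
for i in range(24 * 60):
    reading = {
        "timestamp": (start + timedelta(minutes=i)).isoformat(),
        "temperature_c": round(random.uniform(15.0, 30.0), 2),
        "humidity_pct": round(random.uniform(30.0, 80.0), 2),
    }
    with open(f"reading_{i:04d}.json", "w") as f:
        json.dump(reading, f)
```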
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset includes the date and time, latitude (“lat”), longitude (“lon”), sun angle (“sun_angle”, in degrees [°]), rainbow presence (TRUE = rainbow, FALSE = no rainbow), cloud cover (“cloud_cover”, proportion), and liquid precipitation (“liquid_precip”, kg m-2 s-1) for each record used to train and/or validate the models.
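A loader for these columns might look like the following sketch; the file name and the names of the date-time and rainbow-presence columns are assumptions, since only the quoted names above are given.

```python
import pandas as pd

# Load the rainbow records; pandas parses the date-time column and turns
# TRUE/FALSE in the rainbow-presence column into booleans automatically.
# "rainbow_records.csv" and the "datetime" column name are assumed.
df = pd.read_csv("rainbow_records.csv", parse_dates=["datetime"])
print(df.dtypes)  # lat/lon/sun_angle/cloud_cover/liquid_precip as floats
```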
http://vvlibri.org/fr/licence/odbl-10/legalcode/unofficial
Validation data is updated at the end of February and the end of August.
This dataset presents the number of traveler validations by day per line and per transport ticket on the surface network.
Data updates
Data is updated semi-annually.
Documentation and dataset information may be updated more regularly.
Documentation
Documentation for the validation data is available: consult the documentation on validation data.
Validation data sets
To access validation data from previous years and other validation datasets, you can consult the validation data history (https://data.iledefrance-mobilites.fr/explore/dataset/histo-validations).
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Structure tensor validation
General information
This item contains test data to validate the structure tensor algorithms and a supplemental paper describing how the data was generated and used.
Contents
The test_data.zip archive contains 101 slices of a cylinder (701x701 pixels) with two artificially created fibre orientations. The outer fibres are oriented longitudinally, and the inner fibres are oriented circumferentially, similar to the ones found in the rat uterus.
The SupplementaryMaterials_rat_uterus_texture_validation.pdf file is a short supplemental paper describing the generation of the test data and the results after being processed with the structure tensor code.
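As a sketch of the kind of check this test data supports, local fibre orientation on a slice can be estimated with a structure tensor, e.g. using scikit-image. The synthetic stripe pattern, the smoothing sigma, and the anisotropy measure below are illustrative assumptions, not the values used in the supplemental paper.

```python
import numpy as np
from skimage.feature import structure_tensor, structure_tensor_eigenvalues

# Synthetic 701x701 stand-in for one test slice: vertical stripes,
# i.e. fibres oriented along the y axis.
yy, xx = np.mgrid[0:701, 0:701]
slice_img = np.sin(xx / 5.0)

# Smoothed outer products of the image gradients, then eigenvalues.
A_elems = structure_tensor(slice_img, sigma=2.0)
eigvals = structure_tensor_eigenvalues(A_elems)

# A large eigenvalue gap indicates a strong, coherent fibre orientation.
anisotropy = (eigvals[0] - eigvals[-1]) / (eigvals[0] + eigvals[-1] + 1e-12)
print("mean anisotropy:", anisotropy.mean())
```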
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
[Context/Motivation] Integrating user feedback into software development enhances system acceptance, decreases the likelihood of project failure and strengthens customer loyalty. Moreover, user feedback plays an important role in software evolution, because it can be the basis for deriving requirements. [Problems] However, to be able to derive requirements from feedback, the feedback must contain actionable change requests, that is, contain detailed information regarding a change to the application. Furthermore, requirements engineers must know how many users support the change request. [Principal ideas] To address these challenges, we propose an approach that uses structured questions to transform non-actionable change requests into actionable ones and to validate the change requests to assess their support among the users. We evaluate the approach in the large-scale research project SMART-AGE with over 200 older adults, aged 67 and older. [Contribution] We contribute a set of templates for our questions and our process, and we evaluate the approach’s feasibility, effectiveness and user satisfaction, resulting in very positive outcomes.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data set was collected to validate a 'smart' knee brace, with an IMU embedded at the thigh and shank, against a reference motion capture system (Vicon). Ten participants took part, and each came into the lab for two sessions on separate days. In each session, participants completed three trials of 2-minute treadmill walking at their preferred walking speed, three trials of 15 squats to parallel, three trials of 10 sit-to-stands on a chair at about knee level, three trials of 15 total alternating lunges, and three trials of 2-minute treadmill jogging at their preferred speed, in that order. Some participants performed 10 squats and 10 lunges in their first session but 15 in the second (a .txt file is included for each participant's session to specify the counts). This dataset contains only the IMU data.
This feature dataset contains the control points used to validate the accuracies of the interpolated water density rasters for the Gulf of Maine. These control points were selected randomly from the water density data points using Hawth's Create Random Selection Tool. Twenty-five percent of each seasonal bin (for each year and at each depth) was randomly selected and set aside for validation. For example, if there were 1,000 water density data points for the fall (September, October, November) of 2003 at 0 meters, then 250 of those points were randomly selected, removed, and set aside to assess the accuracy of the interpolated surface. The naming convention of the validation point feature class includes the year (or years), the season, and the depth (in meters) it was selected from. For example, the name ValidationPoints_1997_2004_Fall_0m indicates that the point feature class was randomly selected from water density points at 0 meters in the fall between 1997 and 2004. The seasons were defined using the same months as the remote sensing data, namely: Fall = September, October, November; Winter = December, January, February; Spring = March, April, May; and Summer = June, July, August.
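The per-bin 25% holdout could be reproduced programmatically along these lines; the toy points and attribute names below are assumptions, since the original selection was done with Hawth's tool in ArcGIS.

```python
import random
from collections import defaultdict

random.seed(0)

# Toy stand-ins for water density points; real points would also carry
# coordinates and a density value.
density_points = [
    {"years": "1997_2004", "season": "Fall", "depth_m": 0, "id": i}
    for i in range(1000)
]

# Group points by (years, season, depth) bin.
bins = defaultdict(list)
for p in density_points:
    bins[(p["years"], p["season"], p["depth_m"])].append(p)

# Within each bin, set aside 25% of points for validation.
validation, interpolation = [], []
for pts in bins.values():
    held_out = set(random.sample(range(len(pts)), round(0.25 * len(pts))))
    for i, p in enumerate(pts):
        (validation if i in held_out else interpolation).append(p)

print(len(validation), "validation points;", len(interpolation), "retained")
```

With the 1,000-point fall 2003 example above, this yields 250 validation and 750 interpolation points.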
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Global Positioning System (GPS) satellite navigation validation marks are unique visible markers located at a number of public boat ramps and associated jetties, which mariners or owners of portable GPS units can use to validate their position and map datum settings.
An approximately 1/75th-scale point absorber wave energy converter was built to validate the testing systems of a 16,000-gallon single-paddle wave tank. The model was built based on the RM3 design and incorporated a linear position sensor, a force transducer, and wetness detection sensors. The data set also includes motion tracking data of the device's two bodies, acquired from four Qualisys cameras. The tank wave spectrum was measured by four ultrasonic water height sensors.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These data were used to validate the accuracy of the newly developed multi-length coupling model. Water and salt transport experiments under freezing conditions were carried out in the laboratory to determine the mass salt content and volumetric water content at different heights of the soil column after freezing.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets were used to validate and test the data pipeline deployment following the RADON approach. The dataset has a single CSV file containing around 32,000 Twitter tweets. One hundred CSV files were created from this file, each containing 320 tweets. Those 100 CSV files are used to validate and test (performance/load testing) the data pipeline components.
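The split into 100 files of 320 rows each could be done along these lines; the file names and the presence of a header row are assumptions.

```python
import csv

# Read the source CSV once (header assumed), then write 100 chunks of
# 320 rows each, repeating the header in every output file.
with open("tweets.csv", newline="", encoding="utf-8") as src:
    reader = csv.reader(src)
    header = next(reader)
    rows = list(reader)

for i in range(100):
    chunk = rows[i * 320:(i + 1) * 320]
    with open(f"tweets_{i:03d}.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(chunk)
```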
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Total onboard validations (banded) by date, mode, route, direction, stop and media type
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental data obtained from the inspection of steel disks, used to validate the UT-SAFT resolution formulas in the referenced paper 'UT-SAFT resolution'. The purpose is to show that the indication size of small test reflectors matches well with the resolution formulas derived in that paper.
U.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
These data are part of the Gulf Watch Alaska (GWA) long-term monitoring program, pelagic monitoring component. This dataset consists of one table providing fish aerial validation data from summer surveys in Prince William Sound, Alaska. Data include: date, location, latitude, longitude, aerial ID, validation ID, total length and validation method. Various catch methods were used to obtain fish samples for aerial validations, including: cast net, GoPro, hydroacoustics, jig, dip net, gill net, purse seine, photo and visual identification.
https://doc.transport.data.gouv.fr/le-point-d-acces-national/cadre-juridique/conditions-dutilisation-des-donnees/licence-odbl
Validation data are updated at the end of February and the end of August.
This dataset presents the number of traveler validations per day, per stop, and per transport ticket on the rail network.
Data updates
The data are updated semi-annually. Documentation and dataset information may be updated more regularly.
Documentation
Documentation for the validation data is available: consult the documentation on validation data.
Validation datasets
To access validation data from previous years and other validation datasets, you can consult the validation data history.