100+ datasets found
  1. Data from: Selection of optimal validation methods for quantitative structure–activity relationships and applicability domain

    • tandf.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    K. Héberger (2023). Selection of optimal validation methods for quantitative structure–activity relationships and applicability domain [Dataset]. http://doi.org/10.6084/m9.figshare.23185916.v1
    Available download formats: xlsx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    K. Héberger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This brief literature survey groups the (numerical) validation methods and highlights the contradictions and confusion surrounding bias, variance and predictive performance. A multicriteria decision-making analysis was made using the sum of absolute ranking differences (SRD), illustrated with five case studies (seven examples). SRD was applied to compare external and cross-validation techniques and indicators of predictive performance, and to select optimal methods to determine the applicability domain (AD). The ordering of model validation methods was in accordance with the claims of the original authors, but those claims contradict one another, suggesting that any variant of cross-validation can be superior or inferior to other variants depending on the algorithm, the data structure and the circumstances of application. A simple fivefold cross-validation proved to be superior to the Bayesian Information Criterion in the vast majority of situations. It is simply not sufficient to test a numerical validation method in one situation only, even if it is a well-defined one. SRD, as a preferable multicriteria decision-making algorithm, is suitable for tailoring the techniques for validation and for the optimal determination of the applicability domain according to the dataset in question.
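The SRD calculation itself is simple enough to sketch: rank the cases by each method and by a reference (here the row-wise mean, one common choice when no gold standard exists), then sum the absolute rank differences per method. The following is an illustrative reconstruction, not the author's code, and it ignores tie handling:

```python
import numpy as np

def srd_scores(X, reference=None):
    """Sum of absolute Ranking Differences (illustrative sketch).

    X : (n_cases, n_methods) array, one column per method.
    reference : benchmark values per case; defaults to the row-wise
        mean, a common choice when no gold standard exists.

    Returns one SRD per method: the sum over cases of the absolute
    difference between the case's rank under that method and its
    rank under the reference. Smaller SRD = closer to the reference
    ordering. (Ties would need fractional ranks; omitted here.)
    """
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = X.mean(axis=1)
    # argsort of argsort converts values to 0-based ranks
    ref_rank = np.argsort(np.argsort(reference))
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    return np.abs(ranks - ref_rank[:, None]).sum(axis=0)

# A method that preserves the reference ordering scores SRD = 0.
X = np.array([[1.0, 3.0],
              [2.0, 1.0],
              [3.0, 2.0]])
print(srd_scores(X, reference=np.array([1.0, 2.0, 3.0])))  # [0 4]
```

In the study the SRD values are additionally validated against the distribution obtained from random rankings; that step is omitted here.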

  2. FDA Drug Product Labels Validation Method Data Package

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). FDA Drug Product Labels Validation Method Data Package [Dataset]. https://www.johnsnowlabs.com/marketplace/fda-drug-product-labels-validation-method-data-package/
    Available download formats: csv
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Description

    This data package contains Structured Product Labeling (SPL) terminology for SPL validation procedures, along with information on performing SPL validations.

  3. Summary of Classifiers, Features, Validation Techniques and Sample Sizes used in this study

    • plos.figshare.com
    xls
    Updated Dec 2, 2015
    Cite
    Paul Fergus; Pauline Cheung; Abir Hussain; Dhiya Al-Jumeily; Chelsea Dobbins; Shamaila Iram (2015). Summary of Classifiers, Features, Validation Techniques and Sample Sizes used in this study. [Dataset]. http://doi.org/10.1371/journal.pone.0077154.t002
    Available download formats: xls
    Dataset updated
    Dec 2, 2015
    Dataset provided by
    PLOS ONE
    Authors
    Paul Fergus; Pauline Cheung; Abir Hussain; Dhiya Al-Jumeily; Chelsea Dobbins; Shamaila Iram
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of Classifiers, Features, Validation Techniques and Sample Sizes used in this study.

  4. Data from: Cross-Validation With Confidence

    • tandf.figshare.com
    zip
    Updated May 31, 2023
    Cite
    Jing Lei (2023). Cross-Validation With Confidence [Dataset]. http://doi.org/10.6084/m9.figshare.9976901.v3
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Jing Lei
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Cross-validation is one of the most popular model and tuning-parameter selection methods in statistics and machine learning. Despite its wide applicability, traditional cross-validation methods tend to overfit because they ignore the uncertainty in the testing sample. We develop a novel, statistically principled inference tool based on cross-validation that takes this uncertainty into account. The method outputs a set of highly competitive candidate models that contains the optimal one with guaranteed probability. As a consequence, our method can achieve consistent variable selection in a classical linear regression setting, for which existing cross-validation methods require unconventional split ratios. When used for tuning-parameter selection, the method provides a trade-off between prediction accuracy and model interpretability different from that of existing variants of cross-validation. We demonstrate the performance of the proposed method in several simulated and real data examples. Supplemental materials for this article can be found online.
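The flavor of the proposal, returning a set of statistically indistinguishable models instead of a single argmin, can be conveyed with a toy sketch. The normal-approximation screen on paired fold-wise losses below is a deliberate simplification for illustration, not the paper's actual testing procedure:

```python
import numpy as np

def cv_confidence_set(fold_losses, z_alpha=1.645):
    """Keep every model whose CV loss is not significantly worse
    than the best model's, judged on paired per-fold differences
    with a normal approximation (illustrative simplification).

    fold_losses : (n_models, n_folds) per-fold CV losses.
    Returns the indices of the retained models.
    """
    fold_losses = np.asarray(fold_losses, dtype=float)
    n_models, n_folds = fold_losses.shape
    best = int(fold_losses.mean(axis=1).argmin())
    keep = []
    for m in range(n_models):
        d = fold_losses[m] - fold_losses[best]  # paired differences
        se = d.std(ddof=1) / np.sqrt(n_folds)
        # retain m unless its mean excess loss clears the margin
        if d.mean() <= z_alpha * se:
            keep.append(m)
    return keep

losses = [[1.00, 1.00, 1.00, 1.00, 1.00],   # model 0: the best
          [1.01, 0.99, 1.00, 1.02, 0.98],   # model 1: within noise of it
          [2.00, 2.00, 2.00, 2.00, 2.00]]   # model 2: clearly worse
print(cv_confidence_set(losses))  # [0, 1]
```

Models 0 and 1 are indistinguishable on five folds, so both are kept, while the clearly inferior model 2 is screened out.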

  5. Python functions -- cross-validation methods from a data-driven perspective

    • phys-techsciences.datastations.nl
    • zenodo.org
    docx, png +4
    Updated Aug 16, 2024
    + more versions
    Cite
    Y. Wang; Y. Wang (2024). Python functions -- cross-validation methods from a data-driven perspective [Dataset]. http://doi.org/10.17026/PT/TXAU9W
    Available download formats: docx, png, tiff, tsv, txt, text/x-python
    Dataset updated
    Aug 16, 2024
    Dataset provided by
    DANS Data Station Physical and Technical Sciences
    Authors
    Y. Wang
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This package contains organized Python implementations of the methods proposed in Yanwen Wang's PhD research. Researchers can use these functions directly to conduct spatial+ cross-validation, the dissimilarity quantification method, and dissimilarity-adaptive cross-validation.
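For context, the baseline this family of methods builds on, spatial block cross-validation with spatially contiguous folds rather than random subsets, can be sketched as follows. This illustration is not one of the packaged functions; the dataset's spatial+ and dissimilarity-adaptive variants go further:

```python
import numpy as np

def spatial_block_folds(x, n_blocks=3):
    """Split points into strips by one coordinate and hold out one
    strip per fold, so test points are spatially separated from
    training points instead of randomly interleaved.

    x : 1-D array of point coordinates along one axis.
    Returns a list of (train_indices, test_indices) pairs.
    """
    x = np.asarray(x, dtype=float)
    # strip edges at equally spaced quantiles of the coordinate
    edges = np.quantile(x, np.linspace(0, 1, n_blocks + 1))
    block = np.digitize(x, edges[1:-1])  # strip id, 0..n_blocks-1
    return [(np.where(block != b)[0], np.where(block == b)[0])
            for b in range(n_blocks)]

xs = np.array([0.10, 0.20, 0.45, 0.55, 0.80, 0.90])
for train, test in spatial_block_folds(xs, n_blocks=3):
    print(train.tolist(), test.tolist())
```

Each fold holds out one strip of spatially adjacent points, which is what makes the estimated error honest when predictions are extrapolated to unsampled regions.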

  6. Data from: Sensor Validation using Bayesian Networks

    • catalog.data.gov
    • data.nasa.gov
    • +1more
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). Sensor Validation using Bayesian Networks [Dataset]. https://catalog.data.gov/dataset/sensor-validation-using-bayesian-networks
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    One of NASA’s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation techniques address this problem: given a vector of sensor readings, decide whether sensors have failed, therefore producing bad data. We take in this paper a probabilistic approach, using Bayesian networks, to diagnosis and sensor validation, and investigate several relevant but slightly different Bayesian network queries. We emphasize that on-board inference can be performed on a compiled model, giving fast and predictable execution times. Our results are illustrated using an electrical power system, and we show that a Bayesian network with over 400 nodes can be compiled into an arithmetic circuit that can correctly answer queries in less than 500 microseconds on average.

    Reference: O. J. Mengshoel, A. Darwiche, and S. Uckun, "Sensor Validation using Bayesian Networks." In Proc. of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08), Los Angeles, CA, 2008.

    BibTeX: @inproceedings{mengshoel08sensor, author = {Mengshoel, O. J. and Darwiche, A. and Uckun, S.}, title = {Sensor Validation using {Bayesian} Networks}, booktitle = {Proceedings of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08)}, year = {2008} }
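The central query, how likely it is that the sensor rather than the component is at fault given a reading, can be shown on a deliberately tiny network computed by brute-force enumeration. All numbers and the three-node structure below are invented for illustration; the paper's model has over 400 nodes and is compiled into an arithmetic circuit instead of being enumerated:

```python
# Posterior probability that a sensor has failed, given its reading,
# by enumeration over a toy 3-node Bayesian network:
#   C = true component state, F = sensor health, R = sensor reading.
P_C = {"ok": 0.95, "bad": 0.05}          # prior on component state
P_F = {"healthy": 0.99, "failed": 0.01}  # prior on sensor health

def p_reading(r, c, f):
    """P(R = r | C = c, F = f): a healthy sensor reports the true
    state with high probability; a failed sensor reports noise."""
    if f == "healthy":
        return 0.98 if r == c else 0.02
    return 0.5

def posterior_sensor_failed(r):
    """P(F = failed | R = r), by summing the full joint."""
    joint = {}
    for c, pc in P_C.items():
        for f, pf in P_F.items():
            joint[(c, f)] = pc * pf * p_reading(r, c, f)
    z = sum(joint.values())
    return sum(p for (c, f), p in joint.items() if f == "failed") / z

# A 'bad' reading is rare for a healthy sensor on a healthy component,
# so it shifts posterior mass onto sensor failure.
print(round(posterior_sensor_failed("bad"), 3))  # 0.069
```

That shift in posterior mass, from "the component is bad" toward "the sensor itself has failed", is exactly the decision the on-board query supports.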

  7. Supplementary Materials: Choosing Validation Methods for Agent-Based Models - R Scripts, Data, and Visualizations

    • zenodo.org
    bin, csv, png
    Updated Jun 10, 2025
    Cite
    Artem Serdyuk; Artem Serdyuk (2025). Supplementary Materials: Choosing Validation Methods for Agent-Based Models - R Scripts, Data, and Visualizations [Dataset]. http://doi.org/10.5281/zenodo.15633195
    Available download formats: bin, csv, png
    Dataset updated
    Jun 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Artem Serdyuk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains supplementary materials, including R scripts, data files, figures, and documentation for the agent-based model validation framework presented in the article. README.md includes a detailed description.

  8. Validation for Carbon Monoxide HMMs

    • ordo.open.ac.uk
    txt
    Updated Sep 28, 2024
    + more versions
    Cite
    Vincent Rennie (2024). Validation for Carbon Monoxide HMMs [Dataset]. http://doi.org/10.21954/ou.rd.21222113.v1
    Available download formats: txt
    Dataset updated
    Sep 28, 2024
    Dataset provided by
    The Open University
    Authors
    Vincent Rennie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data relating to the thesis: Constraining Microbial Community Composition, Structure, and Function Within Acidic Geothermal Springs on São Miguel Island

    This is a set of proteins used to validate the HMMs constructed in the thesis.

    These proteins are known to be actively involved in carbon monoxide oxidation.

  9. Data from: A New Q-matrix Validation Method based on Signal Detection Theory

    • osf.io
    Updated Dec 25, 2024
    Cite
    Jia Li (2024). A New Q-matrix Validation Method based on Signal Detection Theory [Dataset]. https://osf.io/tu4hs
    Dataset updated
    Dec 25, 2024
    Dataset provided by
    Center For Open Science
    Authors
    Jia Li
    Description

    No description was included in this Dataset collected from the OSF

  10. Data Repository for 'Bootstrap aggregation and cross-validation methods to reduce overfitting in reservoir control policy search'

    • beta.hydroshare.org
    • hydroshare.org
    • +1more
    zip
    Updated Jun 24, 2020
    Cite
    Zachary Paul Brodeur; Scott S. Steinschneider; Jonathan D. Herman (2020). Data Repository for 'Bootstrap aggregation and cross-validation methods to reduce overfitting in reservoir control policy search' [Dataset]. http://doi.org/10.4211/hs.b8f87a7b680d44cebfb4b3f4f4a6a447
    Available download formats: zip (8.3 MB)
    Dataset updated
    Jun 24, 2020
    Dataset provided by
    HydroShare
    Authors
    Zachary Paul Brodeur; Scott S. Steinschneider; Jonathan D. Herman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 1922 - Sep 30, 2016
    Description

    Policy search methods provide a heuristic mapping between observations and decisions and have been widely used in reservoir control studies. However, recent studies have observed a tendency for policy search methods to overfit to the hydrologic data used in training, particularly the sequence of flood and drought events. This technical note develops an extension of bootstrap aggregation (bagging) and cross-validation techniques, inspired by the machine learning literature, to improve control policy performance on out-of-sample hydrology. We explore these methods in a case study of Folsom Reservoir, California, using control policies structured as binary trees and daily streamflow resampling based on the paleo-inflow record. Results show that calibration-validation strategies for policy selection and certain ensemble aggregation methods can improve out-of-sample tradeoffs between water supply and flood risk objectives over baseline performance given fixed computational costs. These results highlight the potential to improve policy search methodologies by leveraging well-established model training strategies from machine learning.
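The resample-refit-aggregate loop at the heart of bagging is easy to sketch. The single-threshold release rule below is an invented stand-in for the study's binary-tree control policies, and the records are made up; only the shape of the procedure matches the note:

```python
import random

def fit_policy(records):
    """Stand-in 'policy search': choose the release cap t minimizing
    squared error between min(inflow, t) and the observed release.
    (The study searches binary-tree policies; this is a toy.)"""
    best_t, best_err = None, float("inf")
    for t in range(0, 101, 5):
        err = sum((min(inflow, t) - release) ** 2
                  for inflow, release in records)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_policy(records, n_boot=25, seed=0):
    """Bootstrap aggregation: refit the policy on resampled records
    and average the fitted parameters (an illustration of the idea,
    not the authors' implementation)."""
    rng = random.Random(seed)
    fits = []
    for _ in range(n_boot):
        sample = [rng.choice(records) for _ in records]
        fits.append(fit_policy(sample))
    return sum(fits) / len(fits)

# (inflow, observed release) pairs, invented for the example
records = [(30, 30), (50, 45), (80, 60), (120, 60), (200, 60)]
print(fit_policy(records), bagged_policy(records))
```

In the study the resampling is over daily streamflow sequences rather than independent records, and the candidate aggregation schemes are themselves compared on held-out hydrology.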

  11. Thermal Validation System Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jul 9, 2025
    Cite
    Data Insights Market (2025). Thermal Validation System Report [Dataset]. https://www.datainsightsmarket.com/reports/thermal-validation-system-1521853
    Available download formats: ppt, doc, pdf
    Dataset updated
    Jul 9, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global thermal validation system market is experiencing robust growth, driven by increasing regulatory scrutiny across pharmaceutical, biotechnology, and food processing industries. Stringent quality control standards and the need for accurate temperature monitoring throughout the manufacturing and storage processes are key factors fueling market expansion. The market is segmented by system type (e.g., autoclaves, ovens, incubators), application (pharmaceutical, food & beverage, etc.), and end-user (contract research organizations, pharmaceutical manufacturers, etc.). Technological advancements, such as the integration of IoT sensors and cloud-based data analysis, are enhancing the capabilities of thermal validation systems, leading to improved efficiency and data management. Furthermore, the rising demand for sophisticated validation techniques to comply with international regulations like GMP and FDA guidelines is further bolstering market growth. We estimate the 2025 market size to be approximately $850 million, growing at a Compound Annual Growth Rate (CAGR) of 7% from 2025 to 2033. This growth reflects the increasing adoption of advanced technologies and the expanding regulatory landscape in key regions like North America and Europe. Competition in the thermal validation system market is intense, with several established players and emerging companies vying for market share. Key players like Kaye, Ellab, and Thermo Fisher Scientific are leveraging their strong brand reputation and technological expertise to maintain market leadership. However, smaller, specialized firms are also gaining traction by offering niche solutions and innovative technologies. The market is expected to witness further consolidation in the coming years, with strategic acquisitions and partnerships playing a crucial role in shaping the competitive landscape. 
Geographic expansion, particularly in emerging markets in Asia-Pacific and Latin America, represents a significant growth opportunity for market participants. The restraints to growth include the high initial investment cost associated with implementing thermal validation systems and the need for skilled personnel to operate and maintain these systems.

  12. Data for training, validation and testing of methods in the thesis: Camera-based Accuracy Improvement of Indoor Localization

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 1, 2021
    Cite
    Lucia Hajduková (2021). Data for training, validation and testing of methods in the thesis: Camera-based Accuracy Improvement of Indoor Localization [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4730337
    Dataset updated
    May 1, 2021
    Dataset authored and provided by
    Lucia Hajduková
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The package contains files for two modules designed to improve the accuracy of the indoor positioning system:

    door detection

    videos_test - videos used to demonstrate the application of the door detector

    videos_res - videos from the videos_test directory, with detected doors marked

    parts detection

    frames_train_val - images generated from videos, used for training and validation of the VGG16 neural network model

    frames_test - images generated from videos, used for testing the trained model

    videos_test - videos used to demonstrate the application of the parts detector

    videos_res - videos from the videos_test directory, with detected parts marked

  13. Data from: Proxies in practice: calibration and validation of multiple indices of animal abundance

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Mar 2, 2016
    Cite
    Matthew R. Falcy; Joshua L. McCormick; Shelly A. Miller (2016). Proxies in practice: calibration and validation of multiple indices of animal abundance [Dataset]. http://doi.org/10.5061/dryad.nk513
    Available download formats: zip
    Dataset updated
    Mar 2, 2016
    Authors
    Matthew R. Falcy; Joshua L. McCormick; Shelly A. Miller
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Oregon
    Description

    The abundance of individuals in a population is a fundamental metric in basic and applied ecology, but sampling protocols yielding precise and unbiased estimates of abundance are often cost prohibitive. Proxies of abundance are therefore common, but require calibration and validation. There are many ways to calibrate a proxy, and it is not obvious which will perform best. We use data from eight populations of Chinook salmon (Oncorhynchus tshawytscha) on the Oregon coast where multiple proxies of abundance were obtained contemporaneously with independent mark-recapture estimates. We combined multiple proxy values associated with a single level of abundance into a univariate index and then calibrated that index to mark-recapture estimates using several different techniques. We tested our calibration methods using leave-one-out cross validation and simulation. Our cross-validation analysis did not definitively identify a single best calibration technique for all populations, but we could identify consistently inferior approaches. The simulations suggested that incorporating the known mark-recapture uncertainty into the calibration technique added bias and imprecision. Cross validation techniques should be used to test multiple methods of calibrating multiple proxies to an estimate of abundance. Critical uncertainties with the application of calibrated proxies still exist, and cost-benefit analysis should be performed to help identify optimal monitoring designs.
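The evaluation step is simple to sketch: refit each candidate calibration on all but one pair and score the held-out prediction. The two calibration rules and the index/abundance numbers below are hypothetical stand-ins, chosen only to show the leave-one-out comparison:

```python
import numpy as np

def loo_rmse(x, y, fit_predict):
    """Leave-one-out cross-validation of a calibration rule: refit on
    all-but-one (index, abundance) pair, predict the held-out
    abundance, and return the RMSE of those predictions."""
    errs = []
    n = len(x)
    for i in range(n):
        mask = np.arange(n) != i
        pred = fit_predict(x[mask], y[mask], x[i])
        errs.append(pred - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))

def linear(xt, yt, x0):
    """Ordinary least-squares line through the training pairs."""
    b, a = np.polyfit(xt, yt, 1)
    return a + b * x0

def ratio(xt, yt, x0):
    """Ratio estimator: abundance proportional to the proxy index."""
    return yt.sum() / xt.sum() * x0

# Proxy index vs. (made-up) mark-recapture abundance estimates
idx = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
abund = np.array([110.0, 190.0, 320.0, 390.0, 510.0, 600.0])
print(loo_rmse(idx, abund, linear), loo_rmse(idx, abund, ratio))
```

As in the study, the same held-out scores can be compared across populations; a rule that wins on one population need not win on another, which is why the authors report consistently inferior rather than uniformly best techniques.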

  14. GPM GROUND VALIDATION NOAA CPC MORPHING TECHNIQUE (CMORPH) IFLOODS

    • s.cnmilf.com
    • cmr.earthdata.nasa.gov
    • +4more
    Updated Jun 28, 2025
    + more versions
    Cite
    NASA/MSFC/GHRC (2025). GPM GROUND VALIDATION NOAA CPC MORPHING TECHNIQUE (CMORPH) IFLOODS [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/gpm-ground-validation-noaa-cpc-morphing-technique-cmorph-ifloods
    Dataset updated
    Jun 28, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The GPM Ground Validation NOAA CPC Morphing Technique (CMORPH) IFloodS dataset consists of global precipitation analyses data produced by the NOAA Climate Prediction Center (CPC). The Iowa Flood Studies (IFloodS) campaign was a ground measurement campaign that took place in eastern Iowa from May 1 to June 15, 2013. The goals of the campaign were to collect detailed measurements of precipitation at the Earth's surface using ground instruments and advanced weather radars and, simultaneously, to collect data from satellites passing overhead. The CPC morphing technique uses precipitation estimates from low-orbiter satellite microwave observations to produce global precipitation analyses at a high temporal and spatial resolution. Data have been selected to cover the IFloodS field campaign period, from April 1, 2013 to June 30, 2013. The dataset includes both the near real-time raw data and bias-corrected data from NOAA in binary and netCDF format.

  15. Data from: Development and validation of HBV surveillance models using big data and machine learning

    • tandf.figshare.com
    docx
    Updated Dec 3, 2024
    Cite
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong (2024). Development and validation of HBV surveillance models using big data and machine learning [Dataset]. http://doi.org/10.6084/m9.figshare.25201473.v1
    Available download formats: docx
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The construction of a robust healthcare information system is fundamental to enhancing countries’ capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China’s rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization’s (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital’s Hospital Information System were structuralized using state-of-the-art Natural Language Processing techniques. Several ML models were used to develop HBV risk assessment models. The performance of the ML models was then interpreted using Shapley values (SHAP) and validated using cohort data randomly divided at a ratio of 2:1 within a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV using patients’ physical complaints and parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness-of-fit test p-value >0.05). Suspected case detection models of HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in the primary care setting in China.

    This study has developed a suspected case detection model for HBV that can facilitate early identification and treatment of HBV in the primary care setting in China, contributing towards the achievement of the WHO’s elimination goals for HBV infection. We utilized state-of-the-art natural language processing techniques to structure the data records, leading to the development of a robust healthcare information system that enhances the surveillance and control of HBV in China.
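The validation framework described, five-fold cross-validation scored by AUC, can be sketched in a self-contained way. The stratified folds, rank-based AUC, and mean-difference scorer below are illustrative assumptions; the study's NLP features and six modelling methods are not reproduced:

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the rank (Mann-Whitney U) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def five_fold_auc(X, y, fit):
    """Stratified five-fold cross-validated AUC: every fifth case of
    each class is held out in turn, the scorer is refit on the rest,
    and the fold AUCs are averaged."""
    pos_idx, neg_idx = np.where(y == 1)[0], np.where(y == 0)[0]
    aucs = []
    for fold in range(5):
        test = np.concatenate([pos_idx[fold::5], neg_idx[fold::5]])
        train = np.setdiff1d(np.arange(len(y)), test)
        aucs.append(auc(y[test], fit(X[train], y[train], X[test])))
    return float(np.mean(aucs))

def mean_diff_scorer(Xt, yt, X0):
    """Toy scorer: project onto the difference of class means."""
    w = Xt[yt == 1].mean(axis=0) - Xt[yt == 0].mean(axis=0)
    return X0 @ w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),    # class 0 features
               rng.normal(1.5, 1.0, (50, 3))])   # class 1, shifted
y = np.array([0] * 50 + [1] * 50)
print(five_fold_auc(X, y, mean_diff_scorer))
```

Stratifying the folds keeps both classes present in every test split, which is what makes the per-fold AUC well defined.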

  16. Dataset for the publication "Theory and Experimental Validation of Two Techniques for Compensating VT Nonlinearities"

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Nov 16, 2023
    + more versions
    Cite
    Luiso Mario (2023). Dataset for the publication "Theory and Experimental Validation of Two Techniques for Compensating VT Nonlinearities" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7436627
    Dataset updated
    Nov 16, 2023
    Dataset provided by
    D'Avanzo Giovanni
    Landi Carmine
    Toscani Sergio
    Faifer Marco
    Luiso Mario
    Letizia Palma Sara
    Laurano Christian
    Ottoboni Roberto
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the dataset for the published paper:

    G. D’Avanzo et al., "Theory and Experimental Validation of Two Techniques for Compensating VT Nonlinearities," in IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1-12, 2022, Art no. 9001312, doi: 10.1109/TIM.2022.3147883.

  17. GPM GROUND VALIDATION NOAA CPC MORPHING TECHNIQUE (CMORPH) IPHEX V1

    • catalog.data.gov
    • data.nasa.gov
    • +1more
    Updated Apr 10, 2025
    + more versions
    Cite
    NASA/MSFC/GHRC (2025). GPM GROUND VALIDATION NOAA CPC MORPHING TECHNIQUE (CMORPH) IPHEX V1 [Dataset]. https://catalog.data.gov/dataset/gpm-ground-validation-noaa-cpc-morphing-technique-cmorph-iphex-v1-623f6
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The GPM Ground Validation NOAA CPC Morphing Technique (CMORPH) IPHEx dataset consists of global precipitation analyses data produced by the NOAA Climate Prediction Center (CPC) during the Global Precipitation Mission (GPM) Integrated Precipitation and Hydrology Experiment (IPHEx) field campaign in North Carolina. The goal of IPHEx was to evaluate the accuracy of satellite precipitation measurements and use the collected data for hydrology models in the region. The CPC morphing technique uses precipitation estimates from low orbiter satellite microwave observations to produce global precipitation analyses at a high temporal and spatial resolution. CMORPH data has been selected from May 1, 2014 through June 14, 2014, during the IPHEx field campaign. These data files are available in raw binary and netCDF-4 file format.

  18. Development and validation of methods for identification and quality assessment of in vitro research

    • osf.io
    Updated Aug 5, 2023
    Cite
    Emma Wilson; Florenz Cruz; Jing Liao; Sarah McCann; Malcolm Macleod; Emily Sena (2023). Development and validation of methods for identification and quality assessment of in vitro research [Dataset]. http://doi.org/10.17605/OSF.IO/AHFR3
    Dataset updated
    Aug 5, 2023
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Emma Wilson; Florenz Cruz; Jing Liao; Sarah McCann; Malcolm Macleod; Emily Sena
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We aim to validate machine learning methods for accurate identification of in vitro research and the subsequent assessment of risk of bias reporting of these papers. In doing so, we will address the research question: Has the reporting of risk of bias measures relevant to in vitro experiments improved over time?

  19. Data from: Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms

    • catalog.data.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    • +1more
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms [Dataset]. https://catalog.data.gov/dataset/development-of-a-mobile-robot-test-platform-and-methods-for-validation-of-prognostics-enab
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator.

  20. PEN-Method: Predictor model and Validation Data

    • data.mendeley.com
    • narcis.nl
    Updated Sep 3, 2021
    Cite
    Alex Halle (2021). PEN-Method: Predictor model and Validation Data [Dataset]. http://doi.org/10.17632/459f33wxf6.4
    Dataset updated
    Sep 3, 2021
    Authors
    Alex Halle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the PEN-Predictor-Keras-Model as well as the 100 validation data sets.
