100+ datasets found
  1. Auction Verification Dataset

    • kaggle.com
    zip
    Updated Apr 24, 2024
    Cite
    Rabie El Kharoua (2024). Auction Verification Dataset [Dataset]. https://www.kaggle.com/datasets/rabieelkharoua/auction-verification-dataset/data
    Explore at:
    Available download formats: zip (15678 bytes)
    Dataset updated
    Apr 24, 2024
    Authors
    Rabie El Kharoua
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We modeled a simultaneous multi-round auction with BPMN models, transformed the latter to Petri nets, and used a model checker to verify whether certain outcomes of the auction are possible or not.

    Dataset Characteristics: Tabular

    Subject Area: Computer Science

    Associated Tasks: Classification, Regression

    Instances: 2043

    Features: 7

    Dataset Information

    For what purpose was the dataset created? The dataset was created as part of a scientific study. The goal was to find out whether one could replace costly verification of complex process models (here: simultaneous multi-round auctions, as used for auctioning frequency spectra) with predictions of the outcome.

    What do the instances in this dataset represent? Each instance represents one verification run. Verification checks whether a particular price is possible for a particular product, and (for only some of the instances) whether a particular bidder might win the product at that price.

    Additional Information Our code to prepare the dataset and to make predictions is available here: https://github.com/Jakob-Bach/Analyzing-Auction-Verification

    Has Missing Values? No

    Introductory Paper

    Title: Analyzing and Predicting Verification of Data-Aware Process Models – a Case Study with Spectrum Auctions

    Authors: Elaheh Ordoni, Jakob Bach, Ann-Katrin Fleck. 2022


    Abstract of Introductory Paper

    Verification techniques play an essential role in detecting undesirable behaviors in many applications like spectrum auctions. By verifying an auction design, one can detect the least favorable outcomes, e.g., the lowest revenue of an auctioneer. However, verification may be infeasible in practice, given the vast size of the state space on the one hand and the large number of properties to be verified on the other hand. To overcome this challenge, we leverage machine-learning techniques. In particular, we create a dataset by verifying properties of a spectrum auction first. Second, we use this dataset to analyze and predict outcomes of the auction and characteristics of the verification procedure. To evaluate the usefulness of machine learning in the given scenario, we consider prediction quality and feature importance. In our experiments, we observe that prediction models can capture relationships in our dataset well, though one needs to be careful to obtain a representative and sufficiently large training dataset. While the focus of this article is on a specific verification scenario, our analysis approach is general and can be adapted to other domains.

    Cite

    Citation: Ordoni, Elaheh, Bach, Jakob, and Fleck, Ann-Katrin (2022). Auction Verification. UCI Machine Learning Repository. https://doi.org/10.24432/C52K6N.

    BibTeX: @misc{misc_auction_verification_713, author = {Ordoni, Elaheh and Bach, Jakob and Fleck, Ann-Katrin}, title = {{Auction Verification}}, year = {2022}, howpublished = {UCI Machine Learning Repository}, note = {{DOI}: https://doi.org/10.24432/C52K6N}}

    Import in Python

    pip install ucimlrepo

    from ucimlrepo import fetch_ucirepo

    # fetch dataset
    auction_verification = fetch_ucirepo(id=713)

    # data (as pandas dataframes)
    X = auction_verification.data.features
    y = auction_verification.data.targets

    # metadata
    print(auction_verification.metadata)

    # variable information
    print(auction_verification.variables)
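    As a quick illustration of the associated classification task, the sketch below trains a scikit-learn classifier on synthetic stand-in data with the same shape as this dataset (7 features, a binary outcome standing in for the verification result). This is a minimal sketch, not the study's method; the real features and targets fetched above can be dropped in instead.

```python
# Minimal sketch: predict a binary verification outcome from tabular features.
# The data below is a synthetic stand-in; swap in the X/y frames from
# fetch_ucirepo(id=713) for the real task.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = rng.integers(1, 5, size=(n, 7))        # 7 features, like the dataset
y = (X.sum(axis=1) > 17).astype(int)       # stand-in "verification.result"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```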

  2. Data from: Development and validation of HBV surveillance models using big data and machine learning

    • tandf.figshare.com
    docx
    Updated Dec 3, 2024
    Cite
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong (2024). Development and validation of HBV surveillance models using big data and machine learning [Dataset]. http://doi.org/10.6084/m9.figshare.25201473.v1
    Explore at:
    Available download formats: docx
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Taylor & Francis (https://taylorandfrancis.com/)
    Authors
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The construction of a robust healthcare information system is fundamental to enhancing countries’ capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China’s rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization’s (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital’s Hospital Information System were structuralized using state-of-the-art Natural Language Processing techniques. Several ML models have been used to develop HBV risk assessment models. The performance of the ML model was then interpreted using the Shapley value (SHAP) and validated using cohort data randomly divided at a ratio of 2:1 using a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV using patients’ physical complaints and parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness of fit test p-value >0.05). Suspected case detection models of HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in primary care setting in China. 
    This study has developed a suspected case detection model for HBV, which can facilitate early identification and treatment of HBV in the primary care setting in China, contributing towards the achievement of the WHO's HBV elimination goals. We utilized state-of-the-art natural language processing techniques to structure the data records, leading to the development of a robust healthcare information system which enhances the surveillance and control of HBV in China.

  3. Email Validation Tools Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Jul 25, 2025
    Cite
    Market Research Forecast (2025). Email Validation Tools Report [Dataset]. https://www.marketresearchforecast.com/reports/email-validation-tools-549597
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Jul 25, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The email validation tools market is experiencing robust growth, driven by the increasing need for businesses to maintain clean and accurate email lists for effective marketing campaigns. The rising adoption of email marketing as a primary communication channel, coupled with stricter data privacy regulations like GDPR and CCPA, necessitates the use of tools that ensure email deliverability and prevent bounces. This market, estimated at $500 million in 2025, is projected to grow at a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033, reaching approximately $1.5 billion by 2033. This expansion is fueled by the growing sophistication of email validation techniques, including real-time verification, syntax checks, and mailbox monitoring, offering businesses more robust solutions to improve their email marketing ROI. Key market segments include small and medium-sized businesses (SMBs), large enterprises, and email marketing agencies, each exhibiting varying levels of adoption and spending based on their specific needs and email marketing strategies. The competitive landscape is characterized by a mix of established players and emerging startups, offering a range of features and pricing models to cater to diverse customer requirements. The market's growth is, however, subject to factors like increasing costs associated with maintaining data accuracy and the potential for false positives in email verification. The key players in this dynamic market, such as Mailgun, BriteVerify, and similar companies, are continuously innovating to improve accuracy, speed, and integration with other marketing automation platforms. The market's geographical distribution is diverse, with North America and Europe currently holding significant market share due to higher email marketing adoption rates and a robust technological infrastructure. 
However, Asia-Pacific and other emerging markets are poised for considerable growth in the coming years due to increasing internet penetration and rising adoption of digital marketing techniques. The ongoing evolution of email marketing strategies, the increasing emphasis on data hygiene, and the rise of artificial intelligence in email verification are likely to further shape the trajectory of this market in the years to come, leading to further innovation and growth.
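    The "syntax check" layer mentioned above can be sketched in a few lines. The regex below is a pragmatic first-pass filter invented for this sketch (not any vendor's rule set, and not a full RFC 5322 parser); real services layer domain (MX) and mailbox checks on top of it.

```python
# Sketch of the syntax-check layer of email validation.
import re

# Pragmatic pattern: local part, "@", domain with at least one dot and a TLD.
SYNTAX = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_valid(address: str) -> bool:
    """Cheap first-pass filter before any network-level verification."""
    return bool(SYNTAX.fullmatch(address))

print(looks_valid("user@example.com"))   # True
print(looks_valid("not-an-email"))       # False
```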

  4. Patent AT-E401626-T1: [Translated] MAGNETIC DATA VERIFICATION SYSTEM

    • catalog.data.gov
    • data.virginia.gov
    Updated Sep 30, 2025
    Cite
    National Center for Biotechnology Information (NCBI) (2025). Patent AT-E401626-T1: [Translated] MAGNETIC DATA VERIFICATION SYSTEM [Dataset]. https://catalog.data.gov/dataset/patent-at-e401626-t1-translated-magnetic-data-verification-system
    Explore at:
    Dataset updated
    Sep 30, 2025
    Dataset provided by
    National Center for Biotechnology Information (NCBI)
    Description

    A method of verifying the accuracy or authenticity of alphanumeric magnetic data on a document, wherein the configuration of a pictorial or graphic magnetic reference image in the document is made visible by bringing movable particulate magnetic material into proximity therewith such that the particulate magnetic material takes up a distribution corresponding to the magnetic field of the reference image; and the magnetic image configuration thus revealed is compared with the reference image to identify any significant disconformity suggesting past exposure of the document to a magnetic field capable of altering said magnetic data.

  5. Machine learning algorithm validation with a limited sample size

    • plos.figshare.com
    text/x-python
    Updated May 30, 2023
    Cite
    Andrius Vabalas; Emma Gowen; Ellen Poliakoff; Alexander J. Casson (2023). Machine learning algorithm validation with a limited sample size [Dataset]. http://doi.org/10.1371/journal.pone.0224365
    Explore at:
    Available download formats: text/x-python
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Andrius Vabalas; Emma Gowen; Ellen Poliakoff; Alexander J. Casson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Advances in neuroimaging, genomics, motion tracking, eye-tracking and many other technology-based data collection methods have led to a torrent of high-dimensional datasets, which commonly have a small number of samples because of the intrinsically high cost of data collection involving human participants. High-dimensional data with a small number of samples is of critical importance for identifying biomarkers and conducting feasibility and pilot work; however, it can lead to biased machine learning (ML) performance estimates. Our review of studies which have applied ML to predict autistic from non-autistic individuals showed that small sample size is associated with higher reported classification accuracy. Thus, we have investigated whether this bias could be caused by the use of validation methods which do not sufficiently control overfitting. Our simulations show that K-fold Cross-Validation (CV) produces strongly biased performance estimates with small sample sizes, and the bias is still evident with a sample size of 1000. Nested CV and train/test split approaches produce robust and unbiased performance estimates regardless of sample size. We also show that feature selection, if performed on pooled training and testing data, contributes to bias considerably more than parameter tuning. In addition, the contribution to bias by data dimensionality, hyper-parameter space and number of CV folds was explored, and validation methods were compared with discriminable data. The results suggest how to design robust testing methodologies when working with small datasets and how to interpret the results of other studies based on what validation method was used.
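    The feature-selection bias the abstract describes is easy to reproduce: on pure-noise data, selecting features on the pooled data before K-fold CV yields an optimistic score, while doing the selection inside each training fold keeps the estimate near chance. A minimal sketch, assuming scikit-learn:

```python
# Demonstrate feature-selection leakage on pure-noise data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))      # small n, high d: pure noise
y = rng.integers(0, 2, size=40)     # labels carry no signal

# Wrong: feature selection sees the test folds before CV runs.
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(), X_sel, y, cv=5).mean()

# Right: selection is refit inside every training fold via a Pipeline.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression())
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky CV accuracy:  {leaky:.2f}")   # optimistic despite noise data
print(f"honest CV accuracy: {honest:.2f}")  # near chance
```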

  6. Text Function, Date, Data Validation

    • kaggle.com
    zip
    Updated Mar 15, 2024
    Cite
    Sanjana Murthy (2024). Text Function, Date, Data Validation [Dataset]. https://www.kaggle.com/sanjanamurthy392/text-function-date-data-validation
    Explore at:
    Available download formats: zip (25270 bytes)
    Dataset updated
    Mar 15, 2024
    Authors
    Sanjana Murthy
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains examples of text functions, dates, and data validation.

  7. Data from: Selection of optimal validation methods for quantitative structure–activity relationships and applicability domain

    • tandf.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    K. Héberger (2023). Selection of optimal validation methods for quantitative structure–activity relationships and applicability domain [Dataset]. http://doi.org/10.6084/m9.figshare.23185916.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Taylor & Francis (https://taylorandfrancis.com/)
    Authors
    K. Héberger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This brief literature survey groups the (numerical) validation methods and emphasizes the contradictions and confusion surrounding bias, variance and predictive performance. A multicriteria decision-making analysis has been made using the sum of absolute ranking differences (SRD), illustrated with five case studies (seven examples). SRD was applied to compare external and cross-validation techniques and indicators of predictive performance, and to select optimal methods to determine the applicability domain (AD). The ordering of model validation methods was in accordance with the sayings of the original authors, but contradictory with one another, suggesting that any variant of cross-validation can be superior or inferior to other variants depending on the algorithm, data structure and circumstances applied. A simple fivefold cross-validation proved to be superior to the Bayesian Information Criterion in the vast majority of situations. It is simply not sufficient to test a numerical validation method in one situation only, even if it is a well-defined one. SRD, as a preferable multicriteria decision-making algorithm, is suitable for tailoring the techniques for validation and for the optimal determination of the applicability domain according to the dataset in question.
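    A minimal sketch of the SRD computation, assuming the common setup in which rows are cases, columns are the methods being compared, and the row-wise mean serves as the reference ranking; the scores below are invented for illustration, and ties are not handled.

```python
# Sum of absolute ranking differences (SRD), in miniature.
import numpy as np

def srd(scores: np.ndarray) -> np.ndarray:
    """SRD of each column against the row-wise mean as reference.
    Lower SRD = closer to the consensus ranking. No tie handling."""
    def ranks(v):
        return np.argsort(np.argsort(v))   # 0-based ranks of v
    ref = ranks(scores.mean(axis=1))
    return np.array([np.abs(ranks(col) - ref).sum() for col in scores.T])

# Rows: cases (e.g. datasets); columns: validation methods being compared.
scores = np.array([[0.81, 0.79, 0.60],
                   [0.75, 0.74, 0.66],
                   [0.90, 0.88, 0.70],
                   [0.70, 0.72, 0.95]])
print(srd(scores))
```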

  8. Patent AT-E400853-T1: [Translated] METHOD AND APPARATUS FOR FORMAL CIRCUIT VERIFICATION

    • data.virginia.gov
    • catalog.data.gov
    html
    Updated Sep 6, 2025
    Cite
    National Center for Biotechnology Information (NCBI) (2025). Patent AT-E400853-T1: [Translated] METHOD AND APPARATUS FOR FORMAL CIRCUIT VERIFICATION [Dataset]. https://data.virginia.gov/dataset/patent-at-e400853-t1-translated-method-and-apparatus-for-formal-circuit-verification
    Explore at:
    Available download formats: html
    Dataset updated
    Sep 6, 2025
    Dataset provided by
    National Center for Biotechnology Information (NCBI)
    Description

    A method and apparatus for determining the time behavior of a digital circuit based on a starting assumption is disclosed. Generally, in formal verification of a digital circuit, the time behavior of the circuit is monitored to verify or refute whether formulated properties, each comprising an assumption and an assertion, hold as a consequence of the assumption's presence in the digital circuit. To determine the behavior of the digital circuit, its time behavior is examined from an initial state. A relevant auxiliary property is activated and the assertion of the auxiliary property is added to the digital circuit. The digital circuit is then monitored over a period of time.

  9. Face Verification Dataset

    • kaggle.com
    zip
    Updated Apr 4, 2025
    Cite
    Aleksei Zagorskii (2025). Face Verification Dataset [Dataset]. https://www.kaggle.com/datasets/juice0lover/face-identification
    Explore at:
    Available download formats: zip (11105038 bytes)
    Dataset updated
    Apr 4, 2025
    Authors
    Aleksei Zagorskii
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset contains cropped face images of actors sourced from online film databases and public web resources. The data was collected and processed to support tasks such as face recognition, identification or verification. Each image has been automatically cropped using Haar cascade classifiers to focus on the facial area, and then filtered for quality and minimum quantity per identity. All images are standardized to 100x100 pixels to ensure uniformity across the dataset. The dataset is balanced: 3339 different-person and 3318 same-person image pairs.

  10. Sensor Validation using Bayesian Networks - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). Sensor Validation using Bayesian Networks - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/sensor-validation-using-bayesian-networks
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    One of NASA’s key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation techniques address this problem: given a vector of sensor readings, decide whether sensors have failed, therefore producing bad data. We take in this paper a probabilistic approach, using Bayesian networks, to diagnosis and sensor validation, and investigate several relevant but slightly different Bayesian network queries. We emphasize that on-board inference can be performed on a compiled model, giving fast and predictable execution times. Our results are illustrated using an electrical power system, and we show that a Bayesian network with over 400 nodes can be compiled into an arithmetic circuit that can correctly answer queries in less than 500 microseconds on average.

    Reference: O. J. Mengshoel, A. Darwiche, and S. Uckun, "Sensor Validation using Bayesian Networks." In Proc. of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08), Los Angeles, CA, 2008.

    BibTeX reference: @inproceedings{mengshoel08sensor, author = {Mengshoel, O. J. and Darwiche, A. and Uckun, S.}, title = {Sensor Validation using {Bayesian} Networks}, booktitle = {Proceedings of the 9th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (iSAIRAS-08)}, year = {2008}}
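    The underlying idea can be shown in miniature with plain Bayes' rule: given a deviating reading, how likely is it that the sensor has failed? The priors and likelihoods below are invented for illustration; the NASA work uses full Bayesian networks compiled to arithmetic circuits, not this single-sensor toy.

```python
# Toy Bayesian sensor validation: posterior probability of sensor failure
# given an off-nominal reading. All numbers are made up for the sketch.
P_FAIL = 0.01             # prior: sensor failure rate
P_DEV_GIVEN_FAIL = 0.90   # failed sensors usually read off-nominal
P_DEV_GIVEN_OK = 0.05     # healthy sensors rarely deviate

def p_fail_given_deviation() -> float:
    """Posterior P(failed | deviating reading) by Bayes' rule."""
    num = P_DEV_GIVEN_FAIL * P_FAIL
    den = num + P_DEV_GIVEN_OK * (1 - P_FAIL)
    return num / den

print(f"P(failed | deviation) = {p_fail_given_deviation():.3f}")
```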

  11. Global Bulk Email Verification Service Market Research Report

    • wiseguyreports.com
    Updated Sep 15, 2025
    Cite
    (2025). Global Bulk Email Verification Service Market Research Report: By Service Type (Real-Time Verification, Batch Verification, List Cleaning), By Deployment Type (Cloud-Based, On-Premises), By End User (E-commerce, Marketing Agencies, Financial Services, Healthcare), By Verification Method (Syntax Validation, Domain Validation, Mailbox Validation) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2035 [Dataset]. https://www.wiseguyreports.com/reports/bulk-email-verification-service-market
    Explore at:
    Dataset updated
    Sep 15, 2025
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Sep 25, 2025
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2023
    REGIONS COVERED: North America, Europe, APAC, South America, MEA
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2024: 1042.9 (USD Million)
    MARKET SIZE 2025: 1129.5 (USD Million)
    MARKET SIZE 2035: 2500.0 (USD Million)
    SEGMENTS COVERED: Service Type, Deployment Type, End User, Verification Method, Regional
    COUNTRIES COVERED: US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA
    KEY MARKET DYNAMICS: growing demand for accurate marketing, rising concerns over email fraud, increasing regulations on data privacy, need for enhanced customer engagement, emergence of AI-driven verification solutions
    MARKET FORECAST UNITS: USD Million
    KEY COMPANIES PROFILED: NeverBounce, EmailChecker, Debounce, BulkEmailVerifier, VerifyBee, EmailOnDeck, DataValidation, ZeroBounce, ListWise, QuickEmailVerification, MyEmailVerifier, MailboxValidator, BriteVerify, Hunter, EmailListVerify
    MARKET FORECAST PERIOD: 2025 - 2035
    KEY MARKET OPPORTUNITIES: Growth in e-commerce platforms, Increasing focus on data quality, Rising demand for digital marketing, Expansion of cloud-based services, Need for GDPR compliance solutions
    COMPOUND ANNUAL GROWTH RATE (CAGR): 8.3% (2025 - 2035)

  12. PEN-Method: Predictor model and Validation Data

    • data.mendeley.com
    • narcis.nl
    Updated Sep 3, 2021
    Cite
    Alex Halle (2021). PEN-Method: Predictor model and Validation Data [Dataset]. http://doi.org/10.17632/459f33wxf6.4
    Explore at:
    Dataset updated
    Sep 3, 2021
    Authors
    Alex Halle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the PEN-Predictor Keras model as well as the 100 validation datasets.

  13. M2VTS Speaker Verification Database

    • catalogue.elra.info
    • live.european-language-grid.eu
    Updated Jun 26, 2017
    Cite
    ELRA (European Language Resources Association) and its operational body ELDA (Evaluations and Language resources Distribution Agency) (2017). M2VTS Speaker Verification Database [Dataset]. https://catalogue.elra.info/en-us/repository/browse/ELRA-S0021/
    Explore at:
    Dataset updated
    Jun 26, 2017
    Dataset provided by
    ELRA (European Language Resources Association)
    ELRA (European Language Resources Association) and its operational body ELDA (Evaluations and Language resources Distribution Agency)
    License

    https://catalogue.elra.info/static/from_media/metashare/licences/ELRA_END_USER.pdf

    Description

    The Multi Modal Verification for Teleservices and Security applications project (M2VTS), running under the European ACTS programme, has produced a database designed to facilitate access control using multimodal identification of human faces. This technique improves recognition efficiency by combining individual modalities (i.e. face and voice). Its relative novelty means that new test material had to be created, since no existing database could offer all modalities needed. The M2VTS database comprises 37 different faces, with 5 shots of each being taken at one-week intervals, or when drastic face changes occurred in the meantime. During each shot, subjects were asked to count from 0 to 9 in their native language (generally French), and to move their heads from left to right, both with and without glasses. The data were then used to create three sequences, for voice, motion and "glasses off". The first sequence can be used for speech verification, 2-D dynamic face verification and speech/lips movement correlation, while the second and third provide information on 3-D face recognition, and may also be used to compare other recognition techniques.

  14. Identity Verification Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jul 18, 2025
    Cite
    Pro Market Reports (2025). Identity Verification Market Report [Dataset]. https://www.promarketreports.com/reports/identity-verification-market-8655
    Explore at:
    Available download formats: ppt, pdf, doc
    Dataset updated
    Jul 18, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Identity Verification Market encompasses a diverse range of sophisticated products, each designed to address specific security and authentication needs. These solutions leverage advanced technologies to ensure accurate and reliable identity verification across various applications and industries. Biometrics: Biometric authentication utilizes unique biological traits, including fingerprints, facial recognition, iris scanning, voice recognition, and behavioral biometrics (typing patterns, gait analysis), offering robust and secure identity verification. The ongoing advancement of biometric technologies ensures high accuracy and resistance to spoofing attempts. Document Verification: These solutions go beyond simple visual inspection, employing advanced image analysis and data validation techniques to authenticate identity documents such as passports, driver's licenses, national IDs, and other official documentation. This includes verifying document integrity, detecting forgeries, and confirming data consistency across multiple sources. Facial Recognition: Facial recognition systems utilize sophisticated algorithms to identify and verify individuals based on their facial features. These systems are constantly evolving, incorporating AI and machine learning to improve accuracy and adapt to variations in lighting, age, and expression. Knowledge-Based Authentication (KBA): While traditionally vulnerable to data breaches, modern KBA solutions utilize dynamic question sets and sophisticated risk assessment techniques to enhance security. These methods are often combined with other authentication factors for improved protection. Multi-Factor Authentication (MFA): MFA significantly strengthens security by requiring users to provide multiple forms of verification, combining factors like something they know (password), something they have (mobile device), and something they are (biometrics). This layered approach significantly reduces the risk of unauthorized access. 
    Address Verification Systems (AVS): These systems verify the validity and accuracy of provided addresses, helping to prevent fraudulent activities and ensure accurate record-keeping. Device Fingerprinting: This technology creates a unique identifier for each device, providing an additional layer of security and enabling risk-based authentication. Recent developments include: July 2020: Experian teamed up with Data Consortium to improve its consumer identity verification services. Through this agreement, Experian clients can better meet Know Your Customer (KYC) and Anti-Money Laundering (AML) regulatory requirements, enroll customers more quickly and increase anti-fraud safeguards. June 2022: Onfido joined TISA to support the organization's Digital ID program, which encourages reusable identification. Following its swift development of the Digital ID program, which now includes Barclays, Signicat, OneSpan, and Daon, Onfido is the newest member to join TISA. Key drivers for this market are: rising digitalization initiatives and increasing adoption of BYOD trends in enterprises. Potential restraints include: privacy concerns and data protection regulations, such as the General Data Protection Regulation (GDPR) in Europe. Notable trends are: a growing shift toward mobile-based identity verification solutions.
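    The "something they have" factor of MFA is commonly implemented as a time-based one-time password. Below is a standard-library-only sketch in the style of RFC 6238; the secret and parameters are illustrative, not from any product described above.

```python
# Sketch of an RFC 6238-style time-based one-time password (TOTP).
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    counter = int(at // step)                        # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and client derive the same code from a shared secret and clock.
secret = b"shared-secret"
print(totp(secret, time.time()))
```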

  15. Verification, validation, and field testing the USEPA National Stormwater Calculator

    • catalog.data.gov
    • data.amerigeoss.org
    • +1more
    Updated Nov 12, 2020
    U.S. EPA Office of Research and Development (ORD) (2020). Verification, validation, and field testing the USEPA National Stormwater Calculator [Dataset]. https://catalog.data.gov/dataset/verification-validation-and-field-testing-the-usepa-national-stormwater-calculator
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    We used this dataset to verify and validate functions in the USEPA National Stormwater Calculator, and then applied field data and commonly available datasets to illustrate calibration techniques and uncertainty evaluation. This dataset is associated with the following publication: Schifman, L., M. Tryby, J. Berner, and W. Shuster. Managing Uncertainty in Runoff Estimation with the U.S. Environmental Protection Agency National Stormwater Calculator. JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION. American Water Resources Association, Middleburg, VA, USA, 54(1): 148-159, (2018).
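    The associated publication focuses on managing uncertainty in runoff estimation. As a loose, generic illustration (not the Stormwater Calculator's actual method), a Monte Carlo sweep over an uncertain runoff coefficient in the rational method, Q = C * i * A, shows how parameter uncertainty propagates into the estimate; the parameter ranges below are invented.

```python
import random

def rational_runoff(c: float, intensity_in_hr: float, area_acres: float) -> float:
    """Rational method: peak runoff Q (cfs) = C * i * A."""
    return c * intensity_in_hr * area_acres

def runoff_uncertainty(n: int = 10_000, seed: int = 42) -> tuple:
    """Propagate uncertainty in the runoff coefficient C through the estimate."""
    rng = random.Random(seed)
    # Hypothetical range for C on a mixed-surface catchment; i and A held fixed.
    samples = [rational_runoff(rng.uniform(0.3, 0.7), 2.0, 10.0) for _ in range(n)]
    mean = sum(samples) / n
    spread = max(samples) - min(samples)
    return mean, spread

mean_q, spread_q = runoff_uncertainty()
print(f"mean peak runoff ~{mean_q:.1f} cfs, spread {spread_q:.1f} cfs")
```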

  16. Evaluation Data of the Combined Approach

    • zenodo.org
    • data.niaid.nih.gov
    txt, zip
    Updated Jan 24, 2020
    Bernhard Beckert; Simon Bischof; Mihai Herda; Michael Kirsten; Holger Klein; Marko Kleine Büning; Joachim Müssig (2020). Evaluation Data of the Combined Approach [Dataset]. http://doi.org/10.5281/zenodo.3359387
    Explore at:
    txt, zip. Available download formats
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bernhard Beckert; Simon Bischof; Mihai Herda; Michael Kirsten; Holger Klein; Marko Kleine Büning; Joachim Müssig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the programs on which the Combined Approach was evaluated.

    The Java source code of each evaluated program is in the "src" folder.
    The compiled jar file of each evaluated program is in the "testdata" folder.

    The .joak file contains the annotated sources and sinks for each evaluated program.
    Note that you may not be able to load the .joak files directly into the Combined
    Approach program as the various paths in the .joak files need to be fixed first.
    The purpose of the .joak files in this repository is to document the sources and sinks.
    You can also generate new .joak files with the Combined Approach application.

  17. FDA Drug Product Labels Validation Method Data Package

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    John Snow Labs (2021). FDA Drug Product Labels Validation Method Data Package [Dataset]. https://www.johnsnowlabs.com/marketplace/fda-drug-product-labels-validation-method-data-package/
    Explore at:
    csv. Available download formats
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Description

    This data package contains Structured Product Labeling (SPL) terminology for SPL validation procedures, along with information on performing SPL validations.

  18. Data from: The Case for Software Health Management

    • s.cnmilf.com
    • catalog.data.gov
    Updated Apr 10, 2025
    + more versions
    Dashlink (2025). The Case for Software Health Management [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/the-case-for-software-health-management
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Dashlink
    Description

    Software Health Management (SWHM) is a new field concerned with developing tools and technologies to enable automated detection, diagnosis, prediction, and mitigation of adverse events due to software anomalies. Significant effort has been expended over the last several decades on verification and validation methods for software-intensive systems, but it is becoming increasingly apparent that this is not enough to guarantee that a complex software system meets all safety and reliability requirements. Modern software systems can exhibit a variety of failure modes that go undetected during verification and validation. While standard techniques for error handling, fault detection, and isolation benefit many systems, new methods are needed to detect, diagnose, predict, and then mitigate adverse events in software that has already undergone extensive verification and validation. These software faults often arise from the interaction between the software and its operating environment: unanticipated environmental changes lead to software anomalies that can significantly impact overall mission success. Because software is ubiquitous, it is not sufficient to detect errors only after they occur; rather, software must be instrumented and monitored so failures can be caught before they happen. This prognostic capability will yield safer and more dependable systems for the future. This paper addresses the motivation, needs, and requirements of software health management as a new discipline. Published in the Proceedings of the IEEE Conference on Space Mission Challenges for Information Technology, Palo Alto, CA, August 2011.
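    The detection step the paper argues for, instrumenting software and monitoring it at runtime rather than relying on pre-deployment V&V alone, can be sketched as a toy monitor; the metric names and bounds below are invented.

```python
from dataclasses import dataclass, field

@dataclass
class HealthMonitor:
    """Toy runtime monitor: flags metrics that drift outside expected bounds.

    Real SWHM adds diagnosis, prediction, and mitigation on top of
    detection; this sketch covers only the detection step.
    """
    bounds: dict                                # metric name -> (low, high), fixed during V&V
    alarms: list = field(default_factory=list)  # out-of-bounds observations

    def observe(self, metric: str, value: float) -> bool:
        low, high = self.bounds[metric]
        healthy = low <= value <= high
        if not healthy:
            self.alarms.append((metric, value))
        return healthy

monitor = HealthMonitor(bounds={"heap_mb": (0, 512), "loop_latency_ms": (0, 20)})
monitor.observe("heap_mb", 130)         # within bounds: no alarm
monitor.observe("loop_latency_ms", 45)  # anomaly, e.g. from an environment change
print(monitor.alarms)
```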

  19. Email Verification Market By Type (Cloud Based, Web Based), Application (Small Medium Enterprises, Large Enterprises), & Region For 2024-2031

    • verifiedmarketresearch.com
    Updated Apr 25, 2024
    VERIFIED MARKET RESEARCH (2024). Email Verification Market By Type (Cloud Based, Web Based), Application (Small Medium Enterprises, Large Enterprises), & Region For 2024-2031 [Dataset]. https://www.verifiedmarketresearch.com/product/email-verification-market/
    Explore at:
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2024 - 2031
    Area covered
    Global
    Description

    Email Verification Market size was valued at USD 5.243 Billion in 2023 and is projected to reach USD 9.849 Billion by 2031, growing at a CAGR of 8.2% during the forecast period 2024-2031.

    Email Verification Market: Definition/ Overview

    Email verification is a crucial process that confirms the existence, authenticity, and deliverability of an email address. It involves procedures such as syntax validation, domain validation, and mailbox validation. Email verification can be automated or manual and is used in various applications such as email marketing, account registration, customer communication, and fraud prevention.
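    Of the three procedures above, syntax validation and part of domain validation can run locally; mailbox validation requires an SMTP dialogue with the receiving server and is omitted here. A minimal sketch (the pattern is a deliberate simplification, not the full RFC 5322 grammar):

```python
import re

# Deliberately simplified; the full RFC 5322 address grammar is far larger.
_EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def validate_syntax(address: str) -> bool:
    """Step 1: syntax validation against a simplified pattern."""
    return bool(_EMAIL_RE.match(address))

def validate_domain_format(address: str) -> bool:
    """Step 2 (partial): domain checks needing no network, e.g. no empty labels.

    A real verifier would also resolve the domain's MX records here.
    """
    domain = address.rsplit("@", 1)[-1]
    return all(0 < len(label) <= 63 for label in domain.split("."))

for addr in ["alice@example.com", "bad@@example.com", "bob@bad..domain"]:
    print(addr, validate_syntax(addr) and validate_domain_format(addr))
```

    Note how "bob@bad..domain" passes the simplified syntax check but fails the domain check, which is why verifiers layer several procedures.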

    As technology advances, email verification processes become more sophisticated, leading to higher accuracy in identifying valid email addresses and reducing false positives. AI-powered algorithms can analyze patterns and behaviors to enhance email verification processes, identifying anomalies and potential fraud more effectively.

    Future email verification methods may place greater weight on data privacy and security, favoring approaches that protect user privacy while preserving verification accuracy. Blockchain technology offers potential for secure, decentralized email verification systems, reducing reliance on centralized authorities and enhancing trust in the verification process.

  20. Verification and Validation of Flight Critical Systems, Phase I

    • data.nasa.gov
    application/rdfxml +5
    Updated Jun 26, 2018
    (2018). Verification and Validation of Flight Critical Systems, Phase I [Dataset]. https://data.nasa.gov/dataset/Verification-and-Validation-of-Flight-Critical-Sys/nzj2-ysec
    Explore at:
    json, csv, tsv, xml, application/rdfxml, application/rssxml. Available download formats
    Dataset updated
    Jun 26, 2018
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    Verification and Validation is a multi-disciplinary activity that encompasses elements of systems engineering, safety, software engineering, and test. The elements that go into the V&V of a complex, software-intensive product come out of activities performed by all of these disciplines and span the complete system development cycle. As modern systems rely more heavily on software-intensive solutions for mission- and safety-critical functions, the effort required for system certification increases correspondingly. These systems are expected to perform correctly and safely while being flexible and portable enough to go through system refresh cycles and evolvable enough to take on new functionality throughout the system lifecycle. We propose a method of addressing this challenge with advanced modular safety cases that specify system safety properties and support the V&V of those properties with argument and evidence chains. The modular safety cases make use of formal specification of safety claims and use contracts to formalize the dependencies between the case modules. These cases can form powerful verification and validation arguments for a system that are maintainable and can support incremental V&V techniques.
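    The proposed modular safety cases, claims supported by evidence, with contracts naming the claims each module assumes from its peers, can be pictured as a small data structure; the module names and claims below are invented, not drawn from the proposal.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyCaseModule:
    """Toy model of one safety-case module: a claim, its supporting
    evidence, and contracts naming claims assumed from other modules."""
    claim: str
    evidence: list = field(default_factory=list)
    assumes: list = field(default_factory=list)

def contracts_satisfied(modules: list) -> bool:
    """Check that every assumed claim is provided by an evidenced module."""
    provided = {m.claim for m in modules if m.evidence}
    return all(a in provided for m in modules for a in m.assumes)

sensor = SafetyCaseModule("sensor data is bounded",
                          evidence=["unit tests", "range monitor"])
control = SafetyCaseModule("control output is safe",
                           evidence=["model checking run"],
                           assumes=["sensor data is bounded"])
print(contracts_satisfied([sensor, control]))  # the dependency chain closes
```

    Because the dependencies are explicit, replacing one module only requires re-checking the contracts it participates in, which is what makes incremental V&V tractable in this style.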

Auction Verification Dataset

Analyzing and Predicting Verification of Data-Aware Process Models


Introductory Paper

Title: Analyzing and Predicting Verification of Data-Aware Process Models – a Case Study with Spectrum Auctions

Authors: Elaheh Ordoni, Jakob Bach, Ann-Katrin Fleck (2022)

Abstract of Introductory Paper

Verification techniques play an essential role in detecting undesirable behaviors in many applications like spectrum auctions. By verifying an auction design, one can detect the least favorable outcomes, e.g., the lowest revenue of an auctioneer. However, verification may be infeasible in practice, given the vast size of the state space on the one hand and the large number of properties to be verified on the other hand. To overcome this challenge, we leverage machine-learning techniques. In particular, we create a dataset by verifying properties of a spectrum auction first. Second, we use this dataset to analyze and predict outcomes of the auction and characteristics of the verification procedure. To evaluate the usefulness of machine learning in the given scenario, we consider prediction quality and feature importance. In our experiments, we observe that prediction models can capture relationships in our dataset well, though one needs to be careful to obtain a representative and sufficiently large training dataset. While the focus of this article is on a specific verification scenario, our analysis approach is general and can be adapted to other domains.

Cite

Citation: Ordoni, Elaheh; Bach, Jakob; and Fleck, Ann-Katrin (2022). Auction Verification. UCI Machine Learning Repository. https://doi.org/10.24432/C52K6N.

BibTeX:

    @misc{misc_auction_verification_713,
      author       = {Ordoni, Elaheh and Bach, Jakob and Fleck, Ann-Katrin},
      title        = {{Auction Verification}},
      year         = {2022},
      howpublished = {UCI Machine Learning Repository},
      note         = {{DOI}: https://doi.org/10.24432/C52K6N}
    }

Import in Python

pip install ucimlrepo

```python
from ucimlrepo import fetch_ucirepo

# fetch dataset
auction_verification = fetch_ucirepo(id=713)

# data (as pandas dataframes)
X = auction_verification.data.features
y = auction_verification.data.targets

# metadata
print(auction_verification.metadata)

# variable information
print(auction_verification.variables)
```
