100+ datasets found
  1. Data from: E-Verify

    • catalog.data.gov
    • datadiscoverystudio.org
    Updated Jan 24, 2025
    Cite
    Social Security Administration (2025). E-Verify [Dataset]. https://catalog.data.gov/dataset/e-verify
    Dataset updated
    Jan 24, 2025
    Dataset provided by
    Social Security Administration (http://www.ssa.gov/)
    Description

    Through an automated confirmation system, an employer matches information provided by a new employee (Form I-9) against existing information contained in Social Security Administration's (SSA) and the Department of Homeland Security's (DHS) U.S. Citizenship & Immigration Services (USCIS) databases. The SSA E-Verify System (SSA E-Verify) determines a specific verification code based upon information (SSN, DOB, L-Name, F-Name) in the NUMIDENT database. The verification code is returned to DHS E-Verify (DHS E-Verify) along with the original verification request. The message to the employer is determined by DHS E-Verify based on SSA's verification code.
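    As a rough illustration of the flow described above, the sketch below models the SSA-side match as a single function. The field names, NUMIDENT layout, and verification-code values are hypothetical, not the actual SSA/DHS schema.

      # Hypothetical sketch of the E-Verify matching flow; field names and
      # verification codes are illustrative, not the real SSA/DHS schema.
      from dataclasses import dataclass

      @dataclass
      class VerificationRequest:
          ssn: str
          dob: str         # date of birth, e.g. "1990-01-31"
          last_name: str
          first_name: str

      def ssa_verify(request: VerificationRequest, numident: dict) -> str:
          """Match a request against NUMIDENT records; return a verification code."""
          record = numident.get(request.ssn)
          if record is None:
              return "SSN_NOT_FOUND"   # illustrative code value
          submitted = (request.dob, request.last_name, request.first_name)
          on_file = (record["dob"], record["last_name"], record["first_name"])
          return "MATCH" if submitted == on_file else "MISMATCH"

      # DHS E-Verify would then map the returned code to an employer-facing message.
      numident = {"123-45-6789": {"dob": "1990-01-31", "last_name": "Doe", "first_name": "Jane"}}
      print(ssa_verify(VerificationRequest("123-45-6789", "1990-01-31", "Doe", "Jane"), numident))  # MATCH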

  2. spider-context-validation

    • huggingface.co
    Updated Jul 26, 2023
    Cite
    Richard R. (2023). spider-context-validation [Dataset]. https://huggingface.co/datasets/richardr1126/spider-context-validation
    Available formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 26, 2023
    Authors
    Richard R.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset Card for Spider Context Validation

      Dataset Summary
    

    Spider is a large-scale, complex, cross-domain semantic parsing and text-to-SQL dataset annotated by 11 Yale students. The goal of the Spider challenge is to develop natural language interfaces to cross-domain databases. This dataset was created to validate Spider-fine-tuned LLMs with database context.

      Yale Lily Spider Leaderboards
    

    The leaderboard can be seen at https://yale-lily.github.io/spider… See the full description on the dataset page: https://huggingface.co/datasets/richardr1126/spider-context-validation.
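    Since the dataset is hosted on the Hugging Face Hub, it can be loaded with the `datasets` library; a minimal sketch, assuming the split is named "validation" as the dataset name suggests:

      # Load the dataset from the Hugging Face Hub.
      from datasets import load_dataset

      ds = load_dataset("richardr1126/spider-context-validation", split="validation")
      print(ds[0])  # one example: a natural-language question plus database context/SQL fields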

  3. MIPS Data Validation Criteria

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). MIPS Data Validation Criteria [Dataset]. https://www.johnsnowlabs.com/marketplace/mips-data-validation-criteria/
    Available download formats: csv
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Time period covered
    2017 - 2020
    Area covered
    United States
    Description

    This dataset includes the MIPS Data Validation Criteria. The Medicare Access and CHIP Reauthorization Act of 2015 (MACRA) streamlines a patchwork collection of programs into a single system in which providers can be rewarded for better care. Providers will be able to practice as they always have, but they may receive higher Medicare payments based on their performance.

  4. Contact Data Cleaning, Validation & Enrichment - Data Health Check -...

    • datarade.ai
    .json, .csv, .xls
    Updated May 15, 2025
    Cite
    RampedUp Global Data Solutions (2025). Contact Data Cleaning, Validation & Enrichment - Data Health Check - RampedUp [Dataset]. https://datarade.ai/data-products/data-health-checkup
    Available download formats: .json, .csv, .xls
    Dataset updated
    May 15, 2025
    Dataset authored and provided by
    RampedUp Global Data Solutions
    Area covered
    Sao Tome and Principe, Bahamas, Germany, Rwanda, Costa Rica, Malaysia, Iraq, Western Sahara, Bahrain, Bosnia and Herzegovina
    Description

    RampedUp helps marketers who are seeing their efforts generate poorer responses over time and do not understand why. Our experience tells us the reasons are mostly due to the impact of contact data decay. We wrote this article to help them understand why that may be the case, and this post to help them understand how their marketing data became dirty in the first place.

    Validation and Enrichment

    RampedUp validates email addresses in real-time and provides up to 60 pieces of detailed information on our contacts. This helps for better segmentation and targeted communication. Here are 10 reasons why people validate and enrich their data.

    Personal to Professional

    We can find professional information from people that complete online forms with their personal email address. This helps identify and qualify inbound leads. Here are 4 additional reasons to bridge the B2C / B2B gap.

    Cleansing

    By combining email and contact validation – RampedUp can identify harmful contact records within your database. This will improve inbox placement and deliverability. Here is a blog post on the High Risk of Catch All Email servers.

    Recovery

    RampedUp can identify the old records in your database and inform you where they are working today. This is a great way to find old customers that have moved to a new company. We wrote this blog post on how to engage old customers and get them back in the fold.

    Opt-In Compliance

    We can help you identify the contacts within your database that are protected by international Opt-In Laws such as the GDPR, CASL, and CCPA. We wrote this article to share how GDPR is impacting sales and marketing efforts.

  5. ckanext-cprvalidation

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-cprvalidation [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-cprvalidation
    Dataset updated
    Jun 4, 2025
    Description

    The ckanext-cprvalidation extension for CKAN is designed to validate resources specifically for the Danish national open data platform. According to the documentation, this extension ensures that datasets adhere to specific standards. It appears to be developed for CKAN v2.6, and the documentation stresses that compatibility with other versions is not ensured.

    Key Features:
    • Resource Validation: Validates resources against specific criteria, presumably related to or mandated by the Danish national open data platform. The exact validation rules are not detailed in the available documentation.
    • Scheduled Scanning: Can be configured to scan resources at regular intervals via a CRON job, enabling automated and ongoing validation.
    • Exception Handling: Allows adding exceptions to the database, potentially to exclude certain resources or validation errors from triggering alerts or blocking publication.
    • Database Integration: Requires a dedicated database user ("cprvalidation"), with database connection settings added to the CKAN configuration file (production.ini).

    Technical Integration: The extension installs as a CKAN plugin and requires activation in the CKAN configuration, along with database setup: creating the dedicated database user and corresponding credentials, and adding the extension's database settings to production.ini, as sketched below. The extension likely adds functionality through CKAN's plugin interface and may provide custom CLI commands for database initialization. Scheduled tasks are managed through a CRON job, external to CKAN itself, that triggers the validation logic.

    Benefits & Impact: The ckanext-cprvalidation extension ensures data quality and compliance with the standards of the Danish national open data platform. By automating validation and enabling scheduled checks, it reduces the manual effort needed to maintain data integrity, ensuring that published resources meet required standards.
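    A minimal configuration sketch under those assumptions follows. The plugin activation line follows the standard CKAN convention; the specific setting keys and the command in the CRON entry are hypothetical, since the documentation does not detail them.

      # production.ini (setting names below are assumptions)
      ckan.plugins = ... cprvalidation
      cprvalidation.db_user = cprvalidation
      cprvalidation.db_password = <password>

      # CRON entry triggering the periodic scan (command name is hypothetical;
      # CKAN 2.6 extensions are typically driven via paster)
      0 2 * * * paster --plugin=ckanext-cprvalidation cprvalidation scan -c /etc/ckan/default/production.ini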

  6. Data from: Development and validation of HBV surveillance models using big...

    • tandf.figshare.com
    docx
    Updated Dec 3, 2024
    Cite
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong (2024). Development and validation of HBV surveillance models using big data and machine learning [Dataset]. http://doi.org/10.6084/m9.figshare.25201473.v1
    Available download formats: docx
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Taylor & Francis
    Authors
    Weinan Dong; Cecilia Clara Da Roza; Dandan Cheng; Dahao Zhang; Yuling Xiang; Wai Kay Seto; William C. W. Wong
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The construction of a robust healthcare information system is fundamental to enhancing countries’ capabilities in the surveillance and control of hepatitis B virus (HBV). Making use of China’s rapidly expanding primary healthcare system, this innovative approach using big data and machine learning (ML) could help towards the World Health Organization’s (WHO) HBV infection elimination goals of reaching 90% diagnosis and treatment rates by 2030. We aimed to develop and validate HBV detection models using routine clinical data to improve the detection of HBV and support the development of effective interventions to mitigate the impact of this disease in China. Relevant data records extracted from the Family Medicine Clinic of the University of Hong Kong-Shenzhen Hospital’s Hospital Information System were structured using state-of-the-art Natural Language Processing techniques. Several ML models were used to develop HBV risk assessment models. The performance of each ML model was interpreted using the Shapley value (SHAP) and validated using cohort data randomly divided at a ratio of 2:1 within a five-fold cross-validation framework. The patterns of physical complaints of patients with and without HBV infection were identified by processing 158,988 clinic attendance records. After removing cases without any clinical parameters from the derivation sample (n = 105,992), 27,392 cases were analysed using six modelling methods. A simplified model for HBV using patients’ physical complaints and parameters was developed with good discrimination (AUC = 0.78) and calibration (goodness-of-fit test p-value > 0.05). Suspected case detection models of HBV, showing potential for clinical deployment, have been developed to improve HBV surveillance in the primary care setting in China. The model can facilitate early identification and treatment of HBV in primary care, contributing towards the achievement of WHO’s HBV elimination goals, and the NLP-structured data records support a robust healthcare information system that enhances the surveillance and control of HBV in China.
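    As a rough sketch of the evaluation protocol (five-fold cross-validation scored by AUC), the following uses synthetic stand-in data and a single linear model; the study itself used structured clinic records, six modelling methods, and SHAP-based interpretation.

      # Five-fold cross-validation scored by ROC AUC on stand-in data.
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
      model = LogisticRegression(max_iter=1000)
      aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
      print(f"AUC = {aucs.mean():.2f} +/- {aucs.std():.2f}")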

  7. Model validation data from 2018 to 2020

    • auckland.figshare.com
    txt
    Updated May 10, 2020
    Cite
    Dinusha Rasanjalee Jayathilake Mudiyanselage; Mark Costello (2020). Model validation data from 2018 to 2020 [Dataset]. http://doi.org/10.17608/k6.auckland.12278786.v2
    Available download formats: txt
    Dataset updated
    May 10, 2020
    Dataset provided by
    The University of Auckland
    Authors
    Dinusha Rasanjalee Jayathilake Mudiyanselage; Mark Costello
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was used to validate a global distribution model of the kelp biome. The data were downloaded from the GBIF online database and cleaned to maintain the highest georeference accuracy. The MaxEnt probability value for each record is given in the last column.
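    A minimal sketch of reading the records and pulling out that last column with pandas; the file name and tab delimiter are assumptions, since only the txt format is stated.

      import pandas as pd

      records = pd.read_csv("kelp_validation_records.txt", sep="\t")  # hypothetical path
      maxent_prob = records.iloc[:, -1]   # last column: MaxEnt probability per record
      print(maxent_prob.describe())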

  8. Data from: Validation of Urban Freeway Models [supporting datasets]

    • catalog.data.gov
    • data.bts.gov
    Updated Dec 7, 2023
    Cite
    Federal Highway Administration (2023). Validation of Urban Freeway Models [supporting datasets] [Dataset]. https://catalog.data.gov/dataset/validation-of-urban-freeway-models-supporting-datasets
    Dataset updated
    Dec 7, 2023
    Dataset provided by
    Federal Highway Administration (https://highways.dot.gov/)
    Description

    The goal of the SHRP 2 Project L33, Validation of Urban Freeway Models, was to assess and enhance the predictive travel time reliability models developed in SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigation Strategies. Project L03, which concluded in 2010, developed two categories of reliability models to be used for the estimation or prediction of travel time reliability within planning, programming, and systems management contexts: data-rich and data-poor models. The objectives of Project L33 were the following:
    • Validate the most important models, the “Data Poor” and “Data Rich” models, with new datasets.
    • Assess the validation outcomes to recommend potential enhancements.
    • Explore enhancements and develop a final set of predictive equations.
    • Validate the enhanced models.
    • Develop a clear set of application guidelines for practitioner use of the project outputs.

    The datasets in these 5 zip files support SHRP 2 Report S2-L33-RW-1, Validation of Urban Freeway Models, https://rosap.ntl.bts.gov/view/dot/3604. The 5 zip files contain a total of 60 comma-separated value (.csv) files and total 3.8 GB compressed. The files have been uploaded as-is; no further documentation was supplied. They can be unzipped with any zip compression/decompression software and read in any simple text editor. Note: data files are larger than 1 GB each. Direct data download links:
    • L03-01: https://doi.org/10.21949/1500858
    • L03-02: https://doi.org/10.21949/1500868
    • L03-03: https://doi.org/10.21949/1500869
    • L03-04: https://doi.org/10.21949/1500870
    • L03-05: https://doi.org/10.21949/1500871
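    A minimal sketch of inspecting one of the archives in Python, streaming rather than extracting since the files are large; the archive name is a placeholder.

      import csv
      import io
      import zipfile

      with zipfile.ZipFile("L33_dataset_01.zip") as zf:   # placeholder file name
          for name in zf.namelist():
              if not name.endswith(".csv"):
                  continue
              with zf.open(name) as f:
                  reader = csv.reader(io.TextIOWrapper(f, encoding="utf-8"))
                  header = next(reader)
                  print(name, header)   # inspect each file's columns without extracting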

  9. ckanext-validation

    • catalog.civicdataecosystem.org
    Updated Dec 16, 2024
    Cite
    (2024). ckanext-validation [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-validation
    Dataset updated
    Dec 16, 2024
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    The Validation extension for CKAN enhances data quality within the CKAN ecosystem by leveraging the Frictionless Framework to validate tabular data. This extension allows for automated data validation, generating comprehensive reports directly accessible within the CKAN interface. The validation process helps identify structural and schema-level issues, ensuring data consistency and reliability.

    Key Features:
    • Automated Data Validation: Performs data validation automatically in the background or during dataset creation, streamlining the quality assurance process.
    • Comprehensive Validation Reports: Generates detailed reports on data quality, highlighting issues such as missing headers, blank rows, incorrect data types, or values outside of defined ranges.
    • Frictionless Framework Integration: Utilizes the Frictionless Framework library for robust and standardized data validation.
    • Exposed Actions: Provides accessible action functions that allow data validation to be integrated into custom workflows from other CKAN extensions.
    • Command Line Interface: Offers a command-line interface (CLI) to manually trigger validation jobs for specific datasets, resources, or based on search criteria.
    • Reporting Utilities: Enables the generation of global reports summarizing validation statuses across all resources.

    Use Cases:
    • Improve Data Quality: Ensures data integrity and adherence to defined schemas, leading to better data-driven decision-making.
    • Streamline Data Workflows: Integrates validation as part of data creation or update processes, automating quality checks and saving time.
    • Customize Data Validation Rules: Allows developers to extend the validation process with their own custom workflows and integrations using the exposed actions.

    Technical Integration: The Validation extension integrates deeply within CKAN by providing new action functions (resource_validation_run, resource_validation_show, resource_validation_delete, resource_validation_run_batch) that can be called via the CKAN API; a minimal call is sketched below. It also includes a plugin interface (IPipeValidation) for more advanced customization, which allows other extensions to receive and process validation reports. Users can utilize the command-line interface to trigger validation jobs and generate overview reports.

    Benefits & Impact: By implementing the Validation extension, CKAN installations can significantly improve the quality and reliability of their data. This leads to increased trust in the data, better data governance, and reduced errors in downstream applications that rely on the data. Automated validation helps to proactively identify and resolve data issues, contributing to a more efficient data management process.
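    Because the exposed actions go through CKAN's standard action API, a validation job can be triggered with a plain HTTP POST; a minimal sketch, with the host, API key, and resource id as placeholders:

      import requests

      resp = requests.post(
          "https://ckan.example.org/api/3/action/resource_validation_run",  # placeholder host
          json={"resource_id": "my-resource-id"},
          headers={"Authorization": "MY-CKAN-API-KEY"},
      )
      resp.raise_for_status()
      print(resp.json())  # then poll resource_validation_show for the finished report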

  10. ckanext-validator

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-validator [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-validator
    Dataset updated
    Jun 4, 2025
    Description

    The Validator extension for CKAN enables data validation within the CKAN ecosystem, leveraging the 'goodtables' library. This allows users to ensure the quality and integrity of tabular data resources published and managed within their CKAN instances. By integrating data validation capabilities, the extension aims to improve data reliability and usability.

    Key Features:
    • Data Validation using Goodtables: Utilizes the 'goodtables' library for validating tabular data resources, providing a standardized and robust validation process; a usage sketch follows below.
    • Automated Validation: Automatically validates packages, resources, or datasets upon each upload or update.

    Technical Integration: Given the limited information in the README, it can be assumed that the extension integrates with the CKAN resource creation and editing workflow, adding validation steps to the data upload and modification process and possibly providing feedback to users on any data quality issues detected.

    Benefits & Impact: By implementing the Validator extension, data publishers increase the reliability and reusability of data resources. This directly improves data quality control, enhances collaboration, lowers the risk of data-driven problems in downstream applications, and creates opportunities for data-driven organizations to scale up.
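    For reference, validating a file directly with the goodtables library looks like the following minimal sketch (the file path is a placeholder; the report keys follow goodtables-py's JSON report format):

      from goodtables import validate

      report = validate("data.csv")   # placeholder path
      print(report["valid"], report["error-count"])
      for table in report["tables"]:
          for error in table["errors"]:
              print(error["code"], error.get("message"))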

  11. Data set of "Smart" Brace Validation Study

    • borealisdata.ca
    • dataone.org
    Updated Nov 29, 2023
    Cite
    Vincent Nguyen; William Gage (2023). Data set of "Smart" Brace Validation Study [Dataset]. http://doi.org/10.5683/SP3/MGQYBR
    Available formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    Borealis
    Authors
    Vincent Nguyen; William Gage
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This data set was collected to validate a 'smart' knee brace, with an IMU embedded at the thigh and shank, against a reference motion capture system (Vicon). Ten participants each came into the lab for two sessions on separate days. In each session, participants completed three trials of 2-minute treadmill walking at their preferred walking speed, three trials of 15 squats to parallel, three trials of 10 sit-to-stands on a chair at about knee level, three trials of 15 total alternating lunges, and three trials of 2-minute treadmill jogging at their preferred speed, in that order. Some participants performed 10 squats and 10 lunges in their first session and 15 in the second (a .txt file is included for each participant's session to specify). This dataset only contains the IMU data.
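    A common way to validate IMU estimates against a mocap reference is the root-mean-square error between time-synchronized angle traces; a sketch on synthetic signals follows (this dataset ships only the IMU side, so the reference traces would come from the Vicon system described above).

      import numpy as np

      def rmse(imu: np.ndarray, mocap: np.ndarray) -> float:
          """RMSE between time-synchronized IMU and mocap knee-angle traces."""
          return float(np.sqrt(np.mean((imu - mocap) ** 2)))

      t = np.linspace(0, 2, 200)                       # 2 s at 100 Hz (synthetic)
      mocap = 30 * np.sin(2 * np.pi * t)               # synthetic reference knee angle (deg)
      imu = mocap + np.random.normal(0, 1.5, t.size)   # synthetic IMU estimate with noise
      print(f"RMSE = {rmse(imu, mocap):.2f} deg")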

  12. More than 120,520 Verified Emails and Phone numbers of Dentists From USA |...

    • datarade.ai
    Updated Apr 20, 2021
    Cite
    DataCaptive (2021). More than 120,520 Verified Emails and Phone numbers of Dentists From USA | Dentists Data | DataCaptive [Dataset]. https://datarade.ai/data-categories/special-offer-promotion-data
    Available download formats: .json, .xml, .csv, .xls, .txt
    Dataset updated
    Apr 20, 2021
    Dataset authored and provided by
    DataCaptive
    Area covered
    United States of America
    Description

    Salient Features of Dentists Email Addresses

    With our Dentists Email Leads, you will seldom have a reason to fail at global marketing campaigns or fall short of reaching targeted medical practitioners and healthcare specialists. Make haste and take action today!

    1. 1.2 million phone calls per month as part of our data verification process
    2. 85% telephone- and email-verified Dentist Mailing Lists
    3. Quarterly SMTP and NCOA verification to keep data fresh and active
    4. 15 million verification messages sent every month to validate email addresses
    5. Connect with top dentists across the US, Canada, UK, Europe, EMEA, Australia, APAC, and many more countries
    6. Regularly updated and cleansed databases, free of duplicate and inaccurate data

    How Can Our Dentists Data Help You to Market to Dentists?

    We provide a variety of methods for marketing your dental appliances or products to the top-rated dentists in the United States. Take a glance at some of the available channels:

    • Email blast
    • Marketing viability
    • Test campaigns
    • Direct mail
    • Sales leads
    • Drift campaigns
    • ABM campaigns
    • Product launches
    • B2B marketing

    Data Sources

    The contact details of your targeted healthcare professionals are compiled from highly credible resources like:
    • Websites
    • Medical seminars
    • Medical records
    • Trade shows
    • Medical conferences

    What’s in it for you? Here are a few advantages of choosing us:
    • Locate, target, and prospect leads from 170+ countries
    • Design and execute ABM and multi-channel campaigns
    • Seamless and smooth pre- and post-sale customer service
    • Connect with old leads and build fruitful customer relationships
    • Analyze the market for product development and sales campaigns
    • Boost sales and ROI with increased customer acquisition and retention

    Our security compliance

    We comply with globally recognized data laws such as GDPR, CCPA, ACMA, EDPS, CAN-SPAM, and ANTI CAN-SPAM to ensure the privacy and security of our database. We engage certified auditors to validate our security and privacy, providing us with certificates that represent our security compliance.

    Our USPs- what makes us your ideal choice?

    At DataCaptive™, we strive consistently to improve our services and cater to the needs of businesses around the world while keeping up with industry trends.

    • Elaborate data mining from credible sources
    • 7-tier verification, including manual quality checks
    • Strict adherence to global and local data policies
    • Guaranteed 95% accuracy or cash-back
    • Free sample database available on request

    Guaranteed benefits of our Dentists email database!

    85% email deliverability and 95% accuracy on other data fields

    We understand the importance of data accuracy and employ every avenue to keep our database fresh and updated. We execute a multi-step QC process backed by our patented AI and machine learning tools to prevent anomalies in consistency and data precision. This cycle repeats every 45 days. Although maintaining 100% accuracy is impractical, since data such as email addresses, physical addresses, and phone numbers are subject to change, we guarantee 85% email deliverability and 95% accuracy on other data points.

    100% replacement in case of hard bounces

    Every data point is meticulously verified and then re-verified to ensure you get the best. Data Accuracy is paramount in successfully penetrating a new market or working within a familiar one. We are committed to precision. However, in an unlikely event where hard bounces or inaccuracies exceed the guaranteed percentage, we offer replacement with immediate effect. If need be, we even offer credits and/or refunds for inaccurate contacts.

    Other promised benefits

    • Contacts are licensed for perpetual usage
    • The database comprises consent-based opt-in contacts only
    • The list is free of duplicate contacts and generic emails
    • Round-the-clock customer service assistance
    • 360-degree database solutions

  13. Data pipeline Validation And Load Testing using Multiple CSV Files

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Mar 26, 2021
    Cite
    Pelle Jakovits (2021). Data pipeline Validation And Load Testing using Multiple CSV Files [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4636797
    Dataset updated
    Mar 26, 2021
    Dataset provided by
    Afsana Khan
    Pelle Jakovits
    Mainak Adhikari
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets were used to validate and test the data pipeline deployment following the RADON approach. The source dataset is a single CSV file containing around 32,000 Twitter tweets; it was split into 100 CSV files of 320 tweets each. Those 100 CSV files are used to validate and test (performance/load testing) the data pipeline components.
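    The described split is easy to reproduce; a minimal sketch with pandas, where the input file name is a placeholder:

      import pandas as pd

      tweets = pd.read_csv("tweets.csv")   # placeholder for the ~32,000-tweet source file
      chunk = 320
      for i in range(100):
          part = tweets.iloc[i * chunk:(i + 1) * chunk]
          part.to_csv(f"tweets_part_{i:03d}.csv", index=False)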

  14. Parcel Data Geospatial Advisory Council (GAC) Validation Tool

    • gisdata.mn.gov
    esri_toolbox, html
    Updated Jul 9, 2020
    Cite
    MetroGIS (2020). Parcel Data Geospatial Advisory Council (GAC) Validation Tool [Dataset]. https://gisdata.mn.gov/dataset/us-mn-state-metrogis-plan-gac-parcel-validation
    Available download formats: esri_toolbox, html
    Dataset updated
    Jul 9, 2020
    Dataset provided by
    MetroGIS
    Description

    Data producers or those who maintain parcel data can use this tool to validate their data against the state Geospatial Advisory Committee (GAC) Parcel Data Standard. The validations within the tool were originally created as part of a MetroGIS Regional Parcel Dataset workflow.

    Counties using this tool can obtain a schema geodatabase from the Parcel Data Standard page hosted by MnGeo (link below). All counties, cities, or those maintaining authoritative data on a local jurisdiction's behalf are encouraged to use and modify the tool as needed to support local workflows.

    Parcel Data Standard Page
    http://www.mngeo.state.mn.us/committee/standards/parcel_attrib/parcel_attrib.html

    Specific validation information and tool requirements can be found in the following documents included within this resource.
    Readme_HowTo.pdf
    Readme_Validations.pdf


  15. B2B Data Full Record Purchase | 80MM Total Universe B2B Contact Data Mailing...

    • datarade.ai
    .xml, .csv, .xls
    Updated Feb 22, 2025
    Cite
    McGRAW (2025). B2B Data Full Record Purchase | 80MM Total Universe B2B Contact Data Mailing List [Dataset]. https://datarade.ai/data-products/b2b-data-full-record-purchase-80mm-total-universe-b2b-conta-mcgraw
    Available download formats: .xml, .csv, .xls
    Dataset updated
    Feb 22, 2025
    Dataset authored and provided by
    McGRAW
    Area covered
    Niue, Myanmar, Uzbekistan, Namibia, Burkina Faso, Swaziland, United Arab Emirates, Anguilla, Guinea-Bissau, Zimbabwe
    Description

    McGRAW’s US B2B Data: Accurate, Reliable, and Market-Ready

    Our B2B database delivers over 80 million verified contacts with 95%+ accuracy. Supported by in-house call centers, social media validation, and market research teams, we ensure that every record is fresh, reliable, and optimized for B2B outreach, lead generation, and advanced market insights.

    Our B2B database is one of the most accurate and extensive datasets available, covering over 91 million business executives with a 95%+ accuracy guarantee. Designed for businesses that require the highest quality data, this database provides detailed, validated, and continuously updated information on decision-makers and industry influencers worldwide.

    The B2B Database is meticulously curated to meet the needs of businesses seeking precise and actionable data. Our datasets are not only extensive but also rigorously validated and updated to ensure the highest level of accuracy and reliability.

    Key Data Attributes:

    • Personal Identifiers: First name, last name
    • Professional Details: Title, direct dial numbers
    • Business Information: Company name, address, phone number, fax number, website
    • Company Metrics: Employee size, sales volume
    • Technology Insights: Information on hardware and software usage across organizations
    • Social Media Connections: LinkedIn, Facebook, and direct dial contacts
    • Corporate Insights: Detailed company profiles

    Unlike many providers that rely solely on third-party vendor files, McGRAW takes a hands-on approach to data validation. Our dedicated nearshore and offshore call centers engage directly with data before each delivery to ensure every record meets our high standards of accuracy and relevance.

    In addition, our teams of social media validators, market researchers, and digital marketing specialists continuously refine and update records to maintain data freshness. Each dataset undergoes multiple verification checks using internal validation processes and third-party tools such as Fresh Address, BriteVerify, and Impressionwise to guarantee the highest data quality.

    Additional Data Solutions and Services

    • Data Enhancement: Email and LinkedIn appends, contact discovery across global roles and functions

    • Business Verification: Real-time validation through call centers, social media, and market research

    • Technology Insights: Detailed IT infrastructure reports, spending trends, and executive insights

    • Healthcare Database: Access to over 80 million healthcare professionals and industry leaders

    • Global Reach: US and international GDPR-compliant datasets, complete with email, postal, and phone contacts

    • Email Broadcast Services: Full-service campaign execution, from testing to live deployment, with tracking of key engagement metrics such as opens and clicks

    Many B2B data providers rely on vendor-contributed files without conducting the rigorous validation necessary to ensure accuracy. This often results in outdated and unreliable data that fails to meet the demands of a fast-moving business environment.

    McGRAW takes a different approach. By owning and operating dedicated call centers, we directly verify and validate our data before delivery, ensuring that every record is up-to-date and ready to drive business success.

    Through continuous validation, social media verification, and real-time updates, McGRAW provides a high-quality, dependable database for businesses that prioritize data integrity and performance. Our Global Business Executives database is the ideal solution for companies that need accurate, relevant, and market-ready data to fuel their strategies.

  16. Data from: NWEI Azura RTI 1/20th Model Validation Wave Tank Test Data

    • catalog.data.gov
    • mhkdr.openei.org
    Updated Jan 20, 2025
    Cite
    Northwest Energy Innovations (2025). NWEI Azura RTI 1/20th Model Validation Wave Tank Test Data [Dataset]. https://catalog.data.gov/dataset/nwei-azura-rti-1-20th-model-validation-wave-tank-test-data-e6ff6
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    Northwest Energy Innovations
    Description

    Data from the 1/20th-scale wave tank test of the RTI model. Northwest Energy Innovations (NWEI) has licensed intellectual property from RTI, modified the power take-off (PTO), and retested the 1/20th-scale RTI model that was tested as part of the Wave Energy Prize. The goal of the test was to validate NWEI's simulation models of the device. The test occurred at the University of Maine in Orono (UMO).

  17. Central South Dakota Airborne Lidar Validation - Field Survey Data

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Central South Dakota Airborne Lidar Validation - Field Survey Data [Dataset]. https://catalog.data.gov/dataset/central-south-dakota-airborne-lidar-validation-field-survey-data
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Central City, South Dakota
    Description

    U.S. Geological Survey (USGS) scientists conducted field data collection efforts during the time periods of April 25 - 26, 2017, October 24 - 28, 2017, and July 25 - 26, 2018, using a combination of surveying technologies to map and validate topography, structures, and other features at five sites in central South Dakota. The five sites included the Chamberlain Explorers Athletic Complex and the Chamberlain High School in Chamberlain, SD, Hanson Lake State Public Shooting Area near Corsica, SD, the State Capital Grounds in Pierre, SD, and Platte Creek State Recreation Area near Platte, SD. The work was initiated as an effort to evaluate airborne Geiger-Mode and Single Photon light detection and ranging (lidar) data that were collected over parts of central South Dakota. Both Single Photon and Geiger-Mode lidar offer the promise of being able to map areas at high altitudes, thus requiring less time than traditional airborne lidar collections, while acquiring higher point densities. Real Time Kinematic Global Navigational Satellite System (RTK-GNSS), total station, and ground-based lidar (GBL) data were collected to evaluate data collected by the Geiger-Mode and Single Photon systems.

  18. Data from: How to validate a Bayesian evolutionary model

    • datadryad.org
    • data.niaid.nih.gov
    zip
    Updated Nov 12, 2024
    Cite
    How to validate a Bayesian evolutionary model [Dataset]. https://datadryad.org/stash/dataset/doi:10.5061/dryad.xksn02vpk
    Available download formats: zip
    Dataset updated
    Nov 12, 2024
    Dataset provided by
    Dryad
    Authors
    Fábio K. Mendes; Remco Bouckaert; Luiz M. Carvalho; Alexei J. Drummond
    Description

    How to validate a Bayesian evolutionary model

    The main supplementary file (text document) is mendes_etal_validation_supp_2024.pdf. You should find this file hosted on Zenodo and pointed to by Dryad.

    This Dryad repository also hosts key output files for reproducing the figures in the manuscript. Namely:

    1. covg_match_3_to_300_tips_yule_bm.RData: R workspace containing table with simulated and inferred (HPD) values of parameters investigated in "Scenario 1". This data underlies coverage validation plots (Fig. 4 and 7) in the main manuscript;
    2. covg_mismatch_3_to_300_tips_yule_bm.RData: R workspace containing table with simulated and inferred (HPD) values of parameters investigated in "Scenario 2". This data underlies coverage validation plots (Fig. 4 and 7) in the main manuscript;
    3. covg_match_100_to_200_tips_yule_bm.RData: R workspace containing table with simulated and inferred (HPD) values of parameters investigated in "Scenario 3". This data underlies coverage validatio...
  19. Experimental data of free evacuations dedicated to the validation of egress...

    • entrepot.recherche.data.gouv.fr
    tsv, txt
    Updated Aug 31, 2023
    Cite
    Anthony Collin; Anthony Collin; Davood Zeinali; Alexis Marchand; Thomas Gasparotto; Davood Zeinali; Alexis Marchand; Thomas Gasparotto (2023). Experimental data of free evacuations dedicated to the validation of egress models [Dataset]. http://doi.org/10.12763/XNYB9A
    Available download formats: tsv (multiple files of varying sizes), txt
    Dataset updated
    Aug 31, 2023
    Dataset provided by
    Recherche Data Gouv
    Authors
    Anthony Collin; Anthony Collin; Davood Zeinali; Alexis Marchand; Thomas Gasparotto; Davood Zeinali; Alexis Marchand; Thomas Gasparotto
    License

    Etalab Open Licence 2.0: https://spdx.org/licenses/etalab-2.0.html

    Time period covered
    Apr 1, 2015 - Sep 28, 2015
    Description

    This database contains the results of 146 experimental evacuation drills. Several configurations are proposed, from a single room to a multi-compartment configuration. For each test, a single file contains all the evacuation times, in seconds from the drill start, at which people pass through the doorways. These raw data can be used by future users to calibrate or validate their own evacuation models.
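    A minimal sketch of computing summary egress metrics from one drill file; the file name and a single-column layout of crossing times are assumptions about the raw format.

      import numpy as np

      times = np.sort(np.loadtxt("drill_001.tsv"))   # hypothetical file: crossing times in s
      total_egress = times[-1]                       # last person through the doorway
      flow = times.size / total_egress               # mean flow (persons per second)
      print(f"{times.size} people, egress {total_egress:.1f} s, flow {flow:.2f} p/s")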

  20. Data from: Experimental Validation of a Prognostic Health Management System...

    • catalog.data.gov
    • datasets.ai
    Updated Apr 11, 2025
    Cite
    Dashlink (2025). Experimental Validation of a Prognostic Health Management System for Electro-Mechanical Actuators [Dataset]. https://catalog.data.gov/dataset/experimental-validation-of-a-prognostic-health-management-system-for-electro-mechanical-ac
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    The work described herein aims to advance prognostic health management solutions for electro-mechanical actuators (EMAs) and thus increase their reliability and attractiveness to designers of the next generation of aircraft and spacecraft. In pursuit of this goal, the team adopted a systematic approach, starting with EMA FMECA reviews, consultations with EMA manufacturers, and extensive literature reviews of previous efforts. Based on the acquired knowledge, nominal/off-nominal physics models and prognostic health management algorithms were developed. To aid development of the algorithms and validate them on realistic data, a testbed capable of supporting experiments in both laboratory and flight environments was developed. Test actuators with architectures similar to potential flight-certified units were obtained for testing, and realistic fault injection methods were designed. Several hundred fault scenarios were created using permutations of position and load profiles, as well as fault severity levels. The diagnostic system was tested extensively on these scenarios, with the test results demonstrating high accuracy and low numbers of false positive and false negative diagnoses. The prognostic system was utilized to track fault progression in some of the fault scenarios, predicting the remaining useful life of the actuator. A series of run-to-failure experiments was conducted to validate its performance, with the resulting error in predicting time to failure generally less than 10%. While a more robust validation procedure would require dozens more experiments executed under the same conditions (and, consequently, more test articles destroyed), the current results already demonstrate the potential for predicting fault progression in this type of device. More prognostic experiments are planned for the next phase of this work, including investigation and comparison of other prognostic algorithms (such as various types of Particle Filter and GPR), addition of new fault types, and execution of prognostic experiments in a flight environment.
