100+ datasets found
  1. Measuring quality of routine primary care data

    • data.niaid.nih.gov
    • datasetcatalog.nlm.nih.gov
    +1 more
    zip
    Updated Mar 12, 2021
    Cite
    Olga Kostopoulou; Brendan Delaney (2021). Measuring quality of routine primary care data [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzh
    Explore at:
    zip (available download formats)
    Dataset updated
    Mar 12, 2021
    Dataset provided by
    Imperial College London
    Authors
    Olga Kostopoulou; Brendan Delaney
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: Routine primary care data may be used for the derivation of clinical prediction rules and risk scores. We sought to measure the impact of a decision support system (DSS) on data completeness and freedom from bias.

    Materials and Methods: We used the clinical documentation of 34 UK General Practitioners who took part in a previous study evaluating the DSS. They consulted with 12 standardized patients. In addition to suggesting diagnoses, the DSS facilitates data coding. We compared the documentation from consultations with the electronic health record (EHR) (baseline consultations) vs. consultations with the EHR-integrated DSS (supported consultations). We measured the proportion of EHR data items related to the physician’s final diagnosis. We expected that in baseline consultations, physicians would document only or predominantly observations related to their diagnosis, while in supported consultations, they would also document other observations as a result of exploring more diagnoses and/or ease of coding.

    Results: Supported documentation contained significantly more codes (IRR=5.76 [4.31, 7.70] P<0.001) and less free text (IRR = 0.32 [0.27, 0.40] P<0.001) than baseline documentation. As expected, the proportion of diagnosis-related data was significantly lower (b=-0.08 [-0.11, -0.05] P<0.001) in the supported consultations, and this was the case for both codes and free text.

    Conclusions: We provide evidence that data entry in the EHR is incomplete and reflects physicians’ cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
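    The comparison described above reduces to simple per-consultation counts. A minimal sketch (illustrative only; the counts, field names, and the crude rate ratio below are invented stand-ins for the paper's regression-based IRRs):

```python
# Illustrative sketch, not the authors' analysis code. Each record holds
# hypothetical per-consultation counts of coded items, and of items related
# to the physician's final diagnosis.
baseline = [
    {"codes": 2, "diagnosis_related": 12, "total_items": 16},
    {"codes": 3, "diagnosis_related": 10, "total_items": 14},
]
supported = [
    {"codes": 15, "diagnosis_related": 11, "total_items": 19},
    {"codes": 12, "diagnosis_related": 9, "total_items": 15},
]

def mean(xs):
    return sum(xs) / len(xs)

# Crude rate ratio of coded items (the paper fits a proper count model;
# this is only the point estimate's flavour).
irr_codes = mean([c["codes"] for c in supported]) / mean([c["codes"] for c in baseline])

# Proportion of documented items related to the final diagnosis.
prop_baseline = mean([c["diagnosis_related"] / c["total_items"] for c in baseline])
prop_supported = mean([c["diagnosis_related"] / c["total_items"] for c in supported])

print(f"crude IRR (codes): {irr_codes:.2f}")
print(f"diagnosis-related proportion: baseline {prop_baseline:.2f}, supported {prop_supported:.2f}")
```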

  2. DataSheet1_Continuity and Completeness of Electronic Health Record Data for...

    • frontiersin.figshare.com
    docx
    Updated Jun 12, 2023
    Cite
    Chien-Ning Hsu; Kelly Huang; Fang-Ju Lin; Huang-Tz Ou; Ling-Ya Huang; Hsiao-Ching Kuo; Chi-Chuan Wang; Sengwee Toh (2023). DataSheet1_Continuity and Completeness of Electronic Health Record Data for Patients Treated With Oral Hypoglycemic Agents: Findings From Healthcare Delivery Systems in Taiwan.docx [Dataset]. http://doi.org/10.3389/fphar.2022.845949.s001
    Explore at:
    docx (available download formats)
    Dataset updated
    Jun 12, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Chien-Ning Hsu; Kelly Huang; Fang-Ju Lin; Huang-Tz Ou; Ling-Ya Huang; Hsiao-Ching Kuo; Chi-Chuan Wang; Sengwee Toh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objective: To evaluate the continuity and completeness of electronic health record (EHR) data, and the concordance of select clinical outcomes and baseline comorbidities between EHR and linked claims data, from three healthcare delivery systems in Taiwan.

    Methods: We identified oral hypoglycemic agent (OHA) users from the Integrated Medical Database of National Taiwan University Hospital (NTUH-iMD), which was linked to the National Health Insurance Research Database (NHIRD), from June 2011 to December 2016. A secondary evaluation involved two additional EHR databases. We created consecutive 90-day periods before and after the first recorded OHA prescription and defined patients as having continuous EHR data if there was at least one encounter or prescription in each 90-day interval. EHR data completeness was measured by dividing the number of encounters in the NTUH-iMD by the number of encounters in the NHIRD. We assessed the concordance between EHR and claims data on three clinical outcomes (cardiovascular events, nephropathy-related events, and heart failure admission). We used the individual comorbidities that comprise the Charlson comorbidity index to examine the concordance of select baseline comorbidities between EHRs and claims.

    Results: We identified 39,268 OHA users in the NTUH-iMD. Thirty-one percent (n = 12,296) of these users contributed to the analysis that examined data continuity during the 6-month baseline and 24-month follow-up period; 31% (n = 3,845) of the 12,296 users had continuous data during this 30-month period, and EHR data completeness was 52%. The concordance of major cardiovascular events, nephropathy-related events, and heart failure admission was moderate, with the NTUH-iMD capturing 49–55% of the outcome events recorded in the NHIRD. The concordance of comorbidities was considerably different between the NTUH-iMD and NHIRD, with an absolute standardized difference >0.1 for most comorbidities examined. Across the three EHR databases studied, 29–55% of the OHA users had continuous records during the 6-month baseline and 24-month follow-up period.

    Conclusion: EHR data continuity and data completeness may be suboptimal. A thorough evaluation of data continuity and completeness is recommended before conducting clinical and translational research using EHR data in Taiwan.
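    The continuity and completeness definitions above can be sketched directly (an illustration, not the study's code; the window logic and the encounter counts are hypothetical examples):

```python
from datetime import date, timedelta

# Sketch of the definitions described in the abstract: a patient has
# continuous EHR data if every consecutive 90-day period after the index
# prescription contains at least one encounter; completeness is the ratio
# of EHR encounters to linked-claims encounters.
def is_continuous(index_date, encounter_dates, n_periods, window_days=90):
    """True if each consecutive 90-day period holds at least one encounter."""
    for k in range(n_periods):
        start = index_date + timedelta(days=k * window_days)
        end = start + timedelta(days=window_days)
        if not any(start <= d < end for d in encounter_dates):
            return False
    return True

def completeness(ehr_encounters, claims_encounters):
    """EHR data completeness: EHR encounter count / claims encounter count."""
    return ehr_encounters / claims_encounters

index = date(2012, 1, 1)
visits = [index + timedelta(days=30 * m) for m in range(30)]  # monthly visits

print(is_continuous(index, visits, n_periods=10))  # True: every window covered
print(completeness(52, 100))                       # 0.52
```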

  3. The impact of routine data quality assessments on electronic medical record...

    • plos.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Veronica Muthee; Aaron F. Bochner; Allison Osterman; Nzisa Liku; Willis Akhwale; James Kwach; Mehta Prachi; Joyce Wamicwe; Jacob Odhiambo; Fredrick Onyango; Nancy Puttkammer (2023). The impact of routine data quality assessments on electronic medical record data quality in Kenya [Dataset]. http://doi.org/10.1371/journal.pone.0195362
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Veronica Muthee; Aaron F. Bochner; Allison Osterman; Nzisa Liku; Willis Akhwale; James Kwach; Mehta Prachi; Joyce Wamicwe; Jacob Odhiambo; Fredrick Onyango; Nancy Puttkammer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Kenya
    Description

    Background: Routine Data Quality Assessments (RDQAs) were developed to measure and improve facility-level electronic medical record (EMR) data quality. We assessed if RDQAs were associated with improvements in data quality in KenyaEMR, an HIV care and treatment EMR used at 341 facilities in Kenya.

    Methods: RDQAs assess data quality by comparing information recorded in paper records to KenyaEMR. RDQAs are conducted during a one-day site visit, where approximately 100 records are randomly selected and 24 data elements are reviewed to assess data completeness and concordance. Results are immediately provided to facility staff and action plans are developed for data quality improvement. For facilities that had received more than one RDQA (baseline and follow-up), we used generalized estimating equation models to determine if data completeness or concordance improved from the baseline to the follow-up RDQAs.

    Results: 27 facilities received two RDQAs and were included in the analysis, with 2369 and 2355 records reviewed from baseline and follow-up RDQAs, respectively. The frequency of missing data in KenyaEMR declined from the baseline (31% missing) to the follow-up (13% missing) RDQAs. After adjusting for facility characteristics, records from follow-up RDQAs had 0.43-times the risk (95% CI: 0.32–0.58) of having at least one missing value among nine required data elements compared to records from baseline RDQAs. Using a scale with one point awarded for each of 20 data elements with concordant values in paper records and KenyaEMR, we found that data concordance improved from baseline (11.9/20) to follow-up (13.6/20) RDQAs, with the mean concordance score increasing by 1.79 (95% CI: 0.25–3.33).

    Conclusions: This manuscript demonstrates that RDQAs can be implemented on a large scale and used to identify EMR data quality problems. RDQAs were associated with meaningful improvements in data quality and could be adapted for implementation in other settings.
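    The two RDQA metrics described above (missing required elements, and a 20-point concordance scale) can be illustrated with a short sketch; the element names and the three-field example below are invented stand-ins for the real 9 required and 20 scored elements:

```python
# Hypothetical illustration of RDQA-style scoring: one point per data element
# whose value in the paper record matches the EMR, plus a flag for records
# missing any required element. Field names are invented for the example.
REQUIRED = ["art_start_date", "regimen", "weight"]  # stand-ins for required fields

def concordance_score(paper, emr, elements):
    """Count elements whose paper-record and EMR values agree (and are present)."""
    return sum(1 for e in elements
               if paper.get(e) is not None and paper.get(e) == emr.get(e))

def has_missing_required(emr, required=REQUIRED):
    """True if any required element is absent or blank in the EMR record."""
    return any(emr.get(e) in (None, "") for e in required)

paper = {"art_start_date": "2015-02-01", "regimen": "TDF/3TC/EFV", "weight": 61}
emr   = {"art_start_date": "2015-02-01", "regimen": "TDF/3TC/EFV", "weight": None}

print(concordance_score(paper, emr, REQUIRED))  # 2: two of three fields agree
print(has_missing_required(emr))                # True: weight is missing
```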

  4. Data from: A new perspective on eruption data completeness: insights from...

    • researchdata.ntu.edu.sg
    pdf
    Updated Sep 7, 2022
    Cite
    Vanesa Burgos; Vanesa Burgos (2022). A new perspective on eruption data completeness: insights from the First Recorded EruptionS in the Holocene (FRESH) database [Dataset]. http://doi.org/10.21979/N9/PKQ3UC
    Explore at:
    pdf (6263471) (available download formats)
    Dataset updated
    Sep 7, 2022
    Dataset provided by
    DR-NTU (Data)
    Authors
    Vanesa Burgos; Vanesa Burgos
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Dataset funded by
    Resilience to Nature's Challenges Volcano program, New Zealand
    National Research Foundation (NRF)
    Ministry of Education (MOE)
    Earth Observatory of Singapore
    Description

    Burgos, V., Jenkins, S.F., Bebbington, M., Newhall, C., Taisne, B., 2022. A new perspective on eruption data completeness: insights from the First Recorded EruptionS in the Holocene (FRESH) database. Journal of Volcanology and Geothermal Research 431, 107648. https://doi.org/10.1016/j.jvolgeores.2022.107648

  5. Data Governance Market Analysis North America, Europe, APAC, South America,...

    • technavio.com
    pdf
    Updated Oct 12, 2024
    Cite
    Technavio (2024). Data Governance Market Analysis North America, Europe, APAC, South America, Middle East and Africa - US, Germany, Canada, Singapore, Australia, UK, France, The Netherlands, India, Sweden - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/data-governance-market-industry-analysis
    Explore at:
    pdf (available download formats)
    Dataset updated
    Oct 12, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2024 - 2028
    Area covered
    France, Germany, Netherlands, United Kingdom, Canada, United States
    Description

    Snapshot img

    Data Governance Market Size 2024-2028

    The data governance market size is forecast to increase by USD 5.39 billion at a CAGR of 21.1% between 2023 and 2028. The market is experiencing significant growth due to the increasing importance of informed decision-making in business operations. With the rise of remote workforces and the continuous generation of data from various sources, including medical devices and IT infrastructure, the need for strong data governance policies has become essential. The data deluge brought about by Internet of Things (IoT) device implementation and remote patient monitoring has made data completeness, security, and oversight crucial. Stricter regulations and compliance requirements for data usage are driving market growth, as organizations seek to ensure accountability and resilience in their data management practices. Companies are responding by launching innovative solutions to help businesses navigate these complexities, while also addressing the continued reliance on legacy systems. Ensuring data security and compliance, particularly in handling sensitive information, remains a top priority; in the healthcare sector, data governance is especially critical for protecting the security and privacy of sensitive patient information.

    What will be the Size of the Market During the Forecast Period?

    Request Free Sample

    Data governance refers to the overall management of an organization's information assets. In today's digital landscape, ensuring secure and accurate data is crucial for businesses to gain meaningful insights and make informed decisions. With the increasing adoption of digital transformation, big data, IoT technologies, and the digitalization of the healthcare industry, the need for sophisticated data governance has become essential. Policies and standards are the backbone of a strong data governance strategy. They provide guidelines for managing data quality, completeness, accuracy, and security. In the context of the US market, these policies and standards are essential for maintaining trust and accountability within an organization and with its stakeholders.

    Moreover, data volumes have been escalating, making data management strategies increasingly complex. Big data and IoT device implementation have led to data duplication, which can result in a data deluge. In such a scenario, data governance plays a vital role in ensuring data accuracy, completeness, and security. Sensitive information, such as patient records in the healthcare sector, is of utmost importance. Data governance policies and standards help maintain data security and privacy, ensuring that only authorized personnel have access to this information. Medical research also benefits from data governance, as it ensures the accuracy and completeness of data used for analysis.

    Furthermore, data security is a critical aspect of data governance. With the increasing use of remote patient monitoring and digital health records, ensuring data security becomes even more important. Data governance policies and standards help organizations implement the necessary measures to protect their information assets from unauthorized access, use, disclosure, disruption, modification, or destruction. In conclusion, data governance is a vital component of any organization's digital strategy. It helps ensure high-quality data, secure data, and meaningful insights. By implementing strong data governance policies and standards, organizations can maintain trust and accountability, protect sensitive information, and gain a competitive edge in today's data-driven market.

    Market Segmentation

    The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.

    Application
    
      Risk management
      Incident management
      Audit management
      Compliance management
      Others
    
    
    Deployment
    
      On-premises
      Cloud-based
    
    
    Geography
    
      North America
    
        Canada
        US
    
    
      Europe
    
        Germany
        UK
        France
        Sweden
    
    
      APAC
    
        India
        Singapore
    
    
      South America
    
    
    
      Middle East and Africa
    

    By Application Insights

    The risk management segment is estimated to witness significant growth during the forecast period. Data governance is a critical aspect of managing data in today's business environment, particularly in the context of wearables and remote monitoring tools. With the increasing use of these technologies for collecting and transmitting sensitive health and personal data, the risk of data breaches and cybersecurity threats has become a significant concern. Compliance regulations such as HIPAA and GDPR mandate strict data management practices to protect this information. To address these challenges, advanced data governance solutions are being adopted.

  6. The degrees of data completeness for the response variables investigated.

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Jianghua Liu; Anna Rotkirch; Virpi Lummaa (2023). The degrees of data completeness for the response variables investigated. [Dataset]. http://doi.org/10.1371/journal.pone.0034898.t002
    Explore at:
    xls (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jianghua Liu; Anna Rotkirch; Virpi Lummaa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Note. The degrees of data completeness are defined as follows:
    Complete – the variable value can be accurately determined (e.g. for 86.60% of the mothers under study, survival status (survival versus death) at age 15 of all produced offspring can be accurately determined).
    Incomplete – the variable value was estimated using the records available for some of all offspring (e.g. for 11.45% of the mothers, survival status data were available for some (at least one, but not all) of their offspring).
    Missing – there was no way to estimate the variable value and relevant mothers must be excluded from the analyses (e.g. for 1.95% of the mothers, survival status data were missing for all of their offspring).
    Abbreviations: M-fertility – maternal lifetime fertility; O-survival – offspring survival rate at age 15; O-breeding – offspring breeding probability; M-LRS – maternal lifetime reproductive success; M-RBF – maternal risk of breeding failure.
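    The three categories in the note follow a simple rule over each mother's offspring records, sketched here with a hypothetical data representation (None stands for a missing survival record):

```python
# Sketch of the completeness categories described in the note, assuming each
# mother is represented by a list of offspring survival statuses, with None
# where the record is missing. This representation is an assumption for
# illustration, not the study's actual data format.
def completeness_category(statuses):
    """Classify a mother's record set as Complete, Incomplete, or Missing."""
    known = sum(s is not None for s in statuses)
    if known == len(statuses):
        return "Complete"    # status determinable for all offspring
    if known > 0:
        return "Incomplete"  # status available for some, but not all, offspring
    return "Missing"         # no usable records; mother excluded from analyses

print(completeness_category(["alive", "dead", "alive"]))  # Complete
print(completeness_category(["alive", None]))             # Incomplete
print(completeness_category([None, None]))                # Missing
```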

  7. Ozone (O3) (Data Completeness Report) - 2 - Catalogue - Canadian Urban Data...

    • data.urbandatacentre.ca
    Updated Sep 18, 2023
    + more versions
    Cite
    (2023). Ozone (O3) (Data Completeness Report) - 2 - Catalogue - Canadian Urban Data Catalogue (CUDC) [Dataset]. https://data.urbandatacentre.ca/dataset/ozone-o3-data-completeness-report-2
    Explore at:
    Dataset updated
    Sep 18, 2023
    Area covered
    Canada
    Description

    Hourly ground-level ozone (O3) concentrations were estimated with the CHRONOS (Canadian Hemispherical Regional Ozone and NOx System) model from 2002 to 2009, and with the GEM-MACH (Global Environmental Multi-scale Modelling Air Quality and Chemistry) model from 2010 to 2015, by Environment and Climate Change Canada staff. Estimates incorporate ground-level observation data. Please note that Environment and Climate Change Canada (ECCC) provides air quality data directly - see the ECCC End Use Licence.pdf file referenced above under Supporting Documentation. These datasets were used by CANUE staff to calculate values of monthly concentrations of O3, for all postal codes in Canada for each year from 2002 to 2015 (DMTI Spatial, 2015). Values are reported only when data completeness thresholds are met - see Data Completeness.pdf in Supporting Documentation.
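    The completeness-threshold rule described above can be sketched as follows; the 75% threshold is an assumed placeholder, since the actual thresholds are given in Data Completeness.pdf:

```python
# Illustrative sketch of threshold-gated aggregation: a monthly mean O3 value
# is reported only when the share of valid hourly estimates in the month meets
# a completeness threshold. The 75% figure is an assumption for illustration.
def monthly_mean(hourly_values, threshold=0.75):
    """Mean of non-missing hourly values, or None if completeness < threshold."""
    valid = [v for v in hourly_values if v is not None]
    if len(valid) / len(hourly_values) < threshold:
        return None  # completeness threshold not met; value withheld
    return sum(valid) / len(valid)

# 720 hours in a 30-day month; half missing falls below the assumed threshold.
sparse = [30.0 if h % 2 == 0 else None for h in range(720)]
dense  = [30.0] * 700 + [None] * 20

print(monthly_mean(sparse))  # None (only 50% complete)
print(monthly_mean(dense))   # 30.0
```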

  8. Data Quality Software Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Nov 9, 2025
    Cite
    Data Insights Market (2025). Data Quality Software Report [Dataset]. https://www.datainsightsmarket.com/reports/data-quality-software-529643
    Explore at:
    ppt, doc, pdf (available download formats)
    Dataset updated
    Nov 9, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Explore the booming Data Quality Software market, driven by big data analytics and AI. Discover key insights, growth drivers, restraints, and regional trends for enterprise and SME solutions.

  9. Map Data Quality Assurance Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Map Data Quality Assurance Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/map-data-quality-assurance-market
    Explore at:
    pdf, pptx, csv (available download formats)
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Map Data Quality Assurance Market Outlook



    As per our latest research, the global map data quality assurance market size reached USD 1.85 billion in 2024, driven by the surging demand for high-precision geospatial information across industries. The market is experiencing robust momentum, growing at a CAGR of 10.2% during the forecast period. By 2033, the global map data quality assurance market is forecasted to attain USD 4.85 billion, fueled by the integration of advanced spatial analytics, regulatory compliance needs, and the proliferation of location-based services. The expansion is primarily underpinned by the criticality of data accuracy for navigation, urban planning, asset management, and other geospatial applications.




    One of the primary growth factors for the map data quality assurance market is the exponential rise in the adoption of location-based services and navigation solutions across various sectors. As businesses and governments increasingly rely on real-time geospatial insights for operational efficiency and strategic decision-making, the need for high-quality, reliable map data has become paramount. Furthermore, the evolution of smart cities and connected infrastructure has intensified the demand for accurate mapping data to enable seamless urban mobility, effective resource allocation, and disaster management. The proliferation of Internet of Things (IoT) devices and autonomous systems further accentuates the significance of data integrity and completeness, thereby propelling the adoption of advanced map data quality assurance solutions.




    Another significant driver contributing to the market’s expansion is the growing regulatory emphasis on geospatial data accuracy and privacy. Governments and regulatory bodies worldwide are instituting stringent standards for spatial data collection, validation, and sharing to ensure public safety, environmental conservation, and efficient governance. These regulations mandate comprehensive quality assurance protocols, fostering the integration of sophisticated software and services for data validation, error detection, and correction. Additionally, the increasing complexity of spatial datasets—spanning satellite imagery, aerial surveys, and ground-based sensors—necessitates robust quality assurance frameworks to maintain data consistency and reliability across platforms and applications.




    Technological advancements are also playing a pivotal role in shaping the trajectory of the map data quality assurance market. The advent of artificial intelligence (AI), machine learning, and cloud computing has revolutionized the way spatial data is processed, analyzed, and validated. AI-powered algorithms can now automate anomaly detection, spatial alignment, and feature extraction, significantly enhancing the speed and accuracy of quality assurance processes. Moreover, the emergence of cloud-based platforms has democratized access to advanced geospatial tools, enabling organizations of all sizes to implement scalable and cost-effective data quality solutions. These technological innovations are expected to further accelerate market growth, opening new avenues for product development and service delivery.




    From a regional perspective, North America currently dominates the map data quality assurance market, accounting for the largest revenue share in 2024. This leadership position is attributed to the region’s early adoption of advanced geospatial technologies, strong regulatory frameworks, and the presence of leading industry players. However, the Asia Pacific region is poised to witness the fastest growth over the forecast period, propelled by rapid urbanization, infrastructure development, and increased investments in smart city projects. Europe also maintains a significant market presence, driven by robust government initiatives for environmental monitoring and urban planning. Meanwhile, Latin America and the Middle East & Africa are gradually emerging as promising markets, supported by growing digitalization and expanding geospatial applications in transportation, utilities, and resource management.






  10. HadISD: Global sub-daily, surface meteorological station data, 1931-2020,...

    • catalogue.ceda.ac.uk
    • data-search.nerc.ac.uk
    Updated Jan 27, 2021
    + more versions
    Cite
    Centre for Environmental Data Analysis (CEDA) (2021). HadISD: Global sub-daily, surface meteorological station data, 1931-2020, v3.1.1.2020f [Dataset]. https://catalogue.ceda.ac.uk/uuid/f5a674c74cdd427594b6f3793b536cd0
    Explore at:
    Dataset updated
    Jan 27, 2021
    Dataset provided by
    Centre for Environmental Data Analysis (http://www.ceda.ac.uk/)
    License

    http://www.nationalarchives.gov.uk/doc/non-commercial-government-licence/version/2/

    Time period covered
    Jan 1, 1931 - Dec 31, 2020
    Area covered
    Earth
    Variables measured
    time, altitude, latitude, longitude, wind_speed, air_temperature, wind_speed_of_gust, cloud_area_fraction, cloud_base_altitude, wind_from_direction, and 8 more
    Description

    This is version 3.1.1.2020f of Met Office Hadley Centre's Integrated Surface Database, HadISD. These data are global sub-daily surface meteorological data that extends HadISD v3.1.0.2019f to include 2020 and so spans 1931-2020.

    The quality controlled variables in this dataset are: temperature, dewpoint temperature, sea-level pressure, wind speed and direction, and cloud data (total, low, mid and high level). Past significant weather and precipitation data are also included, but have not been quality controlled, so their quality and completeness cannot be guaranteed. Quality control flags and data values which have been removed during the quality control process are provided in the qc_flags and flagged_values fields, and ancillary data files provide a station listing with IDs, names and location information.

    The data are provided as one NetCDF file per station. Files in the station_data folder have the format "station_code"_HadISD_HadOBS_19310101-20210101_v3-1-1-2020f.nc. The station codes can be found under the docs tab. The station codes file has five columns as follows: 1) station code, 2) station name, 3) station latitude, 4) station longitude, 5) station height.
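    A small sketch (not an official HadISD tool) of working with the naming scheme and the five-column station listing described above; it assumes station names contain no spaces, and the example row is illustrative rather than taken from the real listing:

```python
# Hypothetical helpers for the file layout described above: build the
# per-station NetCDF filename from a station code, and parse one row of the
# five-column station codes listing (code, name, latitude, longitude, height).
def station_filename(code, span="19310101-20210101", version="v3-1-1-2020f"):
    """Per-station filename in the station_data folder."""
    return f"{code}_HadISD_HadOBS_{span}_{version}.nc"

def parse_station_row(row):
    """Split a station-codes row; assumes the name column has no spaces."""
    code, name, lat, lon, height = row.split()
    return {"code": code, "name": name,
            "lat": float(lat), "lon": float(lon), "height": float(height)}

print(station_filename("010010-99999"))
print(parse_station_row("010010-99999 JAN_MAYEN 70.93 -8.67 9.0"))
```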

    To keep informed about updates, news and announcements follow the HadOBS team on twitter @metofficeHadOBS.

    For more detailed information e.g bug fixes, routine updates and other exploratory analysis, see the HadISD blog: http://hadisd.blogspot.co.uk/

    References: When using the dataset in a paper you must cite the following papers (see Docs for links to the publications) and this dataset (using the "citable as" reference):

    Dunn, R. J. H., (2019), HadISD version 3: monthly updates, Hadley Centre Technical Note.

    Dunn, R. J. H., Willett, K. M., Parker, D. E., and Mitchell, L.: Expanding HadISD: quality-controlled, sub-daily station data from 1931, Geosci. Instrum. Method. Data Syst., 5, 473-491, doi:10.5194/gi-5-473-2016, 2016.

    Dunn, R. J. H., et al. (2012), HadISD: A Quality Controlled global synoptic report database for selected variables at long-term stations from 1973-2011, Clim. Past, 8, 1649-1679, 2012, doi:10.5194/cp-8-1649-2012

    Smith, A., N. Lott, and R. Vose, 2011: The Integrated Surface Database: Recent Developments and Partnerships. Bulletin of the American Meteorological Society, 92, 704–708, doi:10.1175/2011BAMS3015.1

    For a homogeneity assessment of HadISD please see this following reference

    Dunn, R. J. H., K. M. Willett, C. P. Morice, and D. E. Parker. "Pairwise homogeneity assessment of HadISD." Climate of the Past 10, no. 4 (2014): 1501-1522. doi:10.5194/cp-10-1501-2014, 2014.

  11. Data from: The fossil record of ichthyosaurs, completeness metrics and...

    • data.bris.ac.uk
    Updated Jul 23, 2015
    + more versions
    Cite
    (2015). Data from: The fossil record of ichthyosaurs, completeness metrics and sampling biases - Datasets - data.bris [Dataset]. https://data.bris.ac.uk/data/dataset/cef0e89d595144212006a31cf822cd55
    Explore at:
    Dataset updated
    Jul 23, 2015
    Description

    Excel file containing all data on ichthyosaur completeness, environmental parameters, references, and model outputs.

  12. Cloud Data Quality Monitoring and Testing Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Oct 14, 2025
    Cite
    Archive Market Research (2025). Cloud Data Quality Monitoring and Testing Report [Dataset]. https://www.archivemarketresearch.com/reports/cloud-data-quality-monitoring-and-testing-560914
    Explore at:
    doc, ppt, pdf (available download formats)
    Dataset updated
    Oct 14, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Cloud Data Quality Monitoring and Testing market is poised for robust expansion, projected to reach an estimated market size of USD 15,000 million in 2025, with a remarkable Compound Annual Growth Rate (CAGR) of 18% expected from 2025 to 2033. This significant growth is fueled by the escalating volume of data generated by organizations and the increasing adoption of cloud-based solutions for data management. Businesses are recognizing that reliable data is paramount for informed decision-making, regulatory compliance, and driving competitive advantage. As more critical business processes migrate to the cloud, the imperative to ensure the accuracy, completeness, consistency, and validity of this data becomes a top priority. Consequently, investments in sophisticated monitoring and testing tools are surging, enabling organizations to proactively identify and rectify data quality issues before they impact operations or strategic initiatives. Key drivers propelling this market forward include the growing demand for real-time data analytics, the complexities introduced by multi-cloud and hybrid cloud environments, and the increasing stringency of data privacy regulations. Cloud Data Quality Monitoring and Testing solutions offer enterprises the agility and scalability required to manage vast datasets effectively. The market is segmented by deployment into On-Premises and Cloud-Based solutions, with a clear shift towards cloud-native approaches due to their inherent flexibility and cost-effectiveness. Furthermore, the adoption of these solutions is observed across both Large Enterprises and Small and Medium-sized Enterprises (SMEs), indicating a broad market appeal. Emerging trends such as AI-powered data quality anomaly detection and automated data profiling are further enhancing the capabilities of these platforms, promising to streamline data governance and boost overall data trustworthiness. 
    However, challenges such as the initial cost of implementation and a potential shortage of skilled data quality professionals may temper the growth trajectory in certain segments.

  13. Data from: Reporting of measures of accuracy in systematic reviews of...

    • catalog.data.gov
    • odgavaprod.ogopendata.com
    Updated Sep 7, 2025
    National Institutes of Health (2025). Reporting of measures of accuracy in systematic reviews of diagnostic literature [Dataset]. https://catalog.data.gov/dataset/reporting-of-measures-of-accuracy-in-systematic-reviews-of-diagnostic-literature
    Explore at:
    Dataset updated
    Sep 7, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background: There are a variety of ways in which accuracy of clinical tests can be summarised in systematic reviews. Variation in reporting of summary measures has only been assessed in a small survey restricted to meta-analyses of screening studies found in a single database. Therefore, we performed this study to assess the measures of accuracy used for reporting results of primary studies as well as their meta-analysis in systematic reviews of test accuracy studies.

    Methods: Relevant reviews on test accuracy were selected from the Database of Abstracts of Reviews of Effectiveness (1994–2000), which electronically searches seven bibliographic databases and manually searches key resources. The structured abstracts of these reviews were screened and information on accuracy measures was extracted from the full texts of 90 relevant reviews, 60 of which used meta-analysis.

    Results: Sensitivity or specificity was used for reporting the results of primary studies in 65/90 (72%) reviews, predictive values in 26/90 (28%), and likelihood ratios in 20/90 (22%). For meta-analysis, pooled sensitivity or specificity was used in 35/60 (58%) reviews, pooled predictive values in 11/60 (18%), pooled likelihood ratios in 13/60 (22%), and pooled diagnostic odds ratio in 5/60 (8%). Summary ROC curves were used in 44/60 (73%) of the meta-analyses. There were no significant differences in measures of test accuracy between reviews published earlier (1994–97) and those published later (1998–2000).

    Conclusions: There is considerable variation in ways of reporting and summarising results of test accuracy studies in systematic reviews. There is a need for consensus about the best ways of reporting results of test accuracy studies in reviews.
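All of the summary measures surveyed above derive from the same 2×2 table of test results against a reference standard. A minimal sketch of how they relate, using hypothetical counts (tp, fp, fn, tn are illustrative, not data from the review):

```python
# Accuracy measures for a single 2x2 diagnostic table.
# tp/fp/fn/tn are hypothetical counts chosen for illustration.

def accuracy_measures(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)        # sensitivity: P(test+ | disease+)
    spec = tn / (tn + fp)        # specificity: P(test- | disease-)
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    dor = lr_pos / lr_neg        # diagnostic odds ratio = (tp*tn)/(fp*fn)
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "LR+": lr_pos, "LR-": lr_neg, "DOR": dor}

m = accuracy_measures(tp=90, fp=20, fn=10, tn=80)
print({k: round(v, 2) for k, v in m.items()})
```

Because every measure is a function of the same four counts, reviews that report different summaries are re-expressing, not adding, information; the variation documented above is purely one of presentation.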

  14. Comparison of data set completeness.

    • datasetcatalog.nlm.nih.gov
    Updated Aug 14, 2015
    Wilke, Claus O.; Marcotte, Edward M.; Boutz, Daniel R.; Sridhara, Viswanadham; Carroll, Sean M.; Houser, John R.; Sydykova, Dariya K.; Dasgupta, Aurko; Barnhart, Craig; Marx, Christopher J.; Barrick, Jeffrey E.; Trent, M. Stephen; Needham, Brittany D.; Papoulas, Ophelia; Michener, Joshua K. (2015). Comparison of data set completeness. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001878402
    Explore at:
    Dataset updated
    Aug 14, 2015
    Authors
    Wilke, Claus O.; Marcotte, Edward M.; Boutz, Daniel R.; Sridhara, Viswanadham; Carroll, Sean M.; Houser, John R.; Sydykova, Dariya K.; Dasgupta, Aurko; Barnhart, Craig; Marx, Christopher J.; Barrick, Jeffrey E.; Trent, M. Stephen; Needham, Brittany D.; Papoulas, Ophelia; Michener, Joshua K.
    Description

    *We counted proteins as observed if they appeared in at least 1 of 3 biological repeats, whereas Ref. [4] counted proteins that appeared in at least 1 of 2 biological repeats.

  15. Data Quality AI Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 29, 2025
    Growth Market Reports (2025). Data Quality AI Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-quality-ai-market
    Explore at:
    csv, pdf, pptx (available download formats)
    Dataset updated
    Aug 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Quality AI Market Outlook



    According to our latest research, the global Data Quality AI market size reached USD 1.92 billion in 2024, driven by a robust surge in data-driven business operations across industries. The sector is projected to expand at a remarkable compound annual growth rate (CAGR) of 18.6% from 2024 to 2033, reaching USD 9.38 billion by 2033. This impressive growth trajectory is underpinned by the increasing necessity for automated data quality management solutions, as organizations recognize the strategic value of high-quality data for analytics, compliance, and digital transformation initiatives.



    One of the primary growth factors for the Data Quality AI market is the exponential increase in data volume and complexity generated by modern enterprises. With the proliferation of IoT devices, cloud platforms, and digital business models, organizations are inundated with vast and diverse datasets. This data deluge, while offering immense potential, also introduces significant challenges related to data consistency, accuracy, and reliability. As a result, businesses are increasingly turning to AI-powered data quality solutions that can automate data cleansing, profiling, matching, and enrichment processes. These solutions not only enhance data integrity but also reduce manual intervention, enabling organizations to extract actionable insights more efficiently and cost-effectively.



    Another significant driver fueling the growth of the Data Quality AI market is the mounting regulatory pressure and compliance requirements across various sectors, particularly in BFSI, healthcare, and government. Stringent regulations such as GDPR, HIPAA, and CCPA mandate organizations to maintain high standards of data accuracy, security, and privacy. AI-driven data quality tools are instrumental in ensuring compliance by continuously monitoring data flows, identifying anomalies, and providing real-time remediation. This proactive approach to data governance mitigates risks associated with data breaches, financial penalties, and reputational damage, thereby making AI-based data quality management a strategic investment for organizations operating in highly regulated environments.



    The rapid adoption of advanced analytics, machine learning, and artificial intelligence across industries has also amplified the demand for high-quality data. As organizations increasingly leverage AI and advanced analytics for decision-making, the importance of data quality becomes paramount. Poor data quality can lead to inaccurate predictions, flawed business strategies, and suboptimal outcomes. Consequently, enterprises are prioritizing investments in AI-powered data quality solutions to ensure that their analytics initiatives are built on a foundation of reliable and consistent data. This trend is particularly pronounced among large enterprises and digitally mature organizations that view data as a critical asset for competitive differentiation and innovation.



    Data Quality Tools have become indispensable in the modern business landscape, particularly as organizations grapple with the complexities of managing vast amounts of data. These tools are designed to ensure that data is accurate, consistent, and reliable, which is crucial for making informed business decisions. By leveraging advanced algorithms and machine learning, Data Quality Tools can automate the processes of data cleansing, profiling, and enrichment, thereby reducing the time and effort required for manual data management. This automation not only enhances data integrity but also empowers businesses to derive actionable insights more efficiently. As a result, companies are increasingly investing in these tools to maintain a competitive edge in their respective industries.
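As an illustration of what such tools automate at their simplest, the sketch below profiles a dataset for two of the dimensions discussed throughout this report, completeness and validity. The records, field names, and validation rules are hypothetical assumptions; commercial platforms layer ML-driven matching and enrichment on top of checks like these.

```python
import re

# Minimal data-quality profiler: per-field completeness and validity.
# RULES maps a field name to a predicate; both fields and rules are
# illustrative assumptions, not any vendor's API.
RULES = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$").match,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 120,
}

def profile(records: list[dict]) -> dict:
    fields = {f for r in records for f in r}
    report = {}
    for f in sorted(fields):
        values = [r.get(f) for r in records]
        present = [v for v in values if v not in (None, "")]
        completeness = len(present) / len(records)   # share of non-missing values
        rule = RULES.get(f)
        valid = sum(1 for v in present if rule(v)) if rule else len(present)
        validity = valid / len(present) if present else 0.0  # share passing the rule
        report[f] = {"completeness": completeness, "validity": validity}
    return report

rows = [
    {"email": "a@example.com", "age": 34},
    {"email": "not-an-email", "age": 34},
    {"email": "", "age": 233},
]
print(profile(rows))
```

A monitoring deployment would run a profile like this on each batch or stream window and alert when a score drops below a configured threshold.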



    From a regional perspective, North America continues to dominate the Data Quality AI market, accounting for the largest share in 2024. The region's leadership is attributed to the presence of major technology vendors, early adoption of AI-driven solutions, and a robust ecosystem of data-centric enterprises. However, Asia Pacific is emerging as the fastest-growing region, propelled by rapid digital transformation, increasing investments in cloud infrastructure, and a burgeoning startup ecosystem. Europe, Latin America, and the Middle East & Africa are also witnessing steady growth, driven by regulatory mandates.

  16. DataSheet_1_Quality indicators: completeness, validity and timeliness of...

    • frontiersin.figshare.com
    pdf
    Updated Jul 28, 2023
    Francesco Giusti; Carmen Martos; Raquel Negrão Carvalho; Liesbet Van Eycken; Otto Visser; Manola Bettio (2023). DataSheet_1_Quality indicators: completeness, validity and timeliness of cancer registry data contributing to the European Cancer Information System.pdf [Dataset]. http://doi.org/10.3389/fonc.2023.1219128.s001
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jul 28, 2023
    Dataset provided by
    Frontiers Media (http://www.frontiersin.org/)
    Authors
    Francesco Giusti; Carmen Martos; Raquel Negrão Carvalho; Liesbet Van Eycken; Otto Visser; Manola Bettio
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Population-based Cancer Registries (PBCRs) are tasked with collecting high-quality data, important for monitoring cancer burden and its trends, planning and evaluating cancer control activities, clinical and epidemiological research, and the development of health policies. The main indicators to measure data quality are validity, completeness, comparability and timeliness. The aim of this article is to evaluate the quality of PBCR data collected in the first ENCR-JRC data call, dated 2015.

    Methods: All malignant tumours, except skin non-melanoma, and in situ and uncertain-behaviour bladder tumours were obtained from 130 European general PBCRs for patients older than 19 years. The proportion of cases with death certificate only (DCO%), proportion of cases with unknown primary site (PSU%), proportion of microscopically verified cases (MV%), mortality-to-incidence (M:I) ratio, proportion of cases with unspecified morphology (UM%) and the median of the difference between the registration date and the incidence date were computed by sex, age group, cancer site, period and PBCR.

    Results: A total of 28,776,562 cases from 130 PBCRs, operating in 30 European countries, were included in the analysis. The quality of incidence data reported by PBCRs improved across the study period. Data quality is worse for the oldest age groups and for cancer sites with poor survival. No differences were found between males and females. High variability in data quality was detected across European PBCRs.

    Conclusion: The results reported in this paper are to be interpreted as the baseline for monitoring PBCR data quality indicators in Europe over time.
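The indicators named above are simple proportions over a registry's case records. A minimal sketch of how DCO%, MV%, PSU% and the M:I ratio could be computed — the record layout (`basis`, `site` fields) and the sample cases are illustrative assumptions; real registries follow ENCR coding rules:

```python
# Quality indicators over hypothetical cancer-registry case records.

def quality_indicators(cases: list[dict], deaths: int) -> dict:
    n = len(cases)
    dco = sum(c["basis"] == "DCO" for c in cases)          # death certificate only
    mv = sum(c["basis"] == "microscopic" for c in cases)   # microscopically verified
    psu = sum(c["site"] == "unknown" for c in cases)       # primary site unknown
    return {
        "DCO%": 100 * dco / n,
        "MV%": 100 * mv / n,
        "PSU%": 100 * psu / n,
        "M:I": deaths / n,     # mortality-to-incidence ratio
    }

cases = [
    {"basis": "microscopic", "site": "lung"},
    {"basis": "microscopic", "site": "unknown"},
    {"basis": "DCO", "site": "lung"},
    {"basis": "clinical", "site": "breast"},
]
print(quality_indicators(cases, deaths=2))
```

High MV% and low DCO%, PSU% and M:I ratios indicate better validity and completeness, which is why these proportions serve as the comparison axes across registries in the study.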

  17. Data Quality Tools Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 4, 2025
    Growth Market Reports (2025). Data Quality Tools Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-quality-tools-market
    Explore at:
    pdf, pptx, csv (available download formats)
    Dataset updated
    Aug 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Quality Tools Market Outlook



    According to our latest research, the global Data Quality Tools market size reached USD 2.65 billion in 2024, reflecting robust demand across industries for solutions that ensure data accuracy, consistency, and reliability. The market is poised to expand at a CAGR of 17.6% from 2025 to 2033, driven by increasing digital transformation initiatives, regulatory compliance requirements, and the exponential growth of enterprise data. By 2033, the Data Quality Tools market is forecasted to attain a value of USD 12.06 billion, as organizations worldwide continue to prioritize data-driven decision-making and invest in advanced data management solutions.




    A key growth factor propelling the Data Quality Tools market is the proliferation of data across diverse business ecosystems. Enterprises are increasingly leveraging big data analytics, artificial intelligence, and cloud computing, all of which demand high-quality data as a foundational element. The surge in unstructured and structured data from various sources such as customer interactions, IoT devices, and business operations has made data quality management a strategic imperative. Organizations recognize that poor data quality can lead to erroneous insights, operational inefficiencies, and compliance risks. As a result, the adoption of comprehensive Data Quality Tools for data profiling, cleansing, and enrichment is accelerating, particularly among industries with high data sensitivity like BFSI, healthcare, and retail.




    Another significant driver for the Data Quality Tools market is the intensifying regulatory landscape. Data privacy laws such as the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and other country-specific mandates require organizations to maintain high standards of data integrity and traceability. Non-compliance can result in substantial financial penalties and reputational damage. Consequently, businesses are investing in sophisticated Data Quality Tools that provide automated monitoring, data lineage, and audit trails to ensure regulatory adherence. This regulatory push is particularly prominent in sectors like finance, healthcare, and government, where the stakes for data accuracy and security are exceptionally high.




    Advancements in cloud technology and the growing trend of digital transformation across enterprises are also fueling market growth. Cloud-based Data Quality Tools offer scalability, flexibility, and cost-efficiency, enabling organizations to manage data quality processes remotely and in real-time. The shift towards Software-as-a-Service (SaaS) models has lowered the entry barrier for small and medium enterprises (SMEs), allowing them to implement enterprise-grade data quality solutions without substantial upfront investments. Furthermore, the integration of machine learning and artificial intelligence capabilities into data quality platforms is enhancing automation, reducing manual intervention, and improving the overall accuracy and efficiency of data management processes.




    From a regional perspective, North America continues to dominate the Data Quality Tools market due to its early adoption of advanced technologies, a mature IT infrastructure, and the presence of leading market players. However, the Asia Pacific region is emerging as a high-growth market, driven by rapid digitalization, increasing investments in IT, and a burgeoning SME sector. Europe maintains a strong position owing to stringent data privacy regulations and widespread enterprise adoption of data management solutions. Latin America and the Middle East & Africa, while relatively nascent, are witnessing growing awareness and adoption, particularly in the banking, government, and telecommunications sectors.





    Component Analysis



    The Component segment of the Data Quality Tools market is bifurcated into software and services. Software dominates the segment, accounting for a significant share of the global market revenue in 2024.

  18. Global Data Quality Tools Market Size By Deployment Mode (On-Premises,...

    • verifiedmarketresearch.com
    Updated Oct 13, 2025
    VERIFIED MARKET RESEARCH (2025). Global Data Quality Tools Market Size By Deployment Mode (On-Premises, Cloud-Based), By Organization Size (Small and Medium sized Enterprises (SMEs), Large Enterprises), By End User Industry (Banking, Financial Services, and Insurance (BFSI)), By Geographic Scope And Forecast [Dataset]. https://www.verifiedmarketresearch.com/product/global-data-quality-tools-market-size-and-forecast/
    Explore at:
    Dataset updated
    Oct 13, 2025
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2026 - 2032
    Area covered
    Global
    Description

    Data Quality Tools Market size was valued at USD 2.71 Billion in 2024 and is projected to reach USD 4.15 Billion by 2032, growing at a CAGR of 5.46% from 2026 to 2032.

    Global Data Quality Tools Market Drivers

    Growing Data Volume and Complexity: Sturdy data quality technologies are necessary to guarantee accurate, consistent, and trustworthy information because of the exponential increase in the volume and complexity of data generated by companies.

    Growing Awareness of Data Governance: Businesses are realizing how critical it is to uphold strict standards for data integrity and data governance. Tools for improving data quality are essential for advancing data governance programs.

    Regulatory Compliance Needs: Adoption of data quality technologies is prompted by strict regulatory requirements, like GDPR, HIPAA, and other data protection rules, which aim to ensure compliance and reduce the risk of negative legal and financial outcomes.

    Growing Emphasis on Analytics and Business Intelligence (BI): The requirement for accurate and trustworthy data is highlighted by the increasing reliance on business intelligence and analytics for well-informed decision-making. Tools for improving data quality contribute to increased data accuracy for analytics and reporting.

    Data Integration and Migration Initiatives: Companies engaged in data integration or migration initiatives understand how critical it is to preserve data quality throughout these procedures. The use of data quality technologies is essential for guaranteeing seamless transitions and avoiding inconsistent data.

    Demand for Real-Time Data Quality Management: Organizations looking to make prompt decisions based on precise and current information are driving an increased need for real-time data quality management systems.

    The Emergence of Cloud Computing and Big Data: Strong data quality tools are required to manage many data sources, formats, and environments while upholding high data quality standards as big data and cloud computing solutions become more widely used.

    Focus on Customer Satisfaction and Experience: Businesses are aware of how data quality affects customer happiness and experience. Establishing and maintaining consistent and accurate customer data is essential to fostering trust and providing individualized services.

    Preventing Fraud and Data-Related Errors: By detecting and fixing mistakes in real time, data quality technologies assist firms in preventing errors, discrepancies, and fraudulent activities while lowering the risk of monetary losses and reputational harm.

    Integration with Master Data Management (MDM) Programs: Integrating with MDM solutions improves master data management overall and guarantees high-quality, accurate, and consistent maintenance of vital corporate information.

    Data Quality as a Service (DQaaS) Offerings: Data quality tools are now more widely available and scalable for companies of all sizes thanks to the development of Data Quality as a Service (DQaaS), which offers cloud-based solutions to firms.

  19. Data Quality as a Service Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Sep 1, 2025
    Growth Market Reports (2025). Data Quality as a Service Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-quality-as-a-service-market
    Explore at:
    pdf, pptx, csv (available download formats)
    Dataset updated
    Sep 1, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Quality as a Service (DQaaS) Market Outlook



    According to the latest research, the global Data Quality as a Service (DQaaS) market size reached USD 2.48 billion in 2024, reflecting a robust interest in data integrity solutions across diverse industries. The market is poised to expand at a compound annual growth rate (CAGR) of 18.7% from 2025 to 2033, with the forecasted market size anticipated to reach USD 12.19 billion by 2033. This remarkable growth is primarily driven by the increasing reliance on data-driven decision-making, regulatory compliance mandates, and the proliferation of cloud-based technologies. Organizations are recognizing the necessity of high-quality data to fuel analytics, artificial intelligence, and operational efficiency, which is accelerating the adoption of DQaaS globally.




    The exponential growth of the Data Quality as a Service market is underpinned by several key factors. Primarily, the surge in data volumes generated by digital transformation initiatives and the Internet of Things (IoT) has created an urgent need for robust data quality management platforms. Enterprises are increasingly leveraging DQaaS to ensure the accuracy, completeness, and reliability of their data assets, which are crucial for maintaining a competitive edge. Additionally, the rising adoption of cloud computing has made it more feasible for organizations of all sizes to access advanced data quality tools without the need for significant upfront investment in infrastructure. This democratization of data quality solutions is expected to further fuel market expansion in the coming years.




    Another significant driver is the growing emphasis on regulatory compliance and risk mitigation. Industries such as BFSI, healthcare, and government are subject to stringent regulations regarding data privacy, security, and reporting. DQaaS platforms offer automated data validation, cleansing, and monitoring capabilities, enabling organizations to adhere to these regulatory requirements efficiently. The increasing prevalence of data breaches and cyber threats has also highlighted the importance of maintaining high-quality data, as poor data quality can exacerbate vulnerabilities and compliance risks. As a result, organizations are investing in DQaaS not only to enhance operational efficiency but also to safeguard their reputation and avoid costly penalties.




    Furthermore, the integration of artificial intelligence (AI) and machine learning (ML) technologies into DQaaS solutions is transforming the market landscape. These advanced technologies enable real-time data profiling, anomaly detection, and predictive analytics, which significantly enhance the effectiveness of data quality management. The ability to automate complex data quality processes and derive actionable insights from vast datasets is particularly appealing to large enterprises and data-centric organizations. As AI and ML continue to evolve, their application within DQaaS platforms is expected to drive innovation and unlock new growth opportunities, further solidifying the market's upward trajectory.



    Ensuring the reliability of data through Map Data Quality Assurance is becoming increasingly crucial as organizations expand their geographic data usage. This process involves a systematic approach to verify the accuracy and consistency of spatial data, which is essential for applications ranging from logistics to urban planning. By implementing rigorous quality assurance protocols, businesses can enhance the precision of their location-based services, leading to improved decision-making and operational efficiency. As the demand for geographic information systems (GIS) grows, the emphasis on maintaining high standards of map data quality will continue to rise, supporting the overall integrity of data-driven strategies.




    From a regional perspective, North America currently dominates the Data Quality as a Service market, accounting for the largest share in 2024. This leadership is attributed to the early adoption of cloud technologies, a mature IT infrastructure, and a strong focus on data governance among enterprises in the region. Europe follows closely, with significant growth driven by strict data protection regulations such as GDPR. Meanwhile, the Asia Pacific region is witnessing the fastest growth, propelled by rapid digitalization and increasing investments in cloud infrastructure.

  20. Development Economics Data Group - Completeness of birth registration, male...

    • gimi9.com
    Updated Jan 15, 2004
    + more versions
    (2004). Development Economics Data Group - Completeness of birth registration, male (%) | gimi9.com [Dataset]. https://gimi9.com/dataset/worldbank_wb_wdi_sp_reg_brth_ma_zs/
    Explore at:
    Dataset updated
    Jan 15, 2004
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Completeness of birth registration is the percentage of children under age 5 whose births were registered at the time of the survey. The numerator of completeness of birth registration includes children whose birth certificate was seen by the interviewer or whose mother or caretaker says the birth has been registered.

Olga Kostopoulou; Brendan Delaney (2021). Measuring quality of routine primary care data [Dataset]. http://doi.org/10.5061/dryad.dncjsxkzh

Conclusions: We provide evidence that data entry in the EHR is incomplete and reflects physicians’ cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
