https://spdx.org/licenses/CC0-1.0.html
Objective: Routine primary care data may be used for the derivation of clinical prediction rules and risk scores. We sought to measure the impact of a decision support system (DSS) on data completeness and freedom from bias.
Materials and Methods: We used the clinical documentation of 34 UK General Practitioners who took part in a previous study evaluating the DSS. They consulted with 12 standardized patients. In addition to suggesting diagnoses, the DSS facilitates data coding. We compared the documentation from consultations with the electronic health record (EHR) (baseline consultations) vs. consultations with the EHR-integrated DSS (supported consultations). We measured the proportion of EHR data items related to the physician’s final diagnosis. We expected that in baseline consultations, physicians would document only or predominantly observations related to their diagnosis, while in supported consultations, they would also document other observations as a result of exploring more diagnoses and/or ease of coding.
Results: Supported documentation contained significantly more codes (IRR = 5.76 [4.31, 7.70], P < 0.001) and less free text (IRR = 0.32 [0.27, 0.40], P < 0.001) than baseline documentation. As expected, the proportion of diagnosis-related data was significantly lower (b = -0.08 [-0.11, -0.05], P < 0.001) in the supported consultations, and this was the case for both codes and free text.
Conclusions: We provide evidence that data entry in the EHR is incomplete and reflects physicians’ cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
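The results above are reported as incidence rate ratios (IRRs) for counts of codes and free-text items. As a minimal, hedged sketch only (the paper's exact model specification is not reproduced here, and the data and column names below are invented), an IRR with a 95% confidence interval can be estimated from count data with a Poisson GLM:

```python
# Hypothetical sketch (not the authors' code): estimating an incidence rate ratio
# (IRR) with a 95% CI, like those quoted in the Results, from per-consultation
# counts. Column names and the toy data are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "supported": np.repeat([0, 1], 200),  # 0 = baseline EHR, 1 = DSS-supported
    "n_codes": np.concatenate([rng.poisson(1.5, 200), rng.poisson(8.0, 200)]),
})

model = smf.glm("n_codes ~ supported", data=df, family=sm.families.Poisson()).fit()
irr = np.exp(model.params["supported"])              # rate ratio, supported vs. baseline
ci_low, ci_high = np.exp(model.conf_int().loc["supported"])
print(f"IRR = {irr:.2f} [{ci_low:.2f}, {ci_high:.2f}]")
```

A negative binomial family could be substituted if the counts are overdispersed.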
This data table provides the detailed data quality assessment scores for the Technical Limits dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this seminar, you will learn about ArcGIS Data Reviewer tools that allow you to automate, centrally manage, and improve your GIS data quality control processes. This seminar was developed to support the following: ArcGIS 10.0 for Desktop (ArcView, ArcEditor, or ArcInfo) and ArcGIS Data Reviewer for Desktop.
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
Metrics used to give an indication of data quality between our test groups. These include whether documentation was used and what proportion of respondents rounded their answers. Unit and item non-response are also reported.
This data table provides the detailed data quality assessment scores for the Curtailment dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This data table provides the detailed data quality assessment scores for the Historic Faults dataset. The quality assessment was carried out on the 31st March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This data table provides the detailed data quality assessment scores for the SPD DG Connections Network Info dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This data table provides the detailed data quality assessment scores for the Long Term Development Statement dataset. The quality assessment was carried out on 31st March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality; to demonstrate our progress, we conduct annual assessments of our data quality in line with the dataset refresh rate. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Groundwater samples were collected and analyzed from 782 wells as part of the National Water-Quality Assessment (NAWQA) Project of the U.S. Geological Survey National Water-Quality Program, and the water-quality data and quality-control data are included in this data release. The samples were collected from three types of well networks: principal aquifer study networks, which are used to assess the quality of groundwater used for public water supply; land-use study networks, which are used to assess land-use effects on shallow groundwater quality; and major aquifer study networks, which are used to assess the quality of groundwater used for domestic supply. Groundwater samples were analyzed for a large number of water-quality indicators and constituents, including nutrients, major ions, trace elements, volatile organic compounds (VOCs), pesticides, radionuclides, and microbial indicators. Data from samples collected between 2012 and 2019 are associated with networks described in a collection of data series reports and associated data releases (Arnold and others, 2016a,b, 2017a,b, 2018a,b, 2020a,b; Kingsbury and others, 2020 and 2021). This data release includes data from networks sampled in 2019 through 2022. For some networks, certain constituent group data were not completely reviewed and released by the analyzing laboratory for all network sites in time for publication of this data release. For networks with incomplete data, no data were published for the incomplete constituent group(s). Datasets excluded from this data release because of incomplete results will be included in the earliest data release published after the dataset is complete.

NOTE: While previous versions are available from the author, all the records in previous versions can be found in version 3.0. First posted - December 12, 2021 (available from author). Revised - January 27, 2023 (version 2.0; available from author). Revised - November 2, 2023 (version 3.0).

The compressed file (NWQP_GW_QW_DataRelease_v3.zip) contains 24 files: 23 files of groundwater-quality data, quality-control data, and general information in ASCII tab-delimited format, and one corresponding metadata file in XML format that includes descriptions of all the tables and attributes. A shapefile containing study areas for each of the sampled groundwater networks also is provided as part of this data release and is described in the metadata (Network_Boundaries_v3.zip). The files are as follows:

Description_of_Data_Field_v3.txt: Information for all constituents and ancillary information found in Tables 3 through 21.
Network_Reference_List_v3.txt: References used for the description of the networks sampled by the USGS NAWQA Project.
Table_1_site_list_v3.txt: Information about wells that have environmental data.
Table_2_parameters_v3.txt: Constituent primary uses and sources; laboratory analytical schedules and sampling period; USGS parameter codes (pcodes); comparison thresholds; and reporting levels.
Table_3_qw_indicators_v3.txt: Water-quality indicators in groundwater samples collected by the USGS NAWQA Project.
Table_4_nutrients_v3.txt: Nutrients and dissolved organic carbon in groundwater samples collected by the USGS NAWQA Project.
Table_5_major_ions_v3.txt: Major and minor ions in groundwater samples collected by the USGS NAWQA Project.
Table_6_trace_elements_v3.txt: Trace elements in groundwater samples collected by the USGS NAWQA Project.
Table_7_vocs_v3.txt: Volatile organic compounds (VOCs) in groundwater samples collected by the USGS NAWQA Project.
Table_8_pesticides_v3.txt: Pesticides in groundwater samples collected by the USGS NAWQA Project.
Table_9_radchem_v3.txt: Radionuclides in groundwater samples collected by the USGS NAWQA Project.
Table_10_micro_v3.txt: Microbiological indicators in groundwater samples collected by the USGS NAWQA Project.
Table_11_qw_ind_QC_v3.txt: Water-quality indicators in groundwater replicate samples collected by the USGS NAWQA Project.
Table_12_nuts_QC_v3.txt: Nutrients and dissolved organic carbon in groundwater blank and replicate samples collected by the USGS NAWQA Project.
Table_13_majors_QC_v3.txt: Major and minor ions in groundwater blank and replicate samples collected by the USGS NAWQA Project.
Table_14_trace_element_QC_v3.txt: Trace elements in groundwater blank and replicate samples collected by the USGS NAWQA Project.
Table_15_vocs_QC_v3.txt: Volatile organic compounds (VOCs) in groundwater blank, replicate, and spike samples collected by the USGS NAWQA Project.
Table_16_pesticides_QC_v3.txt: Pesticide compounds in groundwater blank, replicate, and spike samples collected by the USGS NAWQA Project.
Table_17_radchem_QC_v3.txt: Radionuclides in groundwater replicate samples collected by the USGS NAWQA Project.
Table_18_micro_QC_v3.txt: Microbiological indicators in groundwater blank, replicate, and spike samples collected by the USGS NAWQA Project.
Table_19_TE_SpikeStats_v3.txt: Statistics for trace elements in groundwater spike samples collected by the USGS NAWQA Project.
Table_20_VOCLabSpikeStats_v3.txt: Statistics for volatile organic compounds (VOCs) in groundwater spike samples collected by the USGS NAWQA Project.
Table_21_PestFieldSpikeStats_v3.txt: Statistics for pesticide compounds in groundwater spike samples collected by the USGS NAWQA Project.

References
Arnold, T.L., DeSimone, L.A., Bexfield, L.M., Lindsey, B.D., Barlow, J.R., Kulongoski, J.T., Musgrove, MaryLynn, Kingsbury, J.A., and Belitz, Kenneth, 2016a, Groundwater quality data from the National Water-Quality Assessment Project, May 2012 through December 2013 (ver. 1.1, November 2016): U.S. Geological Survey Data Series 997, 56 p., https://doi.org/10.3133/ds997.
Arnold, T.L., DeSimone, L.A., Bexfield, L.M., Lindsey, B.D., Barlow, J.R., Kulongoski, J.T., Musgrove, MaryLynn, Kingsbury, J.A., and Belitz, Kenneth, 2016b, Groundwater quality data from the National Water Quality Assessment Project, May 2012 through December 2014 and select quality-control data from May 2012 through December 2013: U.S. Geological Survey data release, https://doi.org/10.5066/F7HQ3X18.
Arnold, T.L., Bexfield, L.M., Musgrove, MaryLynn, Lindsey, B.D., Stackelberg, P.E., Barlow, J.R., DeSimone, L.A., Kulongoski, J.T., Kingsbury, J.A., Ayotte, J.D., Fleming, B.J., and Belitz, Kenneth, 2017a, Groundwater-quality data from the National Water-Quality Assessment Project, January through December 2014 and select quality-control data from May 2012 through December 2014: U.S. Geological Survey Data Series 1063, 83 p., https://doi.org/10.3133/ds1063.
Arnold, T.L., Bexfield, L.M., Musgrove, MaryLynn, Lindsey, B.D., Stackelberg, P.E., Barlow, J.R., DeSimone, L.A., Kulongoski, J.T., Kingsbury, J.A., Ayotte, J.D., Fleming, B.J., and Belitz, Kenneth, 2017b, Datasets from Groundwater quality data from the National Water Quality Assessment Project, January through December 2014 and select quality-control data from May 2012 through December 2014: U.S. Geological Survey data release, https://doi.org/10.5066/F7W0942N.
Arnold, T.L., Bexfield, L.M., Musgrove, M., Stackelberg, P.E., Lindsey, B.D., Kingsbury, J.A., Kulongoski, J.T., and Belitz, K., 2018a, Groundwater-quality and select quality-control data from the National Water-Quality Assessment Project, January through December 2015, and previously unpublished data from 2013 to 2014: U.S. Geological Survey Data Series 1087, 68 p., https://doi.org/10.3133/ds1087.
Arnold, T.L., Bexfield, L.M., Musgrove, M., Lindsey, B.D., Stackelberg, P.E., Barlow, J.R., Kulongoski, J.T., and Belitz, K., 2018b, Datasets from Groundwater-Quality and Select Quality-Control Data from the National Water-Quality Assessment Project, January through December 2015 and Previously Unpublished Data from 2013-2014: U.S. Geological Survey data release, https://doi.org/10.5066/F7XK8DHK.
Arnold, T.L., Bexfield, L.M., Musgrove, M., Erickson, M.L., Kingsbury, J.A., Degnan, J.R., Tesoriero, A.J., Kulongoski, J.T., and Belitz, K., 2020a, Groundwater-quality and select quality-control data from the National Water-Quality Assessment Project, January through December 2016, and previously unpublished data from 2013 to 2015: U.S. Geological Survey Data Series 1124, 135 p., https://doi.org/10.3133/ds1124.
Arnold, T.L., Sharpe, J.B., Bexfield, L.M., Musgrove, M., Erickson, M.L., Kingsbury, J.A., Degnan, J.R., Tesoriero, A.J., Kulongoski, J.T., and Belitz, K., 2020b, Datasets from groundwater-quality and select quality-control data from the National Water-Quality Assessment Project, January through December 2016, and previously unpublished data from 2013 to 2015: U.S. Geological Survey data release, https://doi.org/10.5066/P9W4RR74.
Kingsbury, J.A., Sharpe, J.B., Bexfield, L.M., Arnold, T.L., Musgrove, M., Erickson, M.L., Degnan, J.R., Kulongoski, J.T., Lindsey, B.D., and Belitz, K., 2020, Datasets from Groundwater-Quality and Select Quality-Control Data from the National Water-Quality Assessment Project, January 2017 through December 2019 (ver. 1.1, January 2021): U.S. Geological Survey data release, https://doi.org/10.5066/P9XATXV1.
Kingsbury, J.A., Bexfield, L.M., Arnold, T.L., Musgrove, M., Erickson, M.L., Degnan, J.R., Tesoriero, A.J., Lindsey, B.D., and Belitz, K., 2021, Groundwater-Quality and Select Quality-Control Data from the National Water-Quality Assessment Project, January 2017 through December 2019: U.S. Geological Survey Data Series 1136, 97 p., https://doi.org/10.3133/ds1136.
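Because the release ships as tab-delimited ASCII tables inside a zip archive, the following is a minimal sketch of how one of the tables might be read for analysis. The archive-internal path is an assumption, and the actual field definitions live in the XML metadata file, not here:

```python
# Minimal, assumption-laden sketch: read one tab-delimited table from the
# NWQP_GW_QW_DataRelease_v3.zip archive described above. The path inside the
# archive and the column handling are assumptions; consult the XML metadata
# for the authoritative field definitions.
import io
import zipfile
import pandas as pd

with zipfile.ZipFile("NWQP_GW_QW_DataRelease_v3.zip") as zf:
    with zf.open("Table_1_site_list_v3.txt") as fh:
        sites = pd.read_csv(io.TextIOWrapper(fh, encoding="utf-8"),
                            sep="\t", dtype=str)  # keep identifiers and codes as text

print(sites.shape)               # expect one row per sampled well
print(list(sites.columns)[:10])  # inspect the first few field names
```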
This data table provides the detailed data quality assessment scores for the Single Digital View dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This dataset presents the impact indicators of the data.gouv.fr platform. The mission of data.gouv.fr is to ensure the provision of quality open data to promote transparency and efficiency of public action while facilitating the creation of new services. These indicators aim to monitor the extent to which data.gouv.fr meets its objectives.

Objective 1: data.gouv.fr promotes the discoverability of open data. The aim here is to measure the extent to which users find the data they need. Indicator: percentage of users who answered positively to the question "Did you find what you were looking for?"
Objective 2: data.gouv.fr promotes open data quality. The aim here is to measure whether data.gouv.fr makes it easy to publish and reference quality data. Indicator: average quality score of the 1,000 most viewed datasets on the platform.
Objective 3: data.gouv.fr promotes the reuse of open data. The aim here is to measure the extent to which data.gouv.fr facilitates interactions between data producers and re-users. Indicator: average time for a "legitimate" response to discussions on datasets (legitimate: a reply by a member of the organisation publishing the dataset or by a member of the data.gouv.fr team).
Objective 4: data.gouv.fr facilitates access to the information in the most important datasets. The aim here is to measure the extent to which data.gouv.fr contributes to access to information. Indicator: number of datasets in the top 100 associated with a "quality" reuse (a quality reuse is an editorial choice of the data.gouv.fr team).

Data format: This dataset complies with the "impact of a digital public service" data schema, which aims to ensure smooth publication of the impact statistics of digital public services. Using the schema makes it possible to compare and centralize data from different products, in order to facilitate their understanding and reuse.

Description of columns:
- "administration_rattachement": administration to which the digital public service is attached.
- "public_numeric_service_name": name of the digital public service.
- "indicator": name of the indicator.
- "value": value of the indicator, as measured on the date indicated in the "date" field.
- "unite_measure": unit of the indicator.
- "is_target": indicates whether the value is a target value (projected to a future date) or an actual value (measured at a past date).
- "frequency_monitoring": frequency with which the indicator is consulted and used by the service.
- "date": date when the indicator was measured, or the date by which the target value is desired if it is a target.
- "est_periode": Boolean indicating whether the measurement is made over a period (true) or is a stock (false).
- "date_start": date of the start of the measurement period, if the indicator covers a period of time.
- "is_automated": specifies whether data collection is automated (true) or manual (false).
- "source_collection": specifies how the collection is carried out: script, survey, manual collection, etc.
- "insee_code": if the indicator is calculated at a certain geographical scale, specifies the INSEE code of that scale.
- "dataviz_wish": indication, for visualization producers, of the appropriate type of data visualization for this indicator.
- "comments": specifies known limitations and biases and justifies the choice of the indicator despite its limitations.
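As a hypothetical sketch of how this indicator table might be consumed (the file name, separator, and exact column labels are assumptions based on the column descriptions above, not a published API):

```python
# Hypothetical sketch: load the impact-indicator table and keep the latest
# measured (non-target) value per indicator. File name, separator, and column
# labels are assumptions inferred from the column descriptions above.
import pandas as pd

df = pd.read_csv("impact-data-gouv-fr.csv", sep=";")  # file name is assumed

# Drop target (projected) rows; handles boolean or string encodings of is_target.
measured = df[df["is_target"].astype(str).str.lower() != "true"]

latest = (measured.sort_values("date")
                  .groupby("indicator", as_index=False)
                  .tail(1))                            # most recent row per indicator
print(latest[["indicator", "value", "unite_measure", "date"]])
```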
This data table provides the detailed data quality assessment scores for the Transmission Generation Heat Map. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the dataset schema for the definitions of these dimensions. We are now in the process of expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Performance rates on frequently reported health care quality measures in the CMS Medicaid/CHIP Child and Adult Core Sets, for FFY 2020 reporting.
Source: Mathematica analysis of MACPro and Form CMS-416 reports for the FFY 2020 reporting cycle. Dataset revised September 2021. For more information, see the Children's Health Care Quality Measures and Adult Health Care Quality Measures webpages.
This is historical data. The update frequency has been set to "Static Data" and it is retained here for historic value. Updated on 8/14/2024. Rate per 100,000; rates are adjusted for age and sex. PQI measures are used to monitor performance over time or across regions and populations using patient data found in a typical hospital discharge abstract. PQI measures can be used to summarize quality across multiple indicators, improve the ability to detect differences, identify important domains and drivers of quality, prioritize action for quality improvement, and make current decisions about future and unknown health care needs. The Overall Composite (PQI #90) comprises all PQI measures within the Chronic and Acute Composites. The Chronic Composite (PQI #92) comprises: PQI #01 Diabetes Short-Term Complications Admission Rate, PQI #03 Diabetes Long-Term Complications Admission Rate, PQI #05 Chronic Obstructive Pulmonary Disease (COPD) or Asthma in Older Adults Admission Rate, PQI #07 Hypertension Admission Rate, PQI #08 Congestive Heart Failure (CHF) Admission Rate, PQI #13 Angina without Procedure Admission Rate, PQI #14 Uncontrolled Diabetes Admission Rate, PQI #15 Asthma in Younger Adults Admission Rate, and PQI #16 Rate of Lower-Extremity Amputation Among Patients With Diabetes. The Acute Composite (PQI #91) comprises: PQI #10 Dehydration Admission Rate, PQI #11 Bacterial Pneumonia Admission Rate, and PQI #12 Urinary Tract Infection Admission Rate. NOTE: PQI version change in 2012. Previous numbers for 2012 have been overwritten. Trends may not be accurate between 2011 and 2012.
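Purely as an illustration of the composite structure described above (this is not code or metadata shipped with the dataset), the composite-to-component relationships can be expressed as a simple lookup table:

```python
# Illustrative only: PQI composite membership as described in the text above.
# Nothing here reads the dataset itself; measure IDs come from the description.
PQI_COMPOSITES = {
    "PQI 92 Chronic": ["PQI 01", "PQI 03", "PQI 05", "PQI 07", "PQI 08",
                       "PQI 13", "PQI 14", "PQI 15", "PQI 16"],
    "PQI 91 Acute":   ["PQI 10", "PQI 11", "PQI 12"],
}
# The Overall Composite is the union of the Chronic and Acute components.
PQI_COMPOSITES["PQI 90 Overall"] = (PQI_COMPOSITES["PQI 92 Chronic"]
                                    + PQI_COMPOSITES["PQI 91 Acute"])

def components(composite: str) -> list[str]:
    """Return the component admission-rate measures for a composite PQI."""
    return PQI_COMPOSITES[composite]

print(components("PQI 91 Acute"))  # ['PQI 10', 'PQI 11', 'PQI 12']
```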
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data on the contaminants measured at the monitoring stations of the city of Barcelona. The data are updated at one-hour intervals, with an indication of whether each value has been validated. Data from the three days prior to the current day are also displayed.
Psychiatric facilities that are eligible for the Inpatient Psychiatric Facility Quality Reporting (IPFQR) program are required to meet all program requirements; otherwise, their Medicare payments may be reduced. Follow-Up After Hospitalization for Mental Illness (FUH) measure data on this table are marked as not available. Results for this measure are provided on a separate table.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data from a weather station (Vilanova i la Geltrú) deployed on the Catalan coast from 2013 to 2014. The station at Vilanova i la Geltrú provides air temperature, wind speed and wind direction. Data from the Vilanova i la Geltrú weather station were acquired every minute, and a quality control procedure was then applied following QARTOD guidelines. Afterwards, the quality-controlled data were averaged over periods of 30 minutes (discarding data flagged as bad data). Every data point has an associated quality control flag and standard deviation value. The quality control flag values are 1: good data, 2: not applied, 3: suspicious data, 4: bad data, 9: missing data. The standard deviation provides a measure of the variability of the data within the 30-minute time window used in the average.
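A minimal sketch of the averaging step described above, assuming the 1-minute observations sit in a pandas DataFrame with invented column names ("time", "value", "qc_flag"); the station's actual processing chain and file format are not part of this record:

```python
# Minimal sketch of the 30-minute averaging described above. Column names are
# assumptions. Rows flagged as bad data (flag 4) are discarded, then a mean,
# standard deviation, and sample count are computed per 30-minute window.
import pandas as pd

def average_30min(obs: pd.DataFrame) -> pd.DataFrame:
    good = obs[obs["qc_flag"] != 4]                      # drop bad data before averaging
    grouped = good.set_index("time")["value"].resample("30min")
    return pd.DataFrame({
        "mean": grouped.mean(),                          # 30-minute average
        "std": grouped.std(),                            # variability within the window
        "n": grouped.count(),                            # samples contributing to the average
    })

# Example usage with synthetic 1-minute data:
idx = pd.date_range("2013-06-01", periods=120, freq="1min")
df = pd.DataFrame({"time": idx, "value": 20.0, "qc_flag": 1})
print(average_30min(df).head())
```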
This operations dashboard shows historic and current data related to this performance measure. The performance measure dashboard is available at 5.01 Quality of Business Services.
This data package contains information on different hospitals and the quality of their surgical outcomes and structural measures. It includes facility-, national-, and state-level datasets for Inpatient Psychiatric Facility Quality Reporting (IPFQR) and payment measures. It also provides Timely and Effective Care information at the national and state level for measures of heart attack care, heart failure care, pneumonia care, surgical care, and emergency department care.