CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Objective: Routine primary care data may be used for the derivation of clinical prediction rules and risk scores. We sought to measure the impact of a decision support system (DSS) on data completeness and freedom from bias.
Materials and Methods: We used the clinical documentation of 34 UK general practitioners who took part in a previous study evaluating the DSS. They consulted with 12 standardized patients. In addition to suggesting diagnoses, the DSS facilitates data coding. We compared the documentation from consultations with the electronic health record (EHR) alone (baseline consultations) against consultations with the EHR-integrated DSS (supported consultations). We measured the proportion of EHR data items related to the physician's final diagnosis. We expected that in baseline consultations, physicians would document only or predominantly observations related to their diagnosis, while in supported consultations, they would also document other observations as a result of exploring more diagnoses and/or the ease of coding.
Results: Supported documentation contained significantly more codes (IRR = 5.76 [4.31, 7.70], P < 0.001) and less free text (IRR = 0.32 [0.27, 0.40], P < 0.001) than baseline documentation. As expected, the proportion of diagnosis-related data was significantly lower (b = -0.08 [-0.11, -0.05], P < 0.001) in the supported consultations, and this was the case for both codes and free text.
Conclusions: We provide evidence that data entry in the EHR is incomplete and reflects physicians' cognitive biases. This has serious implications for epidemiological research that uses routine data. A DSS that facilitates and motivates data entry during the consultation can improve routine documentation.
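The incidence rate ratios reported above come from count models of documented items per consultation. As a rough illustration only, the sketch below fits a single-predictor Poisson regression on hypothetical counts and exponentiates the coefficient to obtain an IRR; the study's actual specification (covariates, clustering by GP) is not reproduced here.

```python
# Minimal sketch: estimating an incidence rate ratio (IRR) from
# per-consultation counts of coded items. All data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# supported = 1 when the EHR-integrated DSS was used in the consultation.
df = pd.DataFrame({
    "codes":     [2, 1, 3, 2, 11, 9, 14, 12],
    "supported": [0, 0, 0, 0, 1, 1, 1, 1],
})

# Poisson regression on counts; exponentiating the coefficient gives the IRR.
fit = smf.poisson("codes ~ supported", data=df).fit(disp=False)
irr = np.exp(fit.params["supported"])
lo, hi = np.exp(fit.conf_int().loc["supported"])
print(f"IRR = {irr:.2f} [{lo:.2f}, {hi:.2f}]")
```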
This data table provides the detailed data quality assessment scores for the Technical Limits dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and to being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to assessing data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Open Government Licence 3.0 http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
Metrics used to give an indication of data quality across our test groups. These include whether documentation was used and what proportion of respondents rounded their answers. Unit and item non-response are also reported.
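To illustrate the rounding metric mentioned above, the sketch below computes the share of respondents whose numeric answers look rounded; treating multiples of 10 as rounded is an assumption made here for illustration, not necessarily the survey's actual rule.

```python
# Minimal sketch: proportion of respondents who appear to have rounded.
# The multiples-of-10 rule is an illustrative assumption.
answers = [200, 198, 250, 249, 300, 175, 220]  # hypothetical responses

rounded = sum(1 for a in answers if a % 10 == 0)
print(f"Proportion rounded: {rounded / len(answers):.0%}")
```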
This data table provides the detailed data quality assessment scores for the Curtailment dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and to being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to assessing data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Performance rates on frequently reported health care quality measures in the CMS Medicaid/CHIP Child and Adult Core Sets, for FFY 2020 reporting.
Source: Mathematica analysis of MACPro and Form CMS-416 reports for the FFY 2020 reporting cycle. Dataset revised September 2021. For more information, see the Children's Health Care Quality Measures and Adult Health Care Quality Measures webpages.
This dataset presents the impact indicators of the data.gouv.fr platform. The mission of data.gouv.fr is to ensure the provision of quality open data to promote transparency and efficiency of public action while facilitating the creation of new services. These indicators monitor the extent to which data.gouv.fr meets its objectives.

Objective 1: data.gouv.fr promotes the discoverability of open data. The aim is to measure the extent to which users find the data they need. Indicator: percentage of users who answered positively to the question "Did you find what you were looking for?"

Objective 2: data.gouv.fr promotes open data quality. The aim is to measure whether data.gouv.fr makes it easy to publish and reference quality data. Indicator: average quality score of the 1000 most viewed datasets on the platform.

Objective 3: data.gouv.fr promotes the reuse of open data. The aim is to measure the extent to which data.gouv.fr facilitates interactions between data producers and re-users. Indicator: average time to a "legitimate" response to discussions on datasets (legitimate: a reply by a member of the organisation publishing the dataset or by a member of the data.gouv.fr team).

Objective 4: data.gouv.fr facilitates access to information in the most important datasets. The aim is to measure the extent to which data.gouv.fr contributes to access to information. Indicator: number of datasets in the top 100 associated with a "quality" reuse (quality reuse is an editorial choice of the data.gouv.fr team).

## Data format

This dataset complies with the "impact of a digital public service" data schema, which aims to ensure smooth publication of the impact statistics of digital public services. Its use makes it possible to compare and centralize data from different products, in order to facilitate their understanding and reuse.

### Description of columns

- "administration_rattachement": administration to which the digital public service is attached.
- "public_numeric_service_name": name of the digital public service.
- "indicator": name of the indicator.
- "value": value of the indicator, as measured on the date given in the "date" field.
- "unite_measure": unit of the indicator.
- "is_target": indicates whether the value is a target value (projected to a future date) or an actual value (measured at a past date).
- "frequency_monitoring": frequency with which the indicator is consulted and used by the service.
- "date": date when the indicator was measured, or the date by which the target value is desired if it is a target.
- "est_periode": Boolean indicating whether the measurement covers a period (true) or is a stock value (false).
- "date_start": start date of the measurement period, if the indicator covers a period of time.
- "is_automated": specifies whether data collection is automated (true) or manual (false).
- "source_collection": specifies how the collection is carried out: script, survey, manual collection, etc.
- "insee_code": if the indicator is calculated at a certain geographical scale, the INSEE code of that scale.
- "dataviz_wish": indication for visualization producers of the appropriate type of dataviz for this indicator.
- "comments": known limitations and biases, and justification of the choice of the indicator despite its limitations.
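To show how the schema above is consumed in practice, here is a minimal sketch that reads a hypothetical extract and separates measured values from target values; the separator, rows, and values are assumptions for illustration.

```python
# Minimal sketch: reading an indicator extract that follows the column
# schema above. The rows and values are hypothetical.
import io
import pandas as pd

csv = io.StringIO(
    "indicator,value,unite_measure,is_target,date\n"
    "Average quality score of the 1000 most viewed datasets,0.82,score,false,2023-12-31\n"
    "Average quality score of the 1000 most viewed datasets,0.90,score,true,2024-12-31\n"
)
df = pd.read_csv(csv)

# Rows flagged is_target=true are projections; keep actual measurements.
measured = df[df["is_target"].astype(str).str.lower() == "false"]
print(measured[["indicator", "value", "date"]])
```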
This data table provides the detailed data quality assessment scores for the Long Term Development Statement dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and to being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality; to demonstrate our progress, we conduct annual assessments of our data quality in line with the dataset refresh rate. To learn more about our approach to assessing data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Political scientists routinely face the challenge of assessing the quality (validity and reliability) of measures in order to use them in substantive research. While stand-alone assessment tools exist, researchers rarely combine them comprehensively. Further, while a large literature informs data producers, data consumers lack guidance on how to assess existing measures for use in substantive research. We delineate a three-component practical approach to data quality assessment that integrates complementary multi-method tools to assess: 1) content validity; 2) the validity and reliability of the data generation process; and 3) convergent validity. We apply our quality assessment approach to the corruption measures from the Varieties of Democracy (V-Dem) project, both illustrating our rubric and unearthing several quality advantages and disadvantages of the V-Dem measures, compared to other existing measures of corruption.
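As a concrete example of the third component, convergent validity is commonly checked by correlating a measure with existing measures of the same concept. The sketch below does this for two hypothetical country-level corruption scores; the values are illustrative, not actual V-Dem data.

```python
# Minimal sketch of a convergent-validity check: correlate two corruption
# measures across the same countries. Scores are hypothetical.
import numpy as np

vdem_corruption = np.array([0.12, 0.45, 0.78, 0.33, 0.91])
other_index     = np.array([0.20, 0.50, 0.70, 0.30, 0.85])

r = np.corrcoef(vdem_corruption, other_index)[0, 1]
print(f"Convergent validity (Pearson r): {r:.2f}")
```

A high correlation supports convergent validity; a low one prompts a closer look at the data generation process of both measures.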
This operations dashboard shows historic and current data related to this performance measure. The performance measure dashboard is available at 5.01 Quality of Business Services. Data Dictionary
This data table provides the detailed data quality assessment scores for the Operational Forecasting dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and to being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to assessing data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data on the pollutants measured at the monitoring stations of the city of Barcelona. It is updated at one-hour intervals, indicating whether each value has been validated. Data for the three days prior to the current day are also displayed.
The Agency for Healthcare Research and Quality (AHRQ) Patient Safety Indicator 11 (PSI-11) Measure Rates dataset provides information on provider-level measure rates regarding one preventable complication (postoperative respiratory failure) for Medicare fee-for-service discharges. The PSI-11 measure data are reported solely for providers' information and quality improvement purposes and are not part of the Deficit Reduction Act (DRA) Hospital-Acquired Condition (HAC) Payment Provision or the HAC Reduction Program.
This is historical data. The update frequency has been set to "Static Data" and it is retained here for its historic value. Updated on 8/14/2024.

Rate per 100,000; rates are adjusted for age and sex. PQI measures are used to monitor performance over time or across regions and populations using patient data found in a typical hospital discharge abstract. PQI measures can be used to summarize quality across multiple indicators, improve the ability to detect differences, identify important domains and drivers of quality, prioritize action for quality improvement, and inform decisions about future and as-yet-unknown health care needs.

Overall Composite (PQI #90) comprises all PQI measures within the Chronic and Acute Composites. Chronic Composite (PQI #92) comprises: PQI #01 Diabetes Short-Term Complications Admission Rate, PQI #03 Diabetes Long-Term Complications Admission Rate, PQI #05 Chronic Obstructive Pulmonary Disease (COPD) or Asthma in Older Adults Admission Rate, PQI #07 Hypertension Admission Rate, PQI #08 Congestive Heart Failure (CHF) Admission Rate, PQI #13 Angina without Procedure Admission Rate, PQI #14 Uncontrolled Diabetes Admission Rate, PQI #15 Asthma in Younger Adults Admission Rate, and PQI #16 Rate of Lower-Extremity Amputation Among Patients With Diabetes. Acute Composite (PQI #91) comprises: PQI #10 Dehydration Admission Rate, PQI #11 Bacterial Pneumonia Admission Rate, and PQI #12 Urinary Tract Infection Admission Rate.

NOTE: PQI version change in 2012. Previous numbers for 2012 have been overwritten. Trends may not be accurate between 2011 and 2012.
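For intuition about how member indicators roll up into a composite, the sketch below pools hypothetical admission counts over a shared population denominator. This is a simplification: AHRQ's official composites use their own weighting and risk adjustment, so treat this as illustrative arithmetic only.

```python
# Minimal sketch: pooling member indicators into a composite rate per
# 100,000. Counts are hypothetical; AHRQ's actual weighting differs.
acute_members = {
    # indicator: (admissions, population at risk)
    "PQI #10 Dehydration":             (120,  90_000),
    "PQI #11 Bacterial Pneumonia":     (310, 120_000),
    "PQI #12 Urinary Tract Infection": (150,  95_000),
}

admissions = sum(a for a, _ in acute_members.values())
population = sum(p for _, p in acute_members.values())

composite_rate = admissions / population * 100_000
print(f"Acute Composite (PQI #91), illustrative: {composite_rate:.1f} per 100,000")
```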
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
The Inpatient Psychiatric Facility Quality Reporting (IPFQR) program currently uses six measures for Utah psychiatric facilities. Psychiatric facilities that are eligible for this program may have their Medicare payments reduced if they do not report.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this seminar, you will learn about ArcGIS Data Reviewer tools that allow you to automate, centrally manage, and improve your GIS data quality control processes. This seminar was developed to support the following: ArcGIS 10.0 for Desktop (ArcView, ArcEditor, or ArcInfo) and ArcGIS Data Reviewer for Desktop.
This dataset contains quality measures displayed on Nursing Home Compare, based on the resident assessments that make up the nursing home Minimum Data Set (MDS). Each row contains a specific measure for a nursing home and includes the four-quarter score average and the scores for individual quarters.
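The four-quarter average in each row can be reproduced from the quarterly scores. The sketch below assumes missing quarters are simply skipped, which is an assumption made for illustration rather than the documented MDS rule.

```python
# Minimal sketch: four-quarter average from quarterly scores, skipping
# missing quarters (an illustrative assumption).
quarters = {"2023Q3": 3.2, "2023Q4": 3.5, "2024Q1": None, "2024Q2": 3.8}

scores = [s for s in quarters.values() if s is not None]
print(f"Four-quarter average: {sum(scores) / len(scores):.2f}")
```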
Psychiatric facilities that are eligible for the Inpatient Psychiatric Facility Quality Reporting (IPFQR) program are required to meet all program requirements; otherwise, their Medicare payments may be reduced. Follow-Up After Hospitalization for Mental Illness (FUH) measure data in this table are marked as not available. Results for this measure are provided in a separate table.
Biennial Business Survey data summary for Quality of Business Services survey results. The Business Survey question that relates to this dataset is: "Quality of services provided by City of Tempe". Respondents are asked to rate their satisfaction level using a scale of 1 to 5, where 1 means "Very Dissatisfied" and 5 means "Very Satisfied". This page provides data for the Quality of Business Services performance measure. The performance measure dashboard is available at 5.01 Quality of Business Services.

Additional Information
Source: Business Survey (Vendor: ETC Institute)
Contact: Wydale Holmes
Contact E-Mail: wydale_holmes@tempe.gov
Data Source Type: .pdf, Excel
Preparation Method: The City contracts with a vendor to conduct the survey, analyze the data, and prepare it for publication.
Publish Frequency: Every other year
Publish Method: Manual, .pdf
Data Dictionary
This operations dashboard shows historic and current data related to this performance measure. The performance measure dashboard is available at 3.36 Quality of City Services. Data Dictionary