This dataset was created by Isa Zeynalov
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data - Quality assessment table
This data table provides the detailed data quality assessment scores for the Curtailment dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
For more up-to-date quality metadata, please visit https://w3id.org/lodquator
This dataset is a collection of TRiG files with quality metadata for different datasets on the LOD cloud. Each dataset was assessed for:
The length of URIs
Usage of RDF primitives
Re-use of existing terms
Usage of undefined terms
Usage of blank nodes
Indication for different serialisation formats
Usage of multiple languages
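As an illustrative sketch only (not the study's actual assessment code), the first of the listed checks, URI length, could be computed over a dataset's resource URIs as below; the URIs and the 80-character threshold are hypothetical examples:

```python
# Illustrative sketch of one listed metric: URI length.
# The URIs and the 80-character threshold are hypothetical examples,
# not taken from the actual LOD Cloud assessment.
def uri_length_metric(uris, threshold=80):
    """Return the fraction of URIs whose length stays at or under
    `threshold` characters (shorter URIs are generally preferred)."""
    if not uris:
        return 0.0
    short = sum(1 for u in uris if len(u) <= threshold)
    return short / len(uris)

uris = [
    "http://example.org/resource/Berlin",
    "http://example.org/ontology/very/deeply/nested/path/with/a/rather/long/identifier/12345",
]
print(uri_length_metric(uris))  # 1 of the 2 URIs is under 80 chars -> 0.5
```

In the actual study, such per-dataset scores were serialised as quality metadata in the TRiG files described above.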
This data dump is part of the empirical study conducted for the paper "Are LOD Cloud Datasets Well Represented? A Data Representation Quality Survey."
For more information visit http://jerdeb.github.io/lodqa
The USACE IENCs coverage area consists of 7,260 miles across 21 rivers primarily located in the Central United States. IENCs apply to inland waterways that are maintained for navigation by USACE for shallow-draft vessels (e.g., maintained at a depth of 9-14 feet, dependent upon the waterway project authorization). Generally, IENCs are produced for those commercially navigable waterways for which the National Oceanic and Atmospheric Administration (NOAA) does not produce Electronic Navigational Charts (ENCs). However, Special Purpose IENCs may be produced in agreement with NOAA. IENC POC: IENC_POC@usace.army.mil
This dataset contains replicate samples collected in the field by community technicians. No field replicates were collected in 2012. Replicate constituents with differences less than 10 percent are considered acceptable.
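One common way to interpret that acceptance rule is as a relative percent difference (RPD) between a sample and its replicate. The sketch below is an illustration under that assumption, not the project's actual QC code; the sample values are hypothetical:

```python
# Illustrative relative-percent-difference (RPD) check for field replicates.
# The 10% threshold follows the acceptance criterion described above;
# the interpretation as RPD and the sample values are assumptions.
def relative_percent_difference(a, b):
    """RPD = |a - b| / mean(a, b) * 100, expressed in percent."""
    mean = (a + b) / 2
    if mean == 0:
        return 0.0
    return abs(a - b) / mean * 100

def is_acceptable(a, b, threshold=10.0):
    """Flag a replicate pair as acceptable if its RPD is under threshold."""
    return relative_percent_difference(a, b) < threshold

print(is_acceptable(4.8, 5.0))  # RPD ~ 4.1%  -> True
print(is_acceptable(4.0, 5.0))  # RPD ~ 22.2% -> False
```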
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes information on quality control and data management of researchers and data curators from a social science organization. Four data curators and 24 researchers provided responses for the study. Data collection techniques, data processing strategies, data storage and preservation, metadata standards, data sharing procedures, and the perceived significance of quality control and data quality assurance are the main areas of focus. The dataset attempts to provide insight into the research data management (RDM) procedures used by a social science organization, as well as the difficulties that researchers and data curators encounter in upholding high standards of data quality. The goal of the study is to encourage further investigations aimed at enhancing data management practices and guidelines across the scientific community.
Homeland Infrastructure Foundation-Level Data (HIFLD) geospatial data sets containing information on Data Quality Assessment Areas (USACE IENC).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Organizations are increasingly accepting data quality (DQ) as a major key to their success. In order to assess and improve DQ, methods have been devised. Many of these methods attempt to raise DQ by directly manipulating low-quality data. Such methods operate reactively and are suitable for organizations with highly developed integrated systems. However, there is a lack of a proactive DQ method for businesses with weak IT infrastructure, where data quality is largely affected by tasks performed by human agents. This study aims to develop and evaluate a new method for structured data that is simple and practical, so that it can easily be applied to real-world situations. The new method detects the potentially risky tasks within a process and adds new improving tasks to counter them. To achieve continuous improvement, an award system is also developed to help with the better selection of the proposed improving tasks. The task-based DQ method (TBDQ) is most appropriate for small and medium organizations, and simplicity of implementation is one of its most prominent features. TBDQ was evaluated in a case study at an international trade company. The case study shows that TBDQ is effective in selecting optimal activities for DQ improvement in terms of cost and improvement.
This data table provides the detailed data quality assessment scores for the Technical Limits dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Under the Open Government Action Plan, and related National Action Plan, the FGP is required to report on its commitments related to: supporting a user-friendly open government platform; improving the quality of open data available on open.canada.ca; and reviewing additional geospatial datasets to assess their quality. This report summarizes the FGP’s action on meeting these commitments.
ADBNet is an online database tracking Iowa's water quality assessments. These assessments are prepared under guidance provided by the US EPA under Section 305b of the Clean Water Act. The assessments are intended to estimate the extent to which Iowa's waterbodies meet the goals of the Clean Water Act and attain state water quality standards, and share this information with planners, citizens and other partners in basin planning and watershed management activities. Water quality in Iowa is measured by comparisons of recent monitoring data to the Iowa Water Quality Standards. Results of recent water quality monitoring, special water quality studies, and other assessments of the quality of Iowa's waters are used to determine the degree to which Iowa's rivers, streams, lakes, and wetlands support the beneficial uses for which they are designated in the Iowa Water Quality Standards (for example, aquatic life (fishing), swimming, and/or use as a source of a public water supply). Other information from water quality monitoring and studies that are up to five years old are also used to expand the coverage of assessments in the report. Waters assessed as impaired (that is, either partially supporting or not supporting their designated uses) form the basis for the state's list of impaired waters as required by Section 303(d) of the Clean Water Act.
This dataset includes laboratory instrument detection limit data associated with laboratory instruments used in the analysis of surface water samples collected as part of the USGS - Yukon River Inter-Tribal Watershed Council collaborative water quality monitoring project.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A key aim of the FNS-Cloud project (grant agreement no. 863059) was to overcome fragmentation within food, nutrition and health data through development of tools and services facilitating matching and merging of data to promote increased reuse. However, in an era of increasing data reuse, it is imperative that the scientific quality of data analysis is maintained. Whilst it is true that many datasets can be reused, questions remain regarding whether they should be; thus, there is a need to support researchers making such a decision. This paper describes the development and evaluation of the FNS-Cloud data quality assessment tool for dietary intake datasets. Markers of quality were identified from the literature for dietary intake, lifestyle, demographic, anthropometric, and consumer behavior data at all levels of data generation (data collection, underlying data sources used, dataset management and data analysis). These markers informed the development of a quality assessment framework, which comprised decision trees and feedback messages relating to each quality parameter. These fed into a report provided to the researcher on completion of the assessment, with considerations to support them in deciding whether the dataset is appropriate for reuse. This quality assessment framework was transformed into an online tool and a user evaluation study was undertaken. Participants recruited from three centres (N = 13) were observed and interviewed while using the tool to assess the quality of a dataset they were familiar with. Participants positively rated the assessment format and feedback messages in helping them assess the quality of a dataset. Several participants described the tool as potentially useful in training students and inexperienced researchers in the use of secondary datasets. This quality assessment tool, deployed within FNS-Cloud, is openly accessible to users as one of the first steps in identifying datasets suitable for use in their specific analyses.
It is intended to support researchers in deciding whether previously collected datasets under consideration for reuse fit their new intended research purposes. While it has been developed and evaluated, further testing and refinement of this resource would improve its applicability to a broader range of users.
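The decision-tree-plus-feedback pattern described above can be sketched minimally as follows. This is a hypothetical illustration; the parameter names and messages are invented for the example and are not taken from the actual FNS-Cloud tool:

```python
# Minimal sketch of a decision-tree quality check with feedback messages.
# Parameter names and messages are hypothetical, not from the FNS-Cloud tool.
def assess_parameter(answers):
    """Walk a tiny decision tree for one quality parameter and
    return a feedback message for the final report."""
    if not answers.get("method_documented"):
        return "Caution: the data collection method is undocumented; reuse is risky."
    if answers.get("validated_instrument"):
        return "Good: data were collected with a validated instrument."
    return "Consider: the instrument is not validated; interpret results carefully."

# One answered parameter contributes one considered message to the report.
report = [assess_parameter({"method_documented": True, "validated_instrument": True})]
print(report[0])
```

In the real tool, one such tree exists per quality parameter, and the collected messages form the report handed to the researcher.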
This data table provides the detailed data quality assessment scores for the Long Term Development Statement dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality; to demonstrate our progress, we conduct annual assessments of our data quality in line with the dataset refresh rate. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This data table provides the detailed data quality assessment scores for the Flexibility Market Prospectus dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
This data table provides the detailed data quality assessment scores for the Operational Forecasting dataset. The quality assessment was carried out on the 31st of March. At SPEN, we are dedicated to sharing high-quality data with our stakeholders and being transparent about its quality. This is why we openly share the results of our data quality assessments. We collaborate closely with Data Owners to address any identified issues and enhance our overall data quality. To demonstrate our progress, we conduct, at a minimum, bi-annual assessments of our data quality; for datasets that are refreshed more frequently than this, please note that the quality assessment may be based on an earlier version of the dataset. To learn more about our approach to how we assess data quality, visit Data Quality - SP Energy Networks. We welcome feedback and questions from our stakeholders regarding this process. Our Open Data Team is available to answer any enquiries or receive feedback on the assessments. You can contact them via our Open Data mailbox at opendata@spenergynetworks.co.uk. The first phase of our comprehensive data quality assessment measures the quality of our datasets across three dimensions. Please refer to the data table schema for the definitions of these dimensions. We are now expanding our quality assessments to include additional dimensions to provide a more comprehensive evaluation and will update the data tables with the results when available.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A vocabulary for the specification and exchange of Data Quality Assessment Requirements, built on top of already well-established vocabularies such as the Data Quality Vocabulary (DQV).
Further description at http://purl.org/net/vsr/daqar
Contact:
André Langer
Professorship for Distributed and Self-Organizing Systems
Chemnitz University of Technology
Germany
[andre.langer@informatik.tu-chemnitz.de]
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The EOSC-A FAIR Metrics and Data Quality Task Force (TF) supported the European Open Science Cloud Association (EOSC-A) by providing strategic directions on FAIRness (Findable, Accessible, Interoperable, and Reusable) and data quality. The Task Force conducted a survey using the EUsurvey tool between 15.11.2022 and 18.01.2023, targeting both developers and users of FAIR assessment tools. The survey aimed to support the harmonisation of FAIR assessments, in terms of what is evaluated and how, across existing (and future) tools and services, as well as to explore whether, and in what form, a community-driven governance of these FAIR assessments could be established. The survey received 78 responses, mainly from academia, representing various domains and organisational roles. This is the anonymised survey dataset in CSV format; most open-ended answers have been dropped. The codebook contains variable names, labels, and frequencies.