100+ datasets found
  1. Data Quality Assurance - Instrument Detection Limits

    • catalog.data.gov
    • dataone.org
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Data Quality Assurance - Instrument Detection Limits [Dataset]. https://catalog.data.gov/dataset/data-quality-assurance-instrument-detection-limits
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This dataset includes detection limit data for the laboratory instruments used in the analysis of surface water samples collected as part of the USGS - Yukon River Inter-Tribal Watershed Council collaborative water quality monitoring project.

  2. Environmental Monitoring Results for Radioactivity: Other Samples

    • catalog.data.gov
    • data.ct.gov
    Updated Jul 5, 2025
    + more versions
    Cite
    data.ct.gov (2025). Environmental Monitoring Results for Radioactivity: Other Samples [Dataset]. https://catalog.data.gov/dataset/environmental-monitoring-results-for-radioactivity-other-samples
    Explore at:
    Dataset updated
    Jul 5, 2025
    Dataset provided by
    data.ct.gov
    Description

    Reporting units of sample results [where 1 picoCurie (pCi) = 1 trillionth (1E-12) Curie (Ci)]: • Other samples are reported in pCi/g.

    Data Quality Disclaimer: This database is for informational use and is not a controlled quality database. Efforts have been made to ensure accuracy of data in the database; however, errors and omissions may occur. Examples of potential errors include: • Data entry errors. • Lab results not reported for entry into the database. • Missing results due to equipment failure or inability to retrieve samples because of loss or environmental hazards. • Translation errors – the data has been migrated to newer data platforms numerous times, and each time there have been errors and data losses.

    Error Results are the calculated uncertainty for the sample measurement results and are reported as (+/-).

    Environmental Sample Records are from the year 1998 until present. Prior to 1998, results were stored in hardcopy, in a non-database format. Requests for results from samples taken prior to 1998 or results subject to quality assurance are available from archived records and can be made through the DEEP Freedom of Information Act (FOIA) administrator at deep.foia@ct.gov. Information on FOIA requests can be found on the DEEP website.

    FOIA Administrator, Office of the Commissioner, Department of Energy and Environmental Protection, 79 Elm Street, 3rd Floor, Hartford, CT 06106

  3. Maryland Counties Match Tool for Data Quality

    • catalog.data.gov
    • opendata.maryland.gov
    • +1more
    Updated Sep 15, 2023
    Cite
    opendata.maryland.gov (2023). Maryland Counties Match Tool for Data Quality [Dataset]. https://catalog.data.gov/dataset/maryland-counties-match-tool-for-data-quality
    Explore at:
    Dataset updated
    Sep 15, 2023
    Dataset provided by
    opendata.maryland.gov
    Area covered
    Maryland
    Description

    Data standardization is an important part of effective data management, but data from different sources often record county names inconsistently. This dataset lists the different ways Maryland county names might be written by different people. It can be used as a lookup table when you need County to be your unique identifier; for example, it allows you to match St. Mary's, St Marys, and Saint Mary's so that disparate data sets can be joined on a common county value.
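    To illustrate how such a lookup table might be applied (this sketch is not part of the dataset; the column names `variant` and `county` and the spelling variants shown are assumptions), a pandas join through the match table standardizes county names before merging disparate data sets:

```python
import pandas as pd

# Hypothetical excerpt of a county match table: each spelling variant maps to one canonical name.
lookup = pd.DataFrame({
    "variant": ["St. Mary's", "St Marys", "Saint Mary's", "Prince Georges"],
    "county":  ["St. Mary's", "St. Mary's", "St. Mary's", "Prince George's"],
})

# Incoming data with inconsistent county spellings.
incoming = pd.DataFrame({"county_raw": ["St Marys", "Prince Georges"], "value": [10, 20]})

# Join through the lookup so County can serve as a single, unique identifier.
standardized = incoming.merge(lookup, left_on="county_raw", right_on="variant", how="left")
print(standardized[["county", "value"]])
```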

  4. Hydroinformatics Instruction Module Example Code: Sensor Data Quality Control with pyhydroqc

    • search.dataone.org
    • beta.hydroshare.org
    • +1more
    Updated Dec 30, 2023
    Cite
    Amber Spackman Jones (2023). Hydroinformatics Instruction Module Example Code: Sensor Data Quality Control with pyhydroqc [Dataset]. https://search.dataone.org/view/sha256%3A481577821de9acf7d3d8ff140d43b228dc772dbcfbc7ba7aeece4bca39590c72
    Explore at:
    Dataset updated
    Dec 30, 2023
    Dataset provided by
    Hydroshare
    Authors
    Amber Spackman Jones
    Description

    This resource contains Jupyter Notebooks with examples for conducting quality control post-processing for in situ aquatic sensor data. The code uses the Python pyhydroqc package. The resource is part of a set of materials for hydroinformatics and water data science instruction. Complete learning module materials are found in HydroLearn: Jones, A.S., Horsburgh, J.S., Bastidas Pacheco, C.J. (2022). Hydroinformatics and Water Data Science. HydroLearn. https://edx.hydrolearn.org/courses/course-v1:USU+CEE6110+2022/about.

    This resource consists of 3 example notebooks and associated data files.

    Notebooks:
    1. Example 1: Import and plot data
    2. Example 2: Perform rules-based quality control
    3. Example 3: Perform model-based quality control (ARIMA)

    Data files: Data files are available for 6 aquatic sites in the Logan River Observatory. Each file contains data for one site for a single year. The files are named according to monitoring site (FranklinBasin, TonyGrove, WaterLab, MainStreet, Mendon, BlackSmithFork) and year. The files were sourced by querying the Logan River Observatory relational database, and equivalent data could be obtained from the LRO website or on HydroShare. Additional information on sites, variables, and methods can be found on the LRO website (http://lrodata.usu.edu/tsa/) or HydroShare (https://www.hydroshare.org/search/?q=logan%20river%20observatory). Each file has the same structure: a datetime index column (mountain standard time) plus three columns for each variable. Variable abbreviations and units are:
    - temp: water temperature, degrees C
    - cond: specific conductance, μS/cm
    - ph: pH, standard units
    - do: dissolved oxygen, mg/L
    - turb: turbidity, NTU
    - stage: stage height, cm

    For each variable, there are 3 columns:
    - Raw data value measured by the sensor (column header is the variable abbreviation).
    - Technician quality controlled (corrected) value (column header is the variable abbreviation appended with '_cor').
    - Technician labels/qualifiers (column header is the variable abbreviation appended with '_qual').
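    As a minimal sketch of working with a file structured this way (independent of the pyhydroqc notebooks themselves; the file name, .csv extension, and exact column labels are assumptions based on the description above):

```python
import pandas as pd

# Hypothetical file name following the site/year naming convention described above.
df = pd.read_csv("MainStreet2019.csv", index_col=0, parse_dates=True)

# Each variable has raw values, technician-corrected values ('_cor'), and qualifiers ('_qual'),
# e.g. water temperature:
raw, corrected = df["temp"], df["temp_cor"]

# Fraction of temperature values the technician adjusted during quality control.
changed = (raw != corrected) & corrected.notna()
print(f"{changed.mean():.1%} of temp values were adjusted during QC")
```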

  5. Semantic network as a means of ensuring data quality - the Bridge of Knowledge platform example

    • ieee-dataport.org
    Updated Jul 8, 2024
    Cite
    Piotr Krajewski (2024). Semantic network as a means of ensuring data quality - the Bridge of Knowledge platform example [Dataset]. https://ieee-dataport.org/documents/semantic-network-means-ensuring-data-quality-bridge-knowledge-platform-example
    Explore at:
    Dataset updated
    Jul 8, 2024
    Authors
    Piotr Krajewski
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Our poster is essential for understanding the process of creating a community of practice in the context of Open Science. Building such a community, and at the same time being part of the culture change toward openness in science, is challenging. No single researcher or librarian would be able to achieve those results alone. Gdańsk Tech Library’s strategy to popularise and practice Open Science requires many actions supported by a team of people with different competencies.

  6. Overview of the information contained in the quality summary and quality report

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Derek E. Smith; Stefan Metzger; Jeffrey R. Taylor (2023). Overview of the information contained in the quality summary and quality report. [Dataset]. http://doi.org/10.1371/journal.pone.0112249.t004
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Derek E. Smith; Stefan Metzger; Jeffrey R. Taylor
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This example displays the quality report and quality summary information for 15 sensor measurements and 3 arbitrary quality analyses. The quality report contains the individual quality flag outcomes for each sensor measurement, i.e., rows 1–15. The quality summary includes the corresponding quality metrics and the final quality flag information, i.e., the bottom row.
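    A minimal sketch of the general idea of aggregating per-measurement quality flags into quality metrics and a final flag (the flag values, the 10% threshold, and the aggregation rule here are illustrative assumptions, not the definitions used in the cited paper):

```python
import numpy as np

# Illustrative quality report: 15 sensor measurements (rows) x 3 quality analyses (columns),
# where 0 = flag not raised and 1 = flag raised. Values are made up.
rng = np.random.default_rng(0)
flags = rng.integers(0, 2, size=(15, 3))

# Quality metrics: fraction of measurements failing each quality analysis.
failed_fraction = flags.mean(axis=0)

# Final quality flag: raised if any analysis fails more than an (illustrative) 10% threshold.
final_flag = int((failed_fraction > 0.10).any())
print("quality metrics (failed fraction):", failed_fraction)
print("final quality flag:", final_flag)
```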

  7. Data Quality Tools Industry Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Apr 21, 2025
    Cite
    Market Report Analytics (2025). Data Quality Tools Industry Report [Dataset]. https://www.marketreportanalytics.com/reports/data-quality-tools-industry-89686
    Explore at:
    pdf, ppt, doc (available download formats)
    Dataset updated
    Apr 21, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Data Quality Tools market is experiencing robust growth, fueled by the increasing volume and complexity of data across diverse industries. The market, currently valued at an estimated $XX million in 2025 (assuming a logically derived value based on a 17.5% CAGR from a 2019 base year), is projected to reach $YY million by 2033. This substantial expansion is driven by several key factors. Firstly, the rising adoption of cloud-based solutions offers enhanced scalability, flexibility, and cost-effectiveness, attracting both small and medium enterprises (SMEs) and large enterprises. Secondly, the growing need for regulatory compliance (e.g., GDPR, CCPA) necessitates robust data quality management, pushing organizations to invest in advanced tools. Further, the increasing reliance on data-driven decision-making across sectors like BFSI, healthcare, and retail necessitates high-quality, reliable data, thus boosting market demand. The preference for software solutions over on-premise deployments and the substantial investments in services aimed at data integration and cleansing contribute to this growth.

    However, certain challenges restrain market expansion. High initial investment costs, the complexity of implementation, and the need for skilled professionals to manage these tools can act as barriers for some organizations, particularly SMEs. Furthermore, concerns related to data security and privacy continue to impact adoption rates. Despite these challenges, the long-term outlook for the Data Quality Tools market remains positive, driven by the ever-increasing importance of data quality in a rapidly digitalizing world. The market segmentation highlights significant opportunities across different deployment models, organizational sizes, and industry verticals, suggesting diverse avenues for growth and innovation in the coming years. Competition among established players like IBM, Informatica, and Oracle, alongside emerging players, is intensifying, driving innovation and providing diverse solutions to meet varied customer needs.

    Recent developments include: September 2022: MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) spin-off DataCebo announced the launch of a new tool, dubbed Synthetic Data (SD) Metrics, to help enterprises compare the quality of machine-generated synthetic data by pitching it against real data sets. May 2022: Pyramid Analytics, which developed its flagship platform, Pyramid Decision Intelligence, announced that it raised USD 120 million in a Series E round of funding. The Pyramid Decision Intelligence platform combines business analytics, data preparation, and data science capabilities with AI guidance functionality. It enables governed self-service analytics in a no-code environment.

    Key drivers for this market are: Increasing Use of External Data Sources Owing to Mobile Connectivity Growth. Potential restraints include: Increasing Use of External Data Sources Owing to Mobile Connectivity Growth. Notable trends are: Healthcare is Expected to Witness Significant Growth.

  8. Data Quality Assurance - Field Replicates

    • s.cnmilf.com
    • catalog.data.gov
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Data Quality Assurance - Field Replicates [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/data-quality-assurance-field-replicates
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This dataset contains replicate samples collected in the field by community technicians. No field replicates were collected in 2012. Replicate constituents with differences less than 10 percent are considered acceptable.
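    One common way to express such a difference is the relative percent difference (RPD) between a sample and its replicate; whether this dataset uses RPD or a simple percent difference is not stated here, so the sketch below is illustrative only, with invented concentrations:

```python
def relative_percent_difference(primary: float, replicate: float) -> float:
    """Relative percent difference between a sample and its field replicate."""
    mean = (primary + replicate) / 2
    return abs(primary - replicate) / mean * 100


# Invented concentrations (mg/L), checked against the 10 percent criterion described above.
primary, replicate = 2.40, 2.55
rpd = relative_percent_difference(primary, replicate)
print(f"RPD = {rpd:.1f}% -> {'acceptable' if rpd < 10 else 'review'}")
```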

  9. Quality-Assurance and Quality-Control Data for Discrete Water-Quality Samples Collected in McHenry County, Illinois, 2020

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Quality-Assurance and Quality-Control Data for Discrete Water-Quality Samples Collected in McHenry County, Illinois, 2020 [Dataset]. https://catalog.data.gov/dataset/quality-assurance-and-quality-control-data-for-discrete-water-quality-samples-collected-in
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Illinois, McHenry County
    Description

    In June and July of 2020, 45 groundwater wells in McHenry County, Illinois, were sampled for water quality (field properties, major ions, nutrients, and trace metals) and 12 wells were sampled for contaminants of emerging concern (pharmaceuticals, pesticides, and wastewater indicator compounds). Quality-assurance and quality-control samples collected during the June and July 2020 sampling included equipment blanks, field blanks, and replicates. The results of these samples were used to understand the sources of bias and variability associated with sample collection, processing, storage, and shipping. This data release contains one comma-separated values (CSV) file containing the results of the quality-control sample collection for general water quality (metals, nutrients, and major ions) and contaminants of emerging concern (wastewater indicator compounds and pharmaceuticals). Water-quality data from the associated groundwater monitoring wells are available at the National Water Information System (NWIS) web database (https://doi.org/10.5066/F7P55KJN). Results and discussion of the water quality and contaminants of emerging concern can also be found in the associated scientific investigations report.

  10. Conceptualization of public data ecosystems

    • data.niaid.nih.gov
    Updated Sep 26, 2024
    Cite
    Martin, Lnenicka (2024). Conceptualization of public data ecosystems [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_13842001
    Explore at:
    Dataset updated
    Sep 26, 2024
    Dataset provided by
    Anastasija, Nikiforova
    Martin, Lnenicka
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains data collected during a study "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems" conducted by Martin Lnenicka (University of Hradec Králové, Czech Republic), Anastasija Nikiforova (University of Tartu, Estonia), Mariusz Luterek (University of Warsaw, Warsaw, Poland), Petar Milic (University of Pristina - Kosovska Mitrovica, Serbia), Daniel Rudmark (Swedish National Road and Transport Research Institute, Sweden), Sebastian Neumaier (St. Pölten University of Applied Sciences, Austria), Karlo Kević (University of Zagreb, Croatia), Anneke Zuiderwijk (Delft University of Technology, Delft, the Netherlands), Manuel Pedro Rodríguez Bolívar (University of Granada, Granada, Spain).

    As there is a lack of understanding of the elements that constitute different types of value-adding public data ecosystems and how these elements form and shape the development of these ecosystems over time, which can lead to misguided efforts to develop future public data ecosystems, the aim of the study is: (1) to explore how public data ecosystems have developed over time and (2) to identify the value-adding elements and formative characteristics of public data ecosystems. Using an exploratory retrospective analysis and a deductive approach, we systematically review 148 studies published between 1994 and 2023. Based on the results, this study presents a typology of public data ecosystems, develops a conceptual model of the elements and formative characteristics that contribute most to value-adding public data ecosystems, and develops a model of the evolution of public data ecosystems represented by six generations, called the Evolutionary Model of Public Data Ecosystems (EMPDE). Finally, three avenues for a future research agenda are proposed.

    This dataset is being made public both to act as supplementary data for "Understanding the development of public data ecosystems: from a conceptual model to a six-generation model of the evolution of public data ecosystems", Telematics and Informatics, and its Systematic Literature Review component that informs the study.

    Description of the data in this data set

    PublicDataEcosystem_SLR provides the structure of the protocol

    Spreadsheet #1 provides the list of results after the search over three indexing databases and filtering out irrelevant studies.

    Spreadsheet #2 provides the protocol structure.

    Spreadsheet #3 provides the filled protocol for relevant studies.

    The information on each selected study was collected in four categories: (1) descriptive information, (2) approach- and research design-related information, (3) quality-related information, (4) HVD determination-related information.

    Descriptive Information

    Article number

    A study number, corresponding to the study number assigned in an Excel worksheet

    Complete reference

    The complete source information to refer to the study (in APA style), including the author(s) of the study, the year in which it was published, the study's title and other source information.

    Year of publication

    The year in which the study was published.

    Journal article / conference paper / book chapter

    The type of the paper, i.e., journal article, conference paper, or book chapter.

    Journal / conference / book

    The journal, conference, or book in which the paper is published.

    DOI / Website

    A link to the website where the study can be found.

    Number of words

    The number of words in the study.

    Number of citations in Scopus and WoS

    The number of citations of the paper in Scopus and WoS digital libraries.

    Availability in Open Access

    Availability of a study in the Open Access or Free / Full Access.

    Keywords

    Keywords of the paper as indicated by the authors (in the paper).

    Relevance for our study (high / medium / low)

    What is the relevance level of the paper for our study

    Approach- and research design-related information

    Objective / Aim / Goal / Purpose & Research Questions

    The research objective and established RQs.

    Research method (including unit of analysis)

    The methods used to collect data in the study, including the unit of analysis that refers to the country, organisation, or other specific unit that has been analysed such as the number of use-cases or policy documents, number and scope of the SLR etc.

    Study’s contributions

    The study’s contribution as defined by the authors

    Qualitative / quantitative / mixed method

    Whether the study uses a qualitative, quantitative, or mixed methods approach?

    Availability of the underlying research data

    Whether the paper has a reference to the public availability of the underlying research data e.g., transcriptions of interviews, collected data etc., or explains why these data are not openly shared?

    Period under investigation

    Period (or moment) in which the study was conducted (e.g., January 2021-March 2022)

    Use of theory / theoretical concepts / approaches? If yes, specify them

    Does the study mention any theory / theoretical concepts / approaches? If yes, what theory / concepts / approaches? If any theory is mentioned, how is theory used in the study? (e.g., mentioned to explain a certain phenomenon, used as a framework for analysis, tested theory, theory mentioned in the future research section).

    Quality-related information

    Quality concerns

    Whether there are any quality concerns (e.g., limited information about the research methods used)?

    Public Data Ecosystem-related information

    Public data ecosystem definition

    How is the public data ecosystem defined in the paper, or what equivalent term (most often "infrastructure") is used? If an alternative term is used, what is the public data ecosystem called in the paper?

    Public data ecosystem evolution / development

    Does the paper define the evolution of the public data ecosystem? If yes, how is it defined and what factors affect it?

    What constitutes a public data ecosystem?

    What constitutes a public data ecosystem (components & relationships) - their "FORM / OUTPUT" presented in the paper (general description with more detailed answers to further additional questions).

    Components and relationships

    What components does the public data ecosystem consist of and what are the relationships between these components? Alternative names for components - element, construct, concept, item, helix, dimension etc. (detailed description).

    Stakeholders

    What stakeholders (e.g., governments, citizens, businesses, Non-Governmental Organisations (NGOs) etc.) does the public data ecosystem involve?

    Actors and their roles

    What actors does the public data ecosystem involve? What are their roles?

    Data (data types, data dynamism, data categories etc.)

    What data does the public data ecosystem cover (what is it intended / designed for)? Refer to all data-related aspects, including but not limited to data types, data dynamism (static data, dynamic, real-time data, stream), prevailing data categories / domains / topics etc.

    Processes / activities / dimensions, data lifecycle phases

    What processes, activities, dimensions and data lifecycle phases (e.g., locate, acquire, download, reuse, transform, etc.) does the public data ecosystem involve or refer to?

    Level (if relevant)

    What is the level of the public data ecosystem covered in the paper? (e.g., city, municipal, regional, national (=country), supranational, international).

    Other elements or relationships (if any)

    What other elements or relationships does the public data ecosystem consist of?

    Additional comments

    Additional comments (e.g., what other topics affected the public data ecosystems and their elements, what is expected to affect the public data ecosystems in the future, what were important topics by which the period was characterised etc.).

    New papers

    Does the study refer to any other potentially relevant papers?

    Additional references to potentially relevant papers that were found in the analysed paper (snowballing).

    Format of the file: .xls, .csv (for the first spreadsheet only), .docx

    Licenses or restrictions: CC-BY

    For more info, see README.txt

  11. Environmental Monitoring Results for Radioactivity: Milk Samples

    • data.ct.gov
    • catalog.data.gov
    application/rdfxml +5
    Updated Jul 2, 2025
    Cite
    Radiation Division, Bureau of Air Management, Connecticut Department of Energy and Environmental Protection (2025). Environmental Monitoring Results for Radioactivity: Milk Samples [Dataset]. https://data.ct.gov/Environment-and-Natural-Resources/Environmental-Monitoring-Results-for-Radioactivity/kqjv-vikd
    Explore at:
    csv, json, tsv, xml, application/rdfxml, application/rssxml (available download formats)
    Dataset updated
    Jul 2, 2025
    Dataset provided by
    Connecticut Department of Energy and Environmental Protection (https://www.ct.gov/deep/)
    Authors
    Radiation Division, Bureau of Air Management, Connecticut Department of Energy and Environmental Protection
    License

    U.S. Government Works, https://www.usa.gov/government-works
    License information was derived automatically

    Description
    • Reporting units of sample results [where 1 picoCurie (pCi) = 1 trillionth (1E-12) Curie (Ci)]: • Milk Samples are reported in pCi/L.

    • Data Quality Disclaimer: This database is for informational use and is not a controlled quality database. Efforts have been made to ensure accuracy of data in the database; however, errors and omissions may occur.

    Examples of potential errors include: • Data entry errors. • Lab results not reported for entry into the database. • Missing results due to equipment failure or inability to retrieve samples because of loss or environmental hazards. • Translation errors – the data has been migrated to newer data platforms numerous times, and each time there have been errors and data losses.

    • Error Results are the calculated uncertainty for the sample measurement results and are reported as (+/-).

    • Environmental Sample Records are from the year 1998 until present. Prior to 1998 results were stored in hardcopy, in a non-database format.

    Requests for results from samples taken prior to 1998 or results subject to quality assurance are available from archived records and can be made through the DEEP Freedom of Information Act (FOIA) administrator at deep.foia@ct.gov. Information on FOIA requests can be found on the DEEP website.

    FOIA Administrator Office of the Commissioner Department of Energy and Environmental Protection 79 Elm Street, 3rd Floor Hartford, CT 06106

  12. AirNow Air Quality Monitoring Site Data (Last 24 hours)

    • gis-fema.hub.arcgis.com
    • hub.arcgis.com
    • +1more
    Updated Nov 21, 2018
    Cite
    U.S. EPA (2018). AirNow Air Quality Monitoring Site Data (Last 24 hours) [Dataset]. https://gis-fema.hub.arcgis.com/datasets/394b9bf591e14596bb57b9085b425f7d
    Explore at:
    Dataset updated
    Nov 21, 2018
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Authors
    U.S. EPA
    Area covered
    Description

    This United States Environmental Protection Agency (US EPA) feature layer represents site data, updated hourly concentrations and Air Quality Index (AQI) values for the last 24 hours received from each monitoring site that reports to AirNow. NOTE: Time Animation is enabled by default on this layer. Map and forecast data are collected using federal reference or equivalent monitoring techniques or techniques approved by the state, local or tribal monitoring agencies. To maintain "real-time" maps, the data are displayed after the end of each hour. Although preliminary data quality assessments are performed, the data in AirNow are not fully verified and validated through the quality assurance procedures monitoring organizations use to officially submit and certify data on the EPA Air Quality System (AQS). This data sharing and centralization creates a one-stop source for real-time and forecast air quality data. The benefits include quality control, national reporting consistency, access to automated mapping methods, and data distribution to the public and other data systems. The U.S. Environmental Protection Agency, National Oceanic and Atmospheric Administration, National Park Service, tribal, state, and local agencies developed the AirNow system to provide the public with easy access to national air quality information. State and local agencies report the Air Quality Index (AQI) for cities across the US and parts of Canada and Mexico. AirNow data are used only to report the AQI, not to formulate or support regulation, guidance or any other EPA decision or position.

    About the AQI: The Air Quality Index (AQI) is an index for reporting daily air quality. It tells you how clean or polluted your air is, and what associated health effects might be a concern for you. The AQI focuses on health effects you may experience within a few hours or days after breathing polluted air. EPA calculates the AQI for five major air pollutants regulated by the Clean Air Act: ground-level ozone, particle pollution (also known as particulate matter), carbon monoxide, sulfur dioxide, and nitrogen dioxide. For each of these pollutants, EPA has established national air quality standards to protect public health. Ground-level ozone and airborne particles (often referred to as "particulate matter") are the two pollutants that pose the greatest threat to human health in this country.

    A number of factors influence ozone formation, including emissions from cars, trucks, buses, power plants, and industries, along with weather conditions. Weather is especially favorable for ozone formation when it’s hot, dry and sunny, and winds are calm and light. Federal and state regulations, including regulations for power plants, vehicles and fuels, are helping reduce ozone pollution nationwide.

    Fine particle pollution (or "particulate matter") can be emitted directly from cars, trucks, buses, power plants and industries, along with wildfires and woodstoves. But it also forms from chemical reactions of other pollutants in the air. Particle pollution can be high at different times of year, depending on where you live. In some areas, for example, colder winters can lead to increased particle pollution emissions from woodstove use, and stagnant weather conditions with calm and light winds can trap PM2.5 pollution near emission sources. Federal and state rules are helping reduce fine particle pollution, including clean diesel rules for vehicles and fuels, and rules to reduce pollution from power plants, industries, locomotives, and marine vessels, among others.

    How Does the AQI Work? Think of the AQI as a yardstick that runs from 0 to 500. The higher the AQI value, the greater the level of air pollution and the greater the health concern. For example, an AQI value of 50 represents good air quality with little potential to affect public health, while an AQI value over 300 represents hazardous air quality. An AQI value of 100 generally corresponds to the national air quality standard for the pollutant, which is the level EPA has set to protect public health. AQI values below 100 are generally thought of as satisfactory. When AQI values are above 100, air quality is considered to be unhealthy, at first for certain sensitive groups of people, then for everyone as AQI values get higher.

    Understanding the AQI: The purpose of the AQI is to help you understand what local air quality means to your health. To make it easier to understand, the AQI is divided into six categories (AQI values, level of health concern, and color):
    • 0 to 50: Good (Green)
    • 51 to 100: Moderate (Yellow)
    • 101 to 150: Unhealthy for Sensitive Groups (Orange)
    • 151 to 200: Unhealthy (Red)
    • 201 to 300: Very Unhealthy (Purple)
    • 301 to 500: Hazardous (Maroon)
    Note: Values above 500 are considered Beyond the AQI. Follow recommendations for the Hazardous category. Additional information on reducing exposure to extremely high levels of particle pollution is available here.

    Each category corresponds to a different level of health concern. The six levels of health concern and what they mean are:
    • "Good" AQI is 0 to 50. Air quality is considered satisfactory, and air pollution poses little or no risk.
    • "Moderate" AQI is 51 to 100. Air quality is acceptable; however, for some pollutants there may be a moderate health concern for a very small number of people. For example, people who are unusually sensitive to ozone may experience respiratory symptoms.
    • "Unhealthy for Sensitive Groups" AQI is 101 to 150. Although the general public is not likely to be affected at this AQI range, people with lung disease, older adults and children are at a greater risk from exposure to ozone, whereas persons with heart and lung disease, older adults and children are at greater risk from the presence of particles in the air.
    • "Unhealthy" AQI is 151 to 200. Everyone may begin to experience some adverse health effects, and members of the sensitive groups may experience more serious effects.
    • "Very Unhealthy" AQI is 201 to 300. This would trigger a health alert signifying that everyone may experience more serious health effects.
    • "Hazardous" AQI greater than 300. This would trigger health warnings of emergency conditions. The entire population is more likely to be affected.

    AQI colors: EPA has assigned a specific color to each AQI category to make it easier for people to understand quickly whether air pollution is reaching unhealthy levels in their communities. For example, the color orange means that conditions are "unhealthy for sensitive groups," while red means that conditions may be "unhealthy for everyone," and so on. The summary table (Air Quality Index Levels of Health Concern) repeats this mapping: Good (0 to 50), Moderate (51 to 100), Unhealthy for Sensitive Groups (101 to 150), Unhealthy (151 to 200), Very Unhealthy (201 to 300), Hazardous (301 to 500). Note: Values above 500 are considered Beyond the AQI. Follow recommendations for the "Hazardous" category. Additional information on reducing exposure to extremely high levels of particle pollution is available here.
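    A small helper that encodes the category breakpoints listed above (the breakpoints and colors come from the table; the function itself is only an illustrative sketch, not part of the AirNow service):

```python
# AQI breakpoints, levels of health concern, and colors, as listed in the table above.
AQI_CATEGORIES = [
    (50, "Good", "Green"),
    (100, "Moderate", "Yellow"),
    (150, "Unhealthy for Sensitive Groups", "Orange"),
    (200, "Unhealthy", "Red"),
    (300, "Very Unhealthy", "Purple"),
    (500, "Hazardous", "Maroon"),
]


def aqi_category(aqi: int) -> tuple[str, str]:
    """Map an AQI value to its level of health concern and color."""
    for upper, level, color in AQI_CATEGORIES:
        if aqi <= upper:
            return level, color
    # Values above 500 are "Beyond the AQI"; follow recommendations for the Hazardous category.
    return "Beyond the AQI", "Maroon"


print(aqi_category(42))   # ('Good', 'Green')
print(aqi_category(175))  # ('Unhealthy', 'Red')
```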

  13. Research Ship Roger Revelle Underway Meteorological Data, Quality Controlled

    • catalog.data.gov
    Updated Jun 10, 2023
    + more versions
    Cite
    Shipboard Automated Meteorological and Oceanographic System (SAMOS) (Point of Contact) (2023). Research Ship Roger Revelle Underway Meteorological Data, Quality Controlled [Dataset]. https://catalog.data.gov/dataset/research-ship-roger-revelle-underway-meteorological-data-quality-controlled
    Explore at:
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    Shipboard Automated Meteorological and Oceanographic System (SAMOS) (Point of Contact)
    Description

    Research Ship Roger Revelle Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. (Don't include backslashes in your query.) See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
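    The same flag constraint can also be applied client-side after download; the sketch below (illustrative, with made-up flag strings) uses Python's re module with the pattern built in the description above:

```python
import re

# Positions 1-3 (time, latitude, longitude) and position 12 (airTemperature) must be 'Z' (good).
pattern = re.compile(r"ZZZ........Z.*")

# Made-up flag strings as they might appear in downloaded rows.
rows = ["ZZZZZZZZZZZZZZZ", "ZZZBZZZZZZZZZZZ", "ZZZZZZZZZZZSZZZ"]

good = [flags for flags in rows if pattern.match(flags)]
print(good)  # keeps the first two rows; the third has a non-'Z' at position 12 (airTemperature)
```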

  14. Data from: RawBeans: a simple, vendor independent, raw-data quality control tool

    • ebi.ac.uk
    • data.niaid.nih.gov
    • +2more
    Updated Nov 3, 2021
    Cite
    Yishai Levin (2021). RawBeans: a simple, vendor independent, raw-data quality control tool [Dataset]. https://www.ebi.ac.uk/pride/archive/projects/PXD022816
    Explore at:
    Dataset updated
    Nov 3, 2021
    Authors
    Yishai Levin
    Variables measured
    Proteomics
    Description

    Every laboratory performing mass spectrometry based proteomics strives to generate high quality data. Among the many factors that influence the outcome of any experiment in proteomics is the performance of the LC-MS system, which should be monitored continuously. This process is termed quality control (QC). We present an easy-to-use, rapid tool, which produces a visual, HTML-based report that includes the key parameters needed to monitor LC-MS system performance. The tool, named RawBeans, can generate a report for individual files or for a set of samples from a whole experiment. We anticipate it will help proteomics users and experts evaluate raw data quality, independent of data processing. The tool is available here: https://bitbucket.org/incpm/prot-qc/downloads.

  15. Heidelberg Tributary Loading Program (HTLP) Dataset

    • zenodo.org
    • explore.openaire.eu
    bin, png
    Updated Jul 16, 2024
    Cite
    NCWQR; NCWQR (2024). Heidelberg Tributary Loading Program (HTLP) Dataset [Dataset]. http://doi.org/10.5281/zenodo.6606950
    Explore at:
    bin, png (available download formats)
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    NCWQR; NCWQR
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is updated more frequently and can be visualized on NCWQR's data portal.

    If you have any questions, please contact Dr. Laura Johnson or Dr. Nathan Manning.

    The National Center for Water Quality Research (NCWQR) is a research laboratory at Heidelberg University in Tiffin, Ohio, USA. Our primary research program is the Heidelberg Tributary Loading Program (HTLP), where we currently monitor water quality at 22 river locations throughout Ohio and Michigan, effectively covering ~half of the land area of Ohio. The goal of the program is to accurately measure the total amounts (loads) of pollutants exported from watersheds by rivers and streams. Thus these data are used to assess different sources (nonpoint vs point), forms, and timing of pollutant export from watersheds. The HTLP officially began with high-frequency monitoring for sediment and nutrients from the Sandusky and Maumee rivers in 1974, and has continually expanded since then.

    Each station where samples are collected for water quality is paired with a US Geological Survey gage for quantifying discharge (http://waterdata.usgs.gov/usa/nwis/rt). Our stations cover a wide range of watershed areas upstream of the sampling point from 11.0 km2 for the unnamed tributary to Lost Creek to 19,215 km2 for the Muskingum River. These rivers also drain a variety of land uses, though a majority of the stations drain over 50% row-crop agriculture.

    At most sampling stations, submersible pumps located on the stream bottom continuously pump water into sampling wells inside heated buildings where automatic samplers collect discrete samples (4 unrefrigerated samples/d at 6-h intervals, 1974–1987; 3 refrigerated samples/d at 8-h intervals, 1988-current). At weekly intervals the samples are returned to the NCWQR laboratories for analysis. When samples either have high turbidity from suspended solids or are collected during high flow conditions, all samples for each day are analyzed. As stream flows and/or turbidity decreases, analysis frequency shifts to one sample per day. At the River Raisin and Muskingum River, a cooperator collects a grab sample from a bridge at or near the USGS station approximately daily and all samples are analyzed. Each sample bottle contains sufficient volume to support analyses of total phosphorus (TP), dissolved reactive phosphorus (DRP), suspended solids (SS), total Kjeldahl nitrogen (TKN), ammonium-N (NH4), nitrate-N and nitrite-N (NO2+3), chloride, fluoride, and sulfate. Nitrate and nitrite are commonly added together when presented; henceforth we refer to the sum as nitrate.

    Upon return to the laboratory, all water samples are analyzed within 72h for the nutrients listed below using standard EPA methods. For dissolved nutrients, samples are filtered through a 0.45 um membrane filter prior to analysis. We currently use a Seal AutoAnalyzer 3 for DRP, silica, NH4, TP, and TKN colorimetry, and a DIONEX Ion Chromatograph with AG18 and AS18 columns for anions. Prior to 2014, we used a Seal TRAACs for all colorimetry.

    2017 Ohio EPA Project Study Plan and Quality Assurance Plan

    Project Study Plan

    Quality Assurance Plan

    Data quality control and data screening

    The data provided in the River Data files have all been screened by NCWQR staff. The purpose of the screening is to remove outliers that staff deem likely to reflect sampling or analytical errors rather than outliers that reflect the real variability in stream chemistry. Often, in the screening process, the causes of the outlier values can be determined and appropriate corrective actions taken. These may involve correction of sample concentrations or deletion of those data points.

    This micro-site contains data for approximately 126,000 water samples collected beginning in 1974. We cannot guarantee that each data point is free from sampling bias/error, analytical errors, or transcription errors. However, since its beginnings, the NCWQR has operated a substantial internal quality control program and has participated in numerous external quality control reviews and sample exchange programs. These programs have consistently demonstrated that data produced by the NCWQR is of high quality.

    A note on detection limits and zero and negative concentrations

    It is routine practice in analytical chemistry to determine method detection limits and/or limits of quantitation, below which analytical results are considered less reliable or unreliable. This is something that we also do as part of our standard procedures. Many laboratories, especially those associated with agencies such as the U.S. EPA, do not report individual values that are less than the detection limit, even if the analytical equipment returns such values. This is in part because as individual measurements they may not be considered valid under litigation.

    The measured concentration consists of the true but unknown concentration plus random instrument error, which is usually small compared to the range of expected environmental values. In a sample for which the true concentration is very small, perhaps even essentially zero, it is possible to obtain an analytical result of 0 or even a small negative concentration. Results of this sort are often "censored" and replaced with a statement such as "less than the detection limit".

    Censoring these low values creates a number of problems for data analysis. How do you take an average? If you leave out these numbers, you get a biased result because you did not toss out any other (higher) values. Even if you replace negative concentrations with 0, a bias ensues, because you’ve chopped off some portion of the lower end of the distribution of random instrument error.

    For these reasons, we do not censor our data. Values of -9 and -1 are used as missing value codes, but all other negative and zero concentrations are actual, valid results. Negative concentrations make no physical sense, but they make analytical and statistical sense. Users should be aware of this, and if necessary make their own decisions about how to use these values. Particularly if log transformations are to be used, some decision on the part of the user will be required.
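    A small numeric illustration of the censoring bias described above, using simulated measurements whose true concentration is essentially zero (the detection limit and error scale are invented for illustration and are not NCWQR values):

```python
import numpy as np

rng = np.random.default_rng(1)

# True concentration ~0 plus random instrument error, so some results are zero or negative.
measurements = rng.normal(loc=0.0, scale=0.01, size=10_000)
detection_limit = 0.02  # invented for illustration

dropped = measurements[measurements >= detection_limit]   # censor by dropping values below the limit
zeroed = np.where(measurements < 0, 0.0, measurements)    # censor by replacing negatives with zero

print(f"uncensored mean:              {measurements.mean():+.4f}")  # close to the true value, 0
print(f"mean after dropping < limit:  {dropped.mean():+.4f}")       # biased high
print(f"mean after zeroing negatives: {zeroed.mean():+.4f}")        # biased high
```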

    Analyte Detection Limits

    https://ncwqr.files.wordpress.com/2021/12/mdl-june-2019-epa-methods.jpg?w=1024

    For more information, please visit https://ncwqr.org/

  16. NOAA Ship Rainier Underway Meteorological Data, Quality Controlled

    • datadiscoverystudio.org
    opendap v.dap/2.0
    Updated Nov 15, 2018
    + more versions
    Cite
    (2018). NOAA Ship Rainier Underway Meteorological Data, Quality Controlled [Dataset]. coastwatch.pfeg.noaa.gov. http://datadiscoverystudio.org/geoportal/rest/metadata/item/9fb89cecbfc04275999598f28eb796e6/html
    Explore at:
    opendap v.dap/2.0 (available download formats)
    Dataset updated
    Nov 15, 2018
    Area covered
    Description

    NOAA Ship Rainier Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~'ZZZ........Z.*'. '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at http://www.vogella.com/tutorials/JavaRegularExpressions/article.html

  17. Data from: Untargeted metabolomics workshop report: quality control considerations from sample preparation to data analysis

    • data.niaid.nih.gov
    xml
    Updated Dec 17, 2020
    Cite
    Prasad Phapale (2020). Untargeted metabolomics workshop report: quality control considerations from sample preparation to data analysis [Dataset]. https://data.niaid.nih.gov/resources?id=mtbls1301
    Explore at:
    xml (available download formats)
    Dataset updated
    Dec 17, 2020
    Dataset provided by
    EMBL
    Authors
    Prasad Phapale
    Variables measured
    tumor, Metabolomics
    Description

    The Metabolomics workshop on experimental and data analysis training for untargeted metabolomics was hosted by the Proteomics Society of India in December 2019. The workshop included six tutorial lectures and hands-on data analysis training sessions presented by seven speakers. The tutorials and hands-on data analysis sessions focused on workflows for liquid chromatography-mass spectrometry (LC-MS) based untargeted metabolomics. We review here three main topics from the workshop which were uniquely identified as bottlenecks for new researchers: a) experimental design, b) quality controls during sample preparation and instrumental analysis and c) data quality evaluation. Our objective here is to present common challenges faced by novice researchers and present possible guidelines and resources to address them. We provide resources and good practices for researchers who are at the initial stage of setting up metabolomics workflows in their labs.

    Complete detailed metabolomics/lipidomics protocols, including video tutorials, are available online in the EMBL-MCF protocol collection.

  18. Additional file 2 of A method for interoperable knowledge-based data quality assessment

    • figshare.com
    • springernature.figshare.com
    txt
    Updated Jun 1, 2023
    + more versions
    Cite
    Erik Tute; Irina Scheffner; Michael Marschollek (2023). Additional file 2 of A method for interoperable knowledge-based data quality assessment [Dataset]. http://doi.org/10.6084/m9.figshare.14190090.v1
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Erik Tute; Irina Scheffner; Michael Marschollek
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 2: Appendix B. Example AQL.

  19. NOAA Ship Bell M. Shimada Underway Meteorological Data, Quality Controlled

    • datadiscoverystudio.org
    opendap v.dap/2.0
    Updated Nov 15, 2018
    + more versions
    Cite
    (2018). NOAA Ship Bell M. Shimada Underway Meteorological Data, Quality Controlled [Dataset]. coastwatch.pfeg.noaa.gov. http://datadiscoverystudio.org/geoportal/rest/metadata/item/f89580f3d7e441ec9e4d685be27ea7e6/html
    Explore at:
    opendap v.dap/2.0 (available download formats)
    Dataset updated
    Nov 15, 2018
    Area covered
    Description

    NOAA Ship Bell M. Shimada Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~'ZZZ........Z.*'. '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at http://www.vogella.com/tutorials/JavaRegularExpressions/article.html

  20. Water Quality Data from the Yukon River Basin in Alaska and Canada Data Quality Assurance Field Blanks

    • datasets.ai
    • data.usgs.gov
    • +2more
    55
    Updated Aug 6, 2024
    Cite
    Department of the Interior (2024). Water Quality Data from the Yukon River Basin in Alaska and Canada Data Quality Assurance Field Blanks [Dataset]. https://datasets.ai/datasets/water-quality-data-from-the-yukon-river-basin-in-alaska-and-canada-data-quality-assurance-
    Explore at:
    55 (available download formats)
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    Department of the Interior
    Area covered
    Yukon River, Canada, Alaska
    Description

    This dataset contains data collected from field blanks. Field blanks are deionized water processed in the field by community technicians using processing methods identical to those for surface water samples. Field blanks are then analyzed in the laboratory following procedures identical to those for surface water samples.
