Research Ship Laurence M. Gould Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
NOAA Ship Pisces Underway Meteorological Data (Near Real Time, updated daily) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at http://www.vogella.com/tutorials/JavaRegularExpressions/article.html
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In the SANDBOX research project, we investigated the natural dynamics of the North Sea bed. As part of this research, we conducted multiple research cruises on the North Sea. The documents in this dataset explain which data was collected, when it was collected and the structure of the data repository (svn.citg.tudelft.nl/sandbox).
NOAA Ship Ferdinand Hassler Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at http://www.vogella.com/tutorials/JavaRegularExpressions/article.html
The Sloan Digital Sky Survey (SDSS) Moving Object Catalog 4th release lists astrometric and photometric data for moving objects detected in the SDSS. The catalog includes various identification parameters, SDSS astrometric measurements (five SDSS magnitudes and their errors), and orbital elements for previously cataloged asteroids. The data set also includes a list of the runs from which data are included, and filter response curves.
Research Ship Knorr Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
NOAA Ship Ronald Brown Underway Meteorological Data (delayed ~10 days for quality control) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at www.vogella.de/articles/JavaRegularExpressions/article.html
NOAA Ship Henry B. Bigelow Underway Meteorological Data (Near Real Time, updated daily) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program. IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query: flag=~"ZZZ........Z.*". '=~' indicates this is a regular expression constraint. The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data. The '.'s say to match any character. The '*' says to match the previous character 0 or more times. See the tutorial for regular expressions at https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
NOAA Ship Fairweather Underway Meteorological Data (Near Real Time, updated daily) are from the Shipboard Automated Meteorological and Oceanographic System (SAMOS) program.
IMPORTANT: ALWAYS USE THE QUALITY FLAG DATA! Each data variable's metadata includes a qcindex attribute which indicates a character number in the flag data. ALWAYS check the flag data for each row of data to see which data is good (flag='Z') and which data isn't. For example, to extract just data where time (qcindex=1), latitude (qcindex=2), longitude (qcindex=3), and airTemperature (qcindex=12) are 'good' data, include this constraint in your ERDDAP query:
flag=~"ZZZ........Z.*"
in your query.
"=~" indicates this is a regular expression constraint.
The 'Z's are literal characters. In this dataset, 'Z' indicates 'good' data.
The '.'s say to match any character.
The '*' says to match the previous character 0 or more times.
See the tutorial for regular expressions at
https://www.vogella.com/tutorials/JavaRegularExpressions/article.html
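The flag constraint described above is simply a string appended to an ERDDAP tabledap request URL. Below is a minimal sketch of assembling such a request with Python's standard library; the server URL and dataset ID are hypothetical placeholders, and only the regex itself comes from the dataset descriptions.

```python
# Sketch: build an ERDDAP tabledap query that keeps only rows whose SAMOS
# quality flags are 'good' ('Z') for time, latitude, longitude (qcindex 1-3)
# and airTemperature (qcindex 12). Server and dataset ID are hypothetical.
import re
from urllib.parse import quote

server = "https://example-erddap.org/erddap/tabledap"  # hypothetical server
dataset_id = "sampleSamosDataset"                      # hypothetical dataset ID
variables = "time,latitude,longitude,airTemperature,flag"

# Characters 1-3 (time, latitude, longitude) and 12 (airTemperature) must be
# 'Z'; the eight '.'s match any flag characters in between, and '.*' matches
# whatever flag characters follow.
flag_regex = "ZZZ" + "." * 8 + "Z" + ".*"   # -> 'ZZZ........Z.*'
constraint = f'flag=~"{flag_regex}"'

# A flag string with good time/position/airTemperature passes the regex:
assert re.fullmatch(flag_regex, "ZZZBBBBBBBBZBB")

# Percent-encode the constraint so the quotes and '=' survive in the URL.
url = f"{server}/{dataset_id}.csv?{variables}&{quote(constraint)}"
print(url)
```

The same pattern applies to any of the SAMOS datasets listed here; only the dataset ID and the qcindex positions of interest change.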
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
Water companies in the UK are responsible for testing the quality of drinking water. This dataset contains the results of samples taken from the taps in domestic households to make sure they meet the standards set out by UK and European legislation. This data shows the location, date, and measured levels of determinands set out by the Drinking Water Inspectorate (DWI).
Key Definitions
Aggregation
Process involving summarizing or grouping data to obtain a single or reduced set of information, often for analysis or reporting purposes
Anonymisation
Anonymised data is a type of information sanitization in which data anonymisation tools encrypt or remove personally identifiable information from datasets for the purpose of preserving a data subject's privacy
Dataset
Structured and organized collection of related elements, often stored digitally, used for analysis and interpretation in various fields.
Determinand
A constituent or property of drinking water which can be determined or estimated.
DWI
Drinking Water Inspectorate, an organisation “providing independent reassurance that water supplies in England and Wales are safe and drinking water quality is acceptable to consumers.”
DWI Determinands
Constituents or properties that are tested for when evaluating a sample for its quality as per the guidance of the DWI. For this dataset, only determinands with “point of compliance” as “customer taps” are included.
Granularity
Data granularity is a measure of the level of detail in a data structure. In time-series data, for example, the granularity of measurement might be based on intervals of years, months, weeks, days, or hours
ID
Abbreviation for Identification, referring to any means of verifying the unique identifier assigned to each asset for the purposes of tracking, management, and maintenance.
LSOA
Lower Layer Super Output Area, a small geographic area used for statistical and administrative purposes by the Office for National Statistics. LSOAs are designed to have homogeneous populations in terms of population size, making them suitable for statistical analysis and reporting. Each LSOA is built from groups of contiguous Output Areas with an average of about 1,500 residents or 650 households, allowing for granular data collection useful for analysis, planning and policy-making while ensuring privacy.
ONS
Office for National Statistics
Open Data Triage
The process carried out by a Data Custodian to determine if there is any evidence of sensitivities associated with Data Assets, their associated Metadata and Software Scripts used to process Data Assets if they are used as Open Data.
Sample
A sample is a representative segment or portion of water taken from a larger whole for the purpose of analysing or testing to ensure compliance with safety and quality standards.
Schema
Structure for organizing and handling data within a dataset, defining the attributes, their data types, and the relationships between different entities. It acts as a framework that ensures data integrity and consistency by specifying permissible data types and constraints for each attribute.
Units
Standard measurements used to quantify and compare different physical quantities.
Water Quality
The chemical, physical, biological, and radiological characteristics of water, typically in relation to its suitability for a specific purpose, such as drinking, swimming, or ecological health. It is determined by assessing a variety of parameters, including but not limited to pH, turbidity, microbial content, dissolved oxygen, presence of substances and temperature.
Data History
Data Origin
These samples were taken from customer taps. They were then analysed for water quality, and the results were uploaded to a database. This dataset is an extract from this database.
Data Triage Considerations
Granularity
Is it useful to share results as averages or as individual results?
We decided to share individual results, as the lowest level of granularity.
Anonymisation
It is a requirement that this data cannot be used to identify a singular person or household. We discussed many options for aggregating the data to a specific geography to ensure this requirement is met. The following geographical aggregations were discussed:
- Water Supply Zone (WSZ) – Limits interoperability with other datasets
- Postcode – Some postcodes contain very few households and may not offer necessary anonymisation
- Postal Sector – Deemed not granular enough in highly populated areas
- Rounded Co-ordinates – Not a recognised standard and may cause overlapping areas
- MSOA – Deemed not granular enough
- LSOA – Agreed as a recognised standard appropriate for England and Wales
- Data Zones – Agreed as a recognised standard appropriate for Scotland
Data Specifications
Each dataset will cover a calendar year of samples
This dataset will be published annually
Historical datasets will be published as far back as 2016, from the introduction of The Water Supply (Water Quality) Regulations 2016
The Determinands included in the dataset are as per the list that is required to be reported to the Drinking Water Inspectorate.
Context
Many UK water companies provide a search tool on their websites where you can search for water quality in your area by postcode. The results of the search may identify the water supply zone that supplies the postcode searched. Water supply zones are not linked to LSOAs, which means the results may differ from this dataset.
Some sample results are influenced by internal plumbing and may not be representative of drinking water quality in the wider area.
Some samples are tested on site and others are sent to scientific laboratories.
Data Publish Frequency
Annually
Data Triage Review Frequency
Annually unless otherwise requested
Supplementary information
Below is a curated selection of links for additional reading, which provide a deeper understanding of this dataset.
1. Drinking Water Inspectorate Standards and Regulations: https://www.dwi.gov.uk/drinking-water-standards-and-regulations/
2. LSOA (England and Wales) and Data Zone (Scotland):
3. Description for LSOA boundaries by the ONS: Census 2021 geographies - Office for National Statistics (ons.gov.uk)
4. Postcode to LSOA lookup tables: Postcode to 2021 Census Output Area to Lower Layer Super Output Area to Middle Layer Super Output Area to Local Authority District (August 2023) Lookup in the UK (statistics.gov.uk)
5. Legislation history: Legislation - Drinking Water Inspectorate (dwi.gov.uk)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data was reported at 10.700 % in 2011. Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data is updated yearly, averaging 10.700 % from Dec 2011 (Median) to 2011, with 1 observations. The data reached an all-time high of 10.700 % in 2011 and a record low of 10.700 % in 2011. Lebanon LB: Proportion of People Living Below 50 Percent Of Median Income: % data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Lebanon – Table LB.World Bank.WDI: Social: Poverty and Inequality. The percentage of people in the population who live in households whose per capita income or consumption is below half of the median income or consumption per capita. The median is measured at 2017 Purchasing Power Parity (PPP) using the Poverty and Inequality Platform (http://www.pip.worldbank.org). For some countries, medians are not reported due to grouped and/or confidential data. The reference year is the year in which the underlying household survey data was collected. In cases for which the data collection period bridged two calendar years, the first year in which data were collected is reported.;World Bank, Poverty and Inequality Platform. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are mostly from the Luxembourg Income Study database. For more information and methodology, please see http://pip.worldbank.org.;;The World Bank’s internationally comparable poverty monitoring database now draws on income or detailed consumption data from more than 2000 household surveys across 169 countries. See the Poverty and Inequality Platform (PIP) for details (www.pip.worldbank.org).
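The indicator quoted above is defined as the percentage of people living in households whose per capita income or consumption falls below half of the median. A minimal sketch of that computation, using entirely made-up income values for illustration:

```python
# Illustrative only: share of people below 50% of the median per-capita
# income, following the indicator definition quoted above. The income
# values are invented, not taken from any survey.
import statistics

incomes = [2.1, 3.4, 4.0, 4.8, 5.5, 6.0, 7.2, 9.9, 12.0, 30.0]  # per capita

threshold = statistics.median(incomes) / 2       # half of the median
share = sum(x < threshold for x in incomes) / len(incomes)
print(f"{share:.1%} live below 50% of the median")  # 10.0% for this sample
```

Note that real estimates are additionally measured at 2017 PPP and adjusted per the Poverty and Inequality Platform methodology; this sketch shows only the headcount-ratio arithmetic.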
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Vegetation and elevation data were collected by the Partnership for the Delaware Estuary. Topographic Change: Transects were spaced at 3 m intervals, creating a 3 m² survey grid covering ~0.12 hectares (1,170 m²). This area included the marsh platform and mudflat to ~5 m waterward of the installed breakwaters. Fixed Monitoring Plots Analyses: Mean +/- SEM for elevation, vegetation robustness, bearing capacity, and mussel density were calculated as the average of three replicate plots per position per sub-area per year. Bearing Capacity (Substrate Firmness): Measured as the penetrative capacity of a slide hammer after 5 blows. Vegetation Robustness in Fixed Monitoring Plots: Vegetation robustness integrates the horizontal and vertical obstruction of a parcel of vegetation into an overall unit-less index between 0 and 1 and is reported as a percentage. A score of 0% indicates no robustness, and a score of 100% indicates full robustness. By integrating the horizontal and vertical obstruction through the marsh canopy, a full picture of the three-dimensional structure of the vegetation within the parcel is obtained. The formula for calculation was:
Vegetation Robustness = (Horizontal Vegetation Density + Vertical Vegetation Density) / 2
• Horizontal Vegetation Density: Horizontal density was measured by counting the number of bars visible (out of 10; 10 cm width each) on a 1 m obstruction board from 3 m away within the same band of vegetation. The count was conducted at three heights: 0.25 m, 0.50 m, and 0.75 m. The height up to which data were used for calculations (number of bars available; max = 30, 10 at each height) was determined by the maximum vegetation height as measured by Blade Height below. Calculations were as follows:
Horizontal Vegetation Density = (number of bars available − number of bars visible) / number of bars available
• Blade Height: Twenty-five stems were measured, moving from the waterward corner towards the interior. Max blade height was used for the Vegetation Robustness calculation.
• Vertical Vegetation Density (Canopy Cover): Five measurements of ambient light were taken above each plot (corners and center) and at ground level (penetrative light) beneath the canopy using a light meter. Calculations were as follows:
Vertical Vegetation Density = 1 − (ratio of penetrative light to ambient light)
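The robustness index described above can be sketched as a short computation; the function and variable names here are illustrative, not from the dataset itself, and the formulas follow the definitions given in the description.

```python
# Sketch of the vegetation robustness index defined above. Names and example
# measurements are illustrative; formulas follow the dataset description.

def horizontal_density(bars_available: int, bars_visible: int) -> float:
    """Fraction of obstruction-board bars hidden by vegetation."""
    return (bars_available - bars_visible) / bars_available

def vertical_density(ambient_light: float, penetrative_light: float) -> float:
    """Canopy cover: 1 minus the fraction of light reaching ground level."""
    return 1.0 - penetrative_light / ambient_light

def vegetation_robustness(bars_available: int, bars_visible: int,
                          ambient: float, penetrative: float) -> float:
    """Unit-less index between 0 and 1, reported as a percentage."""
    h = horizontal_density(bars_available, bars_visible)
    v = vertical_density(ambient, penetrative)
    return (h + v) / 2

# Example: 30 bars available, 6 visible; 1000 lux ambient, 250 lux at ground.
score = vegetation_robustness(30, 6, 1000.0, 250.0)
print(f"{score:.1%}")  # horizontal 0.8, vertical 0.75 -> 77.5%
```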
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data was reported at 17.200 % in 2017. This records a decrease from the previous number of 18.900 % for 2013. Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data is updated yearly, averaging 18.050 % from Dec 2002 (Median) to 2017, with 4 observations. The data reached an all-time high of 18.900 % in 2013 and a record low of 15.400 % in 2002. Djibouti DJ: Proportion of People Living Below 50 Percent Of Median Income: % data remains active status in CEIC and is reported by World Bank. The data is categorized under Global Database’s Djibouti – Table DJ.World Bank.WDI: Social: Poverty and Inequality. The percentage of people in the population who live in households whose per capita income or consumption is below half of the median income or consumption per capita. The median is measured at 2017 Purchasing Power Parity (PPP) using the Poverty and Inequality Platform (http://www.pip.worldbank.org). For some countries, medians are not reported due to grouped and/or confidential data. The reference year is the year in which the underlying household survey data was collected. In cases for which the data collection period bridged two calendar years, the first year in which data were collected is reported.;World Bank, Poverty and Inequality Platform. Data are based on primary household survey data obtained from government statistical agencies and World Bank country departments. Data for high-income economies are mostly from the Luxembourg Income Study database. For more information and methodology, please see http://pip.worldbank.org.;;The World Bank’s internationally comparable poverty monitoring database now draws on income or detailed consumption data from more than 2000 household surveys across 169 countries. See the Poverty and Inequality Platform (PIP) for details (www.pip.worldbank.org).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These supplementary data files were created in 2021 as part of a research paper describing the results of a research study for which data was collected in 2020. It is made up of two documents: one PDF containing the calculation of the phantomless calibration factor, and one excel file in which the raw data is presented on multiple worksheets.
Global Population of the World (GPW) translates census population data to a latitude-longitude grid so that population data may be used in cross-disciplinary studies. There are three data files with this data set for the reference years 1990 and 1995. Over 127,000 administrative units and population counts were collected and integrated from various sources to create the gridded data. In brief, GPW was created using the following steps:
- Population data were estimated for the product reference years, 1990 and 1995, either by the data source or by interpolating or extrapolating the given estimates for other years.
- Additional population estimates were created by adjusting the source population data to match UN national population estimates for the reference years.
- Borders and coastlines of the spatial data were matched to the Digital Chart of the World where appropriate, and lakes from the Digital Chart of the World were added.
- The resulting data were then transformed into grids of UN-adjusted and unadjusted population counts for the reference years.
- Grids containing the area of administrative boundary data in each cell (net of lakes) were created and used with the count grids to produce population densities.
As with any global data set based on multiple data sources, the spatial and attribute precision of GPW is variable. The level of detail and accuracy, both in time and space, vary among the countries for which data were obtained.
GLAH06 is used in conjunction with GLAH05 to create the Level-2 altimetry products. Level-2 altimetry data provide surface elevations for ice sheets (GLAH12), sea ice (GLAH13), land (GLAH14), and oceans (GLAH15). Data also include the laser footprint geolocation and reflectance, as well as geodetic, instrument, and atmospheric corrections for range measurements. The Level-2 elevation products are regional products archived at 14 orbits per granule, starting and stopping at the same demarcation (±50° latitude) as GLAH05 and GLAH06. Each regional product is processed with algorithms specific to that surface type. Surface type masks define which data are written to each of the products. If any data within a given record fall within a specific mask, the entire record is written to the product. Masks can overlap: for example, non-land data in the sea ice region may be written to the sea ice and ocean products. This means that an algorithm may write the same data to more than one Level-2 product. In this case, different algorithms calculate the elevations in their respective products. The surface type masks are versioned and archived at NSIDC, so users can tell which data to expect in each product. Each data granule has an associated browse product.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary material to DESI's publication "New measurements of the Lyman-alpha forest continuum and effective optical depth with LyCAN and DESI Y1 data" to comply with the data management plan. Data for each figure is provided. See the README file for information on which data file corresponds to each figure.
https://dataintelo.com/privacy-and-policy
The global data entry service market size is poised to experience significant growth, with the market expected to rise from USD 2.5 billion in 2023 to USD 4.8 billion by 2032, achieving a Compound Annual Growth Rate (CAGR) of 7.5% over the forecast period. This growth can be attributed to several factors including the increasing adoption of digital technologies, the rising demand for data accuracy and integrity, and the need for businesses to manage vast amounts of data efficiently.
One of the key growth factors driving the data entry service market is the rapid digital transformation across various industries. As businesses continue to digitize their operations, the volume of data generated has increased exponentially. This data needs to be accurately entered, processed, and managed to derive meaningful insights. The demand for data entry services has surged as companies seek to outsource these non-core activities, enabling them to focus on their primary business operations. Additionally, the widespread adoption of cloud-based solutions and big data analytics has further fueled the demand for efficient data management services.
Another significant driver of market growth is the increasing need for data accuracy and integrity. Inaccurate or incomplete data can lead to poor decision-making, financial losses, and a decrease in operational efficiency. Organizations are increasingly recognizing the importance of maintaining high-quality data and are investing in data entry services to ensure that their databases are accurate, up-to-date, and reliable. This is particularly crucial for industries such as healthcare, BFSI, and retail, where precise data is essential for regulatory compliance, customer relationship management, and operational efficiency.
The cost-effectiveness of outsourcing data entry services is also contributing to market growth. By outsourcing these tasks to specialized service providers, organizations can save on labor costs, reduce operational expenses, and improve productivity. Service providers often have access to advanced tools and technologies, as well as skilled professionals who can perform data entry tasks more efficiently and accurately. This not only leads to cost savings but also allows businesses to reallocate resources to more strategic activities, driving overall growth.
From a regional perspective, the Asia Pacific region is expected to witness the highest growth in the data entry service market during the forecast period. This can be attributed to the region's strong IT infrastructure, the presence of numerous outsourcing service providers, and the growing adoption of digital technologies across various industries. North America and Europe are also significant markets, driven by the high demand for data management services in sectors such as healthcare, BFSI, and retail. The Middle East & Africa and Latin America are anticipated to experience steady growth, supported by increasing investments in digital infrastructure and the rising awareness of the benefits of data entry services.
The data entry service market can be segmented into various service types, including online data entry, offline data entry, data processing, data conversion, data cleansing, and others. Each of these service types plays a crucial role in ensuring the accuracy, integrity, and usability of data. Online data entry services involve entering data directly into an online system or database, which is essential for real-time data management and accessibility. This service type is particularly popular in industries such as e-commerce, where timely and accurate data entry is critical for inventory management and customer service.
Offline data entry services, on the other hand, involve entering data into offline systems or databases, which are later synchronized with online systems. This service type is often used in industries where internet connectivity may be unreliable or where data security is a primary concern. Offline data entry is also essential for processing historical data or data that is collected through physical forms and documents. The demand for offline data entry services is driven by the need for accurate and timely data entry in sectors such as manufacturing, government, and healthcare.
Data processing services involve the manipulation, transformation, and analysis of raw data to produce meaningful information. This includes tasks such as data validation, data sorting, data aggregation, and data analysis. Data processing is a critical component of the data entry service market.
Tower flux measurements of carbon dioxide, water vapor, heat, and meteorological variables were obtained at the Tapajos National Forest, km 83 site, Santarem, Para, Brazil. For the period June 29, 2000 through March 11, 2004, 30-minute averaged and calculated quantities are reported: fluxes of momentum, heat, water vapor, and carbon dioxide, and storage of carbon dioxide in the air column. Data are reported in three comma-separated files: (1) 30-minute averages, (2) daily (24-hour) averages, and (3) monthly (calendar) averages. The variables measured on the 67 m tower relate to meteorology, soil moisture, respiration, and fluxes of momentum, heat, water vapor, and carbon dioxide, and were used to calculate storage of carbon dioxide, Net Ecosystem Exchange, and Gross Primary Productivity. Most of the variables have not been gap filled. However, CO2 flux and storage have been filled to avoid biases in Net Ecosystem Exchange; a fill index flag is included to indicate which data points were filled. Variables derived from the filled variables (respiration, NEE, GPP) are therefore effectively filled as well. Net ecosystem exchange has been filtered for calm nighttime periods.
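Using the fill index flag described above might look like the following. This is a minimal sketch with invented column names (`NEE`, `fill_flag`) and values; the actual headers of the km 83 CSV files may differ, so consult the dataset documentation before filtering.

```python
import pandas as pd

# Toy stand-in for one of the comma-separated files; column names and
# values are hypothetical. Assume fill_flag == 1 marks a gap-filled point.
df = pd.DataFrame({
    "NEE":       [-4.2, -3.8, -5.1, -4.7],   # CO2 flux, example units
    "fill_flag": [0, 1, 0, 1],
})

# Keep only directly measured (unfilled) values, e.g. for analyses that
# must exclude model-filled points to avoid circularity.
measured = df[df["fill_flag"] == 0]
print(len(measured))  # 2 of the 4 example rows are unfilled
```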
https://qdr.syr.edu/policies/qdr-standard-access-conditions
This is an Active Citation data project. Active Citation is a precursor approach to Annotation for Transparent Inquiry (ATI). It has now been converted to the ATI format. The annotated article can be viewed on the publisher's website. Project Summary This project develops and tests a new theory to explain left-right polarization in newer democracies, emphasizing the quality of governance and how it shapes incentives for radical parties to moderate. High-quality governance increases the relative salience of left-right programmatic appeals and makes coalitions with status quo parties attractive, creating centripetal incentives for radical parties and empowering moderate factions within those parties. Low-quality governance decreases the relative salience of left-right programmatic appeals and makes coalitions with status quo parties potentially poisonous, creating centrifugal incentives for radical parties and empowering extremist factions. The project employs a nested research design. Case studies of Venezuela and Brazil illustrate the mechanisms of the theory and evaluate its key propositions through process-tracing. These cases were selected because they capture significant variation on the dependent variable, because they are seen as particularly critical for understanding subjects such as the rise of the left in Latin America and the dynamics of programmatic polarization, and because they were among the first countries in the region where the left came to power. In the large-n portion of the research design, statistical analysis is utilized to assess the relationship between governance levels and left-right programmatic polarization across Latin America between 1994 and 2010. This relationship is substantively strong and robust to a variety of different modeling choices.
Data Abstract The author primarily draws upon two original databases of qualitative materials collected for the project: a collection of roughly 500 “left party-related” sources (party documents, editorials and memoirs of left party leaders, etc.) and a compilation of roughly 900 news sources related to left parties and their factional conflicts. The data were collected between 2008 and 2013 and cover the period from 1985 to 2010. The “left party-related” sources were gathered through archives and libraries. A substantial archive of documents related to the Partido dos Trabalhadores is housed at the Fundação Perseu Abramo in São Paulo, Brazil. The University of Notre Dame acquired a microfilm copy of this entire archive (93 reels). The author examined the whole archive at Notre Dame, searching for documents and other information that bore directly on the concerns of the project. Because no central archive existed in Venezuela for the left parties involved, the author collated party-related documents and other information from diverse sources, mainly relatively rare books (usually published in Venezuela by small presses) that collected these documents. The database of news articles was generated in the following manner. For each country a newspaper or weekly magazine was selected that was known to provide in-depth political coverage from a relatively centrist perspective: El Universal in Venezuela and Folha de Sao Paulo in Brazil. The author then defined the time period for each case during which major factional conflicts within left parties occurred and were resolved: 1993-1998 in Venezuela and 1994-2002 in Brazil. The next step was to collect all stories from each news source in the defined time period that related to left parties, with an emphasis on their factional conflicts and their resolution. The process of doing so differed somewhat according to the medium in which the news source was available.
Venezuela’s El Universal was available only on microfilm, requiring the author and a research assistant to review each daily issue, capturing stories to PDF according to defined criteria. The archives of Brazil’s Folha are available online, allowing the database to be searched by sets of keywords, the group of stories produced by these searches to be downloaded, and the individual stories to be included in the database according to defined criteria. The interviews the author conducted as part of the broader project are not being shared at this time, but might be included in a planned second deposit to QDR of a larger stand-alone data collection. Files Description Each case study for which data are being shared contains a two-stage causal argument: (1) governance levels decisively affected factional dynamics within the major left party of that country; (2) the resolution of factional conflict bore strongly on the level of polarization in the emerging party system. For each case, each stage of this argument is supported by several pieces of diagnostic evidence original to the project – both party-related and news sources – that were either scanned (if available only in hard copy) or printed to PDF (if available on microfilm or the Web). The...