59 datasets found
  1. Geospatial Data Standard - Administrative Boundaries

    • vgin.vdem.virginia.gov
    Updated Mar 29, 2016
    Cite
    Virginia Geographic Information Network (2016). Geospatial Data Standard - Administrative Boundaries [Dataset]. https://vgin.vdem.virginia.gov/documents/05d6a7b182bc491cb3ead5f43137dbc1
    Dataset updated
    Mar 29, 2016
    Dataset authored and provided by
    Virginia Geographic Information Network
    Description

    The purpose of the Virginia Administrative Boundary Geospatial Data Standard is to implement, as a Commonwealth ITRM Standard, the data file naming conventions, geometry, map projection system, common set of attributes, dataset type and specifications, and level of precision for the Virginia Administrative Boundaries Dataset, which will be the data source of record at the state level for administrative boundary spatial features within the Commonwealth of Virginia.

  2. ICT use large companies; degree of standardisation data definitions

    • data.europa.eu
    atom feed, json
    + more versions
    Cite
    ICT use large companies; degree of standardisation data definitions [Dataset]. https://data.europa.eu/data/datasets/1744-ict-gebruik-grote-bedrijven-mate-van-standaardisatie-gegevensdefinities
    Available download formats: json, atom feed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This table provides information on the degree of standardisation of data definitions for large companies, by industry. This concerns the extent to which agreements exist on the data format (the form and structure in which the data is stored), on the meaning of data within the company, and on these matters with third parties outside the company. In this table, large companies are defined as companies with 250 or more persons employed. The data are broken down by industry.

    Data available for the period 2006-2008.

    Status of the figures: The figures are final.

    Changes as of 6 April 2011: this table has been discontinued.

    When are new figures coming? The subject in this table was requested only in 2006, 2007 and 2008.
    The data in this table are no longer updated.

  3. Lidar - California - Leosphere Windcube 866 (120), Humboldt - Processed Data...

    • catalog.data.gov
    • data.openei.org
    Updated Jul 26, 2022
    + more versions
    Cite
    Wind Energy Technologies Office (WETO) (2022). Lidar - California - Leosphere Windcube 866 (120), Humboldt - Processed Data [Dataset]. https://catalog.data.gov/dataset/lidar-california-leosphere-windcube-866-120-humboldt-raw-data-635d9
    Dataset updated
    Jul 26, 2022
    Dataset provided by
    Wind Energy Technologies Office (WETO)
    Area covered
    California, Humboldt County
    Description

    Overview: The purpose of this dataset is to provide preliminary filtered, averaged lidar data and to standardize the format of the various datastreams from the buoy into NetCDF. Data quality: Standard filtering thresholds were applied to the averaged data, and several format issues in the raw data were streamlined to produce standardized NetCDF files. Uncertainty: The uncertainty of the lidar data has not been analyzed, but the data are not expected to deviate from the instrument's technical specifications.
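    The filtering step described above can be illustrated with a small sketch: out-of-range averages are replaced with NaN before writing to NetCDF. The threshold bounds, variable names, and toy values below are assumptions for illustration only; the dataset's actual QC thresholds are documented with the data.

    ```python
    # Illustrative sketch of threshold filtering applied to averaged lidar
    # wind speeds. WS_MIN/WS_MAX are assumed bounds, not the dataset's
    # actual QC configuration.
    import math

    # Assumed plausible physical bounds for averaged horizontal wind speed (m/s).
    WS_MIN, WS_MAX = 0.0, 50.0

    def qc_filter(values, lo=WS_MIN, hi=WS_MAX):
        """Replace out-of-range averages with NaN, the usual NetCDF fill style."""
        return [v if lo <= v <= hi else math.nan for v in values]

    avg_ws = [7.2, -1.0, 12.5, 80.3]   # toy 10-minute averages
    print(qc_filter(avg_ws))           # [7.2, nan, 12.5, nan]
    ```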

  4. Job Postings Dataset for Labour Market Research and Insights

    • datarade.ai
    Updated Sep 20, 2023
    Cite
    Oxylabs (2023). Job Postings Dataset for Labour Market Research and Insights [Dataset]. https://datarade.ai/data-products/job-postings-dataset-for-labour-market-research-and-insights-oxylabs
    Available download formats: .json, .xml, .csv, .xls
    Dataset updated
    Sep 20, 2023
    Dataset authored and provided by
    Oxylabs
    Area covered
    Luxembourg, Zambia, Togo, British Indian Ocean Territory, Kyrgyzstan, Anguilla, Jamaica, Tajikistan, Sierra Leone, Switzerland
    Description

    Introducing Job Posting Datasets: Uncover labor market insights!

    Elevate your recruitment strategies, forecast future labor industry trends, and unearth investment opportunities with Job Posting Datasets.

    Job Posting Datasets Source:

    1. Indeed: Access datasets from Indeed, a leading employment website known for its comprehensive job listings.

    2. Glassdoor: Receive ready-to-use employee reviews, salary ranges, and job openings from Glassdoor.

    3. StackShare: Access StackShare datasets to make data-driven technology decisions.

    Job Posting Datasets provide meticulously acquired and parsed data, freeing you to focus on analysis. You'll receive clean, structured, ready-to-use job posting data, including job titles, company names, seniority levels, industries, locations, salaries, and employment types.

    Choose your preferred dataset delivery options for convenience:

    • Receive datasets in various formats, including CSV, JSON, and more.
    • Opt for storage solutions such as AWS S3, Google Cloud Storage, and more.
    • Customize data delivery frequency, whether one-time or per your agreed schedule.

    Why Choose Oxylabs Job Posting Datasets:

    1. Fresh and accurate data: Access clean and structured job posting datasets collected by our seasoned web scraping professionals, enabling you to dive into analysis.

    2. Time and resource savings: Focus on data analysis and your core business objectives while we efficiently handle the data extraction process cost-effectively.

    3. Customized solutions: Tailor our approach to your business needs, ensuring your goals are met.

    4. Legal compliance: Partner with a trusted leader in ethical data collection. Oxylabs is a founding member of the Ethical Web Data Collection Initiative, aligning with GDPR and CCPA best practices.

    Pricing Options:

    Standard Datasets: Choose from various ready-to-use datasets with standardized data schemas, priced from $1,000/month.

    Custom Datasets: Tailor datasets from any public web domain to your unique business needs. Contact our sales team for custom pricing.

    Experience a seamless journey with Oxylabs:

    • Understanding your data needs: We work closely to understand your business nature and daily operations, defining your unique data requirements.
    • Developing a customized solution: Our experts create a custom framework to extract public data using our in-house web scraping infrastructure.
    • Delivering data sample: We provide a sample for your feedback on data quality and the entire delivery process.
    • Continuous data delivery: We continuously collect public data and deliver custom datasets per the agreed frequency.

    Effortlessly access fresh job posting data with Oxylabs Job Posting Datasets.

  5. Developer Community and Code Datasets

    • datarade.ai
    Cite
    Oxylabs, Developer Community and Code Datasets [Dataset]. https://datarade.ai/data-products/developer-community-and-code-datasets-oxylabs
    Available download formats: .bin, .json, .xml, .csv, .xls, .sql, .txt
    Dataset authored and provided by
    Oxylabs
    Area covered
    El Salvador, Tuvalu, Philippines, Bahamas, Guyana, Saint Pierre and Miquelon, South Sudan, Marshall Islands, United Kingdom, Djibouti
    Description

    Unlock the power of ready-to-use data sourced from developer communities and repositories with Developer Community and Code Datasets.

    Data Sources:

    1. GitHub: Access comprehensive data about GitHub repositories, developer profiles, contributions, issues, social interactions, and more.

    2. StackShare: Receive information about companies, their technology stacks, reviews, tools, services, trends, and more.

    3. DockerHub: Dive into data from container images, repositories, developer profiles, contributions, usage statistics, and more.

    Developer Community and Code Datasets are a treasure trove of public data points gathered from tech communities and code repositories across the web.

    With our datasets, you'll receive:

    • Usernames;
    • Companies;
    • Locations;
    • Job Titles;
    • Follower Counts;
    • Contact Details;
    • Employability Statuses;
    • And More.

    Choose from various output formats, storage options, and delivery frequencies:

    • Get datasets in CSV, JSON, or other preferred formats.
    • Opt for data delivery via SFTP or directly to your cloud storage, such as AWS S3.
    • Receive datasets either once or as per your agreed-upon schedule.

    Why choose our Datasets?

    1. Fresh and accurate data: Access complete, clean, and structured data from scraping professionals, ensuring the highest quality.

    2. Time and resource savings: Let us handle data extraction and processing cost-effectively, freeing your resources for strategic tasks.

    3. Customized solutions: Share your unique data needs, and we'll tailor our data harvesting approach to fit your requirements perfectly.

    4. Legal compliance: Partner with a trusted leader in ethical data collection. Oxylabs is trusted by Fortune 500 companies and adheres to GDPR and CCPA standards.

    Pricing Options:

    Standard Datasets: Choose from various ready-to-use datasets with standardized data schemas, priced from $1,000/month.

    Custom Datasets: Tailor datasets from any public web domain to your unique business needs. Contact our sales team for custom pricing.

    Experience a seamless journey with Oxylabs:

    • Understanding your data needs: We work closely to understand your business nature and daily operations, defining your unique data requirements.
    • Developing a customized solution: Our experts create a custom framework to extract public data using our in-house web scraping infrastructure.
    • Delivering data sample: We provide a sample for your feedback on data quality and the entire delivery process.
    • Continuous data delivery: We continuously collect public data and deliver custom datasets per the agreed frequency.

    Empower your data-driven decisions with Oxylabs Developer Community and Code Datasets!

  6. Data from: Estimating bird detection distances in sound recordings for...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    Updated Jun 1, 2022
    Cite
    Kevin Darras; Brett Furnas; Irfan Fitriawan; Yeni Mulyani; Teja Tscharntke (2022). Data from: Estimating bird detection distances in sound recordings for standardising detection ranges and distance sampling [Dataset]. http://doi.org/10.5061/dryad.h0qg353
    Dataset updated
    Jun 1, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kevin Darras; Brett Furnas; Irfan Fitriawan; Yeni Mulyani; Teja Tscharntke
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    1) Autonomous sound recorders are increasingly used to survey birds and other wildlife taxa. Species richness estimates from sound recordings are usually compared with estimates obtained from established methods like point counts, but so far such comparisons have been biased: detection ranges usually differ between the survey methods, and bird detection distance data are needed to standardize data from sound recordings.

    2) We devised and tested a method for estimating bird detection distances from sound recordings, using a reference recording of test sounds at different frequencies emitted from known distances. We used our method to estimate bird detection distances in sound recordings from tropical forest sites where point counts were also used. We derived bird abundance and richness measures and compared them between point counts and sound recordings using unlimited-radius and fixed-radius counts, as well as distance-sampling modelling.

    3) First, we show that it is possible to accurately estimate bird detection distances in sound recordings. We then demonstrate that these data can be used to standardize the detection ranges between point counts and sound recordings with a fixed-radius approach, leading to higher abundance and richness estimates for sound recordings. Our distance-sampling approach also revealed that sound recorders sampled significantly higher bird densities than human point counts.

    4) We show for the first time that it is possible to standardize detection ranges in sound recordings and that distance sampling can be used successfully as well. We found that birds were flushed by human observers, which possibly leads to lower density estimates in point counts, although sound recorders could also have sampled more birds because of their earlier deployment times. Sound recordings are more amenable to distance-sampling modelling than point counts, as they do not exhibit an observer-induced avoidance effect, and they can easily collect more replicates for obtaining more accurate bird density estimates. Quantifying bird detection distances has so far been one important shortcoming hindering the adoption of modern autonomous sound recording methods for ecological surveys.

  7. Exploring soil sample variability through Principal Component Analysis (PCA)...

    • metadatacatalogue.lifewatch.eu
    Updated Jul 2, 2024
    Cite
    (2024). Exploring soil sample variability through Principal Component Analysis (PCA) using database-stored data [Dataset]. https://metadatacatalogue.lifewatch.eu/srv/search?keyword=Standardization
    Dataset updated
    Jul 2, 2024
    Description

    This workflow focuses on analyzing diverse soil datasets using PCA to understand their physicochemical properties. It connects to a MongoDB database to retrieve soil samples based on user-defined filters. Key objectives include variable selection, data quality improvement, standardization, and conducting PCA for data variance and pattern analysis. The workflow generates graphical representations, such as covariance and correlation matrices, scree plots, and scatter plots, to enhance data interpretability. This facilitates the identification of significant variables, data structure exploration, and optimal component determination for effective soil analysis.

    Background: Understanding the intricate relationships and patterns within soil samples is crucial for various environmental and agricultural applications. Principal Component Analysis (PCA) serves as a powerful tool in unraveling the complexity of multivariate soil datasets. Soil datasets often consist of numerous variables representing diverse physicochemical properties, making PCA an invaluable method for:

    • Dimensionality reduction: simplifying the analysis without compromising data integrity by reducing the dimensionality of large soil datasets.
    • Identification of dominant patterns: revealing dominant patterns or trends within the data, providing insight into the key factors contributing to overall variability.
    • Exploration of variable interactions: enabling the exploration of complex interactions between different soil attributes, enhancing understanding of their relationships.
    • Interpretability of data variance: clarifying how much variance is explained by each principal component, aiding in discerning the significance of different components and variables.
    • Visualization of data structure: facilitating intuitive comprehension of data structure through plots such as scatter plots of principal components, helping identify clusters, trends, and outliers.
    • Decision support for subsequent analyses: providing a foundation for subsequent analyses, whether in identifying influential variables, understanding data patterns, or selecting components for further modeling.

    Introduction: The motivation behind this workflow is the need to conduct a thorough analysis of a diverse soil dataset characterized by an array of physicochemical variables. Comprising multiple rows, each representing a distinct soil sample, the dataset encompasses variables such as percentage of coarse sands, percentage of organic matter, hydrophobicity, and others. The intricacies of this dataset demand a strategic approach to preprocessing, analysis, and visualization. The workflow connects to MongoDB, an agile and scalable NoSQL database, to retrieve soil samples based on user-defined filters, which can range from the natural site where the samples were collected to the specific date of collection. It also lets users select the relevant variables via user-defined parameters, yielding a focused, tailored dataset essential for meaningful analysis. Acknowledging the inherent challenges of missing data, the workflow offers options for data quality improvement, including optional interpolation of missing values or the removal of rows containing them. Standardizing the dataset and specifying the target variable establish a robust foundation for the subsequent statistical analyses. PCA then enables users to explore inherent patterns and structures within the data, with the flexibility to specify either the number of components or the desired explained variance. The workflow concludes with practical graphical representations, including covariance and correlation matrices, a scree plot, and a scatter plot, offering valuable visual insight into the soil dataset.

    Aims: The primary objectives of this workflow are:

    • Connect to MongoDB and retrieve data: dynamically connect to a MongoDB database, allowing users to download soil samples based on user-defined filters.
    • Variable selection: extract relevant variables based on user-defined parameters, facilitating a focused and tailored dataset.
    • Data quality improvement: provide options for interpolation or removal of missing values to ensure dataset integrity for downstream analyses.
    • Standardization and target specification: standardize the dataset values and designate the target variable, laying the groundwork for subsequent statistical analyses.
    • PCA: conduct PCA with flexibility, allowing users to specify the number of components or the desired variance for a comprehensive understanding of data variance and patterns.
    • Graphical representations: generate visual outputs, including covariance and correlation matrices, a scree plot, and a scatter plot, enhancing the interpretability of the soil dataset.

    Scientific questions: This workflow addresses several questions related to soil analysis:

    • Data access: streamline the retrieval of systematically stored soil sample data from the MongoDB database, aiding researchers in accessing previously stored, organized data.
    • Variable importance: identify variables contributing significantly to the principal components through the covariance matrix and PCA.
    • Data structure: explore correlations between variables and gain insight from the correlation matrix.
    • Optimal component number: determine the optimal number of principal components using the scree plot for effective representation of data variance.
    • Target-related patterns: analyze how the selected principal components correlate with the target variable in the scatter plot, revealing patterns based on target variable values.
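    The numeric core of this workflow (standardization, covariance matrix, eigendecomposition, scree values, and component scores) can be sketched in a few lines. This is a minimal illustration on toy data, not the workflow's actual implementation; the MongoDB retrieval, filtering, and interpolation steps are omitted, and all names and values are illustrative.

    ```python
    # Minimal PCA sketch: standardize columns, then project onto the top
    # principal components of the covariance matrix.
    import numpy as np

    def pca(samples: np.ndarray, n_components: int):
        """Standardize per variable, then compute PCA scores and scree values."""
        # Standardization: zero mean, unit variance per variable (column).
        z = (samples - samples.mean(axis=0)) / samples.std(axis=0, ddof=1)
        # Covariance matrix of the standardized data (variables x variables).
        cov = np.cov(z, rowvar=False)
        # Eigendecomposition; sort components by explained variance, descending.
        eigvals, eigvecs = np.linalg.eigh(cov)
        order = np.argsort(eigvals)[::-1]
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        explained = eigvals / eigvals.sum()     # scree-plot values
        scores = z @ eigvecs[:, :n_components]  # scatter-plot coordinates
        return scores, explained

    # Toy data: 6 samples x 3 physicochemical variables.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 3))
    scores, explained = pca(X, n_components=2)
    print(scores.shape)  # (6, 2)
    ```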

  8. Data from: The importance of standardization for biodiversity comparisons: a...

    • data.niaid.nih.gov
    • datadryad.org
    • +1more
    zip
    Updated Mar 23, 2018
    + more versions
    Cite
    Emma Ransome; Jonathan B. Geller; Molly Timmers; Matthieu Leray; Angka Mahardini; Andrianus Sembiring; Allen G. Collins; Christopher P. Meyer (2018). The importance of standardization for biodiversity comparisons: a case study using Autonomous Reef Monitoring Structures (ARMS) and metabarcoding to measure cryptic diversity on Mo'orea coral reefs, French Polynesia [Dataset]. http://doi.org/10.5061/dryad.d47fm
    Available download formats: zip
    Dataset updated
    Mar 23, 2018
    Dataset provided by
    Smithsonian Tropical Research Institute
    Udayana University
    University of Hawaiʻi at Mānoa
    Moss Landing Marine Labs, Moss Landing, California, United States of America
    Indonesian Biodiversity Research Center, Denpasar, Bali, Indonesia
    National Museum
    Authors
    Emma Ransome; Jonathan B. Geller; Molly Timmers; Matthieu Leray; Angka Mahardini; Andrianus Sembiring; Allen G. Collins; Christopher P. Meyer
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    French Polynesia, Mo'orea
    Description

    The advancement of metabarcoding techniques, declining costs of high-throughput sequencing and development of systematic sampling devices, such as autonomous reef monitoring structures (ARMS), have provided the means to gather a vast amount of diversity data from cryptic marine communities. However, such increased capability could also lead to analytical challenges if the methods used to examine these communities across local and global scales are not standardized. Here we compare and assess the underlying biases of four ARMS field processing methods, preservation media, and current bioinformatic pipelines in evaluating diversity from cytochrome c oxidase I metabarcoding data. Illustrating the ability of ARMS-based metabarcoding to capture a wide spectrum of biodiversity, 3,372 OTUs and twenty-eight phyla, including 17 of 33 marine metazoan phyla, were detected from 3 ARMS (2.607 m² area) collected on coral reefs in Mo'orea, French Polynesia. Significant differences were found between processing and preservation methods, demonstrating the need to standardize methods for biodiversity comparisons. We recommend the use of a standardized protocol (NOAA method) combined with DMSO preservation of tissues for sessile macroorganisms because it gave a more accurate representation of the underlying communities, is cost effective and removes chemical restrictions associated with sample transportation. We found that sequences identified at ≥97% similarity increased more than 7-fold (5.1% to 38.6%) using a geographically local barcode inventory, highlighting the importance of local species inventories. Phylogenetic approaches that assign higher taxonomic ranks accrued phylum identification errors (9.7%) due to sparse taxonomic coverage of the understudied cryptic coral reef community in public databases. However, a ≥85% sequence identity cut-off provided more accurate results (0.7% errors) and enabled phylum level identifications of 86.3% of the sequence reads.
With over 1600 ARMS deployed, standardizing methods and improving databases are imperative to provide unprecedented global baseline assessments of understudied cryptic marine species in a rapidly changing world.

  9. Data_Sheet_1_Best Practice Data Standards for Discrete Chemical...

    • frontiersin.figshare.com
    • figshare.com
    txt
    Updated Jun 4, 2023
    + more versions
    Cite
    Li-Qing Jiang; Denis Pierrot; Rik Wanninkhof; Richard A. Feely; Bronte Tilbrook; Simone Alin; Leticia Barbero; Robert H. Byrne; Brendan R. Carter; Andrew G. Dickson; Jean-Pierre Gattuso; Dana Greeley; Mario Hoppema; Matthew P. Humphreys; Johannes Karstensen; Nico Lange; Siv K. Lauvset; Ernie R. Lewis; Are Olsen; Fiz F. Pérez; Christopher Sabine; Jonathan D. Sharp; Toste Tanhua; Thomas W. Trull; Anton Velo; Andrew J. Allegra; Paul Barker; Eugene Burger; Wei-Jun Cai; Chen-Tung A. Chen; Jessica Cross; Hernan Garcia; Jose Martin Hernandez-Ayon; Xinping Hu; Alex Kozyr; Chris Langdon; Kitack Lee; Joe Salisbury; Zhaohui Aleck Wang; Liang Xue (2023). Data_Sheet_1_Best Practice Data Standards for Discrete Chemical Oceanographic Observations.csv [Dataset]. http://doi.org/10.3389/fmars.2021.705638.s001
    Available download formats: txt
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Li-Qing Jiang; Denis Pierrot; Rik Wanninkhof; Richard A. Feely; Bronte Tilbrook; Simone Alin; Leticia Barbero; Robert H. Byrne; Brendan R. Carter; Andrew G. Dickson; Jean-Pierre Gattuso; Dana Greeley; Mario Hoppema; Matthew P. Humphreys; Johannes Karstensen; Nico Lange; Siv K. Lauvset; Ernie R. Lewis; Are Olsen; Fiz F. Pérez; Christopher Sabine; Jonathan D. Sharp; Toste Tanhua; Thomas W. Trull; Anton Velo; Andrew J. Allegra; Paul Barker; Eugene Burger; Wei-Jun Cai; Chen-Tung A. Chen; Jessica Cross; Hernan Garcia; Jose Martin Hernandez-Ayon; Xinping Hu; Alex Kozyr; Chris Langdon; Kitack Lee; Joe Salisbury; Zhaohui Aleck Wang; Liang Xue
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Effective data management plays a key role in oceanographic research as cruise-based data, collected from different laboratories and expeditions, are commonly compiled to investigate regional to global oceanographic processes. Here we describe new and updated best practice data standards for discrete chemical oceanographic observations, specifically those dealing with column header abbreviations, quality control flags, missing value indicators, and standardized calculation of certain properties. These data standards have been developed with the goals of improving the current practices of the scientific community and promoting their international usage. These guidelines are intended to standardize data files for data sharing and submission into permanent archives. They will facilitate future quality control and synthesis efforts and lead to better data interpretation. In turn, this will promote research in ocean biogeochemistry, such as studies of carbon cycling and ocean acidification, on regional to global scales. These best practice standards are not mandatory. Agencies, institutes, universities, or research vessels can continue using different data standards if it is important for them to maintain historical consistency. However, it is hoped that they will be adopted as widely as possible to facilitate consistency and to achieve the goals stated above.
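    As a concrete illustration of the kinds of conventions such standards cover (column header abbreviations, per-variable quality-control flags, and an explicit missing-value indicator), here is a hedged sketch using Python's standard csv module. The header names, WOCE-style flag values, and the -999 sentinel are common chemical-oceanography conventions used here for illustration only; the paper itself is the authoritative source for the recommended choices.

    ```python
    # Sketch of a standards-style data file: abbreviated column headers,
    # a QC flag column per variable, and an explicit missing-value sentinel.
    # All specifics below are illustrative assumptions.
    import csv
    import io

    MISSING = -999  # assumed missing-value indicator

    rows = [
        # CTDPRS = pressure (dbar), TCARBN = total dissolved inorganic carbon
        # (umol/kg); TCARBN_FLAG_W = WOCE-style flag (2 = acceptable, 9 = missing).
        {"CTDPRS": 5.0,   "TCARBN": 2025.1,  "TCARBN_FLAG_W": 2},
        {"CTDPRS": 100.0, "TCARBN": MISSING, "TCARBN_FLAG_W": 9},
    ]

    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["CTDPRS", "TCARBN", "TCARBN_FLAG_W"])
    writer.writeheader()
    writer.writerows(rows)

    # Reading back: keep only rows whose flag marks an acceptable measurement.
    buf.seek(0)
    good = [r for r in csv.DictReader(buf) if int(r["TCARBN_FLAG_W"]) == 2]
    print(len(good))  # 1
    ```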

  10. NPS - Points of Interest (POIs) - Geographic Coordinate System

    • public-nps.opendata.arcgis.com
    • mapdirect-fdep.opendata.arcgis.com
    • +1more
    Updated Mar 29, 2018
    + more versions
    Cite
    National Park Service (2018). NPS - Points of Interest (POIs) - Geographic Coordinate System [Dataset]. https://public-nps.opendata.arcgis.com/items/9e828162f2ee47ab820cfdee94fbbf7e
    Dataset updated
    Mar 29, 2018
    Dataset authored and provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    North Pacific Ocean, Pacific Ocean
    Description

    The purpose of creating and utilizing a spatial data standard is to consolidate spatial data and integrate the existing feature attribute information into a national database for reporting, planning, analysis and sharing purposes.

    The primary benefit of using a spatial data standard remains the organization and documentation of data to allow users to share spatial data between parks, regions, programs, other federal agencies, and the public, at the national level.

    Ultimately, the point of interest container will go through a formal data standard development process. This will lead to wider use and more comprehensive access to all of our available point of interest data and provide a more integrated approach to point of interest data management across the NPS and at all levels: park, region, program, and national. Until then, the point of interest spatial data container will serve as a pseudo-standard and enable data stewards to begin standardizing point of interest data.

  11. Field portable X-ray fluorescence data on standard reference materials...

    • data.usgs.gov
    • catalog.data.gov
    + more versions
    Cite
    Krishangi Groover; John Izbicki, Field portable X-ray fluorescence data on standard reference materials associated with data in San Bernardino County, California [Dataset]. http://doi.org/10.5066/P9CU0EH3
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Krishangi Groover; John Izbicki
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Feb 1, 2015 - May 23, 2018
    Area covered
    San Bernardino County, California
    Description

    These data were collected using a field portable (handheld) X-ray fluorescence (pXRF) instrument equipped with a 4-watt Ta/Au X-ray tube on two National Institute of Standards and Technology (NIST) certified standard reference materials, 2710a and 2711a; a U.S. Geological Survey (USGS) certified standard reference material, BHVO-2; and a silicon dioxide blank. These quality assurance data were collected as part of detailed pXRF studies in Hinkley and Water Valleys, 140 kilometers (km) northeast of Los Angeles, California, and as part of a regional geochemical survey in the western Mojave Desert, between 60 and 210 km northeast of Los Angeles. Measurements on the NIST and USGS standard reference materials indicated the pXRF was sufficiently accurate for the purposes of these studies for chromium and selected trace elements. Results showed consistently clean (few to no measurable elements) measurements on the silicon dioxide blank. Standard ...

  12. Investigation of metadata standard use by geoscience data repositories

    • data.ucar.edu
    • gdex.ucar.edu
    ascii
    Updated Jan 4, 2023
    Cite
    Liapich, Yauheniya; Mayernik, Matthew (2023). Investigation of metadata standard use by geoscience data repositories [Dataset]. http://doi.org/10.5065/z9ch-wk24
    Available download formats: ascii
    Dataset updated
    Jan 4, 2023
    Dataset provided by
    University Corporation for Atmospheric Research
    Authors
    Liapich, Yauheniya; Mayernik, Matthew
    Description

    This dataset supports a paper being written about metadata standard use by geoscience data repositories. The study is being done to better understand which metadata standards and keyword vocabularies are prominent within the geoscience data repository landscape. The findings should be useful for NCAR's evaluation of metadata standards within our own systems, as well as for external data repository staff. The guiding questions of the study are as follows:

    1. What metadata standards are geoscience data repositories using?
    2. What keyword / subject term vocabularies are they using?
    3. What interoperability challenges are present in the use of metadata and keyword vocabulary standards within the geoscience repository community?

  13. Science Standard Error (SE) PR USVI (Image Service)

    • agdatacommons.nal.usda.gov
    • catalog.data.gov
    • +1more
    bin
    Updated Oct 31, 2024
    U.S. Forest Service (2024). Science Standard Error (SE) PR USVI (Image Service) [Dataset]. https://agdatacommons.nal.usda.gov/articles/dataset/Science_Standard_Error_SE_PR_USVI_Image_Service_/25972912
    Explore at:
    bin (available download formats)
    Dataset updated
    Oct 31, 2024
    Dataset provided by
    U.S. Department of Agriculture Forest Service (http://fs.fed.us/)
    Authors
    U.S. Forest Service
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    U.S. Virgin Islands
    Description

    The USDA Forest Service (USFS) builds two versions of percent tree canopy cover (TCC) data to serve the needs of multiple user communities. These datasets encompass the conterminous United States (CONUS), Coastal Alaska, Hawaii, and Puerto Rico and the U.S. Virgin Islands (PRUSVI). The two versions within the v2021-4 TCC product suite are:

    • the raw model outputs, referred to as the annual Science data; and
    • a modified version built for the National Land Cover Database, referred to as the NLCD data.

    They are available at the following locations:

    • Science: https://data.fs.usda.gov/geodata/rastergateway/treecanopycover and https://apps.fs.usda.gov/fsgisx01/rest/services/RDW_LandscapeAndWildlife
    • NLCD: https://www.mrlc.gov/data and https://apps.fs.usda.gov/fsgisx01/rest/services/RDW_LandscapeAndWildlife

    The Science data are the initial annual model outputs and consist of two images: percent tree canopy cover (TCC) and standard error. These data are best suited for users who will carry out their own detailed statistical and uncertainty analyses and who place lower priority on the visual appearance of the dataset for cartographic purposes. Datasets for the nominal years 2008 through 2021 are available. The Science data were produced using a random forests regression algorithm. For the standard error data, the initial standard error estimates, which ranged from 0 to approximately 45, were multiplied by 100 to maintain data precision in unsigned 16-bit space (e.g., 45 = 4500); standard error pixel values therefore range from 0 to approximately 4500. The value 65534 represents the non-processing area mask, where no cloud- or cloud shadow-free data are available to produce an output, and 65535 represents the background value. The Science data are accessible to multiple user communities through multiple channels and platforms. For information on the NLCD TCC data and processing steps, see the NLCD metadata; information on the Science data and processing steps is included here. This record was taken from the USDA Enterprise Data Inventory that feeds into the https://data.gov catalog. Data for this record includes the following resources: ISO-19139 metadata, ArcGIS Hub Dataset, and ArcGIS GeoService. For complete information, please visit https://data.gov.
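    The uint16 standard error encoding described above (values scaled by 100, with 65534 and 65535 reserved) can be decoded with a short sketch; the array values here are illustrative, not taken from the actual raster:

    ```python
    import numpy as np

    # Hypothetical raw uint16 pixels from the standard error image.
    raw = np.array([0, 4500, 1234, 65534, 65535], dtype=np.uint16)

    NODATA_MASK = 65534  # non-processing area: no cloud/shadow-free data
    BACKGROUND = 65535   # background value

    # Valid pixels are everything below the two reserved codes.
    valid = raw < NODATA_MASK

    # Undo the x100 scaling to recover standard error in percent canopy cover;
    # reserved codes become NaN so they drop out of downstream statistics.
    se = np.where(valid, raw / 100.0, np.nan)
    ```

    Mapping the two reserved codes to NaN keeps masked and background pixels out of any summary statistics computed on the decoded values.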

  14. NPS - Points of Interest (POIs) - Web Mercator

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • mapdirect-fdep.opendata.arcgis.com
    • +3more
    Updated Mar 29, 2018
    National Park Service (2018). NPS - Points of Interest (POIs) - Web Mercator [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/datasets/a40e2faa953b4c5cb7fe10004dc3008e
    Explore at:
    Dataset updated
    Mar 29, 2018
    Dataset authored and provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    North Pacific Ocean, Pacific Ocean
    Description

    The purpose of creating and utilizing a spatial data standard is to consolidate spatial data and integrate the existing feature attribute information into a national database for reporting, planning, analysis and sharing purposes.

    The primary benefit of using a spatial data standard remains the organization and documentation of data to allow users to share spatial data between parks, regions, programs, other federal agencies, and the public, at the national level.

    Ultimately, the point of interest container will go through a formal data standard development process. This will lead to wider use and more comprehensive access to all of our available point of interest data and provide a more integrated approach to point of interest data management across the NPS and at all levels: park, region, program, and national. Until then, the point of interest spatial data container will serve as a pseudo-standard and enable data stewards to begin standardizing point of interest data.

  15. Summary of inclusion and exclusion criteria.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Feb 3, 2025
    Corinne E. Zachry; Rory P. O’Brien; Kirsty A. Clark; Marissa L. Ding; John R. Blosnich (2025). Summary of inclusion and exclusion criteria. [Dataset]. http://doi.org/10.1371/journal.pone.0307688.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Feb 3, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Corinne E. Zachry; Rory P. O’Brien; Kirsty A. Clark; Marissa L. Ding; John R. Blosnich
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sexual and gender minority (SGM) populations experience elevated rates of negative health outcomes (e.g., suicidality) and social determinants (e.g., poverty), which have been associated with general population mortality risk. Despite evidence of disparities in threats to well-being, it remains unclear whether SGM individuals have greater risk of mortality. This systematic review synthesized evidence on mortality among studies that included information about SGM. Three independent coders examined 6,255 abstracts, full-text reviewed 107 articles, and determined that 38 met inclusion criteria: 1) contained a sexual orientation or gender identity (SOGI) measure; 2) focused on a mortality outcome; 3) provided SGM vs non-SGM (i.e., exclusively heterosexual and cisgender) or general population comparisons of mortality outcomes; 4) were peer-reviewed; and 5) were available in English. A search of included articles’ references yielded 5 additional studies (total n = 43). The authors used the NIH’s Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies to assess included studies. Mortality outcomes included all-cause (n = 27), suicide/intentional self-harm (n = 23), homicide (n = 7), and causes related to drug use (n = 3). Compared to non-SGM people, 14 studies (32.6%) supported higher mortality for SGM, 28 studies (65.1%) provided partial support of higher mortality for SGM (e.g., greater mortality from one cause but not another), and one study (2.3%) found no evidence of higher mortality for SGM. There was considerable heterogeneity in operational definitions of SGM populations across studies. Although mixed, findings suggest elevated mortality for SGM versus non-SGM populations. Integrating SOGI measures into mortality surveillance would enhance understanding of disparities by standardizing data collection, thereby reducing heterogeneity and increasing capacity to aggregate results (e.g., meta-analyses).

  16. Data from: The Standardized World Income Inequality Database, Versions 8-9

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Dec 26, 2024
    Frederick Solt (2024). The Standardized World Income Inequality Database, Versions 8-9 [Dataset]. http://doi.org/10.7910/DVN/LM4OWF
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 26, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Frederick Solt
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    1960 - 2023
    Dataset funded by
    NSF
    Description

    Cross-national research on the causes and consequences of income inequality has been hindered by the limitations of existing inequality datasets: greater coverage across countries and over time has been available from these sources only at the cost of significantly reduced comparability across observations. The goal of the Standardized World Income Inequality Database (SWIID) is to meet the needs of those engaged in broadly cross-national research by maximizing the comparability of income inequality data while maintaining the widest possible coverage across countries and over time. The SWIID’s income inequality estimates are based on thousands of reported Gini indices from hundreds of published sources, including the OECD Income Distribution Database, the Socio-Economic Database for Latin America and the Caribbean generated by CEDLAS and the World Bank, Eurostat, the World Bank’s PovcalNet, the UN Economic Commission for Latin America and the Caribbean, national statistical offices around the world, and academic studies, while minimizing reliance on problematic assumptions by using as much information as possible from proximate years within the same country. The data collected and harmonized by the Luxembourg Income Study are employed as the standard. The SWIID currently incorporates comparable Gini indices of disposable and market income inequality for 199 countries for as many years as possible from 1960 to the present; it also includes information on absolute and relative redistribution.

  17. Action Aims Groups Types Data Standard Controlled List

    • cloud.csiss.gmu.edu
    • environment.data.gov.uk
    Updated Dec 25, 2019
    United Kingdom (2019). Action Aims Groups Types Data Standard Controlled List [Dataset]. https://cloud.csiss.gmu.edu/uddi/dataset/action-aims-groups-types-data-standard-controlled-list
    Explore at:
    Dataset updated
    Dec 25, 2019
    Dataset provided by
    United Kingdom
    Description

    The Action Aims, Groups and Types Data Standard Controlled List specifies a classification system for categorising high-level environmental aims and the associated activities undertaken to meet those aims. It currently covers Water, Land and Biodiversity aims, and in particular those activities to achieve river basin outcomes of preventing deterioration, achieving protected area objectives, or achieving water body objectives. Attribution statement: © Environment Agency copyright and/or database right 2017.

  18. A common standard template for representing stable isotope results and...

    • data.csiro.au
    Updated Oct 31, 2024
    Nina Welti; Lian Flick; Stephanie Hawkins; Geoff Fraser; Kathryn Waltenberg; Jagoda Crawford; Cath Hughes; Athina Puccini; Steve Szarvas; Christoph Gerber; Axel Suckow; Paul Abhijit; Fong Liu (2024). A common standard template for representing stable isotope results and associated metadata [Dataset]. http://doi.org/10.25919/0e5m-q876
    Explore at:
    Dataset updated
    Oct 31, 2024
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Nina Welti; Lian Flick; Stephanie Hawkins; Geoff Fraser; Kathryn Waltenberg; Jagoda Crawford; Cath Hughes; Athina Puccini; Steve Szarvas; Christoph Gerber; Axel Suckow; Paul Abhijit; Fong Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Geoscience Australia (http://ga.gov.au/)
    CSIRO (http://www.csiro.au/)
    National Measurement Institute
    ANSTO
    Description

    This dataset contains a common standard template for representing the metadata of stable isotope results from environmental samples (e.g., soils, rocks, water, gases) and a CSIRO-specific vocabulary for use across CSIRO research activities. The templates include core properties of stable isotope results, analytical methods, and uncertainty of analyses, as well as associated sample metadata such as name, identifier, type, and location. The templates enable users with disparate data to find common ground regardless of differences within the data itself (e.g., sample types and collections). The standardized templates can prevent duplicate sample metadata entry and lower metadata redundancy, thereby improving stable isotope data curation and discovery. Use of this template and these vocabularies will facilitate interoperable, machine-readable, platform-ready data collections.

    Lineage: CSIRO, in partnership with the Australian Nuclear Science and Technology Organisation (ANSTO), Geoscience Australia, and the National Measurement Institute, has developed a common metadata template for reporting stable isotope results. The common template was designed to provide a shared language for stable isotope data so that the data can be unified for reuse. Using a simplified data structure, the common template allows for the supply of data from different organisations with different corporate goals, data infrastructure, operating models, and specialist skills. The common ontology describes the different concepts present in the data, giving meaning to the stable isotope observations or measurements of (isotopic) properties of physical samples of the environment. It coordinates this description of samples with standardised metadata and vocabularies, which facilitate machine-readability and semantic cross-linking of resources for interoperability between multiple domains and systems. This is to assist in reducing the need for human data manipulation, which can be prone to errors, to provide a machine-readable format for new and emerging technology use-cases, and to help stable isotope data align with Australian public-data FAIR principles. In addition to the common template, the partners have developed a platform for making unified stable isotope data available for reuse, co-funded by the Australian Research Data Commons (ARDC). The aim of IsotopesAU is to repurpose existing publicly available environmental stable isotope data into a federated data platform, allowing single-point access to the data collections. The IsotopesAU platform currently harmonises and federates stable isotope data from the partner agencies' existing public collections, translating metadata templates to the common template.

    The templates have been developed iteratively, revised, and improved based on feedback from project participants, researchers, and lab technicians.

  19. WoSIS latest - Bulk density whole soil - field moist

    • data.isric.org
    Updated Sep 15, 2020
    (2020). WoSIS latest - Bulk density whole soil - field moist [Dataset]. https://data.isric.org/geonetwork/srv/search?format=CSV
    Explore at:
    Dataset updated
    Sep 15, 2020
    Description

    Bulk density of the whole soil including coarse fragments, field moist (kg/dm³). ISRIC is developing a centralized and user-focused server database, known as the ISRIC World Soil Information Service (WoSIS). The aims are to: • safeguard world soil data "as is"; • share soil data (point, polygon, grid) upon their standardization and harmonization; • provide quality-assessed input for a growing range of environmental applications. So far some 400,000 profiles have been imported into WoSIS from disparate soil databases; some 150,000 of these have been standardised. The number of measured data for each property varies between profiles and with depth, generally depending on the purpose of the initial studies. Further, in most source data sets there are fewer data for soil physical than for soil chemical attributes, and there are fewer measurements for deeper than for superficial horizons. Generally, limited quality information is associated with the various source data. Special attention has been paid to the standardization of soil analytical method descriptions, with focus on the set of soil properties considered in the GlobalSoilMap specifications. Newly developed procedures for the above, which consider the soil property, analytical method, and unit of measurement, have been applied to the present set of geo-referenced soil profile data. Gradually, the quality-assessed and harmonized "shared" data will be made available to the international community through several web services. All data managed in WoSIS are handled in conformance with ISRIC's data use and citation policy, respecting inherited restrictions. The most recent set of standardized attributes derived from WoSIS is available via WFS. For instructions see the Procedures Manual 2018, Appendix A, link below (Procedures manual 2018).

  20. Standard, IL Population Pyramid Dataset: Age Groups, Male and Female...

    • neilsberg.com
    csv, json
    Updated Feb 22, 2025
    Neilsberg Research (2025). Standard, IL Population Pyramid Dataset: Age Groups, Male and Female Population, and Total Population for Demographics Analysis // 2025 Edition [Dataset]. https://www.neilsberg.com/research/datasets/52715953-f122-11ef-8c1b-3860777c1fe6/
    Explore at:
    csv, json (available download formats)
    Dataset updated
    Feb 22, 2025
    Dataset authored and provided by
    Neilsberg Research
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Standard, Illinois
    Variables measured
    Male and Female Population Under 5 Years, Male and Female Population over 85 years, Male and Female Total Population for Age Groups, Male and Female Population Between 5 and 9 years, Male and Female Population Between 10 and 14 years, Male and Female Population Between 15 and 19 years, Male and Female Population Between 20 and 24 years, Male and Female Population Between 25 and 29 years, Male and Female Population Between 30 and 34 years, Male and Female Population Between 35 and 39 years, and 9 more
    Measurement technique
    The data presented in this dataset is derived from the latest U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. To measure the three variables, namely (a) male population, (b) female population, and (c) total population, we initially analyzed and categorized the data for each of the age groups. For age groups, we divided ages between 0 and 85 into roughly 5-year buckets; ages over 85 were aggregated into a single group. For further information regarding these estimates, please feel free to reach out to us via email at research@neilsberg.com.
    Dataset funded by
    Neilsberg Research
    Description
    About this dataset

    Context

    The dataset tabulates the data for the Standard, IL population pyramid, which represents the Standard population distribution across age and gender, using estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. It lists the male and female population for each age group, along with the total population for those age groups. Higher numbers at the bottom of the table suggest population growth, whereas higher numbers at the top indicate declining birth rates. Furthermore, the dataset can be utilized to understand the youth dependency ratio, old-age dependency ratio, total dependency ratio, and potential support ratio.

    Key observations

    • Youth dependency ratio, which is the number of children aged 0-14 per 100 persons aged 15-64, for Standard, IL, is 24.2.
    • Old-age dependency ratio, which is the number of persons aged 65 or over per 100 persons aged 15-64, for Standard, IL, is 31.9.
    • Total dependency ratio for Standard, IL is 56.0.
    • Potential support ratio, which is the number of working-age persons (aged 15-64) per person aged 65 or over, for Standard, IL, is 3.1.
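    The four ratios in the key observations follow directly from the age-group counts; a minimal sketch, using hypothetical input counts rather than the actual Standard, IL figures:

    ```python
    def dependency_ratios(pop_0_14, pop_15_64, pop_65_plus):
        """Compute the four standard demographic ratios from age-group counts."""
        # Youth and old-age dependency ratios are expressed per 100
        # working-age (15-64) persons.
        youth = 100.0 * pop_0_14 / pop_15_64
        old_age = 100.0 * pop_65_plus / pop_15_64
        total = youth + old_age
        # Potential support ratio: working-age persons per elderly person.
        support = pop_15_64 / pop_65_plus
        return youth, old_age, total, support

    # Hypothetical counts for illustration only.
    y, o, t, s = dependency_ratios(pop_0_14=50, pop_15_64=200, pop_65_plus=64)
    ```

    With these inputs the youth ratio is 25.0, the old-age ratio 32.0, the total dependency ratio 57.0, and the potential support ratio 3.125.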
    Content

    When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.

    Age groups:

    • Under 5 years
    • 5 to 9 years
    • 10 to 14 years
    • 15 to 19 years
    • 20 to 24 years
    • 25 to 29 years
    • 30 to 34 years
    • 35 to 39 years
    • 40 to 44 years
    • 45 to 49 years
    • 50 to 54 years
    • 55 to 59 years
    • 60 to 64 years
    • 65 to 69 years
    • 70 to 74 years
    • 75 to 79 years
    • 80 to 84 years
    • 85 years and over

    Variables / Data Columns

    • Age Group: This column displays the age group for the Standard population analysis. There are 18 expected values, as defined above in the age groups section.
    • Population (Male): The male population of Standard for the selected age group is shown in this column.
    • Population (Female): The female population of Standard for the selected age group is shown in this column.
    • Total Population: The total population of Standard for the selected age group is shown in this column.

    Good to know

    Margin of Error

    Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.

    Custom data

    If you need custom data for any of your research projects, reports, or presentations, you can contact our research staff at research@neilsberg.com about the feasibility of a custom tabulation on a fee-for-service basis.

    Inspiration

    The Neilsberg Research team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.

    Recommended for further research

    This dataset is a part of the main dataset for Standard Population by Age. You can refer to the same here
