100+ datasets found
  1. Surface Weather Observation Large Scale Map Imagery

    • data.ucar.edu
    image
    Updated Oct 7, 2025
    Cite
    (2025). Surface Weather Observation Large Scale Map Imagery [Dataset]. http://doi.org/10.26023/1KTJ-6PJA-7F01
    Explore at:
    Available download formats: image
    Dataset updated
    Oct 7, 2025
    Time period covered
    Jan 27, 2009 - Mar 10, 2010
    Area covered
    Description

    This data set contains maps of surface weather observations over the western two-thirds of the US and southern Canada. The latest observation at the time the plot was generated was utilized, so the plots contain a mix of standard hourly and special observations. The maps were generated by NCAR/EOL. This data set contains imagery from the two PLOWS field phases, 27 January to 30 March 2009 and 14 October 2009 to 10 March 2010. No imagery are available for the period between the field phases.

  2. Google Maps Dataset

    • brightdata.com
    .json, .csv, .xlsx
    Updated Jan 8, 2023
    Cite
    Bright Data (2023). Google Maps Dataset [Dataset]. https://brightdata.com/products/datasets/google-maps
    Explore at:
    Available download formats: .json, .csv, .xlsx
    Dataset updated
    Jan 8, 2023
    Dataset authored and provided by
    Bright Data (https://brightdata.com/)
    License

    https://brightdata.com/license

    Area covered
    Worldwide
    Description

    The Google Maps dataset is ideal for getting extensive information on businesses anywhere in the world. Easily filter by location, business type, and other factors to get the exact data you need. The Google Maps dataset includes all major data points: timestamp, name, category, address, description, open website, phone number, open_hours, open_hours_updated, reviews_count, rating, main_image, reviews, url, lat, lon, place_id, country, and more.

  3. General Land Use Final Dataset

    • geo.wa.gov
    • hub.arcgis.com
    • +1more
    Updated Mar 31, 2018
    Cite
    CommerceGIS (2018). General Land Use Final Dataset [Dataset]. https://geo.wa.gov/datasets/a0ddbd4e0e2141b3841a6a42ff5aff46
    Explore at:
    Dataset updated
    Mar 31, 2018
    Dataset authored and provided by
    CommerceGIS
    Area covered
    Description

    This data set was developed as an information layer for the Washington State Department of Commerce. It is designed to be used as part of the Puget Sound Mapping Project to provide a generalized and standardized depiction of land uses and growth throughout the Puget Sound region.

    This map represents land uses, zoning abbreviations and zoning descriptions. Zoning data was collected in raster format and digitized by State Department of Commerce staff. The generalized depiction of intended future land use is based primarily upon 2012 zoning and 2010 assessor's records. NOTE: Because this is a large dataset, some geoprocessing operations (e.g., dissolve) may not work on the entire dataset; you will receive a topoengine error. Clipping out an area of interest (e.g., a county) and performing the operation on it instead of on the full dataset is a way to get around this software limitation.

  4. Index To The BGS Large Scale Geological Map Collection. - Dataset -...

    • ckan.publishing.service.gov.uk
    Updated Jun 3, 2011
    + more versions
    Cite
    (2011). Index To The BGS Large Scale Geological Map Collection. - Dataset - data.gov.uk [Dataset]. https://ckan.publishing.service.gov.uk/dataset/index-to-the-bgs-large-scale-geological-map-collection
    Explore at:
    Dataset updated
    Jun 3, 2011
    Description

    Index to BGS geological map 'Standards', manuscript and published maps for Great Britain produced by the Survey on County Series (1:10560) and National Grid (1:10560 & 1:10000) Ordnance Survey base maps. 'Standards' are the best interpretation of the geology at the time they were produced. The Oracle index was set up in 1988; current holdings are over 41,000 maps. There are entries for all registered maps, but not all fields are complete on all entries.

  5. A national dataset of rasterized building footprints for the U.S.

    • data.usgs.gov
    • catalog.data.gov
    Updated Feb 28, 2020
    + more versions
    Cite
    Mehdi Heris; Nathan Foks; Kenneth Bagstad; Austin Troy (2020). A national dataset of rasterized building footprints for the U.S. [Dataset]. http://doi.org/10.5066/P9J2Y1WG
    Explore at:
    Dataset updated
    Feb 28, 2020
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    Mehdi Heris; Nathan Foks; Kenneth Bagstad; Austin Troy
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Time period covered
    2020
    Area covered
    United States
    Description

    The Bing Maps team at Microsoft released a U.S.-wide vector building dataset in 2018, which includes over 125 million building footprints for all 50 states in GeoJSON format. This dataset is extracted from aerial images using deep learning object classification methods. Large-extent modelling (e.g., urban morphological analysis or ecosystem assessment models) or accuracy assessment with vector layers is highly challenging in practice. Although vector layers provide accurate geometries, their use in large-extent geospatial analysis comes at a high computational cost. We used High Performance Computing (HPC) to develop an algorithm that calculates six summary values for each cell in a raster representation of each U.S. state: (1) total footprint coverage, (2) number of unique buildings intersecting each cell, (3) number of building centroids falling inside each cell, and area of the (4) average, (5) smallest, and (6) largest area of buildings that intersect each cell. These values a ...

  6. Transformative Research for Large Genome Physical Maps

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Apr 21, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). Transformative Research for Large Genome Physical Maps [Dataset]. https://catalog.data.gov/dataset/transformative-research-for-large-genome-physical-maps-52d35
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    The main goal of the project is to construct and utilize high-resolution genome-wide RH-based physical maps of the wheat D-genome chromosomes to facilitate the construction of sequence-ready physical maps. This research provides an unprecedented view into the evolution of cereal genomes. Importantly, the methodology is being developed to be applied to other large and complex genomes such as polyploid wheat. Resources in this dataset: Resource Title: Web Page. File Name: Web Page. URL: https://wheat.pw.usda.gov/RHmapping/

  7. Global Land Cover Mapping and Estimation Yearly 30 m V001 - Dataset - NASA...

    • data.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). Global Land Cover Mapping and Estimation Yearly 30 m V001 - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/global-land-cover-mapping-and-estimation-yearly-30-m-v001-6db80
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    NASA's Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Land Cover Mapping and Estimation (GLanCE) annual 30 meter (m) Version 1 data product provides global land cover and land cover change data derived from Landsat 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), and Landsat 8 Operational Land Imager (OLI). These maps provide the user community with land cover type, land cover change, metrics characterizing the magnitude and seasonality of greenness of each pixel, and the magnitude of change. GLanCE data products will be provided using a set of seven continental grids that use Lambert Azimuthal Equal Area projections parameterized to minimize distortion for each continent. Currently, North America, South America, Europe, and Oceania are available. This dataset is useful for a wide range of applications, including ecosystem, climate, and hydrologic modeling; monitoring the response of terrestrial ecosystems to climate change; carbon accounting; and land management. The GLanCE data product provides seven layers: the land cover class, the estimated day of year of change, an integer identifier for the class in the previous year, the median and amplitude of the Enhanced Vegetation Index (EVI2) in the year, the rate of change in EVI2, and the change in EVI2 median from the previous year to the current year. A low-resolution browse image representing EVI2 amplitude is also available for each granule.

    Known Issues:

    • Version 1.0 of the data set does not include Quality Assurance, Leaf Type, or Leaf Phenology. These layers are populated with fill values and will be included in future releases of the data product.
    • Science Data Set (SDS) values may be missing, or of lower quality, in years when land cover change occurs. This issue is a by-product of the fact that Continuous Change Detection and Classification (CCDC) does not fit models or provide synthetic reflectance values during short periods of time between time segments.
    • The accuracy of mapping results varies by land cover class and geography. Specifically, distinguishing between shrubs and herbaceous cover is challenging at high latitudes and in arid and semi-arid regions. Hence, the accuracy of shrub cover, herbaceous cover, and to some degree bare cover, is lower than for other classes.
    • Due to the combined effects of large solar zenith angles, short growing seasons, and lower availability of high-resolution imagery to support training data, the representation of land cover at high latitudes in the GLanCE product is poorer than in mid latitudes.
    • Shadows and large variation in local zenith angles decrease the accuracy of the GLanCE product in regions with complex topography, especially at high latitudes.
    • Mapping results may include artifacts from variation in data density in overlap zones between Landsat scenes relative to mapping results in non-overlap zones.
    • Regions with low observation density due to cloud cover, especially in the tropics, and/or poor data density (e.g. Alaska, Siberia, West Africa) have lower map quality.
    • Artifacts from the Landsat 7 Scan Line Corrector failure are occasionally evident in the GLanCE map product.
    • High proportions of missing data in regions with snow and ice at high elevations result in missing data in the GLanCE SDSs.
    • The GLanCE data product tends to modestly overpredict developed land cover in arid regions.

  8. 2D Maps from CAMELS IllustrisTNG Simulations

    • kaggle.com
    zip
    Updated Jan 23, 2025
    Cite
    Adrian Severino (2025). 2D Maps from CAMELS IllustrisTNG Simulations [Dataset]. https://www.kaggle.com/datasets/adrianseverino/2d-maps-from-camels-illustristng-simulations
    Explore at:
    Available download formats: zip (29949510888 bytes)
    Dataset updated
    Jan 23, 2025
    Authors
    Adrian Severino
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    2D Maps from CAMELS IllustrisTNG Simulations (Subset)

    Author’s Note

    This is my first dataset publication on Kaggle, and I’m very excited to share a small, manageable subset of the CAMELS Multifield (CMD) data to help the ML community practice image classification and astrophysical data analysis! Make sure to upvote, comment, and share if you enjoy or have suggestions for me. This subset is distributed under the MIT License with permission from the original author, Francisco Villaescusa-Navarro, and the CAMELS collaboration.

    1. Overview

    The CAMELS Multifield Dataset (CMD) is a massive collection of 2D maps and 3D grids derived from cosmological simulations that track the evolution of gas, dark matter, stars, black holes, and (in some suites) magnetic fields. These simulations vary important cosmological and astrophysical parameters, allowing researchers to explore and train machine learning models that could help us understand the universe’s fundamental properties.

    Because the original CMD is extremely large, I’m providing a small subset from the IllustrisTNG suite, specifically from the LH (Latin Hypercube) set. This subset focuses on 2D maps at redshift z = 0.00, chosen randomly for demonstration and educational purposes.

    2. About CAMELS & CMD

    2.1 CAMELS Project

    • CAMELS stands for Cosmology and Astrophysics with MachinE Learning Simulations.
    • It aims to produce large datasets of cosmological simulations where diverse parameters (e.g., Ω_m, σ_8, supernova feedback, black-hole feedback) are systematically varied.

    2.2 CMD (CAMELS Multifield Dataset)

    • CMD contains hundreds of thousands of 2D maps and 3D grids from thousands of state-of-the-art hydrodynamic and N-body simulations.
    • Data is organized by suite (IllustrisTNG, SIMBA, Astrid, or Nbody) and set (LH, CV, 1P, EX, BE, SB, etc.).
    • Each 2D map or 3D grid is associated with a specific set of simulation parameters (the “labels”).

    2.3 Suites

    1. IllustrisTNG (MHD simulations): Gas, dark matter, stars, black holes, and magnetic fields.
    2. SIMBA (Hydrodynamic): Similar to IllustrisTNG, but uses the SIMBA code.
    3. Astrid (Hydrodynamic): Another code with its own feedback physics.
    4. Nbody (Gravity-only): Follows only dark matter without astrophysical processes.

    2.4 Sets

    • LH (Latin Hypercube): Each simulation has unique values for all parameters, covering a broad range in parameter space.
    • CV, 1P, EX, BE, SB: Other sets that vary parameters differently or keep some fixed.

    3. This Subset

    1. Suite & Set: IllustrisTNG LH (Latin Hypercube sampling).
    2. Field Example: Dark matter density (Mcdm), though you could encounter other fields if you download more from the official CMD resource.
    3. Format: .npy files, each containing multiple 2D slices (maps).
    4. Size: A small fraction of the full dataset (~5–10 maps or however many you decide to provide).
    5. Coordinates: Each 2D map is a “slice” of the cosmological volume at redshift z = 0.00.
    6. Parameter Labels:
      • Ω_m (matter density fraction),
      • σ_8 (root-mean-square amplitude of matter fluctuations),
      • A_SN1, A_SN2 (supernova feedback parameters),
      • A_AGN1, A_AGN2 (black-hole/AGN feedback parameters).

    Because these maps are from the IllustrisTNG suite, they have non-zero values for all six parameters. If any user references the corresponding Nbody subset, note that only Ω_m and σ_8 apply there.
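
    As a hedged illustration of working with this subset (the file name below is a placeholder, not the actual name used here), each .npy file can be loaded with NumPy as a stack of 2D maps:

    import numpy as np

    # Placeholder file name; each .npy file in this subset stores a stack of 2D slices.
    maps = np.load("example_Mcdm_maps.npy")
    print(maps.shape)  # (number_of_maps, height, width)

    # Dark matter density spans many orders of magnitude, so log-scaling is a
    # common preprocessing step before training an ML model on these maps.
    one_map = np.log10(maps[0] + 1e-12)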

    4. Why This Matters

    The CAMELS data—especially 2D projections—are excellent for: - Machine Learning & Computer Vision: Classification, segmentation, or anomaly detection tasks. - Cosmology Research: Investigating how changes in Ω_m, σ_8, or feedback physics affect large-scale structure formation. - Educational Purposes: Students and newcomers can learn how real cosmological simulation data is structured and experiment with analysis or ML pipelines.

    5. License & Attribution

    This subset is shared under the MIT License granted by the original author, Francisco Villaescusa-Navarro, and the broader CAMELS collaboration. Please see the “License” section for the full text.

    1. I am not the original author; I only provide a subset for educational and ML practice purposes.
    2. All credits go to the CAMELS collaboration and the authors of the CMD.
    3. If you publish results using this data, please cite the original sources appropriately and mention the CAMELS project.

    6. References & Further Reading

    • CAMELS Project Overview
    • IllustrisTNG Official Site
    • CMD Official Documentation: https://camels-multifield-dataset.read...

  9. m-a-p-codefeedback-mistral-large

    • huggingface.co
    Cite
    Phung Van Duy, m-a-p-codefeedback-mistral-large [Dataset]. https://huggingface.co/datasets/pvduy/m-a-p-codefeedback-mistral-large
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Authors
    Phung Van Duy
    Description

    pvduy/m-a-p-codefeedback-mistral-large dataset hosted on Hugging Face and contributed by the HF Datasets community

  10. National Hydrography Data - NHD and 3DHP

    • data.cnra.ca.gov
    • data.ca.gov
    Updated Jul 16, 2025
    + more versions
    Cite
    California Department of Water Resources (2025). National Hydrography Data - NHD and 3DHP [Dataset]. https://data.cnra.ca.gov/dataset/national-hydrography-dataset-nhd
    Explore at:
    Available download formats: pdf(3684753), pdf(4856863), pdf(437025), arcgis geoservices rest api, zip(972664), web videos, website, zip(15824984), zip(10029073), zip(39288832), zip(128966494), pdf(182651), pdf(3932070), zip(578260992), zip(1647291), pdf(1634485), csv(12977), pdf(1436424), zip(13901824), zip(4657694), pdf(9867020), zip(73817620), pdf(1175775), pdf
    Dataset updated
    Jul 16, 2025
    Dataset authored and provided by
    California Department of Water Resources
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Description

    The USGS National Hydrography Dataset (NHD) downloadable data collection from The National Map (TNM) is a comprehensive set of digital spatial data that encodes information about naturally occurring and constructed bodies of surface water (lakes, ponds, and reservoirs), paths through which water flows (canals, ditches, streams, and rivers), and related entities such as point features (springs, wells, stream gages, and dams). The information encoded about these features includes classification and other characteristics, delineation, geographic name, position and related measures, a "reach code" through which other information can be related to the NHD, and the direction of water flow. The network of reach codes delineating water and transported material flow allows users to trace movement in upstream and downstream directions. In addition to this geographic information, the dataset contains metadata that supports the exchange of future updates and improvements to the data. The NHD supports many applications, such as making maps, geocoding observations, flow modeling, data maintenance, and stewardship. For additional information on NHD, go to https://www.usgs.gov/core-science-systems/ngp/national-hydrography.

    DWR was the steward for NHD and Watershed Boundary Dataset (WBD) in California. We worked with other organizations to edit and improve NHD and WBD, using the business rules for California. California's NHD improvements were sent to USGS for incorporation into the national database. The most up-to-date products are accessible from the USGS website. Please note that the California portion of the National Hydrography Dataset is appropriate for use at the 1:24,000 scale.

    For additional derivative products and resources, including the major features in geopackage format, please go to this page: https://data.cnra.ca.gov/dataset/nhd-major-features Archives of previous statewide extracts of the NHD going back to 2018 may be found at https://data.cnra.ca.gov/dataset/nhd-archive.

    In September 2022, USGS officially notified DWR that the NHD would become static as USGS resources will be devoted to the transition to the new 3D Hydrography Program (3DHP). 3DHP will consist of LiDAR-derived hydrography at a higher resolution than NHD. Upon completion, 3DHP data will be easier to maintain, based on a modern data model and architecture, and better meet the requirements of users that were documented in the Hydrography Requirements and Benefits Study (2016). The initial releases of 3DHP include NHD data cross-walked into the 3DHP data model. It will take several years for the 3DHP to be built out for California. Please refer to the resources on this page for more information.

    The FINAL, STATIC version of the National Hydrography Dataset for California was published for download by USGS on December 27, 2023. This dataset can no longer be edited by the state stewards. The next generation of national hydrography data is the USGS 3D Hydrography Program (3DHP).

    Questions about the California stewardship of these datasets may be directed to nhd_stewardship@water.ca.gov.

  11. Data from: 3DHD CityScenes: High-Definition Maps in High-Density Point...

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    Updated Jul 16, 2024
    Cite
    Plachetka, Christopher; Sertolli, Benjamin; Fricke, Jenny; Klingner, Marvin; Fingscheidt, Tim (2024). 3DHD CityScenes: High-Definition Maps in High-Density Point Clouds [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7085089
    Explore at:
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    TU Braunschweig
    Volkswagen AG
    Authors
    Plachetka, Christopher; Sertolli, Benjamin; Fricke, Jenny; Klingner, Marvin; Fingscheidt, Tim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany including 467 km of individual lanes. In total, our map comprises 266,762 individual items.

    Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.

    Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:

    Python tools to read, generate, and visualize the dataset,

    3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.

    The DevKit is available here:

    https://github.com/volkswagen/3DHD_devkit.

    The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.

    When using our dataset, you are welcome to cite:

    @INPROCEEDINGS{9921866,
      author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
      booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
      title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
      year={2022},
      pages={627-634}}

    Acknowledgements

    We thank the following interns for their exceptional contributions to our work.

    Benjamin Sertolli: Major contributions to our DevKit during his master thesis

    Niels Maier: Measurement campaign for data collection and data preparation

    The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.

    The Dataset

    After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

    1. Dataset

    This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.

    During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.

    To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

    import json

    json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
    with open(json_path) as jf:
        data = json.load(jf)
    print(data)

    2. HD_Map

    Map items are stored as lists of items in JSON format. In particular, we provide:

    traffic signs,

    traffic lights,

    pole-like objects,

    construction site locations,

    construction site obstacles (point-like such as cones, and line-like such as fences),

    line-shaped markings (solid, dashed, etc.),

    polygon-shaped markings (arrows, stop lines, symbols, etc.),

    lanes (ordinary and temporary),

    relations between elements (only for construction sites, e.g., sign to lane association).

    3. HD_Map_MetaData

    Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definition using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.

    Files with the ending .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as “shape file” (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information provided in a different format used in our Python API.

    4. HD_PointCloud_Tiles

    The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.

    x-coordinates: 4 byte integer

    y-coordinates: 4 byte integer

    z-coordinates: 4 byte integer

    intensity of reflected beams: 2 byte unsigned integer

    ground classification flag: 1 byte unsigned integer

    After reading, respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.

    import numpy as np
    import pptk

    file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
    pc_dict = {}
    key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
    type_list = ['
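
    The snippet above is truncated in this listing. A minimal sketch of reading one tile according to the binary format described above (little-endian byte order and the exact NumPy dtype choices are assumptions) could look like this:

    import numpy as np

    file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"

    with open(file_path, "rb") as f:
        # Header: number of points in this tile (4-byte integer).
        num_points = int(np.frombuffer(f.read(4), dtype="<i4")[0])
        pc_dict = {
            "x": np.frombuffer(f.read(4 * num_points), dtype="<i4"),
            "y": np.frombuffer(f.read(4 * num_points), dtype="<i4"),
            "z": np.frombuffer(f.read(4 * num_points), dtype="<i4"),
            "intensity": np.frombuffer(f.read(2 * num_points), dtype="<u2"),
            "is_ground": np.frombuffer(f.read(num_points), dtype="u1"),
        }

    # Values still need to be unnormalized as described in the dataset documentation,
    # e.g. before visualizing the tile with pptk.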

  12. SPREAD: A Large-scale, High-fidelity Synthetic Dataset for Multiple Forest...

    • zenodo.org
    bin
    Updated Dec 19, 2024
    Cite
    Zhengpeng Feng; Yihang She; Keshav Srinivasan (2024). SPREAD: A Large-scale, High-fidelity Synthetic Dataset for Multiple Forest Vision Tasks (Part II) [Dataset]. http://doi.org/10.5281/zenodo.14525290
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zhengpeng Feng; Yihang She; Keshav Srinivasan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This page only provides the drone-view image dataset.

    The dataset contains drone-view RGB images, depth maps and instance segmentation labels collected from different scenes. Data from each scene is stored in a separate .7z file, along with a color_palette.xlsx file, which contains the RGB_id and corresponding RGB values.

    All files follow the naming convention: {central_tree_id}_{timestamp}, where {central_tree_id} represents the ID of the tree centered in the image, which is typically in a prominent position, and timestamp indicates the time when the data was collected.

    Specifically, each 7z file includes the following folders:

    • rgb: This folder contains the RGB images (PNG) of the scenes and their metadata (TXT). The metadata describes the weather conditions and the world time when the image was captured. An example metadata entry is: Weather:Snow_Blizzard,Hour:10,Minute:56,Second:36.

    • depth_pfm: This folder contains absolute depth information of the scenes, which can be used to reconstruct the point cloud of the scene through reprojection.

    • instance_segmentation: This folder stores instance segmentation labels (PNG) for each tree in the scene, along with metadata (TXT) that maps tree_id to RGB_id. The tree_id can be used to look up detailed information about each tree in obj_info_final.xlsx, while the RGB_id can be matched to the corresponding RGB values in color_palette.xlsx. This mapping allows for identifying which tree corresponds to a specific color in the segmentation image.

    • obj_info_final.xlsx: This file contains detailed information about each tree in the scene, such as position, scale, species, and various parameters, including trunk diameter (in cm), tree height (in cm), and canopy diameter (in cm).

    • landscape_info.txt: This file contains the ground location information within the scene, sampled every 0.5 meters.

    For birch_forest, broadleaf_forest, redwood_forest and rainforest, we also provided COCO-format annotation files (.json). Two such files can be found in these datasets:

    • {name}_coco.json: This file contains the annotation of each tree in the scene.
    • {name}_filtered.json: This file is derived from the previous one, but filtering is applied to rule out overlapping instances.

    ⚠️: 7z files that begin with "!" indicate that the RGB values in the images within the instance_segmentation folder cannot be found in color_palette.xlsx. Consequently, this prevents matching the trees in the segmentation images to their corresponding tree information, which may hinder the application of the dataset to certain tasks. This issue is related to a bug in Colosseum/AirSim, which has been reported in link1 and link2.
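
    As a small, hedged example (the helper below is hypothetical and not part of the dataset tooling), the metadata TXT entries that accompany each RGB image can be parsed into a dictionary:

    def parse_metadata(line: str) -> dict:
        # Split "Weather:Snow_Blizzard,Hour:10,Minute:56,Second:36" into key/value pairs.
        fields = dict(item.split(":", 1) for item in line.strip().split(","))
        for key in ("Hour", "Minute", "Second"):
            if key in fields:
                fields[key] = int(fields[key])
        return fields

    print(parse_metadata("Weather:Snow_Blizzard,Hour:10,Minute:56,Second:36"))
    # {'Weather': 'Snow_Blizzard', 'Hour': 10, 'Minute': 56, 'Second': 36}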

  13. StormGPT Environmental Visual Dataset-02

    • kaggle.com
    zip
    Updated Nov 13, 2025
    Cite
    Daniel Guzman (2025). StormGPT Environmental Visual Dataset-02 [Dataset]. https://www.kaggle.com/datasets/guzmand/stormgpt-environmental-visual-dataset-02
    Explore at:
    Available download formats: zip (9286776 bytes)
    Dataset updated
    Nov 13, 2025
    Authors
    Daniel Guzman
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    About Dataset

    This dataset contains a collection of environmental visualizations generated by StormGPT, an AI-assisted environmental intelligence system designed for hydrology, atmospheric science, and stormwater analysis.

    The images represent processed outputs derived from publicly available datasets, including NOAA, NASA, USGS, and EPA. They illustrate concepts such as global precipitation patterns, watershed boundaries, stormwater runoff simulations, atmospheric data assimilation, and global environmental sensor coverage.

    A companion geospatial Excel file (stormgpt_geospatial_features.xlsx) provides structured metadata for use in machine learning models, GIS workflows, and environmental research.

    This dataset is suitable for:

    Climate and hydrologic modeling

    Atmospheric science research

    Data visualization studies

    Machine learning training

    Educational demonstrations

    Geospatial feature engineering

    All data elements originate from public environmental sources and were transformed into visual outputs through the StormGPT analytical workflow.

    Creator: Daniel Guzman. Project: Stormwater Intelligence / StormGPT-V3

  14. Precision, recall, and Mean Average Precision (mAP@.5:.95) on the validation...

    • plos.figshare.com
    xls
    Updated Jun 14, 2023
    Cite
    Kim Bjerge; Jamie Alison; Mads Dyrmann; Carsten Eie Frigaard; Hjalte M. R. Mann; Toke Thomas Høye (2023). Precision, recall, and Mean Average Precision (mAP@.5:.95) on the validation dataset for different trained YOLO models. [Dataset]. http://doi.org/10.1371/journal.pstr.0000051.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 14, 2023
    Dataset provided by
    PLOS Sustainability and Transformation
    Authors
    Kim Bjerge; Jamie Alison; Mads Dyrmann; Carsten Eie Frigaard; Hjalte M. R. Mann; Toke Thomas Høye
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Models trained on the small (6cls) or large (9cls) dataset. The training method indicates which metric is used to evaluate when to stop the training process: mAP@.5:.95 (mAP), F1-score (F1) or stopping after a high number of epochs (epoch).

  15. Large Scale International Boundaries

    • geodata.state.gov
    • s.cnmilf.com
    • +1more
    Updated Feb 24, 2025
    Cite
    U.S. Department of State (2025). Large Scale International Boundaries [Dataset]. https://geodata.state.gov/geonetwork/srv/api/records/3bdb81a0-c1b9-439a-a0b1-85dac30c59b2
    Explore at:
    Available download formats: www:link-1.0-http--link, www:link-1.0-http--related, www:download:gpkg, www:download:zip, ogc:wms-1.3.0-http-get-capabilities
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    United States Department of State (http://state.gov/)
    Authors
    U.S. Department of State
    Area covered
    Description

    Overview

    The Office of the Geographer and Global Issues at the U.S. Department of State produces the Large Scale International Boundaries (LSIB) dataset. The current edition is version 11.4 (published 24 February 2025). The 11.4 release contains updated boundary lines and data refinements designed to extend the functionality of the dataset. These data and generalized derivatives are the only international boundary lines approved for U.S. Government use. The contents of this dataset reflect U.S. Government policy on international boundary alignment, political recognition, and dispute status. They do not necessarily reflect de facto limits of control.

    National Geospatial Data Asset

    This dataset is a National Geospatial Data Asset (NGDAID 194) managed by the Department of State. It is a part of the International Boundaries Theme created by the Federal Geographic Data Committee.

    Dataset Source Details

    Sources for these data include treaties, relevant maps, and data from boundary commissions, as well as national mapping agencies. Where available and applicable, the dataset incorporates information from courts, tribunals, and international arbitrations. The research and recovery process includes analysis of satellite imagery and elevation data. Due to the limitations of source materials and processing techniques, most lines are within 100 meters of their true position on the ground.

    Cartographic Visualization

    The LSIB is a geospatial dataset that, when used for cartographic purposes, requires additional styling. The LSIB download package contains example style files for commonly used software applications. The attribute table also contains embedded information to guide the cartographic representation. Additional discussion of these considerations can be found in the Use of Core Attributes in Cartographic Visualization section below.

    Additional cartographic information pertaining to the depiction and description of international boundaries or areas of special sovereignty can be found in Guidance Bulletins published by the Office of the Geographer and Global Issues: https://data.geodata.state.gov/guidance/index.html

    Contact

    Direct inquiries to internationalboundaries@state.gov. Direct download: https://data.geodata.state.gov/LSIB.zip

    Attribute Structure

    The dataset uses the following attributes, divided into two categories:

    ATTRIBUTE NAME | ATTRIBUTE STATUS
    CC1 | Core
    CC1_GENC3 | Extension
    CC1_WPID | Extension
    COUNTRY1 | Core
    CC2 | Core
    CC2_GENC3 | Extension
    CC2_WPID | Extension
    COUNTRY2 | Core
    RANK | Core
    LABEL | Core
    STATUS | Core
    NOTES | Core
    LSIB_ID | Extension
    ANTECIDS | Extension
    PREVIDS | Extension
    PARENTID | Extension
    PARENTSEG | Extension

    These attributes have external data sources that update separately from the LSIB:

    ATTRIBUTE NAME | EXTERNAL SOURCE
    CC1 | GENC
    CC1_GENC3 | GENC
    CC1_WPID | World Polygons
    COUNTRY1 | DoS Lists
    CC2 | GENC
    CC2_GENC3 | GENC
    CC2_WPID | World Polygons
    COUNTRY2 | DoS Lists
    LSIB_ID | BASE
    ANTECIDS | BASE
    PREVIDS | BASE
    PARENTID | BASE
    PARENTSEG | BASE

    The core attributes listed above describe the boundary lines contained within the LSIB dataset. Removal of core attributes from the dataset will change the meaning of the lines. An attribute status of “Extension” represents a field containing data interoperability information. Other attributes not listed above include “FID”, “Shape_length” and “Shape.” These are components of the shapefile format and do not form an intrinsic part of the LSIB.

    Core Attributes

    The eight core attributes listed above contain unique information which, when combined with the line geometry, comprise the LSIB dataset. These Core Attributes are further divided into Country Code and Name Fields and Descriptive Fields.

    Country Code and Country Name Fields

    “CC1” and “CC2” fields are machine readable fields that contain political entity codes. These are two-character codes derived from the Geopolitical Entities, Names, and Codes Standard (GENC), Edition 3 Update 18. “CC1_GENC3” and “CC2_GENC3” fields contain the corresponding three-character GENC codes and are extension attributes discussed below. The codes “Q2” or “QX2” denote a line in the LSIB representing a boundary associated with areas not contained within the GENC standard.

    The “COUNTRY1” and “COUNTRY2” fields contain the names of corresponding political entities. These fields contain names approved by the U.S. Board on Geographic Names (BGN) as incorporated in the “Independent States in the World” and “Dependencies and Areas of Special Sovereignty” lists maintained by the Department of State. To ensure maximum compatibility, names are presented without diacritics and certain names are rendered using common cartographic abbreviations. Names for lines associated with the code "Q2" are descriptive and not necessarily BGN-approved. Names rendered in all CAPITAL LETTERS denote independent states. Names rendered in normal text represent dependencies, areas of special sovereignty, or are otherwise presented for the convenience of the user.

    Descriptive Fields

    The following text fields are a part of the core attributes of the LSIB dataset and do not update from external sources. They provide additional information about each of the lines and are as follows:

    ATTRIBUTE NAME | CONTAINS NULLS
    RANK | No
    STATUS | No
    LABEL | Yes
    NOTES | Yes

    Neither the "RANK" nor "STATUS" fields contain null values; the "LABEL" and "NOTES" fields do. The "RANK" field is a numeric expression of the "STATUS" field. Combined with the line geometry, these fields encode the views of the United States Government on the political status of the boundary line.

    RANK | STATUS
    1 | International Boundary
    2 | Other Line of International Separation
    3 | Special Line

    A value of “1” in the “RANK” field corresponds to an "International Boundary" value in the “STATUS” field. Values of ”2” and “3” correspond to “Other Line of International Separation” and “Special Line,” respectively.

    The “LABEL” field contains required text to describe the line segment on all finished cartographic products, including but not limited to print and interactive maps.

    The “NOTES” field contains an explanation of special circumstances modifying the lines. This information can pertain to the origins of the boundary lines, limitations regarding the purpose of the lines, or the original source of the line.

    Use of Core Attributes in Cartographic Visualization

    Several of the Core Attributes provide information required for the proper cartographic representation of the LSIB dataset. The cartographic usage of the LSIB requires a visual differentiation between the three categories of boundary lines. Specifically, this differentiation must be between:

    • International Boundaries (Rank 1);
    • Other Lines of International Separation (Rank 2); and
    • Special Lines (Rank 3).

    Rank 1 lines must be the most visually prominent. Rank 2 lines must be less visually prominent than Rank 1 lines. Rank 3 lines must be shown in a manner visually subordinate to Ranks 1 and 2. Where scale permits, Rank 2 and 3 lines must be labeled in accordance with the “Label” field. Data marked with a Rank 2 or 3 designation does not necessarily correspond to a disputed boundary. Please consult the style files in the download package for examples of this depiction.
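
    A minimal styling sketch along these lines, assuming the boundary lines have been downloaded from the link above and read with GeoPandas (the file name and exact column casing are assumptions), might look like:

    import geopandas as gpd

    lsib = gpd.read_file("LSIB.gpkg")  # placeholder path to the downloaded package

    # Rank 1 lines most prominent, Rank 2 less so, Rank 3 visually subordinate.
    style = {
        1: {"linewidth": 1.6, "linestyle": "solid"},
        2: {"linewidth": 0.9, "linestyle": "dashed"},
        3: {"linewidth": 0.6, "linestyle": "dotted"},
    }
    ax = None
    for rank, kwargs in style.items():
        subset = lsib[lsib["RANK"] == rank]
        ax = subset.plot(ax=ax, color="black", **kwargs)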

    The requirement to incorporate the contents of the "LABEL" field on cartographic products is scale dependent. If a label is legible at the scale of a given static product, a proper use of this dataset would encourage the application of that label. Using the contents of the "COUNTRY1" and "COUNTRY2" fields in the generation of a line segment label is not required. The "STATUS" field contains the preferred description for the three LSIB line types when they are incorporated into a map legend but is otherwise not to be used for labeling.

    Use of

  16. SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and...

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    Updated Sep 20, 2023
    Cite
    Jian Song; Hongruixuan Chen; Naoto Yokoya (2023). SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_8349018
    Explore at:
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    The University of Tokyo
    Authors
    Jian Song; Hongruixuan Chen; Naoto Yokoya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Paper accepted at WACV 2024

    [paper, supp] [arXiv]

    Overview

    Synthetic datasets, recognized for their cost effectiveness, play a pivotal role in advancing computer vision tasks and techniques. However, when it comes to remote sensing image processing, the creation of synthetic datasets becomes challenging due to the demand for larger-scale and more diverse 3D models. This complexity is compounded by the difficulties associated with real remote sensing datasets, including limited data acquisition and high annotation costs, which amplifies the need for high-quality synthetic alternatives. To address this, we present SyntheWorld, a synthetic dataset unparalleled in quality, diversity, and scale. It includes 40,000 images with submeter-level pixels and fine-grained land cover annotations of eight categories, and it also provides 40,000 pairs of bitemporal image pairs with building change annotations for building change detection task. We conduct experiments on multiple benchmark remote sensing datasets to verify the effectiveness of SyntheWorld and to investigate the conditions under which our synthetic data yield advantages.

    Description

    This dataset has been designed for land cover mapping and building change detection tasks.

    File Structure and Content:

    1. 1024.zip:

      • Contains images of size 1024x1024 with a GSD (Ground Sampling Distance) of 0.6-1m.
      • images and ss_mask folders: Used for the land cover mapping task.
      • images folder: Post-event images for building change detection.
      • small-pre-images: Images with a minor off-nadir angle difference compared to post-event images.
      • big-pre-images: Images with a large off-nadir angle difference compared to post-event images.
      • cd_mask: Ground truth for the building change detection task.
    2. 512-1.zip, 512-2.zip, 512-3.zip:

      • Contains images of size 512x512 with a GSD of 0.3-0.6m.
      • images and ss_mask folders: Used for the land cover mapping task.
      • images folder: Post-event images for building change detection.
      • pre-event folder: Images for the pre-event phase.
      • cd-mask: Ground truth for building change detection.

    Land Cover Mapping Class Grey Map:

    class_grey = {
        "Bareland": 1,
        "Rangeland": 2,
        "Developed Space": 3,
        "Road": 4,
        "Tree": 5,
        "Water": 6,
        "Agriculture land": 7,
        "Building": 8,
    }
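
    As a hedged usage sketch (the mask path is a placeholder, and it assumes the ss_mask PNGs store these grey values directly), the mapping can be inverted to summarize class coverage in a mask:

    import numpy as np
    from PIL import Image

    class_grey = {
        "Bareland": 1, "Rangeland": 2, "Developed Space": 3, "Road": 4,
        "Tree": 5, "Water": 6, "Agriculture land": 7, "Building": 8,
    }
    grey_to_class = {v: k for k, v in class_grey.items()}

    mask = np.array(Image.open("ss_mask/example.png"))  # placeholder mask path

    # Per-class pixel counts in this land cover mask.
    values, counts = np.unique(mask, return_counts=True)
    for value, count in zip(values, counts):
        print(grey_to_class.get(int(value), f"unknown ({value})"), int(count))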

    Reference

    @misc{song2023syntheworld,
      title={SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection},
      author={Jian Song and Hongruixuan Chen and Naoto Yokoya},
      year={2023},
      eprint={2309.01907},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

  17. A dataset to investigate ChatGPT for enhancing Students' Learning Experience...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 19, 2024
    Cite
    Schicchi, Daniele; Taibi, Davide (2024). A dataset to investigate ChatGPT for enhancing Students' Learning Experience via Concept Maps [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_12076680
    Explore at:
    Dataset updated
    Jun 19, 2024
    Dataset provided by
    Institute for Educational Technology, National Research Council of Italy
    Authors
    Schicchi, Daniele; Taibi, Davide
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was compiled to examine the use of ChatGPT 3.5 in educational settings, particularly for creating and personalizing concept maps. The data has been organized into three folders: Maps, Texts, and Questionnaires. The Maps folder contains the graphical representation of the concept maps and the PlantUML code for drawing them in Italian and English. The Texts folder contains the source text used as input for the maps' creation. The Questionnaires folder includes the students' responses to the three administered questionnaires.

  18. GlobES ecosystem dominance maps

    • figshare.com
    zip
    Updated Jul 28, 2022
    Cite
    Ruben Remelgado; Carsten Meyer (2022). GlobES ecosystem dominance maps [Dataset]. http://doi.org/10.6084/m9.figshare.12728006.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 28, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ruben Remelgado; Carsten Meyer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Maps of ecosystem dominance generated from the GlobES ecosystem data cube. For each year in our time series, we iterated through each pixel and classified it with a unique identifier corresponding to the ecosystem type with the highest area. The GlobES Data Cube includes global time-series for 65 ecosystem types and, therefore, the current dataset has a maximum of 65 classes. The output was a time-series of 27 yearly layers depicting changes in per-pixel ecosystem dominance between 1992 and 2018.

    The list of classes and the corresponding unique grid identifiers is described in "legend.csv", which contains information on:

    • Class identifier within the GlobES Data Cube ("class_id")
    • Numeric identifier within the current dataset ("grid_id")
    • Class name ("long_name" and "short_name")
    • Ecosystem class group ("ecosystem_group")
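
    As a hedged sketch (file names are placeholders, and the yearly layers are assumed here to be GeoTIFF rasters), legend.csv can be joined to one dominance layer to tally per-class coverage:

    import numpy as np
    import pandas as pd
    import rasterio

    legend = pd.read_csv("legend.csv")  # grid_id -> class names, per the description
    grid_to_name = dict(zip(legend["grid_id"], legend["short_name"]))

    with rasterio.open("dominance_2018.tif") as src:  # placeholder layer name
        dominance = src.read(1)

    # Count pixels per dominant ecosystem class for this year.
    ids, counts = np.unique(dominance, return_counts=True)
    for grid_id, count in zip(ids, counts):
        print(grid_to_name.get(int(grid_id), f"unknown ({grid_id})"), int(count))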

  19. Refined DataCo Supply Chain Geospatial Dataset

    • kaggle.com
    zip
    Updated Jan 29, 2025
    Cite
    Om Gupta (2025). Refined DataCo Supply Chain Geospatial Dataset [Dataset]. https://www.kaggle.com/datasets/aaumgupta/refined-dataco-supply-chain-geospatial-dataset
    Explore at:
    Available download formats: zip (29010639 bytes)
    Dataset updated
    Jan 29, 2025
    Authors
    Om Gupta
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Refined DataCo Smart Supply Chain Geospatial Dataset

    Optimized for Geospatial and Big Data Analysis

    This dataset is a refined and enhanced version of the original DataCo SMART SUPPLY CHAIN FOR BIG DATA ANALYSIS dataset, specifically designed for advanced geospatial and big data analysis. It incorporates geocoded information, language translations, and cleaned data to enable applications in logistics optimization, supply chain visualization, and performance analytics.

    Key Features

    1. Geocoded Source and Destination Data

    • Accurate latitude and longitude coordinates for both source and destination locations.
    • Facilitates geospatial mapping, route analysis, and distance calculations.

    2. Supplementary GeoJSON Files

    • src_points.geojson: Source point geometries.
    • dest_points.geojson: Destination point geometries.
    • routes.geojson: Line geometries representing source-destination routes.
    • These files are compatible with GIS software and geospatial libraries such as GeoPandas, Folium, and QGIS.

    3. Language Translation

    • Key location fields (countries, states, and cities) are translated into English for consistency and global accessibility.

    4. Cleaned and Consolidated Data

    • Addressed missing values, removed duplicates, and corrected erroneous entries.
    • Ready-to-use dataset for analysis without additional preprocessing.

    5. Routes and Points Geometry

    • Enables the creation of spatial visualizations, hotspot identification, and route efficiency analyses.

    Applications

    1. Logistics Optimization

    • Analyze transportation routes and delivery performance to improve efficiency and reduce costs.

    2. Supply Chain Visualization

    • Create detailed maps to visualize the global flow of goods.

    3. Geospatial Modeling

    • Perform proximity analysis, clustering, and geospatial regression to uncover patterns in supply chain operations.

    4. Business Intelligence

    • Use the dataset for KPI tracking, decision-making, and operational insights.

    Dataset Content

    Files Included

    1. DataCoSupplyChainDatasetRefined.csv

      • The main dataset containing cleaned fields, geospatial coordinates, and English translations.
    2. src_points.geojson

      • GeoJSON file containing the source points for easy visualization and analysis.
    3. dest_points.geojson

      • GeoJSON file containing the destination points.
    4. routes.geojson

      • GeoJSON file with LineStrings representing routes between source and destination points.

    Attribution

    This dataset is based on the original dataset published by Fabian Constante, Fernando Silva, and António Pereira:
    Constante, Fabian; Silva, Fernando; Pereira, António (2019), “DataCo SMART SUPPLY CHAIN FOR BIG DATA ANALYSIS”, Mendeley Data, V5, doi: 10.17632/8gx2fvg2k6.5.

    Refinements include geospatial processing, translation, and additional cleaning by the uploader to enhance usability and analytical potential.

    Tips for Using the Dataset

    • For geospatial analysis, leverage tools like GeoPandas, QGIS, or Folium to visualize routes and points.
    • Use the GeoJSON files for interactive mapping and spatial queries.
    • Combine this dataset with external datasets (e.g., road networks) for enriched analytics.

    This dataset is designed to empower data scientists, researchers, and business professionals to explore the intersection of geospatial intelligence and supply chain optimization.
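
    As a hedged starting point for the tips above (only the GeoJSON file names from this listing are used; the plotting choices are illustrative), the supplementary files can be loaded and drawn with GeoPandas:

    import geopandas as gpd

    src_points = gpd.read_file("src_points.geojson")    # source point geometries
    dest_points = gpd.read_file("dest_points.geojson")  # destination point geometries
    routes = gpd.read_file("routes.geojson")            # source-destination LineStrings

    # Quick static overview: routes in grey, sources in green, destinations in red.
    ax = routes.plot(color="lightgrey", linewidth=0.3, figsize=(10, 6))
    src_points.plot(ax=ax, color="green", markersize=4)
    dest_points.plot(ax=ax, color="red", markersize=4)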

  20. Digital Geologic-GIS Map of the Big Pine 15' Quadrangle, California (NPS,...

    • catalog.data.gov
    • s.cnmilf.com
    Updated Nov 25, 2025
    Cite
    National Park Service (2025). Digital Geologic-GIS Map of the Big Pine 15' Quadrangle, California (NPS, GRD, GRI, SEKI, BIGP digital map) adapted from a U.S. Geological Survey Professional Paper map by Bateman, Pakiser and Kane (1965) [Dataset]. https://catalog.data.gov/dataset/digital-geologic-gis-map-of-the-big-pine-15-quadrangle-california-nps-grd-gri-seki-bigp-di
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Area covered
    California, Big Pine
    Description

    The Digital Geologic-GIS Map of the Big Pine 15' Quadrangle, California is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) a 10.1 file geodatabase (bigp_geology.gdb), and 2.) an Open Geospatial Consortium (OGC) geopackage. The file geodatabase format is supported with a 1.) ArcGIS Pro map file (.mapx) file (bigp_geology.mapx) and individual Pro layer (.lyrx) files (for each GIS data layer), as well as with a 2.) 10.1 ArcMap (.mxd) map document (bigp_geology.mxd) and individual 10.1 layer (.lyr) files (for each GIS data layer). Upon request, the GIS data is also available in ESRI 10.1 shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a GIS readme file (seki_manz_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (seki_manz_geology.pdf), which contains geologic unit descriptions as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (bigp_geology_metadata_faq.pdf). Please read the seki_manz_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. QGIS software is available for free at: https://www.qgis.org/en/site/.

    The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov).

    Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (bigp_geology_metadata.txt or bigp_geology_metadata_faq.pdf).

    Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:62,500 and United States National Map Accuracy Standards, features are within (horizontally) 31.8 meters or 104.2 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in ArcGIS, QGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
