100+ datasets found
  1. Cloud Optimized Raster Encoding (CORE) format

    • envidat.ch
    • opendata.swiss
    • +1 more
    .sh, json +2
    Updated Jun 4, 2025
    Cite
    Ionut Iosifescu Enescu; Dominik Haas-Artho; Lucia de Espona; Marius Rüetschi (2025). Cloud Optimized Raster Encoding (CORE) format [Dataset]. http://doi.org/10.16904/envidat.230
    Explore at:
    Available download formats: .sh, not available, xml, json
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    Swiss Federal Institute for Forest, Snow and Landscape Research WSL
    Authors
    Ionut Iosifescu Enescu; Dominik Haas-Artho; Lucia de Espona; Marius Rüetschi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Switzerland
    Dataset funded by
    WSL
    Description

    Acknowledgements: The CORE format was proudly inspired by the Cloud Optimized GeoTIFF (COG) format, considering how to leverage the ability of clients to issue HTTP GET range requests for a time series of remote sensing and aerial imagery (instead of just one image).
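The access pattern alluded to above can be sketched in a few lines: a client asks the server for only a byte range of a large remote file instead of downloading it whole. The URL below is hypothetical, and the request is only constructed here, not sent.

```python
import urllib.request

# Build (but do not send) an HTTP request carrying a Range header.
# A server that supports range requests would answer with
# "206 Partial Content" and only the requested bytes.
req = urllib.request.Request("https://example.org/core_timeseries.mp4")  # hypothetical URL
req.add_header("Range", "bytes=0-4095")  # first 4 KiB, e.g. the region holding the moov atom

print(req.get_header("Range"))  # → bytes=0-4095
```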

    License: The Cloud Optimized Raster Encoding (CORE) specifications are released to the public domain under a Creative Commons 1.0 CC0 "No Rights Reserved" international license. You can reuse the information contained herein in any way you want, for any purposes and without restrictions.

    Summary: The Cloud Optimized Raster Encoding (CORE) format is being developed for the efficient storage and management of gridded data by applying video encoding algorithms. It is mainly designed for the exchange and preservation of large time-series data in environmental data repositories, while at the same time enabling more efficient workflows on the cloud. It can be applied to any large number of similar (in pixel size and image dimensions) raster data layers. CORE is not designed to replace COG but to work together with COG for a collection of many layers (e.g. by offering a fast preview of layers when switching between layers of a time series).

    WARNING: Currently only applicable to RGB/Byte imagery. The final CORE specifications may differ substantially from what is written herein, or CORE may never become productive, for a myriad of reasons (see also 'Major issues to be solved'). With this early public sharing of the format we explicitly support the Open Science agenda, which implies "shifting from the standard practices of publishing research results in scientific publications towards sharing and using all available knowledge at an earlier stage in the research process" (quote from: European Commission, Directorate General for Research and Innovation, 2016. Open innovation, open science, open to the world).

    CORE Specifications:
    1) an MP4 or WebM digital multimedia container format (or any future video container playable as HTML video in major browsers);
    2) a free-to-use or open video compression codec such as H.264, VP9, or AV1 (or any future video codec that is open source or free to use for end users). Note: H.264 is currently recommended because of its wide usage and support in all major browsers, fast encoding due to hardware acceleration (currently not the case for AV1 or VP9), and the fact that MPEG LA has allowed free use for streaming video that is free to end users. However, please note that H.264 is restricted by patents, and its use in proprietary or commercial software requires the payment of royalties to MPEG LA. When AV1 matures and hardware-accelerated encoding becomes available, AV1 is expected to offer 30% to 50% smaller file sizes than H.264 at the same quality;
    3) an encoding frame rate of one frame per second (fps), with each layer segmented in internal tiles, similar to COG, ordered by the main use case when accessing the data: either layer-contiguous or tile-contiguous. Note: the internal tile arrangement should support easy navigation inside the CORE video format, depending on the use case;
    4) a CORE file is optimised for streaming, with the moov atom at the beginning of the file (e.g. with -movflags faststart) and optional additional optimisations depending on the codec used (e.g. -tune fastdecode -tune zerolatency for H.264);
    5) metadata tags inside the moov atom for describing and using geographic image data (preferably compatible with the OGC GeoTIFF standard or any future standard accepted by the geospatial community), as well as a list of the original file names corresponding to each CORE layer;
    6) it encodes similar source rasters (such as time series of rasters with the same extent and resolution, or different tiles of the same product); each input raster must have the same image and pixel size;
    7) it provides a mechanism for addressing and requesting overviews (lower-resolution data) for fast display in web browsers depending on the map scale (currently external overviews).

    Major issues to be solved:
    - Internal overviews (similar to COG), by chaining lower-resolution videos in the same MP4 container for fast access to overviews first. Currently, overviews are kept as separate files (external overviews).
    - Metadata encoding: how to best encode the spatial extent, layer names, and so on for each layer inside the series, which may have a different geographical extent. Known issues: adding too many tags with FFmpeg which are not part of the standard MP4 moov atom; metadata tags have a limited string length.
    - Applicability beyond RGB/Byte datasets: defining a standard way of converting cell values from Int16/UInt16/UInt32/Int32/Float32/Float64 data types into multi-band Byte values (and reconstructing them back to the original data type within acceptable thresholds).

    Example Notice: The provided CORE (.mp4) examples contain modified Copernicus Sentinel data [2018-2021]. To generate the CORE examples provided, 50 original Sentinel-2 (S-2) TCI images from an area inside Switzerland were downloaded from www.copernicus.eu and then transformed into the CORE format using FFmpeg with H.264 encoding via the x264 library.

    DISCLAIMER: Basic scripts are provided for the Geomatics peer review (in 2021) and kept as additional information for the dataset. Nevertheless, please note that software dependencies and libraries, as well as cloud storage paths, may quickly become deprecated over time (after 2021). For compatibility, stable dependencies and libraries released around 2020 should be used.
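As a sketch of how spec items 2) to 4) translate into an ffmpeg invocation, the command line for building a CORE file from a numbered image sequence might be assembled as follows. The input pattern and output name are hypothetical; the flags follow the spec text above.

```python
# Sketch only: assemble the ffmpeg invocation described by the CORE spec
# (H.264 encoding, one frame per second, moov atom moved to the front).
# The input pattern "s2_tci_%03d.png" and the output name are hypothetical.
cmd = [
    "ffmpeg",
    "-framerate", "1",            # spec 3): one layer (frame) per second
    "-i", "s2_tci_%03d.png",      # numbered source rasters of identical size
    "-c:v", "libx264",            # spec 2): H.264 via the x264 library
    "-pix_fmt", "yuv420p",        # RGB/Byte imagery only, per the WARNING above
    "-tune", "fastdecode",        # spec 4): codec-specific optimisation
    "-movflags", "faststart",     # spec 4): streaming-friendly MP4 layout
    "core_timeseries.mp4",
]
print(" ".join(cmd))
```

The list could be handed to `subprocess.run(cmd)` on a machine with FFmpeg installed; it is shown here only to make the spec's flags concrete.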

  2. Geospatial data for the Vegetation Mapping Inventory Project of Pictured...

    • catalog.data.gov
    Updated Nov 25, 2025
    Cite
    National Park Service (2025). Geospatial data for the Vegetation Mapping Inventory Project of Pictured Rocks National Lakeshore [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-pictured-rocks-national-la
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service: http://www.nps.gov/
    Area covered
    Pictured Rocks
    Description

    The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. The current format is an ArcGIS file geodatabase, but older formats may exist as shapefiles. We converted the photointerpreted data into a format usable in a geographic information system (GIS) by employing three fundamental processes: (1) orthorectify, (2) digitize, and (3) develop the geodatabase. All digital map automation was projected in Universal Transverse Mercator (UTM), Zone 16, using the North American Datum of 1983 (NAD83).

    Orthorectify: We orthorectified the interpreted overlays by using OrthoMapper, a softcopy photogrammetric software for GIS. One function of OrthoMapper is to create orthorectified imagery from scanned and unrectified imagery (Image Processing Software, Inc., 2002). The software features a method of visual orientation involving a point-and-click operation that uses existing orthorectified horizontal and vertical base maps. Of primary importance to us, OrthoMapper also has the capability to orthorectify the photointerpreted overlays of each photograph based on the reference information provided.

    Digitize: To produce a polygon vector layer for use in ArcGIS (Environmental Systems Research Institute [ESRI], Redlands, California), we converted each raster-based image mosaic of orthorectified overlays containing the photointerpreted data into a grid format by using ArcGIS. In ArcGIS, we used the ArcScan extension to trace the raster data and produce ESRI shapefiles. We digitally assigned map-attribute codes (both map-class codes and physiognomic modifier codes) to the polygons and checked the digital data against the photointerpreted overlays for line and attribute consistency. Ultimately, we merged the individual layers into a seamless layer.

    Geodatabase: At this stage, the map layer has only map-attribute codes assigned to each polygon. To assign meaningful information to each polygon (e.g., map-class names, physiognomic definitions, links to NVCS types), we produced a feature-class table along with other supportive tables and related them together via an ArcGIS geodatabase. This geodatabase also links the map to other feature-class layers produced from this project, including vegetation sample plots, accuracy assessment (AA) sites, aerial photo locations, and project boundary extent. A geodatabase provides access to a variety of interlocking data sets, is expandable, and equips resource managers and researchers with a powerful GIS tool.

  3. Open-Source Spatial Analytics (R) - Datasets - AmericaView - CKAN

    • ckan.americaview.org
    Updated Sep 10, 2022
    Cite
    ckan.americaview.org (2022). Open-Source Spatial Analytics (R) - Datasets - AmericaView - CKAN [Dataset]. https://ckan.americaview.org/dataset/open-source-spatial-analytics-r
    Explore at:
    Dataset updated
    Sep 10, 2022
    Dataset provided by
    CKAN: https://ckan.org/
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this course, you will learn to work within the free and open-source R environment with a specific focus on working with and analyzing geospatial data. We will cover a wide variety of data and spatial data analytics topics, and you will learn how to code in R along the way. The Introduction module provides more background about the course and course setup.

    This course is designed for someone with some prior GIS knowledge. For example, you should know the basics of working with maps, map projections, and vector and raster data. You should be able to perform common spatial analysis tasks and make map layouts. If you do not have a GIS background, we recommend checking out the West Virginia View GIScience class. We do not assume that you have any prior experience with R or with coding, so don't worry if you haven't developed these skill sets yet; that is a major goal of this course.

    Background material is provided through code examples, videos, and presentations, and assignments offer hands-on learning opportunities. Data links for the lecture modules are provided within each module, while data for the assignments are linked to the assignment buttons below. Please see the sequencing document for our suggested order in which to work through the material.

    After completing this course you will be able to:
    - prepare, manipulate, query, and generally work with data in R
    - perform data summarization, comparisons, and statistical tests
    - create quality graphs, map layouts, and interactive web maps to visualize data and findings
    - present your research, methods, results, and code as web pages to foster reproducible research
    - work with spatial data in R
    - analyze vector and raster geospatial data to answer a question with a spatial component
    - make spatial models and predictions using regression and machine learning
    - code in the R language at an intermediate level

  4. Yellowstone Sample Collection - database

    • data.usgs.gov
    • gimi9.com
    • +1 more
    Updated Jul 26, 2023
    Cite
    Joel Robinson; Emma Mcconville; Mark Szymanski; Robert Christiansen (2023). Yellowstone Sample Collection - database [Dataset]. http://doi.org/10.5066/P94JTACV
    Explore at:
    Dataset updated
    Jul 26, 2023
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Authors
    Joel Robinson; Emma Mcconville; Mark Szymanski; Robert Christiansen
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    1965 - 2001
    Description

    This database was prepared using a combination of materials that include aerial photographs, topographic maps (1:24,000 and 1:250,000), field notes, and a sample catalog. Our goal was to translate sample collection site locations at Yellowstone National Park and surrounding areas into a GIS database. This was achieved by transferring site locations from aerial photographs and topographic maps into layers in ArcMap. Each field site is located based on field notes describing where a sample was collected. Locations were marked on the photograph or topographic map by a pinhole or dot, respectively, with the corresponding station or site numbers. Station and site numbers were then referenced in the notes to determine the appropriate prefix for the station. Each point on the aerial photograph or topographic map was relocated on the screen in ArcMap, on a digital topographic map, or an aerial photograph. Several samples are present in the field notes and in the catalog but do not corresp ...

  5. State Class Rasters (Land Use and Land Cover per Year and Scenario)

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Nov 27, 2025
    Cite
    U.S. Geological Survey (2025). State Class Rasters (Land Use and Land Cover per Year and Scenario) [Dataset]. https://catalog.data.gov/dataset/state-class-rasters-land-use-and-land-cover-per-year-and-scenario
    Explore at:
    Dataset updated
    Nov 27, 2025
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Description

    This dataset consists of raster geotiff outputs of annual map projections of land use and land cover for the California Central Valley for the period 2011-2101 across 5 future scenarios. Four of the scenarios were developed as part of the Central Valley Landscape Conservation Project. The 4 original scenarios include a Bad-Business-As-Usual (BBAU; high water availability, poor management), California Dreamin’ (DREAM; high water availability, good management), Central Valley Dustbowl (DUST; low water availability, poor management), and Everyone Equally Miserable (EEM; low water availability, good management). These scenarios represent alternative plausible futures, capturing a range of climate variability, land management activities, and habitat restoration goals. We parameterized our models based on close interpretation of these four scenario narratives to best reflect stakeholder interests, adding a baseline Historical Business-As-Usual scenario (HBAU) for comparison. For these future map projections, the model was initialized in 2011 and run forward on an annual time step to 2101. Each filename has the associated scenario ID (scn418 = DUST, scn419 = DREAM, scn420 = HBAU, scn421 = BBAU, and scn426 = EEM), State Class identification as “sc”, model iteration (= it1 in all cases as only 1 Monte Carlo simulation was modeled), and timestep as “ts” information embedded in the file naming convention. For example, the filename scn418.sc.it1.ts2027.tif represents the DUST scenario (scn418), state class information (sc), iteration 1 (it1), for the 2027 model year (ts2027). The full methods and results of this research are described in detail in the parent manuscript "Integrated modeling of climate, land use, and water availability scenarios and their impacts on managed wetland habitat: A case study from California’s Central Valley" (2021).
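The file naming convention described above can be decoded programmatically. The following sketch is an illustration, not part of the dataset's tooling: the function name is invented, and the pattern is inferred from the example scn418.sc.it1.ts2027.tif given in the description.

```python
import re

# Pattern inferred from the stated convention: scn<scenario>.sc.it<iteration>.ts<year>.tif
PATTERN = re.compile(r"scn(?P<scenario>\d+)\.sc\.it(?P<iteration>\d+)\.ts(?P<year>\d{4})\.tif")

# Scenario IDs listed in the description
SCENARIOS = {418: "DUST", 419: "DREAM", 420: "HBAU", 421: "BBAU", 426: "EEM"}

def parse_state_class_filename(name):
    """Split a State Class raster filename into scenario, iteration, and model year."""
    m = PATTERN.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognised filename: {name}")
    return {
        "scenario": SCENARIOS.get(int(m["scenario"]), int(m["scenario"])),
        "iteration": int(m["iteration"]),
        "year": int(m["year"]),
    }

print(parse_state_class_filename("scn418.sc.it1.ts2027.tif"))
# → {'scenario': 'DUST', 'iteration': 1, 'year': 2027}
```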

  6. Previous mineral-resource assessment data compilation - geodatabases with...

    • catalog.data.gov
    • data.usgs.gov
    • +2 more
    Updated Oct 2, 2025
    Cite
    U.S. Geological Survey (2025). Previous mineral-resource assessment data compilation - geodatabases with raster mosaic datasets [Dataset]. https://catalog.data.gov/dataset/previous-mineral-resource-assessment-data-compilation-geodatabases-with-raster-mosaic-data
    Explore at:
    Dataset updated
    Oct 2, 2025
    Dataset provided by
    U.S. Geological Survey
    Description

    This zip file contains geodatabases with raster mosaic datasets. The raster mosaic datasets consist of georeferenced tiff images of mineral potential maps, their associated metadata, and descriptive information about the images. These images are duplicates of the images found in the georeferenced tiff images zip file. There are four geodatabases containing the raster mosaic datasets, one for each of the four SaMiRA report areas: North-Central Montana; North-Central Idaho; Southwestern and South-Central Wyoming and Bear River Watershed; and Nevada Borderlands. The georeferenced images were clipped to the extent of the map, and all explanatory text gathered from map explanations or report text was imported into the raster mosaic dataset database as ‘Footprint’ layer attributes. The data compiled into the ‘Footprint’ layer tables contain the figure caption from the original map, an online linkage to the source report when available, and information on the assessed commodities according to the legal definition of mineral resources—metallic, non-metallic, leasable non-fuel, leasable fuel, geothermal, paleontological, and saleable.

    To use the raster mosaic datasets in ArcMap, click “Add Data”, double-click the [filename].gdb, and add the item titled [filename]_raster_mosaic. This adds all of the images within the geodatabase as part of the raster mosaic dataset. Once added to ArcMap, the raster mosaic dataset appears as a group of three layers:
    - ‘Boundary’: a single polygon representing the extent of all images in the dataset.
    - ‘Footprint’: polygons representing the extent of each individual image, along with the attribute table data associated with each image.
    - ‘Image’: the images themselves.

    The images are overlapping and must be selected and locked, or queried, in order to be viewed one at a time. Images can be selected from the attribute table or with the direct select tool. When using the direct select tool, you will need to deselect the ‘overviews’ after clicking on an image or group of images: right-click the ‘Footprint’ layer, hover over ‘Selection’, then click ‘Reselect Only Primary Rasters’. To lock a selected image, right-click the ‘Footprint’ layer in the table of contents window, hover over ‘Selection’, then click ‘Lock To Selected Rasters’. Another way to view a single image is to run a definition query on it: right-click the raster mosaic in the table of contents, open the layer properties box, click the ‘Definition Query’ tab, and create a query for the desired image.

  7. Natural Earth Counties of the United States - larger scale

    • hub.arcgis.com
    Updated Jun 18, 2025
    Cite
    Esri SDI (2025). Natural Earth Counties of the United States - larger scale [Dataset]. https://hub.arcgis.com/datasets/1cccf99d65b641819f83e9963f14d37f
    Explore at:
    Dataset updated
    Jun 18, 2025
    Dataset provided by
    Esri: http://esri.com/
    Authors
    Esri SDI
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Polygon layer representing United States counties with name attributes.

    About Natural Earth
    Natural Earth is a convenient resource for creating custom maps. Unlike other map data intended for analysis or detailed government mapping, it is designed to meet the needs of cartographers and designers to make generalized maps. Maximum flexibility is a goal. Natural Earth is a public domain collection of map datasets available at 1:10 million (larger scale/more detailed), 1:50 million (medium scale/moderate detail), and 1:110 million (small scale/coarse detail) scales. It features tightly integrated vector and raster data to create a variety of visually pleasing, well-crafted maps with cartography or GIS software. Natural Earth data is made possible by many volunteers and supported by the North American Cartographic Information Society (NACIS).

    Convenience – Natural Earth solves a problem: finding suitable data for making small-scale maps. In a time when the web is awash in geospatial data, cartographers are forced to waste time sifting through confusing tangles of poorly attributed data to make clean, legible maps. Because your time is valuable, Natural Earth data comes ready to use.

    Neatness counts – The carefully generalized linework maintains consistent, recognizable geographic shapes at 1:10m, 1:50m, and 1:110m scales. Natural Earth was built from the ground up, so you will find that all data layers align precisely with one another. For example, where rivers and country borders are one and the same, the lines are coincident.

    GIS attributes – Natural Earth, however, is more than just a collection of pretty lines. The data attributes are equally important for mapmaking. Most data contain embedded feature names, which are ranked by relative importance. Other attributes facilitate faster map production, such as width attributes assigned to river segments for creating tapers.

    Intelligent data – The attributes assigned to Natural Earth vectors make for efficient mapmaking. Most lines and areas contain embedded feature names, which are ranked by relative importance. Up to eight rankings per data theme allow easy custom map “mashups” to emphasize your subject while de-emphasizing reference features. Other attributes focus on map design. For example, width attributes assigned to rivers allow you to create tapered drainages. Assigning different colors to contiguous country polygons is another task made easier thanks to data attribution.

    Other key features:
    - Vector features include name attributes and bounding box extents. Know that the Rocky Mountains are larger than the Ozarks.
    - Large polygons, such as bathymetric layers, are split for more efficient data handling.
    - Projection-friendly vectors precisely match at 180 degrees longitude. Lines contain enough data points for smooth bending in conic projections, but not so many that computer processing speed suffers.
    - Raster data includes grayscale-shaded relief and cross-blended hypsometric tints derived from the latest NASA SRTM Plus elevation data and tailored to register with Natural Earth Vector.
    - Optimized for use in web mapping applications, with built-in scale attributes to assist features to be shown at different zoom levels.

  8. U.S. block-level population density rasters for 1990, 2000, and 2010

    • data.usgs.gov
    • dataone.org
    • +2 more
    Updated Jan 6, 2025
    Cite
    James Falcone (2025). U.S. block-level population density rasters for 1990, 2000, and 2010 [Dataset]. http://doi.org/10.5066/F74J0C6M
    Explore at:
    Dataset updated
    Jan 6, 2025
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Authors
    James Falcone
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Area covered
    United States
    Description

    This dataset consists of three raster datasets representing population density for the years 1990, 2000, and 2010. All three rasters are based on block-level census geography data. The 1990 and 2000 data are derived from data normalized to 2000 block boundaries, while the 2010 data are based on 2010 block boundaries. The 1990 and 2000 data are rasters at 100-meter (m) resolution, while the 2010 data are at 60-m resolution. See details about each dataset in the specific metadata for each raster.

  9. 7500-ft Tiling Index for King County Raster Data / idxp7500 area

    • hub.arcgis.com
    • king-snocoplanning.opendata.arcgis.com
    • +1 more
    Updated Nov 2, 2017
    Cite
    King County (2017). 7500-ft Tiling Index for King County Raster Data / idxp7500 area [Dataset]. https://hub.arcgis.com/datasets/e8f1c18822b543daa42ad57b5857921e
    Explore at:
    Dataset updated
    Nov 2, 2017
    Dataset authored and provided by
    King County
    Description

    A spatial tiling index designed for storage of file-based image and other raster (i.e., LiDAR elevation, landcover) data sets. It is a regular grid with its origin at 0,0 of the Washington North State Plane Coordinate System, with grid cells defined by orthogonal bounds 7500 feet long in easting and in northing. Only those cells currently involved in one of several image/raster data projects for King County and southwestern Snohomish County are labelled, though the labeling scheme can be extended. The name of the spatial index is derived from the acronym (I)n(D)e(X) (P)olygons at the (7500) foot tile level, or idxp7500. A cell label is a row-id concatenated with a column-id, generating a four-character identifier that uniquely identifies every cell. The row portion of the identifier is a two-character alpha code of the format aa, ab, ac, ..., ba, bb, etc., and the column portion is a two-digit integer value such as 01, 02, 03, ..., 11, 12, 13, etc. A composite cell identifier would then be, for example, aa01, aa02, ..., ba11, ba12, etc. Not all image and raster data is stored at the tiling level represented by this index. Data is stored at this level if full-resolution, uncompressed data would generate larger-than-manageable file sizes at a larger tile size.
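The labelling scheme above can be sketched as a small function. This is an illustration under the stated convention (two-letter row code plus two-digit column number), not part of the county's tooling; the 0-based indexing is an assumption.

```python
import string

def cell_label(row, col):
    """Compose an idxp7500-style cell label from 0-based grid indices.

    The row becomes a two-letter code (aa, ab, ..., az, ba, ...) and the
    column a two-digit, 1-based number (01, 02, ...), concatenated into a
    four-character identifier.
    """
    first = string.ascii_lowercase[row // 26]
    second = string.ascii_lowercase[row % 26]
    return f"{first}{second}{col + 1:02d}"

print(cell_label(0, 0))    # → aa01
print(cell_label(26, 10))  # → ba11
```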

  10. Virtual GDAL/OGR Geospatial Data Format

    • hydroshare.org
    • search.dataone.org
    zip
    Updated May 8, 2018
    Cite
    Tim Cera (2018). Virtual GDAL/OGR Geospatial Data Format [Dataset]. https://www.hydroshare.org/resource/228394bfdc084cb9a21d6c168ed4264e
    Explore at:
    Available download formats: zip (2.3 MB)
    Dataset updated
    May 8, 2018
    Dataset provided by
    HydroShare
    Authors
    Tim Cera
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The GDAL/OGR libraries are open-source geospatial libraries that work with a wide range of raster and vector data sources. One of the many impressive features of the GDAL/OGR libraries is the ViRTual (VRT) format: an XML description of how to transform raster or vector data sources on the fly into a new dataset. The transformations include mosaicking, re-projection, look-up tables (raster), changing the data type (raster), and SQL SELECT commands (vector). VRTs can be used by GDAL/OGR functions and utilities as if they were an original source, even allowing for chaining of functionality; for example, a VRT can mosaic hundreds of VRTs that each use look-up tables to transform original GeoTIFF files. We used the VRT format for the presentation of hydrologic model results, allowing thousands of small VRT files representing all components of the monthly water balance to be transformations of a single land cover GeoTIFF file.
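To make the idea concrete, a minimal raster VRT might look like the following sketch. The file name and raster dimensions are hypothetical; the GDAL VRT driver documentation describes the full schema, including the look-up-table and warping elements mentioned above.

```xml
<VRTDataset rasterXSize="512" rasterYSize="512">
  <!-- One output band, backed by band 1 of a hypothetical source file -->
  <VRTRasterBand dataType="Byte" band="1">
    <SimpleSource>
      <SourceFilename relativeToVRT="1">landcover.tif</SourceFilename>
      <SourceBand>1</SourceBand>
    </SimpleSource>
  </VRTRasterBand>
</VRTDataset>
```

Any GDAL utility (e.g. gdalinfo or gdal_translate) accepts such a file in place of the source raster it wraps.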

    Presentation at 2018 AWRA Spring Specialty Conference: Geographic Information Systems (GIS) and Water Resources X, Orlando, Florida, April 23-25, http://awra.org/meetings/Orlando2018/

  11. GRTSmh_base4frac: the raster data source GRTSmaster_habitats converted to...

    • data-staging.niaid.nih.gov
    Updated Mar 24, 2025
    Cite
    Vanderhaeghe, Floris (2025). GRTSmh_base4frac: the raster data source GRTSmaster_habitats converted to base 4 fractions [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_3354401
    Explore at:
    Dataset updated
    Mar 24, 2025
    Dataset provided by
    Research Institute for Nature and Forest (INBO)
    Authors
    Vanderhaeghe, Floris
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The data source file is a monolayered GeoTIFF in the FLT8S datatype. In GRTSmh_base4frac, the decimal (i.e. base 10) integer values from the raster data source GRTSmaster_habitats (link) have been converted into base 4 fractions, using a precision of 13 digits behind the decimal mark (as needed to cope with the range of values). For example, the integer 16 (= 4^2) has been converted into 0.0000000000100 and 4^12 has been converted into 0.1000000000000.
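The conversion described above can be sketched in a few lines: write the integer in base 4, left-pad to 13 digits, and read the result as the digits behind the decimal mark (i.e. divide by 4^13). The function name is illustrative, not part of the dataset's tooling.

```python
DIGITS = 13  # precision stated in the description

def to_base4_fraction(n, digits=DIGITS):
    """Convert a non-negative integer to a base 4 fraction string."""
    if n >= 4 ** digits:
        raise ValueError("value out of range for the chosen precision")
    base4 = ""
    while n:                       # repeated division gives base 4 digits
        base4 = str(n % 4) + base4
        n //= 4
    return "0." + base4.rjust(digits, "0")

print(to_base4_fraction(16))       # → 0.0000000000100  (16 = 4**2)
print(to_base4_fraction(4 ** 12))  # → 0.1000000000000
```

Both printed values match the worked examples in the description.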

    Long base 4 fractions appear to be handled and stored more easily than long base 4 integers. This approach follows Stevens & Olsen (2004) in representing the reverse hierarchical order of a GRTS sample as base-4-fraction addresses.

    See R-code in the GitHub repository 'n2khab-preprocessing' at commit ecadaf5 for the creation from the GRTSmaster_habitats data source.

    A reading function to return the data source in a standardized way into the R environment is provided by the R-package n2khab.

    Beware that not all GRTS ranking numbers are present in the data source, as the original GRTS raster has been clipped with the Flemish outer borders (i.e., not excluding the Brussels Capital Region).

  12. Geospatial Data Pack for Visualization

    • kaggle.com
    zip
    Updated Oct 21, 2025
    Cite
    Vega Datasets (2025). Geospatial Data Pack for Visualization [Dataset]. https://www.kaggle.com/datasets/vega-datasets/geospatial-data-pack
    Explore at:
    Available download formats: zip (1422109 bytes)
    Dataset updated
    Oct 21, 2025
    Dataset authored and provided by
    Vega Datasets
    Description

    Geospatial Data Pack for Visualization 🗺️

    Learn Geographic Mapping with Altair, Vega-Lite and Vega using Curated Datasets

    Complete geographic and geophysical data collection for mapping and visualization. This consolidation includes 18 complementary datasets used by 31+ Vega, Vega-Lite, and Altair examples 📊. Perfect for learning geographic visualization techniques including projections, choropleths, point maps, vector fields, and interactive displays.

    Source data lives on GitHub and can also be accessed via CDN. The vega-datasets project serves as a common repository for example datasets used across these visualization libraries and related projects.

    Why Use This Dataset? 🤔

    • Comprehensive Geospatial Types: Explore a variety of core geospatial data models:
      • Vector Data: Includes points (like airports.csv), lines (like londonTubeLines.json), and polygons (like us-10m.json).
      • Raster-like Data: Work with gridded datasets (like windvectors.csv, annual-precip.json).
    • Diverse Formats: Gain experience with standard and efficient geospatial formats like GeoJSON (see Table 1, 2, 4), compressed TopoJSON (see Table 1), and plain CSV/TSV (see Table 2, 3, 4) for point data and attribute tables ready for joining.
    • Multi-Scale Coverage: Practice visualization across different geographic scales, from global and national (Table 1, 4) down to the city level (Table 1).
    • Rich Thematic Mapping: Includes multiple datasets (Table 3) specifically designed for joining attributes to geographic boundaries (like states or counties from Table 1) to create insightful choropleth maps.
    • Ready-to-Use & Example-Driven: Cleaned datasets tightly integrated with 31+ official examples (see Appendix) from Altair, Vega-Lite, and Vega, allowing you to immediately practice techniques like projections, point maps, network maps, and interactive displays.
    • Python Friendly: Works seamlessly with essential Python libraries like Altair (which can directly read TopoJSON/GeoJSON), Pandas, and GeoPandas, fitting perfectly into the Kaggle notebook environment.


    Dataset Inventory 🗂️

    This pack includes 18 datasets covering base maps, reference points, statistical data for choropleths, and geophysical data.

    1. BASE MAP BOUNDARIES (Topological Data)

    Dataset | File | Size | Format | License | Description | Key Fields / Join Info
    US Map (1:10m) | us-10m.json | 627 KB | TopoJSON | CC-BY-4.0 | US state and county boundaries. Contains states and counties objects. Ideal for choropleths. | id (FIPS code) property on geometries
    World Map (1:110m) | world-110m.json | 117 KB | TopoJSON | CC-BY-4.0 | World country boundaries. Contains countries object. Suitable for world-scale viz. | id property on geometries
    London Boroughs | londonBoroughs.json | 14 KB | TopoJSON | CC-BY-4.0 | London borough boundaries. | properties.BOROUGHN (name)
    London Centroids | londonCentroids.json | 2 KB | GeoJSON | CC-BY-4.0 | Center points for London boroughs. | properties.id, properties.name
    London Tube Lines | londonTubeLines.json | 78 KB | GeoJSON | CC-BY-4.0 | London Underground network lines. | properties.name, properties.color

    2. GEOGRAPHIC REFERENCE POINTS (Point Data) 📍

    Dataset | File | Size | Format | License | Description | Key Fields / Join Info
    US Airports | airports.csv | 205 KB | CSV | Public Domain | US airports with codes and coordinates. | iata, state, `l...
  13. Data from: 10-m backscatter mosaic produced from backscatter intensity data...

    • catalog.data.gov
    • search.dataone.org
    • +4more
    Updated Nov 26, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). 10-m backscatter mosaic produced from backscatter intensity data from sidescan sonar and multibeam datasets (BS_composite_10m.tif GeoTIFF Image; UTM, Zone 19N, WGS 84) [Dataset]. https://catalog.data.gov/dataset/10-m-backscatter-mosaic-produced-from-backscatter-intensity-data-from-sidescan-sonar-and-m
    Explore at:
    Dataset updated
    Nov 26, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    These data are qualitatively derived interpretive polygon shapefiles and selected source raster data defining surficial geology, sediment type and distribution, and physiographic zones of the sea floor from Nahant to Northern Cape Cod Bay. Much of the geophysical data used to create the interpretive layers were collected under a cooperative agreement among the Massachusetts Office of Coastal Zone Management (CZM), the U.S. Geological Survey (USGS), Coastal and Marine Geology Program, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Army Corps of Engineers (USACE). Initiated in 2003, the primary objective of this program is to develop regional geologic framework information for the management of coastal and marine resources. Accurate data and maps of seafloor geology are important first steps toward protecting fish habitat, delineating marine resources, and assessing environmental changes because of natural or human effects. The project is focused on the inshore waters of coastal Massachusetts. Data collected during the mapping cooperative involving the USGS have been released in a series of USGS Open-File Reports (http://woodshole.er.usgs.gov/project-pages/coastal_mass/html/current_map.html). The interpretations released in this study are for an area extending from the southern tip of Nahant to Northern Cape Cod Bay, Massachusetts. A combination of geophysical and sample data including high resolution bathymetry and lidar, acoustic-backscatter intensity, seismic-reflection profiles, bottom photographs, and sediment samples are used to create the data interpretations. Most of the nearshore geophysical and sample data (including the bottom photographs) were collected during several cruises between 2000 and 2008. More information about the cruises and the data collected can be found at the Geologic Mapping of the Seafloor Offshore of Massachusetts Web page: http://woodshole.er.usgs.gov/project-pages/coastal_mass/.

  14. Wadi Hasa Sample Dataset — GRASS GIS Location

    • zenodo.org
    txt, zip
    Updated Sep 19, 2025
    Cite
    Isaac Ullah; Isaac Ullah; C Michael Barton; C Michael Barton (2025). Wadi Hasa Sample Dataset — GRASS GIS Location [Dataset]. http://doi.org/10.5281/zenodo.17162040
    Explore at:
    txt, zip
    Dataset updated
    Sep 19, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Isaac Ullah; Isaac Ullah; C Michael Barton; C Michael Barton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Wadi Hasa Sample Dataset — GRASS GIS Location
    Version 1.0 (2025-09-19)

    Overview
    --------
    This archive contains a complete GRASS GIS *Location* for the Wadi Hasa region (Jordan), including base data and exemplar analyses used in the Geomorphometry chapter. It is intended for teaching and reproducible research in archaeological GIS.

    How to use
    ----------
    1) Unzip the archive into your GRASSDATA directory (or a working folder) and add the Location to your GRASS session.
    2) Start GRASS and open the included workspace (Workspace.gxw) or choose a Mapset to work in.
    3) Set the computational region to the default extent/resolution for reproducibility:
    g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
    4) Inspect layers as needed:
    g.list type=rast,vector
    r.info

    Citation & License
    ------------------
    Please cite this dataset as:

    Isaac I. Ullah. 2025. *Wadi Hasa Sample Dataset (GRASS GIS Location)*. Zenodo. https://doi.org/10.5281/zenodo.17162040

    All contents are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. The original Wadi Hasa survey dataset is available at: https://figshare.com/articles/dataset/Wadi_Hasa_Ancient_Pastoralism_Project/1404216

    Coordinate Reference System
    ---------------------------
    - Projection: UTM, Zone 36N
    - Datum/Ellipsoid: WGS84
    - Units: meter
    - Coordinate system and units are defined in the GRASS Location (PROJ_INFO/UNITS).

    Default Region (computational extent & resolution)
    --------------------------------------------------
    - North: 3444220
    - South: 3405490
    - East: 796210
    - West: 733450
    - Resolution: 30 (NS), 30 (EW)
    - Rows x Cols: 1291 x 2092 (cells: 2700772)

    Directory / Mapset Structure
    ----------------------------
    This Location contains the following Mapsets (data subprojects), each with its own raster/vector layers and attribute tables (SQLite):
    - Boolean_Predictive_Modeling: 8 raster(s), 4 vector(s)
    - ISRIC_soilgrid: 31 raster(s), 0 vector(s)
    - Landsat_Imagery: 3 raster(s), 0 vector(s)
    - Landscape_Evolution_Modeling: 41 raster(s), 0 vector(s)
    - Least_Cost_Analysis: 13 raster(s), 4 vector(s)
    - Machine_Learning_Predictive_Modeling: 70 raster(s), 11 vector(s)
    - PERMANENT: 4 raster(s), 2 vector(s)
    - Sentinel2_Imagery: 4 raster(s), 0 vector(s)
    - Site_Buffer_Analysis: 0 raster(s), 2 vector(s)
    - Terrain_Analysis: 27 raster(s), 2 vector(s)
    - Territory_Modeling: 14 raster(s), 2 vector(s)
    - Trace21k_Paleoclimate_Downscale_Example: 4 raster(s), 2 vector(s)
    - Visibility_Analysis: 11 raster(s), 5 vector(s)

    Data Content (summary)
    ----------------------
    - Total raster maps: 230
    - Total vector maps: 34

    Raster resolutions present:
    - 10 m: 13 raster(s)
    - 30 m: 183 raster(s)
    - 208.01 m: 2 raster(s)
    - 232.42 m: 30 raster(s)
    - 1000 m: 2 raster(s)

    Major content themes include:
    - Base elevation surfaces and terrain derivatives (e.g., DEMs, slope, aspect, curvature, flow accumulation, prominence).
    - Hydrology, watershed, and stream-related layers.
    - Visibility analyses (viewsheds; cumulative viewshed analyses for Nabataean and Roman towers).
    - Movement and cost-surface analyses (isotropic/anisotropic costs, least-cost paths, time-to-travel surfaces).
    - Predictive modeling outputs (boolean/inductive/deductive; regression/classification surfaces; training/test rasters).
    - Satellite imagery products (Landsat NIR/RED/NDVI; Sentinel‑2 bands and RGB composite).
    - Soil and surficial properties (ISRIC SoilGrids 250 m products).
    - Paleoclimate downscaling examples (CHELSA TraCE21k MAT/AP).

    Vectors include:
    - Archaeological point datasets (e.g., WHS_sites, WHNBS_sites, Nabatean_Towers, Roman_Towers).
    - Derived training/testing samples and buffer polygons for modeling.
    - Stream network and paths from least-cost analyses.

    Important notes & caveats
    -------------------------
    - Mixed resolutions: Analyses span 10 m (e.g., Sentinel‑2 composites, some derived surfaces), 30 m (majority of terrain and modeling rasters), ~232 m (SoilGrids products), and 1 km (CHELSA paleoclimate). Set the computational region appropriately (g.region) before processing or visualization.
    - NoData handling: The raw SRTM import (Hasa_30m_SRTM) reports extreme min/max values caused by nodata placeholders. Use the clipped/processed DEMs (e.g., Hasa_30m_clipped_wshed*) and/or set nodata with r.null as needed.
    - Masks: MASK rasters are provided for analysis subdomains where relevant.
    - Attribute tables: Vector attribute data are stored in per‑Mapset SQLite databases (sqlite/sqlite.db) and connected via layer=1.

    Provenance (brief)
    ------------------
    - Primary survey points and site datasets derive from the Wadi Hasa projects (see Figshare record above).
    - Base elevation and terrain derivatives are built from SRTM and subsequently processed/clipped for the watershed.
    - Soil variables originate from ISRIC SoilGrids (~250 m).
    - Paleoclimate examples use CHELSA TraCE21k surfaces (1 km) that are interpolated to higher resolutions for demonstration.
    - Satellite imagery layers are derived from Landsat and Sentinel‑2 scenes.

    Reproducibility & quick commands
    --------------------------------
    - Restore default region: g.region n=3444220 s=3405490 e=796210 w=733450 nsres=30 ewres=30 -p
    - Set region to a raster: g.region raster=

    Change log
    ----------
    - v1.0: Initial public release of the teaching Location on Zenodo (CC BY 4.0).

    Contact
    -------
    For questions, corrections, or suggestions, please contact Isaac I. Ullah

  15. Gravel Percent Raster

    • opdgig.dos.ny.gov
    • new-york-opd-geographic-information-gateway-nysdos.hub.arcgis.com
    Updated Mar 24, 2025
    + more versions
    Cite
    New York State Department of State (2025). Gravel Percent Raster [Dataset]. https://opdgig.dos.ny.gov/maps/2dc7c169fc3740df97ab9a87d44e3921
    Explore at:
    Dataset updated
    Mar 24, 2025
    Dataset authored and provided by
    New York State Department of State
    Area covered
    Description

    Processed results from surface grain size analysis of the sediment grab samples recovered as part of the Long Island Sound mapping project Phase II.

    Sediment grab samples were taken in the summers of 2017 and 2018 using a modified van Veen grab sampler. A sub-sample of the top two centimeters was taken and stored in a jar. Dried sub-samples were analyzed for grain size. First the samples were treated with hydrogen peroxide to remove organic components. Then each sample was passed through a series of standard sieves representing Phi sizes, with the smallest being 64 µm. The content of each sieve was dried and weighed. If there was sufficient fine material (< 64 µm), this fine fraction was further analyzed using a Sedigraph system. The results of sieving and Sedigraph analysis were combined, and the percentages for gravel, sand, silt and clay determined following the Wentworth scale. In addition, other statistics including mean, median, skewness and standard deviation were calculated using the USGS GSSTAT program. The results of the LDEO/Queens College grain size analysis were combined with data collected by the LISMARC group and analyzed by USGS. ArcGIS Pro empirical kriging was used to interpolate values for gravel, sand, silt, clay, and mud percentages, as well as for mean grain size, onto a 50 m raster. The interpolated raster was clipped to fit the extent of the Phase 2 survey area. The final raster data are in GeoTIFF format with UTM 18 N projection.

    Time period of content: 2017-08-01 to 2022-11-16
    Attribute accuracy: The attribute accuracy has not been determined. This raster dataset shows mainly the major trends and patterns of the value distribution in the Phase 2 study area.
    Completeness: The dataset is complete.
    Positional accuracy: The raster resolution is 50 m.
    Attributes: gravel pct raster: Interpolated gravel percent of the sample mass
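    The bucketing of grain diameters into the four broad classes can be sketched in Python. This is a minimal sketch using the standard Wentworth class boundaries (2 mm, 1/16 mm, 1/256 mm); the helper names and the input format are ours, not part of the dataset:

```python
def wentworth_class(diameter_um: float) -> str:
    """Classify a grain diameter (in micrometres) into the four broad
    Wentworth classes used for the percentage calculations."""
    if diameter_um >= 2000:   # > 2 mm
        return "gravel"
    if diameter_um >= 62.5:   # 1/16 mm .. 2 mm
        return "sand"
    if diameter_um >= 3.9:    # 1/256 mm .. 1/16 mm
        return "silt"
    return "clay"             # < 1/256 mm


def class_percentages(fractions: dict[float, float]) -> dict[str, float]:
    """Aggregate sieve/Sedigraph mass fractions {diameter_um: mass_g} into
    percent gravel/sand/silt/clay of the total sample mass."""
    total = sum(fractions.values())
    pct = {"gravel": 0.0, "sand": 0.0, "silt": 0.0, "clay": 0.0}
    for diameter, mass in fractions.items():
        pct[wentworth_class(diameter)] += 100 * mass / total
    return pct
```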

  16. USAID DHS Spatial Data Repository

    • datalumos.org
    delimited
    Updated Mar 26, 2025
    Cite
    USAID (2025). USAID DHS Spatial Data Repository [Dataset]. http://doi.org/10.3886/E224321V1
    Explore at:
    delimited
    Dataset updated
    Mar 26, 2025
    Dataset provided by
    United States Agency for International Development (http://usaid.gov/)
    Authors
    USAID
    License

    https://creativecommons.org/share-your-work/public-domain/pdm

    Time period covered
    1984 - 2023
    Area covered
    World
    Description

    This collection consists of geospatial data layers and summary data at the country and country sub-division levels that are part of USAID's Demographic Health Survey Spatial Data Repository. This collection includes geographically-linked health and demographic data from the DHS Program and the U.S. Census Bureau for mapping in a geographic information system (GIS). The data includes indicators related to: fertility, family planning, maternal and child health, gender, HIV/AIDS, literacy, malaria, nutrition, and sanitation. Each set of files is associated with a specific health survey for a given year for over 90 different countries that were part of the following surveys:
    - Demographic Health Survey (DHS)
    - Malaria Indicator Survey (MIS)
    - Service Provisions Assessment (SPA)
    - Other qualitative surveys (OTH)

    Individual files are named with identifiers that indicate: country, survey year, survey, and in some cases the name of a variable or indicator. A list of the two-letter country codes is included in a CSV file.

    Datasets are subdivided into the following folders:
    - Survey boundaries: polygon shapefiles of administrative subdivision boundaries for countries used in specific surveys.
    - Indicator data: polygon shapefiles and geodatabases of countries and subdivisions with 25 of the most common health indicators collected in the DHS. Estimates generated from survey data.
    - Modeled surfaces: geospatial raster files that represent gridded population and health indicators generated from survey data, for several countries.
    - Geospatial covariates: CSV files that link survey cluster locations to ancillary data (known as covariates) that contain data on topics including population, climate, and environmental factors.
    - Population estimates: spreadsheets and polygon shapefiles for countries and subdivisions with 5-year age/sex group population estimates and projections for 2000-2020 from the US Census Bureau, for designated countries in the PEPFAR program.
    - Workshop materials: a tutorial with sample data for learning how to map health data using DHS SDR datasets with QGIS.

    Documentation that is specific to each dataset is included in the subfolders, and a methodological summary for all of the datasets is included in the root folder as an HTML file. File-level metadata is available for most files.

    Countries for which data are included in the repository: Afghanistan, Albania, Angola, Armenia, Azerbaijan, Bangladesh, Benin, Bolivia, Botswana, Brazil, Burkina Faso, Burundi, Cape Verde, Cambodia, Cameroon, Central African Republic, Chad, Colombia, Comoros, Congo, Congo (Democratic Republic of the), Cote d'Ivoire, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Eswatini (Swaziland), Ethiopia, Gabon, Gambia, Ghana, Guatemala, Guinea, Guyana, Haiti, Honduras, India, Indonesia, Jordan, Kazakhstan, Kenya, Kyrgyzstan, Lesotho, Liberia, Madagascar, Malawi, Maldives, Mali, Mauritania, Mexico, Moldova, Morocco, Mozambique, Myanmar, Namibia, Nepal, Nicaragua, Niger, Nigeria, Pakistan, Papua New Guinea, Paraguay, Peru, Philippines, Russia, Rwanda, Samoa, Sao Tome and Principe, Senegal, Sierra Leone, South Africa, Sri Lanka, Sudan, Tajikistan, Tanzania, Thailand, Timor-Leste, Togo, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, Uzbekistan, Viet Nam, Yemen, Zambia, Zimbabwe

  17. OpenStreetMap+ Land Use / Land Cover classes and administrative regions of...

    • zenodo.org
    • data.niaid.nih.gov
    tiff
    Updated Jul 16, 2024
    Cite
    Martijn Witjes; Martijn Witjes (2024). OpenStreetMap+ Land Use / Land Cover classes and administrative regions of Europe [Dataset]. http://doi.org/10.5281/zenodo.6653917
    Explore at:
    tiff
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Martijn Witjes; Martijn Witjes
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    This dataset contains 23 raster layers of continental Europe at 30 m resolution: land use / land cover classes extracted from OpenStreetMap, administrative areas, and a harmonized building dataset based on OpenStreetMap and Copernicus HRL Imperviousness data.

    The land use / land cover classes are:

    1. buildings.commercial
    2. buildings.industrial
    3. buildings.residential
    4. cemetery
    5. construction.site
    6. dump.site (landfill)
    7. farmland
    8. farmyard
    9. forest
    10. grass
    11. greenhouse
    12. harbour
    13. meadow
    14. military
    15. orchard
    16. quarry
    17. railway
    18. reservoir
    19. road
    20. salt
    21. vineyard

    The land use / land cover data was generated by extracting OSM vector layers from https://download.geofabrik.de/. These were then transformed into a 30 m density raster for each feature type. This was done by first creating a 10 m raster where each pixel intersecting a vector feature was assigned the value 100. These pixels were then aggregated to 30 m resolution by averaging each block of 9 adjacent pixels (3 x 3). This resulted in a 0-100 density layer for the three feature types. Although the digitized building data from OSM offers the highest level of detail, its coverage across Europe is inconsistent. To supplement the building density raster in regions where crowd-sourced OSM building data was unavailable, we combined it with the Copernicus High Resolution Layers (HRL) (obtained from https://land.copernicus.eu/pan-european/high-resolution-layers), filling the non-mapped areas in OSM with the Impervious Built-up 2018 pixel values, which were averaged to 30 m. The probability values produced by the averaged aggregation were integrated in such a way that values between 0 and 100 refer to OSM (lowest and highest probabilities equal to 0 and 100, respectively), and values between 101 and 200 refer to Copernicus HRL (lowest and highest probability equal to 200 and 101, respectively). This results in a raster layer where values closer to 100 are more likely to be buildings than values closer to 0 and 200. Structuring the data in this way allows us to select the higher-probability building pixels in both products with the single boolean expression: pixel > 50 AND pixel < 150.
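    The encoding and the selection rule can be sketched in Python. Note that the exact rescaling of HRL probabilities onto the 101-200 range is not spelled out in the description; the linear mapping below is one plausible reading of it, and the function names are ours:

```python
def combine_building_density(osm_density, hrl_density):
    """Merge OSM building density (0-100, higher = more likely building) with
    the Copernicus HRL Impervious Built-up fallback, which is encoded on
    101-200 with reversed polarity (101 = highest, 200 = lowest probability).

    Assumption: a linear mapping of the HRL probability p in [0, 100]
    onto [200, 101]; the source text does not give the exact formula."""
    if osm_density is not None:
        return osm_density                      # OSM-mapped pixel: keep 0-100
    return 200 - round(hrl_density * 99 / 100)  # fallback pixel: 101-200


def is_likely_building(pixel: int) -> bool:
    """The single boolean expression from the description."""
    return pixel > 50 and pixel < 150
```

With this encoding, OSM pixels above 50 and HRL pixels mapped into 101-149 (i.e. higher HRL probabilities) both satisfy the same test.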

    This dataset is part of the OpenStreetMap+ collection and was used to pre-process the LUCAS/CORINE land use / land cover samples (https://doi.org/10.5281/zenodo.4740691) used to train machine learning models in Witjes et al., 2022 (https://doi.org/10.21203/rs.3.rs-561383/v4).

    Each layer can be viewed interactively on the Open Data Science Europe data viewer at maps.opendatascience.eu.

  18. Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90...

    • zenodo.org
    • data.opendatascience.eu
    • +2more
    bin, png, tiff, xml
    Updated Jul 17, 2024
    + more versions
    Cite
    Markus Neteler; Markus Neteler; Julia Haas; Julia Haas; Markus Metz; Markus Metz (2024). Copernicus Digital Elevation Model (DEM) for Europe at 3 arc seconds (ca. 90 meter) resolution derived from Copernicus Global 30 meter DEM dataset [Dataset]. http://doi.org/10.5281/zenodo.6211701
    Explore at:
    png, bin, xml, tiff
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Markus Neteler; Markus Neteler; Julia Haas; Julia Haas; Markus Metz; Markus Metz
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    Overview:
    The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (approximately 1 arc second). Note that ocean areas do not have tiles; there, height values can be assumed to be zero. Data is provided as Cloud Optimized GeoTIFFs. Note that the vertical unit for measurement of elevation height is meters.

    The Copernicus DEM for Europe at 3 arcsec (0:00:03 = 0.00083333333 ~ 90 meter) in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

    Processing steps:
    The original Copernicus GLO-30 DEM contains a relevant percentage of tiles with non-square pixels. We created a mosaic map in VRT format and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e. importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-width ratio of non-square pixels is up to 1:5. Hence, artefacts between adjacent tiles in rugged terrain could be minimized:

    gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

    In order to reduce the spatial resolution to 3 arc seconds, weighted resampling was performed in GRASS GIS (using r.resamp.stats -w), and the pixel values were scaled by 1000 (storing the pixels as integer values) for data volume reduction. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.
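    The two steps (3 x 3 aggregation and x1000 integer scaling) can be illustrated with a minimal plain-Python sketch. Note this uses a simple unweighted block mean; the actual processing used r.resamp.stats -w, which additionally weights cells by overlap area:

```python
def block_mean(grid, factor=3):
    """Aggregate a 2-D elevation grid by averaging each factor x factor block
    (a crude stand-in for the weighted resampling done in GRASS GIS)."""
    out = []
    for r in range(0, len(grid), factor):
        row = []
        for c in range(0, len(grid[0]), factor):
            block = [grid[r + i][c + j] for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out


def scale_to_int(height_m: float) -> int:
    """Store elevations as integers: metres * 1000 (e.g. 23.220 m -> 23220)."""
    return round(height_m * 1000)


def to_metres(pixel_value: int) -> float:
    """Recover metres a.s.l. from a stored integer pixel value."""
    return pixel_value / 1000
```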

    Projection + EPSG code:
    Latitude-Longitude/WGS84 (EPSG: 4326)

    Spatial extent:
    north: 82:00:30N
    south: 18N
    west: 32:00:30W
    east: 70E

    Spatial resolution:
    3 arc seconds (approx. 90 m)

    Pixel values:
    meters * 1000 (scaled to Integer; example: value 23220 = 23.220 m a.s.l.)

    Software used:
    GDAL 3.2.2 and GRASS GIS 8.0.0 (r.resamp.stats -w; r.relief)

    Original dataset license:
    https://spacedata.copernicus.eu/documents/20126/0/CSCDA_ESA_Mission-specific+Annex.pdf

    Processed by:
    mundialis GmbH & Co. KG, Germany (https://www.mundialis.de/)

  19. 7500-FT Tiling Status for KC Raster Datasets / raststat 7500 area

    • gis-kingcounty.opendata.arcgis.com
    Updated Nov 9, 2017
    + more versions
    Cite
    King County (2017). 7500-FT Tiling Status for KC Raster Datasets / raststat 7500 area [Dataset]. https://gis-kingcounty.opendata.arcgis.com/datasets/b158294b44664ccaa234711e1e10610d
    Explore at:
    Dataset updated
    Nov 9, 2017
    Dataset authored and provided by
    King County
    Area covered
    Description

    For more information about this layer please see the GIS Data Catalog. A spatial tiling index designed for storage of file-based image and other raster (i.e., LiDAR elevation, landcover) data sets. A regular grid with origin at 0,0 of the Washington North State Plane Coordinate System, with grid cells defined by orthogonal bounds 7500 feet long in easting and in northing. Only those cells currently involved in one of several image/raster data projects for King County and southwestern Snohomish County are labelled, though the labeling scheme can be extended. The name of the spatial index is derived from the acronym (I)n(D)e(X) (P)olygons at the (7500) foot tile level, or idxp7500.

    The cell label is a row-id concatenated with a column-id, generating a four-character identifier that uniquely identifies every cell. The row portion of the identifier is a two-character alpha code of the format aa, ab, ac, ..., ba, bb, etc., and the column portion is a two-digit integer value such as 01, 02, 03, ..., 11, 12, 13, etc. A composite cell identifier would be then, for example, aa01, aa02, ..., ba11, ba12, etc.

    Not all image and raster data is stored at the tiling level represented by this index. Data is stored at this level if full-resolution, uncompressed data would generate larger-than-manageable file sizes at a larger tile size.
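    The labeling scheme can be sketched in Python. This is a minimal sketch under the assumption that the row code is a 0-based index (aa = 0) and the column is a 1-based two-digit number; the function name is ours:

```python
LETTERS = "abcdefghijklmnopqrstuvwxyz"


def cell_label(row: int, col: int) -> str:
    """Build a four-character idxp7500 cell identifier: a two-letter row code
    (aa, ab, ..., az, ba, bb, ...) concatenated with a two-digit column."""
    if not 0 <= row < 26 * 26:
        raise ValueError("row index out of range for a two-letter code")
    if not 1 <= col <= 99:
        raise ValueError("column number out of range for two digits")
    return LETTERS[row // 26] + LETTERS[row % 26] + f"{col:02d}"
```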

  20. MX Raster Data For Compression

    • zenodo.org
    tar
    Updated Feb 16, 2024
    Cite
    Jakoncic Jean; Jakoncic Jean (2024). MX Raster Data For Compression [Dataset]. http://doi.org/10.5281/zenodo.7887840
    Explore at:
    tar
    Dataset updated
    Feb 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jakoncic Jean; Jakoncic Jean
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set contains 6 raster experiment data sets performed on and around a large lysozyme crystal sample, using various beam intensities. These data are used to investigate several compression schemes (ratio and speed).


Cloud Optimized Raster Encoding (CORE) format


However, please note that H.264 is restricted by patents, and its use in proprietary or commercial software requires the payment of royalties to MPEG LA. When AV1 matures and hardware-accelerated encoding becomes available, AV1 is expected to offer 30% to 50% smaller file sizes than H.264 at the same quality.
3) an encoding frame rate of one frame per second (fps), with each layer segmented into internal tiles, similar to COG, ordered by the main use case when accessing the data: either layer-contiguous or tile-contiguous
Note: The internal tile arrangement should support easy navigation inside the CORE video format, depending on the use case.
4) a CORE file is optimised for streaming, with the moov atom at the beginning of the file (e.g. with -movflags faststart) and optional additional optimisations depending on the codec used (e.g. -tune fastdecode -tune zerolatency for H.264)
5) metadata tags inside the moov atom for describing and using geographic image data (preferably compatible with the OGC GeoTIFF standard or any future standard accepted by the geospatial community), as well as a list of the original file names corresponding to each CORE layer
6) it encodes similar source rasters (such as time series of rasters with the same extent and resolution, or different tiles of the same product; each input raster must have the same image and pixel size)
7) it provides a mechanism for addressing and requesting overviews (lower-resolution data) for fast display in web browsers depending on the map scale (currently external overviews)

Major issues to be solved:
- Internal overviews (similar to COG), by chaining lower-resolution videos in the same MP4 container for fast access to overviews first. Currently, overviews are kept as separate files (external overviews).
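A convenient consequence of the 1 fps rule in specification 3) is that layer N sits at timestamp N seconds, so a single layer can be addressed by seeking. A hedged sketch of an FFmpeg extraction command built around this mapping (the helper name and output path are illustrative assumptions):

```python
def build_layer_extract_cmd(core_mp4, layer_index, output_png):
    """Build an FFmpeg command that decodes exactly one layer from a
    CORE file; at 1 fps, layer N is displayed at second N."""
    return [
        "ffmpeg",
        "-ss", str(layer_index),  # seek to second N, i.e. layer N
        "-i", core_mp4,
        "-frames:v", "1",         # decode a single frame
        output_png,
    ]

cmd = build_layer_extract_cmd("core_series.mp4", 7, "layer_7.png")
```

Placing -ss before -i lets FFmpeg use fast keyframe-based input seeking rather than decoding every preceding layer.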
- Metadata encoding: how to best encode the spatial extent, layer names, and so on for each layer inside the series, which may have a different geographical extent, etc. Known issues: adding too many tags with FFmpeg that are not part of the standard MP4 moov atom; metadata tags have a limited string length.
- Applicability beyond RGB/Byte datasets: defining a standard way of converting cell values from the Int16/UInt16/Int32/UInt32/Float32/Float64 data types into multi-band Byte values (and reconstructing them back to the original data type within acceptable thresholds)

Example Notice: The provided CORE (.mp4) examples contain modified Copernicus Sentinel data [2018-2021]. To generate the CORE examples provided, 50 original Sentinel-2 (S-2) TCI images of an area located inside Switzerland were downloaded from www.copernicus.eu and then transformed into the CORE format using FFmpeg with H.264 encoding (x264 library).

DISCLAIMER: Basic scripts are provided for the Geomatics peer review (in 2021) and kept as additional information for the dataset. Nevertheless, please note that software dependencies and libraries, as well as cloud storage paths, may quickly become deprecated over time (after 2021). For compatibility, stable dependencies and libraries released around 2020 should be used.
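Returning to the last item under 'Major issues to be solved': one conceivable (purely hypothetical) mapping would linearly quantize each cell value into a 24-bit integer spread across the three Byte bands. The specification does not yet define such a conversion, so the helper names and scaling below are illustrative only:

```python
def float_to_rgb_bytes(value, vmin, vmax):
    """Quantize one floating-point cell value into three Byte bands
    (24 bits total) using a linear scaling over [vmin, vmax]."""
    scaled = min(max((value - vmin) / (vmax - vmin), 0.0), 1.0)
    q = round(scaled * (2**24 - 1))        # 24-bit integer code
    return (q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF

def rgb_bytes_to_float(r, g, b, vmin, vmax):
    """Reconstruct the original value to within one 24-bit
    quantization step of (vmax - vmin)."""
    q = (r << 16) | (g << 8) | b
    return q / (2**24 - 1) * (vmax - vmin) + vmin
```

The round-trip error of this scheme is bounded by half a quantization step, i.e. 0.5 * (vmax - vmin) / (2^24 - 1); whether that threshold is acceptable would depend on the dataset. In practice the same arithmetic would be vectorized (e.g. with NumPy) over whole raster bands.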
