17 datasets found
  1. Copernicus Digital Elevation Model (DEM) for Europe at 100 meter resolution...

    • zenodo.org
    • data.opendatascience.eu
    • +1more
    bin, png, tiff, xml
    Updated Jul 17, 2024
    + more versions
    Cite
    Markus Neteler; Julia Haas; Markus Metz (2024). Copernicus Digital Elevation Model (DEM) for Europe at 100 meter resolution (EU-LAEA) derived from Copernicus Global 30 meter DEM dataset [Dataset]. http://doi.org/10.5281/zenodo.6211990
    Explore at:
    Available download formats: png, tiff, xml, bin
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Markus Neteler; Julia Haas; Markus Metz
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Europe
    Description

    Overview:
    The Copernicus DEM is a Digital Surface Model (DSM) representing the surface of the Earth, including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (approximately 1 arc second). Note that ocean areas have no tiles; there, height values can be assumed to be zero. Data are provided as Cloud Optimized GeoTIFFs, and the vertical unit of elevation is meters.

    The Copernicus DEM for Europe at 100 meter resolution (EU-LAEA projection) in COG format has been derived from the Copernicus DEM GLO-30 dataset mirrored on Open Data on AWS and managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

    Processing steps:
    The original Copernicus GLO-30 DEM contains a significant share of tiles with non-square pixels. We created a mosaic map in VRT format and defined within the VRT file a rule to apply cubic resampling while reading the data, i.e. when importing it into GRASS GIS for further processing. We chose cubic instead of bilinear resampling because the height-width ratio of the non-square pixels is up to 1:5; cubic resampling minimizes artefacts between adjacent tiles in rugged terrain:

    gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

    To reproject the data to the EU-LAEA projection while reducing the spatial resolution to 100 m, bilinear resampling was performed in GRASS GIS (r.proj), and the pixel values were scaled by 1000 and stored as integer values to reduce data volume. In addition, a hillshade raster map was derived from the resampled elevation map (r.relief, GRASS GIS). Finally, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.

    Projection + EPSG code:
    ETRS89-extended / LAEA Europe (EPSG: 3035)

    Spatial extent:
    north: 6874000
    south: -485000
    west: 869000
    east: 8712000

    Spatial resolution:
    100 m

    Pixel values:
    meters * 1000 (scaled to Integer; example: value 23220 = 23.220 m a.s.l.)
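
    A minimal sketch of the integer scaling in plain Python (the sample value comes from the example above):

```python
# Pixel values store elevation as metres * 1000, cast to integer.
# Dividing by 1000 restores metres above sea level.
def decode_elevation(scaled_value):
    """Convert a stored integer pixel value back to metres a.s.l."""
    return scaled_value / 1000.0

print(decode_elevation(23220))  # 23.22 m a.s.l., per the example above
```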

    Software used:
    GDAL 3.2.2 and GRASS GIS 8.0.0 (r.proj; r.relief)

    Original dataset license:
    https://spacedata.copernicus.eu/documents/20126/0/CSCDA_ESA_Mission-specific+Annex.pdf

    Processed by:
    mundialis GmbH & Co. KG, Germany (https://www.mundialis.de/)

  2. Travel time to cities and ports in the year 2015

    • figshare.com
    tiff
    Updated May 30, 2023
    Cite
    Andy Nelson (2023). Travel time to cities and ports in the year 2015 [Dataset]. http://doi.org/10.6084/m9.figshare.7638134.v4
    Explore at:
    Available download formats: tiff
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Andy Nelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset and the validation are fully described in a Nature Scientific Data Descriptor https://www.nature.com/articles/s41597-019-0265-5

    If you want to use this dataset in an interactive environment, then use this link https://mybinder.org/v2/gh/GeographerAtLarge/TravelTime/HEAD

    The following text is a summary of the information in the above Data Descriptor.

    The dataset is a suite of global travel-time accessibility indicators for the year 2015, at approximately one-kilometre spatial resolution for the entire globe. The indicators show an estimated (and validated) land-based travel time to the nearest city and nearest port for a range of city and port sizes.

    The datasets are in GeoTIFF format and are suitable for use in Geographic Information Systems and statistical packages for mapping access to cities and ports and for spatial and statistical analysis of the inequalities in access by different segments of the population.

    These maps represent a unique global representation of physical access to essential services offered by cities and ports.

    The datasets travel_time_to_cities_x.tif (where x ranges from 1 to 12): the value of each pixel is the estimated travel time in minutes to the nearest urban area in 2015. There are 12 data layers based on different sets of urban areas, defined by their population in the year 2015 (see PDF report).

    travel_time_to_ports_x (x ranges from 1 to 5)

    The value of each pixel is the estimated travel time to the nearest port in 2015. There are 5 data layers based on different port sizes.

    Format Raster Dataset, GeoTIFF, LZW compressed

    Unit Minutes

    Data type 16-bit Unsigned Integer (UInt16)

    No data value 65535

    Flags None

    Spatial resolution 30 arc seconds

    Spatial extent

    Upper left -180, 85

    Lower left -180, -60

    Upper right 180, 85

    Lower right 180, -60

    Spatial Reference System (SRS) EPSG:4326 - WGS84 - Geographic Coordinate System (lat/long)

    Temporal resolution 2015

    Temporal extent 2015. Updates may follow for future years, but these depend on the availability of updated inputs on travel times and on city locations and populations.

    Methodology Travel time to the nearest city or port was estimated using an accumulated cost function (accCost) in the gdistance R package (van Etten, 2018). This function requires two input datasets: (i) a set of locations to estimate travel time to and (ii) a transition matrix that represents the cost or time to travel across a surface.

    The set of locations was based on populated urban areas in the 2016 version of the Joint Research Centre’s Global Human Settlement Layers (GHSL) datasets (Pesaresi and Freire, 2016), which represent low density (LDC) urban clusters and high density (HDC) urban areas (https://ghsl.jrc.ec.europa.eu/datasets.php). These urban areas were represented by points spaced at 1 km intervals around the perimeter of each urban area.

    Marine ports were extracted from the 26th edition of the World Port Index (NGA, 2017), which contains the location and physical characteristics of approximately 3,700 major ports and terminals. Ports are represented as single points.

    The transition matrix was based on the friction surface (https://map.ox.ac.uk/research-project/accessibility_to_cities) from the 2015 global accessibility map (Weiss et al, 2018).
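
    The accumulated-cost idea can be illustrated with a small sketch (plain Python, not the actual gdistance/accCost code): Dijkstra's algorithm over a friction grid, with hypothetical friction values in minutes per cell and 4-neighbour moves.

```python
import heapq

def travel_time(friction, targets):
    """Accumulated-cost surface: minimal minutes from each cell to any target.

    friction: 2-D list of minutes-per-cell-crossing (hypothetical values);
    targets: set of (row, col) destination cells (e.g. cities or ports).
    """
    rows, cols = len(friction), len(friction[0])
    INF = float("inf")
    cost = [[INF] * cols for _ in range(rows)]
    heap = []
    for r, c in targets:
        cost[r][c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > cost[r][c]:
            continue  # stale entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # average the two cells' friction as the edge weight
                # (one modelling choice among several)
                nd = d + 0.5 * (friction[r][c] + friction[nr][nc])
                if nd < cost[nr][nc]:
                    cost[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return cost

# toy 2x2 grid, uniform friction of 1 minute per cell, one target at (0, 0)
cost = travel_time([[1.0, 1.0], [1.0, 1.0]], {(0, 0)})
print(cost[1][1])  # 2.0 minutes: two unit-cost steps from the target
```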

    Code The R code used to generate the 12 travel time maps is included in the zip file that can be downloaded with these data layers. The processing zones are also available.

    Validation The underlying friction surface was validated by comparing travel times between 47,893 pairs of locations against journey times from a Google API. Our estimated journey times were generally shorter than those from the Google API. Across the tiles, the median journey time from our estimates was 88 minutes, with an interquartile range of 48 to 143 minutes, while the median journey time estimated by the Google API was 106 minutes, with an interquartile range of 61 to 167 minutes. Across all tiles, the differences were skewed to the left, and our travel time estimates were shorter than those reported by the Google API in 72% of the tiles. The median difference was −13.7 minutes, with an interquartile range of −35.5 to 2.0 minutes; the absolute difference was 30 minutes or less for 60% of the tiles and 60 minutes or less for 80% of the tiles. The median percentage difference was −16.9%, with an interquartile range of −30.6% to 2.7%; the absolute percentage difference was 20% or less in 43% of the tiles and 40% or less in 80% of the tiles.

    This process and results are included in the validation zip file.

    Usage Notes The accessibility layers can be visualised and analysed in many Geographic Information Systems or remote sensing software such as QGIS, GRASS, ENVI, ERDAS or ArcMap, and also by statistical and modelling packages such as R or MATLAB. They can also be used in cloud-based tools for geospatial analysis such as Google Earth Engine.

    The travel-time-to-cities layers represent travel times to human settlements of different population ranges. Two or more layers can be combined into one layer by recording the minimum pixel value across the layers. For example, a map of travel time to the nearest settlement of 5,000 to 50,000 people could be generated by taking the minimum of the three layers that represent travel time to settlements with populations of 5,000 to 10,000, 10,000 to 20,000, and 20,000 to 50,000 people.
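
    The minimum-combination step can be sketched in plain Python (toy 2×2 grids with hypothetical minute values; real layers would be read as arrays from the GeoTIFFs):

```python
NODATA = 65535  # nodata value per the format description above

# two hypothetical travel-time layers (minutes) for different settlement sizes
layer_a = [[10, NODATA], [120, 30]]
layer_b = [[15, 80], [NODATA, 25]]

# Because the nodata value 65535 is the largest possible 16-bit unsigned
# value, a plain element-wise minimum keeps valid travel times and yields
# nodata only where every input layer is nodata.
combined = [[min(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(layer_a, layer_b)]
print(combined)  # [[10, 80], [120, 25]]
```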

    The accessibility layers also permit user-defined hierarchies that go beyond computing the minimum pixel value across layers. A user-defined complete hierarchy can be generated when the union of all categories adds up to the global population and the intersection of any two categories is empty; beyond that, maintaining logical consistency with the problem at hand is up to the user.

    The accessibility layers are relative measures of the ease of access from a given location to the nearest target. While the validation demonstrates that they do correspond to typical journey times, they cannot be taken to represent actual travel times. Errors in the friction surface accumulate through the accumulated cost function, so locations further away from targets are likely to diverge more from a plausible travel time than those closer to the targets. Care should be taken when referring to travel time to the larger cities when the locations of interest are extremely remote, although the layers will still be plausible representations of relative accessibility. Furthermore, a key assumption of the model is that all journeys use the fastest mode of transport and take the shortest path.

  3. SYD Landsat raw data v01

    • cloud.csiss.gmu.edu
    • researchdata.edu.au
    • +1more
    zip
    Updated Dec 14, 2019
    Cite
    Australia (2019). SYD Landsat raw data v01 [Dataset]. https://cloud.csiss.gmu.edu/uddi/dataset/fe7aa98d-ea2a-48fc-bc09-1d5ce3a50246
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 14, 2019
    Dataset provided by
    Australia
    Description

    Abstract

    This dataset and its metadata statement were supplied to the Bioregional Assessment Programme by a third party and are presented here as originally supplied.

    Landsat TM, and ETM+ data are provided in GeoTIFF for Level 1T (terrain corrected) products, or for either Level 1Gt (systematic terrain corrected) or Level 1G (systematic corrected) products, if Level 1T processing is not available. GeoTIFF defines a set of publicly available TIFF tags that describe cartographic and geodetic information associated with TIFF images. GeoTIFF is a format that enables referencing a raster image to a known geodetic model or map projection.

    The initial tags are followed by image data that, in turn, may be interrupted by more descriptive tags. By using the GeoTIFF format, both metadata and image data can be encoded into the same file. The Landsat 7 ETM+ GeoTIFF file format is described in detail in the "Landsat 7 ETM+ Level 1 Product Data Format Control Book (DFCB), LSDS-272": http://landsat.usgs.gov/documents/LSDS-272.pdf. The Landsat 4-5 TM GeoTIFF file format is described in detail in the "Landsat Thematic Mapper (TM) Level 1 (L1) Data Format Control Book (DFCB), LS-DFCB-20": http://landsat.usgs.gov/documents/LS-DFCB-20.pdf.

    For more information on GeoTIFF visit: http://trac.osgeo.org/geotiff

    Dataset History

    ORGANIZATION

    Each band of Landsat data in the GeoTIFF format is delivered as a grayscale, uncompressed, 8-bit string of unsigned integers. A metadata (MTL) file is included with data processed through the Level-1 Product Generation System (LPGS). A file containing the ground control points (GCP) used during image processing is also included. A processing history (WO) file is included with data processed through the National Landsat Archive Production System (NLAPS). Landsat 7 ETM+ SLC-off products processed after December 11, 2008, will include an additional directory (gap_mask) that contains a set of flat binary scan gap mask files (one per band). (Please note that the processing date and acquisition date are not necessarily the same.)

    * DATA FILE NAMES

    The file naming convention for Landsat LPGS-processed GeoTIFF data

    is as follows:

    LMSppprrrYYYYDOYGSIVV_BN.TIF where:

     L      = Landsat 
    
     M      = Mission (E for ETM+ data; T for TM data; M for MSS)
    
     S      = Satellite (7 = Landsat 7, 5 = Landsat 5, 4 = Landsat 4)
    
     ppp     = starting path of the product
    
     rrr     = starting and ending rows of the product
    
     YYYY    = acquisition year
    
     DOY     = Julian date
    
     GSI     = Ground Station Identifier 
    
     VV     = 2 digit version number
    
     BN     = file type:
    
       B1     = band 1
    
       B2     = band 2
    
       B3     = band 3
    
       B4     = band 4
    
       B5     = band 5
    
       B6_VCID_1 = band 6L (low gain) (ETM+)
    
       B6_VCID_2 = band 6H (high gain) (ETM+)
    
       B6     = band 6 (TM and MSS)
    
       B7     = band 7 
    
       B8     = band 8 (ETM+)
    
       MTL    = Level-1 metadata
    
       GCP    = ground control points
    
     TIF     = GeoTIFF file extension
    

    The file naming convention for Landsat NLAPS-processed GeoTIFF data

    is as follows:

    LLNppprrrOOYYDDDMM_AA.TIF where:

     LL     = Landsat sensor (LT for TM data)
    
     N      = satellite number
    
     ppp     = starting path of the product
    
     rrr     = starting row of the product
    
     OO     = WRS row offset (set to 00)
    
     YY     = last two digits of the year of 
    
            acquisition
    
     DDD     = Julian date of acquisition
    
     MM     = instrument mode (10 for MSS; 50 for TM)
    
     AA     = file type:
    
       B1     = band 1
    
       B2     = band 2
    
       B3     = band 3
    
       B4     = band 4
    
       B5     = band5
    
       B6     = band 6
    
       B7     = band 7
    
       WO     = processing history file 
    
     TIF     = GeoTIFF file extension
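
    The LPGS convention above can be parsed with a short sketch (plain Python; the example filename is hypothetical, constructed to match the pattern, not a real scene ID):

```python
import re

# Regex following the LPGS naming convention described above:
# L M S ppp rrr YYYY DOY GSI VV _ BN .TIF
LPGS_PATTERN = re.compile(
    r"^L(?P<mission>[EMT])(?P<satellite>\d)"
    r"(?P<path>\d{3})(?P<row>\d{3})"
    r"(?P<year>\d{4})(?P<doy>\d{3})"
    r"(?P<gsi>[A-Z0-9]{3})(?P<version>\d{2})"
    r"_(?P<file_type>.+)\.TIF$"
)

def parse_lpgs_name(name):
    """Return the naming-convention fields of an LPGS GeoTIFF filename."""
    match = LPGS_PATTERN.match(name)
    return match.groupdict() if match else None

# hypothetical example: Landsat 7 ETM+, path 092, row 084, day 279 of 2011
fields = parse_lpgs_name("LE70920842011279ASA00_B1.TIF")
print(fields["satellite"], fields["year"], fields["file_type"])
```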
    

    * GAP MASKS

    All Landsat 7 ETM+ SLC-off imagery processed on or after December 11, 2008, will include gap mask files. (Please note that the acquisition date and processing date are not necessarily the same.) The gap mask files are bit masks showing the locations of the image gaps (areas that fall between ETM+ scans). One tarred and gzip-compressed gap mask file is provided for each band in GeoTIFF format. The file naming convention for gap mask files is identical to that described above for LPGS-processed GeoTIFF data, with "_GM" inserted before the file type.

    If gap mask files are not included with the data, a tutorial for creating them can be found at: http://landsat.usgs.gov/gap_mask_files_are_not_provided_can_I_create_my_own.php

    * README

    The README_GTF.TXT (or README.GTF) file is an ASCII text file containing this documentation.

    * READING DATA

    Delivered via file transfer protocol (FTP): data files are tarred and gzip-compressed and must be unzipped and untarred before they can be used. UNIX systems should have the "gunzip" and "tar" commands available for uncompressing and accessing the data. PC users can download free software from an online source, or may already have appropriate software available.
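
    The unpacking step can also be done with Python's standard library. A self-contained toy sketch (it first builds a tiny tar.gz with a hypothetical member name, then extracts it the way a delivered archive would be):

```python
import io
import os
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "scene.tar.gz")

# build a tiny stand-in archive (hypothetical member name)
with tarfile.open(archive, "w:gz") as tar:
    payload = b"GeoTIFF product readme"
    info = tarfile.TarInfo("README_GTF.TXT")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# the actual unpacking step: gunzip + untar in one call
outdir = os.path.join(workdir, "scene")
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(outdir)

print(os.path.exists(os.path.join(outdir, "README_GTF.TXT")))  # True
```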

    No software is included on this product for viewing Landsat data.

    GENERAL INFORMATION and DOCUMENTATION

    Landsat Project Information:

    http://landsat.usgs.gov

    Landsat data access:

    * USGS Global Visualization Viewer (GloVis): http://glovis.usgs.gov

    * USGS EarthExplorer: http://earthexplorer.usgs.gov

    * USGS LandsatLook Viewer: http://landsatlook.usgs.gov

    * Landsat International Ground Station (IGS) network:

             http://landsat.usgs.gov/about_ground_stations.php
    

    FGDC metadata:

    http://www.fgdc.gov/metadata

    Data restrictions and citation:

    https://lta.cr.usgs.gov/citation

    * National Snow and Ice Data Center (NSIDC)

    Radarsat Antarctic Mapping Project (RAMP) elevation data citation:

    Liu, H., K. Jezek, B. Li, and Z. Zhao. 2001. Radarsat Antarctic Mapping Project digital elevation model version 2. Boulder, CO: National Snow and Ice Data Center. Digital media.

    For information on the data, please refer to the data set documentation available at: http://nsidc.org/data/nsidc-0082.html

    PRODUCT SUPPORT

    For further information on this product, contact USGS EROS Customer Services:

    Customer Services (ATTN: Landsat)

    U.S. Geological Survey

    Earth Resources Observation and Science (EROS) Center

    47914 252nd Street

    Sioux Falls, SD 57198-0001

    Tel: 800-252-4547

    Tel: 605-594-6151

    Email: custserv@usgs.gov

    For information on other products from USGS EROS:

    http://eros.usgs.gov/ or https://lta.cr.usgs.gov/

    For information on other USGS products:

    http://ask.usgs.gov/

    or call 1-888-ASK-USGS (275-8747)

    DISCLAIMER

    Any use of trade, product, or firm names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

    Publication Date: July 2014

    Dataset Citation

    U.S. Geological Survey (2014) SYD Landsat raw data v01. Bioregional Assessment Source Dataset. Viewed 18 June 2018, http://data.bioregionalassessments.gov.au/dataset/fe7aa98d-ea2a-48fc-bc09-1d5ce3a50246.

  4. SNOWDATA GeoTIFF Annual Snow Up Date (year)

    • data.amerigeoss.org
    xml, zip
    Updated Aug 26, 2022
    Cite
    United States (2022). SNOWDATA GeoTIFF Annual Snow Up Date (year) [Dataset]. https://data.amerigeoss.org/tl/dataset/snowdata-geotiff-annual-snow-up-date-year
    Explore at:
    Available download formats: xml, zip
    Dataset updated
    Aug 26, 2022
    Dataset provided by
    United States
    Description

    This dataset includes Snow Up Date (sudy) for northern Alaska in GeoTIFF format, covering the years 1980-2012. Snow Up Date is defined as the day of the start of the core snow period (day of year). The core snow season is defined as the longest period of continuous snow cover in each year. The dataset was generated by the Arctic LCC SNOWDATA: Snow Datasets for Arctic Terrestrial Applications project.

    "Day-of-year" (doy) output is expressed in ordinal dates ("1" on 1 January and "365" on 31 December). Dates have not been corrected for leap years. This output is appropriate for display purposes, as it is readily interpreted as the calendar day of year. It is not recommended as input for analysis, as it may produce incorrect statistics; "day-of-simulation" (dos) files should be used for that purpose.

    "Day-of-year" (doy) is converted from "day-of-simulation" (dos) by the following sequence of process steps:

    if (dos <= 122) then
      doy = dos + 243
    else
      doy = dos - 122
    endif
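
    The conversion above can be written as a small Python function (a sketch; as the offsets imply, it assumes a non-leap simulation year starting on 1 September):

```python
def dos_to_doy(dos):
    """Convert day-of-simulation to ordinal day-of-year (non-leap year)."""
    # dos 1 = 1 September = doy 244; dos 122 = 31 December = doy 365;
    # dos 123 = 1 January = doy 1
    return dos + 243 if dos <= 122 else dos - 122

print(dos_to_doy(1), dos_to_doy(122), dos_to_doy(123))  # 244 365 1
```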
    

    The dataset is delivered in the ZIP archive file format. Each year is output in a separate GeoTiff file, where the year is indicated by the filename.

    Over the last 20 years, under a variety of NOAA, NSF, and NASA research programs, a snow-evolution modeling system has been developed that includes the MicroMet micrometeorological model, the SnowModel snow-process model, and the SnowAssim data assimilation model. These modeling tools can be thought of as physically based mathematical descriptions that create value-added information (e.g., snow depth, snow density, snow hardness, rain-on-snow events, and snow cover duration) from basic meteorological variables (e.g., air temperature, humidity, precipitation, and wind speed and direction). The resulting products are based on a physical understanding of environmental processes and features, and their interactions with the atmosphere and surrounding land surface. SnowModel is unique in its representation of blowing-snow processes; it includes SnowTran-3D, a model developed initially for Arctic Alaska applications and arguably the most widely used snow transport model in the world. The model formulations are general enough to allow simulations over temporal domains spanning years to decades, and spatial domains spanning from small watersheds to all of Alaska. MicroMet can use atmospheric forcing ranging from individual meteorological stations, to gridded atmospheric (re)analysis products, to climate change scenario datasets. SnowAssim is able to ingest snow data ranging from ground-based snow observations to remote-sensing data.

    This dataset is the result of using these meteorological and snow-evolution models to ingest appropriate datasets and produce the required outputs. A total of 528 meteorological station sites obtained from the Imiq Database (http://arcticlcc.org/imiq) provided air temperature, relative humidity, and wind speed/direction. Daily met station data were converted to hourly using various sub-models, then all hourly data were aggregated to 3-hourly for the model simulations. Precipitation inputs were obtained from the NASA 3-hourly MERRA atmospheric reanalysis data set. SNOWDATA outputs are at 2 km x 2 km resolution and cover all of mainland Alaska and portions of the adjacent Yukon and Northwest territories north of 61.5° N and west of 130.2° W. SnowTran-3D is not implemented in this instance, because the grid cell size exceeds the scale at which wind transport is expected to operate.

  5. Continuous MODIS land surface temperature dataset over the Eastern Mediterranean

    • explore.openaire.eu
    • data.niaid.nih.gov
    • +1more
    Updated Jan 6, 2020
    + more versions
    Cite
    Shilo Shiff; M Itamar Lensky; David Helman (2020). Continuous MODIS land surface temperature dataset over the Eastern Mediterranean [Dataset]. http://doi.org/10.5281/zenodo.4013701
    Explore at:
    Dataset updated
    Jan 6, 2020
    Authors
    Shilo Shiff; M Itamar Lensky; David Helman
    Area covered
    Mediterranean Sea
    Description

    A continuous dataset of Land Surface Temperature (LST) is vital for climatological and environmental studies. LST can be regarded as a combination of seasonal mean temperature (climatology) and a daily anomaly, attributed mainly to the synoptic-scale atmospheric circulation (weather). To reproduce LST in cloudy pixels, a time series (2002-2019) of cloud-free 1 km MODIS Aqua LST images was generated and the pixel-based seasonality (climatology) was calculated using temporal Fourier analysis. To add the anomaly, we used the NCEP Climate Forecast System Version 2 (CFSv2) model, which provides air surface temperature under both cloudy and clear-sky conditions. The combination of the two data sources enables the estimation of LST in cloudy pixels.

    Data structure

    The dataset consists of geo-located continuous LST (Day, Night and Daily), including LST values for cloudy pixels. The spatial domain of the data is the Eastern Mediterranean, at the resolution of the MYD11A1 product (~1 km). Data are stored in GeoTIFF format as signed 16-bit integers using a scale factor of 0.02, with one file per day, each containing 4 bands (Night LST Cont., Day LST Cont., Daily Average LST Cont., QA). The QA band stores information about the presence of cloud in the original pixel: if both original files (Day LST and Night LST) had NoData due to clouds, the QA value is 0; a QA value of 1 indicates NoData in the original Day LST, 2 indicates NoData in the Night LST, and 3 indicates valid data for both day and night. File names follow the naming convention LST_ followed by a date string that represents the day. Files of each year (2002-2019) are compressed into a ZIP file. The same data are also provided in NetCDF format; each file represents a whole year and consists of the 4 bands (Night LST Cont., Day LST Cont., Daily Average LST Cont., QA) for each day.

    The file LSTcont_validation.tif contains the validation dataset, in which the MAE, RMSE, and Pearson correlation (r) of the validation against true LST are provided. Data are stored in GeoTIFF format as signed 32-bit floats, with the same spatial extent and resolution as the LSTcont dataset, in one file containing three bands (MAE, RMSE, and Pearson_r). The same data with the same structure are also provided in NetCDF format.

    How to use

    The data can be read in various programming languages such as Python, IDL, or MATLAB, and can be visualized in GIS programs such as ArcGIS or QGIS. A short animation demonstrating how to visualize the data using the open-source QGIS program is available in the project's GitHub code repository.

    Web application

    The LSTcont web application (https://shilosh.users.earthengine.app/view/continuous-lst) is an Earth Engine app. The interface includes a map and a date picker. The user can select a date (July 2002 to present) and visualize LSTcont for that day anywhere on the globe. The web app calculates LSTcont on the fly based on ready-made global climatological files. The LSTcont can be downloaded as a GeoTIFF with 5 bands, in this order: Mean daily LSTcont, Night original LST, Night LSTcont, Day original LST, Day LSTcont.

    Code availability

    Datasets for other regions can easily be produced with the GEE platform using the code provided in the project's GitHub code repository.
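
    A minimal sketch of the scale factor and QA coding in plain Python (the pixel value is hypothetical, not taken from an actual file):

```python
# Raw LSTcont pixels are signed 16-bit integers with a scale factor of
# 0.02, per the dataset description; multiplying restores kelvin.
SCALE = 0.02

def decode_lst(raw):
    """Convert a raw stored integer to LST in kelvin."""
    return raw * SCALE

# QA band coding, as described above
QA_MEANING = {
    0: "NoData in both Day and Night LST (clouds)",
    1: "NoData in original Day LST",
    2: "NoData in original Night LST",
    3: "Valid data for both day and night",
}

raw_pixel = 14660                # hypothetical stored value
print(decode_lst(raw_pixel))     # ~293.2 K
print(QA_MEANING[3])
```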

  6. Urban Green Raster Germany 2018

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Feb 28, 2022
    Cite
    Meinel, Gotthard (2022). Urban Green Raster Germany 2018 [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5842521
    Explore at:
    Dataset updated
    Feb 28, 2022
    Dataset provided by
    Wurm, Michael
    Krüger, Tobias
    Meinel, Gotthard
    Tenikl, Julia
    Eichler, Lisa
    Taubenböck, Hannes
    Area covered
    Germany
    Description

    Abstract

    The Urban Green Raster Germany is a land cover classification for Germany that focuses in particular on urban vegetation. The raster dataset covers the terrestrial national territory of Germany and has a spatial resolution of 10 meters. The dataset is based on a fully automated classification of Sentinel-2 satellite data from the full 2018 vegetation period, using reference data from the European LUCAS land use and land cover point dataset. The dataset identifies eight land cover classes: Built-up, Built-up with significant green share, Coniferous wood, Deciduous wood, Herbaceous vegetation (low perennial vegetation), Water, Open soil, and Arable land (low seasonal vegetation). The land cover dataset provided here is offered as an integer raster in GeoTIFF format. The assignment of the number coding to the corresponding land cover class is explained in the legend file.

    Data acquisition

    The data acquisition comprises two main processing steps: (1) collection, processing, and automated classification of the multispectral Sentinel-2 satellite data with the “Land Cover DE method”, resulting in the raw land cover classification dataset, an NDVI layer, and an RF assignment frequency vector raster; (2) GIS-based post-processing, including discrimination of (densely) built-up and loosely built-up pixels according to an NDVI threshold, creation of water-body and arable-land masks from geo-topographical base data (ATKIS Basic DLM), and reclassification of water and arable land pixels based on the assignment frequency.

    Data collection

    Satellite data were searched and downloaded from the Copernicus Open Access Hub (https://scihub.copernicus.eu/).

    The LUCAS reference and validation points were loaded from the Eurostat platform (https://ec.europa.eu/eurostat/web/lucas/data/database).

    The processing of the satellite data was performed at the DLR data center in Oberpfaffenhofen.

    GIS-based post-processing of the automatic classification result was performed at IOER in Dresden.

    Value of the data

    The dataset can be used to quantify the amount of green areas within cities on a homogeneous database [5].

    Thus it is possible to compare cities of different sizes regarding their greenery and with respect to their ratio of green and built-up areas [6].

    Built-up areas within cities can be discriminated regarding their built-up density (dense built-up vs. built-up with higher green share).

    Data description

    A raster dataset in GeoTIFF format: the dataset is stored as an 8-bit integer raster with values ranging from 1 to 8 for the eight land cover classes. The nomenclature of the coded values is as follows: 1 = Built-up, 2 = Open soil, 3 = Coniferous wood, 4 = Deciduous wood, 5 = Arable land (low seasonal vegetation), 6 = Herbaceous vegetation (low perennial vegetation), 7 = Water, 8 = Built-up with significant green share. Name of the file: ugr2018_germany.tif. The dataset is zipped along with accompanying files: *.twf (geo-referencing world file), *.ovr (overlay file for quick data preview in GIS), *.clr (color map file).

    A text file with the integer value assignment of the land cover classes. Name of the file: Legend_LC-classes.txt.
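
    As an illustration, the class coding can be used to compute a simple green share (plain Python, hypothetical pixel values; treating classes 3, 4, 6 and 8 as "green" is an assumption made for this example):

```python
from collections import Counter

# legend coding per the data description above; which classes count as
# "green" is this example's assumption, not part of the dataset
GREEN_CLASSES = {3, 4, 6, 8}  # coniferous, deciduous, herbaceous, built-up with green share

# hypothetical 10 m pixels clipped to some city boundary
pixels = [1, 3, 4, 6, 8, 7, 2, 4]

counts = Counter(pixels)
green_share = sum(counts[c] for c in GREEN_CLASSES) / len(pixels)
print(green_share)  # 0.625
```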

    Experimental design, materials and methods

    The first essential step to create the dataset is the automatic classification of a satellite image mosaic of all available Sentinel-2 images from May to September 2018 with a maximum cloud cover of 60 percent. Points from the 2018 LUCAS (Land use and land cover survey) dataset from Eurostat [1] were used as reference and validation data. Using a Random Forest (RF) classifier [2], seven land cover classes (Deciduous wood, Coniferous wood, Herbaceous vegetation (low perennial vegetation), Built-up, Open soil, Water, Arable land (low seasonal vegetation)) were first derived, which is methodologically in line with the procedure used to create the dataset "Land Cover DE - Sentinel-2 - Germany, 2015" [3]. The overall accuracy of the data is 93 % [4].

    Two downstream post-processing steps served to further qualify the product. The first step included the selective verification of pixels of the classes arable land and water. These are often misidentified by the classifier due to radiometric similarities with other land covers; in particular, radiometric signatures of water surfaces often resemble shadows or asphalt surfaces. Due to the heterogeneous inner-city structures, pixels are also frequently misclassified as cropland.

    To mitigate these errors, all pixels classified as water and arable land were matched with another data source. This consisted of binary land cover masks for these two land cover classes originating from the Monitor of Settlement and Open Space Development (IOER Monitor). For all water and cropland pixels that were outside of their respective masks, the frequencies of class assignments from the RF classifier were checked. If the assignment frequency to water or arable land was at least twice that to the subsequent class, the classification was preserved. Otherwise, the classification strength was considered too weak and the pixel was recoded to the land cover with the second largest assignment frequency.
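    The decision rule described above can be sketched as follows (a simplified illustration of ours, not the study's code; function and variable names are hypothetical):

```python
def verify_pixel(freqs, candidate, inside_mask):
    """Post-processing rule sketch: keep `candidate` (water or arable land)
    if the pixel lies inside the corresponding IOER Monitor mask, or if its
    RF assignment frequency is at least twice that of the runner-up class;
    otherwise recode the pixel to the runner-up class."""
    if inside_mask:
        return candidate
    # Rank classes by RF assignment frequency, highest first.
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    runner_up = ranked[1] if ranked[0] == candidate else ranked[0]
    if freqs[candidate] >= 2 * freqs[runner_up]:
        return candidate
    return runner_up
```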

    Furthermore, an additional land cover class, "Built-up with significant vegetation share", was introduced. For this purpose, all pixels of the Built-up class were intersected with the NDVI of the satellite image mosaic and assigned to the new category if an NDVI threshold was exceeded for the pixel. The associated NDVI threshold was determined beforehand using very high resolution reference data on urban green structures in the cities of Dresden, Leipzig and Potsdam. These data were first used to determine the true green fractions within the 10 m Sentinel-2 pixels, and on this basis an NDVI value was derived that could serve as an indicator of a significant green fraction within a built-up pixel. However, due to the wide dispersion of green fraction values within built-up areas, it is not possible to establish a universally valid green share threshold for this class. The class therefore essentially serves to visually differentiate densely built-up areas from loosely built-up (i.e., vegetation-dominated) areas.
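    As an illustration, the NDVI-based recoding might look like this in Python (a sketch of ours; the threshold 0.3 is a placeholder, since the study derived its own value from reference data and does not report it here):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def refine_builtup(class_code: int, nir: float, red: float,
                   threshold: float = 0.3) -> int:
    """Recode a Built-up pixel (class 1) to 'Built-up with significant
    green share' (class 8) when its NDVI exceeds the threshold; all
    other classes pass through unchanged."""
    if class_code == 1 and ndvi(nir, red) > threshold:
        return 8
    return class_code
```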

    Acknowledgments

    This work was supported by the Federal Institute for Research on Building, Urban Affairs and Spatial Development (BBSR) [10.06.03.18.101]. The provided data have been developed and created in the framework of the research project "Wie grün sind bundesdeutsche Städte? - Fernerkundliche Erfassung und stadträumlich-funktionale Differenzierung der Grünausstattung von Städten in Deutschland (Erfassung der urbanen Grünausstattung)" (How green are German cities? - Remote sensing and urban-functional differentiation of the green infrastructure of cities in Germany (Urban Green Infrastructure Inventory)). Further persons involved in the project were: Fabian Dosch (funding administrator at BBSR), Stefan Fina (research partner, group leader at ILS Dortmund), Annett Frick and Kathrin Wagner (research partners at LUP Potsdam).

    References

    [1] Eurostat (2021): Land cover / land use statistics database LUCAS. URL: https://ec.europa.eu/eurostat/web/lucas/data/database

    [2] L. Breiman (2001). Random forests. Mach. Learn., 45, pp. 5-32.

    [3] M. Weigand, M. Wurm (2020). Land Cover DE - Sentinel-2—Germany, 2015 [Data set]. German Aerospace Center (DLR). doi: 10.15489/1CCMLAP3MN39

    [4] M. Weigand, J. Staab, M. Wurm, H. Taubenböck, (2020). Spatial and semantic effects of LUCAS samples on fully automated land use/land cover classification in high-resolution Sentinel-2 data. Int J Appl Earth Obs, 88, 102065. doi: https://doi.org/10.1016/j.jag.2020.102065

    [5] L. Eichler, T. Krüger, G. Meinel (2020). Wie grün sind deutsche Städte? Indikatorgestützte fernerkundliche Erfassung des Stadtgrüns. AGIT Symposium 2020, 6, 306–315. doi: 10.14627/537698030

    [6] H. Taubenböck, M. Reiter, F. Dosch, T. Leichtle, M. Weigand, M. Wurm (2021). Which city is the greenest? A multi-dimensional deconstruction of city rankings. Comput Environ Urban Syst, 89, 101687. doi: 10.1016/j.compenvurbsys.2021.101687

  7. TreeMap 2016 Stand Size Code Algorithm (Image Service)

    • gimi9.com
    Updated Apr 26, 2022
    (2022). TreeMap 2016 Stand Size Code Algorithm (Image Service) [Dataset]. https://gimi9.com/dataset/data-gov_treemap-2016-stand-size-code-algorithm-image-service/
    Explore at:
    Dataset updated
    Apr 26, 2022
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We matched forest plot data from Forest Inventory and Analysis (FIA) to a 30 x 30 meter (m) grid. TreeMap 2016 is being used in both the private and public sectors for projects including fuel treatment planning, snag hazard mapping, and estimation of terrestrial carbon resources. We used a random forests machine-learning algorithm to impute the forest plot data to a set of target rasters provided by Landscape Fire and Resource Management Planning Tools (LANDFIRE: https://landfire.gov). Predictor variables consisted of percent forest cover, height, and vegetation type, as well as topography (slope, elevation, and aspect), location (latitude and longitude), biophysical variables (photosynthetically active radiation, precipitation, maximum temperature, minimum temperature, relative humidity, and vapour pressure deficit), and disturbance history (time since disturbance and disturbance type) for the landscape circa 2016. The main output of this project (the GeoTIFF included in this data publication) is a raster map of imputed plot identifiers at 30 x 30 m spatial resolution for the conterminous U.S. for landscape conditions circa 2016. In the attribute table of this raster, we also present a set of attributes drawn from the FIA databases, including forest type and live basal area. The raster map of plot identifiers can be linked to the FIA databases available through the FIA DataMart (https://doi.org/10.2737/RDS-2001-FIADB). The dataset has been validated for applications including percent live tree cover, height of the dominant trees, forest type, species of trees with most basal area, aboveground biomass, fuel treatment planning, and snag hazard. Application of the dataset to research questions other than those for which it has been validated should be investigated by the researcher before proceeding. The dataset may be suitable for other applications and for use across various scales (stand, landscape, and region); however, the researcher should test the dataset's applicability to a particular research question before proceeding.

  8. Normalized Difference Vegetation Index (NDVI) derived from 2010 National...

    • portal.edirepository.org
    • dataone.org
    • +1more
    kml, pdf, png +2
    Updated Nov 5, 2019
    Michelle Stuhlmacher; Lance Watkins (2019). Normalized Difference Vegetation Index (NDVI) derived from 2010 National Agriculture Imagery Program (NAIP) data for the central Arizona region [Dataset]. http://doi.org/10.6073/pasta/8a465e9b76035bffeb00f3a6134eb913
    Explore at:
    tiff(3144937515 byte), text/javascript(12191 byte), tiff(3514309279 byte), tiff(3021661291 byte), tiff(2942170958 byte), tiff(895709520 byte), tiff(1416209068 byte), tiff(1506541324 byte), tiff(1606742280 byte), tiff(1914063131 byte), tiff(1765921648 byte), tiff(3604687567 byte), tiff(3742352948 byte), png(681855 byte), pdf(29487 byte), kml(11564 bytes), tiff(1803664259 byte), tiff(3144324296 byte), tiff(3376307703 byte)Available download formats
    Dataset updated
    Nov 5, 2019
    Dataset provided by
    EDI
    Authors
    Michelle Stuhlmacher; Lance Watkins
    Time period covered
    Jun 16, 2010
    Area covered
    Variables measured
    Name, raster_value
    Description

    This project calculates the Normalized Difference Vegetation Index (NDVI) from 2010 National Agriculture Imagery Program (NAIP) imagery (1-meter resolution) for the central Arizona region. Because of their large size, data (as GeoTIFF files) for each survey year are provided as fifteen individual tiles each comprising a portion of the overall coverage area. An index of the relative position of each tile in the coverage area is provided as a pdf, png, and kml where the tile index contains a portion of the GeoTIFF file name (e.g., the relative position of the data file NAIP_NDVI_CAP2010-0000000000-0000000000.tif to the overall coverage area is identified by the index id 0000000000-0000000000 in the pdf, png, and kml index map).

    Javascript code used to process NDVI values is included with this dataset.
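    For example, the tile index id can be recovered from a file name with a short helper (a Python sketch of ours under the naming scheme described above; the function name is hypothetical):

```python
import re

def tile_index_id(filename: str) -> str:
    """Extract the tile index id (e.g. '0000000000-0000000000') from a
    tiled NAIP NDVI GeoTIFF file name."""
    match = re.search(r"-(\d{10}-\d{10})\.tif$", filename)
    if match is None:
        raise ValueError(f"not a tiled NAIP NDVI file name: {filename}")
    return match.group(1)
```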

  9. Digital Orthophoto Quarter-Quadrangles from 1999, Niwot Ridge LTER Project...

    • search.dataone.org
    • portal.edirepository.org
    Updated Mar 11, 2015
    U.S. Geological Survey (2015). Digital Orthophoto Quarter-Quadrangles from 1999, Niwot Ridge LTER Project Area, Colorado [Dataset]. https://search.dataone.org/view/https%3A%2F%2Fpasta.lternet.edu%2Fpackage%2Fmetadata%2Feml%2Fknb-lter-nwt%2F706%2F2
    Explore at:
    Dataset updated
    Mar 11, 2015
    Dataset provided by
    Long Term Ecological Research Network (http://www.lternet.edu/)
    Authors
    U.S. Geological Survey
    Time period covered
    Sep 6, 1999 - Sep 13, 1999
    Area covered
    Description

    (SEE SUPPLEMENTAL INFORMATION SECTION FOR FILE-SPECIFIC INFORMATION.) Digital orthophoto quarter-quads (DOQs) are now available for most of the United States and its Territories. Quarter-quad DOQs cover an area measuring 3.75 minutes of longitude by 3.75 minutes of latitude and are available in both Native and GeoTIFF formats. Native format consists of an ASCII keyword header followed by a series of 8-bit binary image lines for B/W and 24-bit band-interleaved-by-pixel (BIP) for color. DOQs in Native format are cast to the Universal Transverse Mercator (UTM) projection and referenced to either the North American Datum (NAD) of 1927 (NAD27) or the NAD of 1983 (NAD83). GeoTIFF format consists of a georeferenced Tagged Image File Format (TIFF) file, with all geographic referencing information embedded within the .tif file. DOQs in GeoTIFF format are cast to the UTM projection and referenced to NAD83. The average file size of a B/W quarter-quad is 40-45 megabytes; a color file is generally 140-150 megabytes. Quarter-quad DOQs are distributed on CD-ROM, DVD, and via File Transfer Protocol (FTP) as uncompressed files. Downloadable conversion software (DOQQ-to-GeoTIFF) is available to convert a DOQ image from Native to GeoTIFF format in either NAD27 or NAD83. NOTE: This EML metadata file does not contain important geospatial data processing information. Before using any NWT LTER geospatial data, read the ArcGIS metadata XML file in either ISO or FGDC compliant format, using ArcGIS software (ArcCatalog > Description), or by viewing the .xml file provided with the geospatial dataset.

  10. Monthly Aggregated NEX-GDDP Ensemble Climate Projections: Historical...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Dec 12, 2021
    Brad Peter; Joseph Messina; Nishani Moragoda (2021). Monthly Aggregated NEX-GDDP Ensemble Climate Projections: Historical (1985–2005) and RCP 4.5 and RCP 8.5 (2006–2080) [Dataset]. http://doi.org/10.7910/DVN/ZNEJMS
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 12, 2021
    Dataset provided by
    Harvard Dataverse
    Authors
    Brad Peter; Joseph Messina; Nishani Moragoda
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Monthly Aggregated NEX-GDDP Ensemble Climate Projections: Historical (1985–2005) and RCP 4.5 and RCP 8.5 (2006–2080)

    This dataset is a monthly-scale aggregation of the NEX-GDDP (NASA Earth Exchange Global Daily Downscaled Climate Projections) dataset, processed using Google Earth Engine (Gorelick et al. 2017). The native delivery on Google Earth Engine is at the daily timescale for each individual CMIP5 GCM. This dataset was created to facilitate use of NEX-GDDP and to reduce processing times for projects that require an ensemble model at a coarser temporal resolution. The aggregated data have been made available in Google Earth Engine via 'users/cartoscience/GCM_NASA-NEX-GDDP/NEX-GDDP-PRODUCT-ID_Ensemble-Monthly_YEAR' (see code below on how to access them), and all 171 GeoTIFFs have been uploaded to this dataverse entry.

    Relevant links:
    https://www.nasa.gov/nex
    https://www.nccs.nasa.gov/services/data-collections/land-based-products/nex-gddp
    https://esgf.nccs.nasa.gov/esgdoc/NEX-GDDP_Tech_Note_v0.pdf
    https://developers.google.com/earth-engine/datasets/catalog/NASA_NEX-GDDP
    https://journals.ametsoc.org/view/journals/bams/93/4/bams-d-11-00094.1.xml
    https://rd.springer.com/article/10.1007/s10584-011-0156-z#page-1

    The dataset can be accessed within Google Earth Engine using the following code:

    var histYears = ee.List.sequence(1985,2005).getInfo()
    var rcpYears = ee.List.sequence(2006,2080).getInfo()
    var path1 = 'users/cartoscience/GCM_NASA-NEX-GDDP/NEX-GDDP-'
    var path2 = '_Ensemble-Monthly_'
    var product
    product = 'Hist'
    var hist = ee.ImageCollection(
      histYears.map(function(y) { return ee.Image(path1+product+path2+y) })
    )
    product = 'RCP45'
    var rcp45 = ee.ImageCollection(
      rcpYears.map(function(y) { return ee.Image(path1+product+path2+y) })
    )
    product = 'RCP85'
    var rcp85 = ee.ImageCollection(
      rcpYears.map(function(y) { return ee.Image(path1+product+path2+y) })
    )
    print(
      'Hist (1985–2005)', hist,
      'RCP45 (2006–2080)', rcp45,
      'RCP85 (2006–2080)', rcp85
    )
    var first = hist.first()
    var tMin = first.select('tasmin_1')
    var tMax = first.select('tasmax_1')
    var tMean = first.select('tmean_1')
    var pSum = first.select('pr_1')
    Map.addLayer(tMin, {min: -10, max: 40}, 'Average min temperature Jan 1985 (Hist)', false)
    Map.addLayer(tMax, {min: 10, max: 40}, 'Average max temperature Jan 1985 (Hist)', false)
    Map.addLayer(tMean, {min: 10, max: 40}, 'Average temperature Jan 1985 (Hist)', false)
    Map.addLayer(pSum, {min: 10, max: 500}, 'Accumulated rainfall Jan 1985 (Hist)', true)

    https://code.earthengine.google.com/5bfd9741274679dded7a95d1b57ca51d

    Ensemble average based on the following models: ACCESS1-0, BNU-ESM, CCSM4, CESM1-BGC, CNRM-CM5, CSIRO-Mk3-6-0, CanESM2, GFDL-CM3, GFDL-ESM2G, GFDL-ESM2M, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC-ESM-CHEM, MIROC-ESM, MIROC5, MPI-ESM-LR, MPI-ESM-MR, MRI-CGCM3, NorESM1-M, bcc-csm1-1, inmcm4

    Each annual GeoTIFF contains 48 bands (4 variables across 12 months):
    Temperature: monthly mean (tasmin, tasmax, tmean)
    Precipitation: monthly sum (pr)

    Bands 1–48 correspond with: tasmin_1, tasmax_1, tmean_1, pr_1, tasmin_2, tasmax_2, tmean_2, pr_2, tasmin_3, tasmax_3, tmean_3, pr_3, tasmin_4, tasmax_4, tmean_4, pr_4, tasmin_5, tasmax_5, tmean_5, pr_5, tasmin_6, tasmax_6, tmean_6, pr_6, tasmin_7, tasmax_7, tmean_7, pr_7, tasmin_8, tasmax_8, tmean_8, pr_8, tasmin_9, tasmax_9, tmean_9, pr_9, tasmin_10, tasmax_10, tmean_10, pr_10, tasmin_11, tasmax_11, tmean_11, pr_11, tasmin_12, tasmax_12, tmean_12, pr_12

    *Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D. and Moore, R., 2017. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, pp. 18–27.

    Project information: SEAGUL: Southeast Asia Globalization, Urbanization, Land and Environment Changes
    http://seagul.info/
    https://lcluc.umd.edu/projects/divergent-local-responses-globalization-urbanization-land-transition-and-environmental
    This project was made possible by the NASA Land-Cover/Land-Use Change Program (Grant #: 80NSSC20K0740)
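    Given the fixed band order (tasmin, tasmax, tmean, pr for each month), the 1-based band index of any variable/month pair can be computed directly; a minimal sketch of ours (the helper name is hypothetical):

```python
# Band order within each month of an annual NEX-GDDP ensemble GeoTIFF.
VARS = ("tasmin", "tasmax", "tmean", "pr")

def band_index(variable: str, month: int) -> int:
    """Return the 1-based band index of `variable` for `month` (1-12),
    following the 48-band layout listed above."""
    if variable not in VARS or not 1 <= month <= 12:
        raise ValueError("unknown variable or month out of range")
    return (month - 1) * len(VARS) + VARS.index(variable) + 1
```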

  11. Data from: Characterization of Industrial Smoke Plumes from Remote Sensing...

    • data.niaid.nih.gov
    Updated Nov 25, 2020
    Mommert, Michael (2020). Characterization of Industrial Smoke Plumes from Remote Sensing Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4250705
    Explore at:
    Dataset updated
    Nov 25, 2020
    Dataset provided by
    Scheibenreif, Linus
    Mommert, Michael
    Sigel, Mario
    Borth, Damian
    Neuhausler, Marcel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Characterization of Industrial Smoke Plumes from Remote Sensing Data

    This data set contains imaging data acquired by ESA's Sentinel-2 Earth-observing satellite constellation for a sample of industrial sites that were picked based on emission information provided by the European Pollutant Release and Transfer Register. The images contain scenes of mainly industrial sites, some of which are actively emitting smoke plumes.

    This data set was created to investigate whether it would be possible to train a deep learning model to automatically identify and segment smoke plumes from remote sensing image data. Please refer to the acknowledgements section for more information on this project.

    Description

    Each image is provided in the GeoTIFF file format, contains a total of 13 bands and georeferencing information, and has a shape of 120 x 120 pixels (corresponding to a square area with an edge length of 1.2 km on the ground). The bands are extracted from Sentinel-2 Level-2A products, except for band 10, which has been extracted from the corresponding Level-1C product (this band has not been utilized in the underlying work).

    This repository contains a total of 21,350 images. Based on manual annotation, the image sample was split into a sample of 3,750 positive images that contain industrial smoke plumes, and 17,600 negative images that do not contain smoke plumes. Furthermore, this repository contains a collection of JSON files that hold manual segmentation labels for smoke plumes present in 1,437 images. Segmentation labels were generated using label-studio. Please note that polygon edge coordinates have to be scaled by a factor of 1.2 to fit the images.
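    The scaling note above amounts to multiplying each polygon vertex by 1.2, for example (a minimal sketch of ours; the function name is hypothetical):

```python
def scale_polygon(coords, factor=1.2):
    """Scale polygon edge coordinates by `factor` so the label-studio
    polygons align with the 120 x 120 pixel images."""
    return [(x * factor, y * factor) for x, y in coords]
```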

    Content

    The following files are contained in this repository:

    README.md - this file

    images.tar.gz [6.0GB] - contains 21,350 GeoTIFF images

    segmentation_labels.tar.gz [350KB] - contains 1,437 JSON files

    Acknowledgement

    If you use this data set, please cite our publication:

    Mommert, M., Sigel, M., Neuhausler, M., Scheibenreif, L., Borth, D., "Characterization of Industrial Smoke Plumes from Remote Sensing Data", Tackling Climate Change with Machine Learning workshop at NeurIPS 2020.

    Please refer to this publication for additional information on the data set.

    The code used for this publication is available on GitHub.

    This data set contains modified Copernicus Sentinel data acquired in 2019, processed by ESA.

    Responsible Author

    Michael Mommert University of St. Gallen, Institute of Computer Science Chair Artificial Intelligence and Machine Learning michael.mommert ( at ) unisg.ch

  12. GlobPOP: A 31-year (1990-2020) global gridded population dataset generated...

    • zenodo.org
    tiff
    Updated Apr 18, 2025
    Luling Liu; Xin Cao; Xin Cao; Shijie Li; Na Jie; Luling Liu; Shijie Li; Na Jie (2025). GlobPOP: A 31-year (1990-2020) global gridded population dataset generated by cluster analysis and statistical learning [Dataset]. http://doi.org/10.5281/zenodo.10088105
    Explore at:
    tiffAvailable download formats
    Dataset updated
    Apr 18, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Luling Liu; Xin Cao; Xin Cao; Shijie Li; Na Jie; Luling Liu; Shijie Li; Na Jie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data Update Notice 数据更新通知

    We are pleased to announce that the GlobPOP dataset for the years 2021-2022 has undergone a comprehensive quality check and has been updated accordingly. Following the established methodology that ensures high precision and reliability, these latest updates allow for even more comprehensive time-series analysis. The updated GlobPOP dataset remains available in GeoTIFF format for easy integration into your existing workflows.


    To reflect these updates, our interactive web application has also been refreshed. Users can now explore the updated national population time-series curves from 1990 to 2022. This can be accessed via the same link: https://globpop.shinyapps.io/GlobPOP/. Thank you for your continued support of the GlobPOP, and we hope that the updated data will further enhance your research and policy analysis endeavors.


    If you encounter any issues, please contact us via email at lulingliu@mail.bnu.edu.cn.


    Introduction

    Continuously monitoring global population spatial dynamics is essential for implementing effective policies related to sustainable development, such as epidemiology, urban planning, and global inequality.

    Here, we present GlobPOP, a new continuous global gridded population product with a high-precision spatial resolution of 30 arc-seconds from 1990 to 2020. Our data-fusion framework is based on cluster analysis and statistical learning approaches, and fuses five existing products (the Global Human Settlements Layer Population (GHS-POP), the Global Rural Urban Mapping Project (GRUMP), the Gridded Population of the World Version 4 (GPWv4), the LandScan population datasets and the WorldPop datasets) into a new continuous global gridded population product (GlobPOP). The spatial validation results demonstrate that the GlobPOP dataset is highly accurate. To validate the temporal accuracy of GlobPOP at the country level, we have developed an interactive web application, accessible at https://globpop.shinyapps.io/GlobPOP/, where data users can explore the country-level population time-series curves of interest and compare them with census data.

    With the availability of GlobPOP dataset in both population count and population density formats, researchers and policymakers can leverage our dataset to conduct time-series analysis of population and explore the spatial patterns of population development at various scales, ranging from national to city level.

    Data description

    The product is produced at 30 arc-second resolution (approximately 1 km at the equator) and is made available in GeoTIFF format. There are two population formats: 'Count' (population count per grid cell) and 'Density' (population count per square kilometer for each grid cell).

    Each GeoTIFF filename has 5 fields that are separated by an underscore "_". A filename extension follows these fields. The fields are described below with the example filename:

    GlobPOP_Count_30arc_1990_I32

    Field 1: GlobPOP(Global gridded population)
    Field 2: Pixel unit is population "Count" or population "Density"
    Field 3: Spatial resolution is 30 arc seconds
    Field 4: Year "1990"
    Field 5: Data type is I32(Int 32) or F32(Float32)
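    The naming convention can be parsed mechanically, for instance (a sketch of ours assuming exactly the five underscore-separated fields documented above; the helper name is hypothetical):

```python
def parse_globpop_name(stem: str) -> dict:
    """Split a GlobPOP file name stem (without extension) into its five
    underscore-separated fields, as documented above."""
    fields = stem.split("_")
    if len(fields) != 5 or fields[0] != "GlobPOP":
        raise ValueError(f"unexpected GlobPOP file name: {stem}")
    product, unit, resolution, year, dtype = fields
    return {"product": product, "unit": unit, "resolution": resolution,
            "year": int(year), "dtype": dtype}
```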

    More information

    Please refer to the paper for detailed information:

    Liu, L., Cao, X., Li, S. et al. A 31-year (1990–2020) global gridded population dataset generated by cluster analysis and statistical learning. Sci Data 11, 124 (2024). https://doi.org/10.1038/s41597-024-02913-0.

    The fully reproducible codes are publicly available at GitHub: https://github.com/lulingliu/GlobPOP.

  13. Data from: Global patterns of plant functional traits and their...

    • zenodo.org
    zip
    Updated Aug 15, 2024
    Jiaze Li; Jiaze Li; Iain Colin Prentice; Iain Colin Prentice (2024). Global patterns of plant functional traits and their relationships to climate [Dataset]. http://doi.org/10.5281/zenodo.13325275
    Explore at:
    zipAvailable download formats
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jiaze Li; Jiaze Li; Iain Colin Prentice; Iain Colin Prentice
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data, code and high-quality figures of the project "Global patterns of plant functional traits and their relationships to climate", including Plant form data set, global maps of three bioclimatic variables (GeoTiff format), global trait maps for 16 plant functional traits and separate maps of 6 major traits for non-woody, woody deciduous and evergreen plants (GeoTiff format), global fractional cover of non-woody, woody deciduous and woody evergreen plants (GeoTiff format), as well as merged data sets used for the analyses, and the reproducible R Scripts used to conduct all data manipulations and analyses.

  14. Four-Decade (1979-2022) Daily Global Snow Cover Fraction Climate Data Record...

    • zenodo.org
    • data.niaid.nih.gov
    Updated Mar 22, 2025
    Xiongxin Xiao; Xiongxin Xiao; Kathrin Naegeli; Valentina Premier; Shaopeng Li; Christoph Neuhaus; Stefan Wunderle; Kathrin Naegeli; Valentina Premier; Shaopeng Li; Christoph Neuhaus; Stefan Wunderle (2025). Four-Decade (1979-2022) Daily Global Snow Cover Fraction Climate Data Record from AVHRR (Version 3.0) [Dataset]. http://doi.org/10.5281/zenodo.13385488
    Explore at:
    Dataset updated
    Mar 22, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Xiongxin Xiao; Xiongxin Xiao; Kathrin Naegeli; Valentina Premier; Shaopeng Li; Christoph Neuhaus; Stefan Wunderle; Kathrin Naegeli; Valentina Premier; Shaopeng Li; Christoph Neuhaus; Stefan Wunderle
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    VERSION UPDATE NOTE:

    This version 3.0 composited dataset has not been publicly available since January 2025. The composited dataset was generated from the ESA CCI AVHRR SCF V3.0 single-sensor products, which are available at https://catalogue.ceda.ac.uk/uuid/7491427f8c3442ce825ba5472c224322/ and https://catalogue.ceda.ac.uk/uuid/56ff07acabab42888afe2d20b488ec49/. Anyone interested in the AVHRR SCF V3.0 products can run the compositing according to their needs.

    Moreover, our new version of the dataset (Version 4.0), including the daily composited AVHRR SCF product, will be available soon. Given some obvious underestimation errors and missing snow cover pixels in the AVHRR V3.0 SCF products, we have made significant improvements in our updated AVHRR SCF products (V4.0).

    [--12/23/2024]

    --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Data Description

    This dataset provides daily composited snow cover fraction (SCF) data spanning four decades, from 1979 to 2022. SCF represents the proportion of snow-covered area observed from space. The dataset combines multiple SCF observations derived from three generations of Advanced Very High Resolution Radiometer (AVHRR) sensors, as part of the ESA CCI+ Snow project (AVHRR/1: TIROS-N, NOAA-6, 8, and 10; AVHRR/2: NOAA-7, 9, 11, 12, and 14; AVHRR/3: NOAA-15, 16, 17, 18, 19, and METOP-A/B/C).

    Data types

    The dataset includes two types of SCF:

    • Snow Cover Fraction Viewable (SCFV): Represents the snow area visible from space over the land surface; in forested areas, this refers to snow visible on top of the forest canopy.
    • Snow Cover Fraction on Ground (SCFG): Represents snow on the ground, with corrections applied in forested areas to account for the obstruction caused by the forest canopy.

    Both SCFV and SCFG are provided in percent (%) per grid cell, at a spatial resolution of 0.05° (approximately 5 km). The data cover all land areas globally, excluding the Antarctic and Greenland ice sheets.

    Important Notes:

    • Due to limitations in satellite observations, 110 days of data are missing from the dataset. A list of these dates is provided in the accompanying file "Dates List of Missing Data.xlsx".
    • The dataset is organized by year, with each day's data stored in a separate GeoTIFF (*.tif) file.

    Code for the AVHRR SCF products:

    Codes     Description
    0-100     Snow cover fraction (SCF) [%]; 0: snow free; 100: fully snow covered
    205       Cloud
    206       Polar night
    210       Water
    215       Glacier, icecaps, ice sheets
    254       ERROR: no satellite acquisition
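    A pixel value from these products can be interpreted with a simple lookup (a sketch of ours mirroring the code table above; names are hypothetical):

```python
# Flag codes of the AVHRR SCF products; values 0-100 are SCF in percent.
SCF_FLAGS = {
    205: "Cloud",
    206: "Polar night",
    210: "Water",
    215: "Glacier, icecaps, ice sheets",
    254: "ERROR: no satellite acquisition",
}

def decode_scf(value: int):
    """Return ('SCF', percent) for valid fractions, or ('flag', label)
    for the special codes listed above."""
    if 0 <= value <= 100:
        return ("SCF", value)
    if value in SCF_FLAGS:
        return ("flag", SCF_FLAGS[value])
    raise ValueError(f"unknown SCF code: {value}")
```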

  15. Soil-Adjusted Vegetation Index (SAVI) derived from 2017 National Agriculture...

    • portal.edirepository.org
    • search.dataone.org
    kml, pdf +2
    Updated Nov 12, 2019
    Michelle Stuhlmacher (2019). Soil-Adjusted Vegetation Index (SAVI) derived from 2017 National Agriculture Imagery Program (NAIP) data for the central Arizona region [Dataset]. http://doi.org/10.6073/pasta/f715f5f896f5f47118240e3f174e476b
    Explore at:
    tiff(4911168640 byte), tiff(2446036790 byte), tiff(4774563658 byte), tiff(4552875562 byte), tiff(4760855490 byte), text/javascript(12598 byte), tiff(4635487497 byte), pdf(29871 byte), tiff(1559505306 byte), tiff(4915989891 byte), tiff(2594399679 byte), tiff(4735822149 byte), tiff(4683828833 byte), tiff(1773712922 byte), tiff(4704596932 byte), tiff(4835786304 byte), tiff(930489247 byte), tiff(4665981562 byte), tiff(2466351168 byte), tiff(4585559640 byte), tiff(4769567944 byte), tiff(4676023816 byte), tiff(2529596495 byte), kml(22385 bytes), tiff(1741142384 byte), tiff(2621679405 byte), tiff(4710737200 byte), tiff(4673149833 byte), tiff(4813207426 byte), tiff(2631820880 byte), tiff(4762735665 byte)Available download formats
    Dataset updated
    Nov 12, 2019
    Dataset provided by
    EDI
    Authors
    Michelle Stuhlmacher
    Time period covered
    Jun 1, 2017 - Jun 4, 2017
    Area covered
    Variables measured
    Name, raster_value
    Description

    This project calculates the Soil-adjusted Vegetation Index (SAVI) from 2017 National Agriculture Imagery Program (NAIP) imagery (1-meter resolution) for the central Arizona region. Because of their large size, data (as GeoTIFF files) for each survey year are provided as multiple individual tiles each comprising a portion of the overall coverage area. An index of the relative position of each tile in the coverage area is provided as a pdf and kml where the tile index contains a portion of the GeoTIFF file name (e.g., the relative position of the data file NAIP_SAVI_CAP2017-0000000000-0000000000.tif to the overall coverage area is identified by the index id 0000000000-0000000000 in the pdf and kml index maps).

    Javascript code used to process SAVI values is included with this dataset.
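    For reference, SAVI for a single pixel follows the standard formulation (Huete, 1988). The dataset description does not state the soil-adjustment factor used, so L = 0.5 below is the common default, not a documented choice:

```python
def savi(nir: float, red: float, L: float = 0.5) -> float:
    """Soil-Adjusted Vegetation Index:
    (NIR - Red) / (NIR + Red + L) * (1 + L)."""
    return (nir - red) / (nir + red + L) * (1 + L)
```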

  16. La Soufrière volcano (Saint Vincent) Fusion of Pleiades (2014, 2 m) and...

    • zenodo.org
    zip
    Updated Apr 8, 2021
    Grandin Raphael; Grandin Raphael; Delorme Arthur; Delorme Arthur (2021). La Soufrière volcano (Saint Vincent) Fusion of Pleiades (2014, 2 m) and Copernicus (2018, 30 m) digital elevation models [Dataset]. http://doi.org/10.5281/zenodo.4668734
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 8, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Grandin Raphael; Grandin Raphael; Delorme Arthur; Delorme Arthur
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    La Soufrière, Saint Vincent and the Grenadines
    Description

    Release 1.1 For Zenodo
    ===========================

    Authors

    Raphaël GRANDIN1 and Arthur DELORME2

    1: Université de Paris, Institut de Physique du Globe de Paris. Email: grandin@ipgp.fr

    2: Université de Paris, Institut de Physique du Globe de Paris. Email: delorme@ipgp.fr

    ===========================
    1. Collection Overview

    This collection contains a digital surface model (DSM) of the Soufrière volcano (Saint Vincent) calculated from Pleiades images acquired in 2014, hole-filled with the 2018 Copernicus digital elevation model (DEM).

    The Pleiades dataset consists of three images acquired in 2014:

    * image A = `DS_PHR1A_201407041445368_FR1_PX_W062N13_1009_00974`
    * image B = `DS_PHR1A_201409271441564_FR1_PX_W062N13_1009_00974`
    * image C = `DS_PHR1A_201410161445303_SE1_PX_W062N13_1009_00974`

    By combining these three images, three different digital surface models (DSMs) were computed (AB, BC and ABC). The three Pleiades DSMs were then merged together, taking advantage of the different cloud cover in the three pairs / triplets. Areas that are not visible in any of the three DSMs due to clouds are subsequently filled with the Copernicus DEM.

    The collection includes five folders:

    1. **Report**:
    * "SaintVincent_DEM_Pleiades_Copernicus_fusion_Grandin_Delorme_2021.pdf": report
    2. **DSM**: the merged DSM in Geotiff format:
    * "SaintVincent_Pleiades_Copernicus_merged.tif": the merged Pleiades DSM + Copernicus DEM
    3. **Data**: the three Pleiades DSMs in Geotiff format:
    * "SaintVincent_Pleiades_AB_dsm.tif": the Pleiades DSM computed from images A and B
    * "SaintVincent_Pleiades_AB_cor.tif": the correlation score between images A and B
    * "SaintVincent_Pleiades_BC_dsm.tif": the Pleiades DSM computed from images B and C
    * "SaintVincent_Pleiades_BC_cor.tif": the correlation score between images B and C
    * "SaintVincent_Pleiades_ABC_dsm.tif": the Pleiades DSM computed from images A, B and C
    * "SaintVincent_Pleiades_ABC_cor.tif": the correlation score between images A, B and C
    4. **KMZ**: quickviews in KMZ format:
    * "SaintVincent_Pleiades_Copernicus_merged_color.kmz": the merged Pleiades DSM + Copernicus DEM in KMZ format (color version)
    * "SaintVincent_Pleiades_Copernicus_merged_shaded.kmz": the merged Pleiades DSM + Copernicus DEM in KMZ format (hillshade version)
    5. **Figures**: the figures shown in the report

    ===========================

    2. Dataset Acknowledgement

    Access to Pleiades data was granted through the DINAMIS program (https://dinamis.teledetection.fr/) via project ID 2021-055-Sci (PI: Raphaël Grandin, IPGP).

    This work was supported by public funds received in the framework of GEOSUD, a project (ANR-10-EQPX-20) of the program "Investissements d’Avenir" managed by the French National Research Agency.

    Calculation of the Pleiades DSM used the S-CAPAD cluster of IPGP.

    ===========================

    3. Dataset Attribution

    This dataset is licensed under a Creative Commons CC BY-NC 4.0 International License (Attribution-NonCommercial).


    Attribution required for copies and derivative works:

    The underlying dataset from which this work has been derived includes Pleiades material ©CNES (2014), distributed by AIRBUS DS, and EO material ©CCME (2018), provided under COPERNICUS by the European Union and ESA, all rights reserved.

    ===========================

    4. Dataset Citation

    Grandin and Delorme (2021). “La Soufrière volcano (Saint Vincent) – Fusion of Pleiades (2014, 2 m) and Copernicus (2018, 30 m) digital elevation models”.

    Dataset distributed on Zenodo: https://doi.org/10.5281/zenodo.4668734

    Dataset distributed on GitHub: https://github.com/RaphaelGrandin/SaintVincent_DEM_Pleiades_Copernicus

    @misc{grandindelorme2021,
      title={{La Soufriere volcano (Saint Vincent) -- Fusion of Pleiades (2014, 2 m) and Copernicus (2018, 30 m) digital elevation models}},
      author={Grandin, Raphael and Delorme, Arthur},
      year={2021},
      howpublished={Dataset on Zenodo},
      doi={10.5281/zenodo.4668734}
    }

    ===========================

    5. Collection Location

    Country: Saint Vincent and the Grenadines

    Bounding box:

    ===========================

    6. Method


    Three digital surface models (DSMs) are computed from panchromatic images from the Pleiades satellite, whose ground sampling distance (GSD) is 0.5 m. As no stereoscopic acquisition of the volcano area is available in the archive catalog, the processed images are monoscopic acquisitions, taken on three dates: 4 July 2014 (image A, [Figure 1](Figures/DS_PHR1A_201407041445368_FR1_PX_W062N13_1009_00974.png?raw=true)), 27 September 2014 (image B, [Figure 2](Figures/DS_PHR1A_201409271441564_FR1_PX_W062N13_1009_00974.png)) and 16 October 2014 (image C, [Figure 3](Figures/DS_PHR1A_201410161445303_SE1_PX_W062N13_1009_00974.png)). This dataset, with images from different dates that are partially covered by clouds, is not ideal for producing a DSM. The approach is therefore to produce several DSMs from different combinations of images, then to merge these DSMs, and finally to fill any remaining holes by interpolation or with an external DSM, namely the Copernicus DEM (https://spacedata.copernicus.eu/web/cscda/dataset-details?articleId=394198). Considering the base-to-height ratio of the different image pairs, three combinations of images appeared likely to provide satisfactory results: A-B, B-C and A-B-C.


    Images are processed using the open-source photogrammetry software MicMac (Rupnik et al., 2017). First, the geometry model of each image is translated into MicMac format (Convert2GenBundle command). Then, tie points are extracted from each possible pair of images (Tapioca). A bundle block adjustment is performed between the three images to refine the geometry models (Campari). Finally, the three DSMs are computed separately, by correlation between images A-B (1), B-C (2) and A-B-C (3) (Malt). The GSD of the DSMs is 0.5 m, thanks to MicMac's multi-scale approach and regularization criterion. They are downsampled to 2 m to improve the signal-to-noise ratio ([Figure 4a](Figures/AB_dsm_raw.png), [Figure 4c](Figures/BC_dsm_raw.png), [Figure 4e](Figures/ABC_dsm_raw.png)). Each DSM comes with a per-pixel correlation score, which can be used to remove pixels whose correlation score falls below a given threshold ([Figure 4b](Figures/AB_cor_raw.png), [Figure 4d](Figures/BC_cor_raw.png), [Figure 4f](Figures/ABC_cor_raw.png)).


    The areas masked by clouds in the Pleiades DSMs are then filled with the digital elevation model from Copernicus. A threshold on the correlation score is used to build a cloud mask. Finally, the three hole-filled DSMs are merged using the correlation score as a weighting factor ([Figure 5](Figures/Merged.png)).
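    A minimal sketch of this threshold-mask-and-weighted-merge step (NumPy; the function name, array layout, and threshold value are illustrative assumptions, not the authors' MicMac workflow):

```python
import numpy as np

def merge_dsms(dsms, scores, cor_threshold=0.5, fallback=None):
    """Merge several DSMs using correlation scores as weights.

    dsms, scores: lists of same-shape float arrays (NaN = no data).
    Pixels whose correlation score falls below cor_threshold are
    masked out (e.g. clouds); the remaining pixels are averaged with
    score weights. Pixels with no valid DSM at all are filled from
    `fallback` (e.g. a coarser external DEM resampled to the grid)."""
    dsms = np.stack([d.astype(float) for d in dsms])
    w = np.stack([s.astype(float) for s in scores])
    w[w < cor_threshold] = 0.0      # mask low-correlation (cloudy) pixels
    w[np.isnan(dsms)] = 0.0         # mask no-data pixels
    filled = np.nan_to_num(dsms)
    wsum = w.sum(axis=0)
    merged = np.divide((w * filled).sum(axis=0), wsum,
                       out=np.full(wsum.shape, np.nan), where=wsum > 0)
    if fallback is not None:        # fill remaining holes from external DEM
        merged = np.where(np.isnan(merged), fallback, merged)
    return merged
```

    Using the correlation score as a weight favors the better-matched DSM wherever the three estimates disagree, while the fallback array plays the role of the Copernicus DEM for cloud holes.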

    ===========================

    References

    [1] Ewelina Rupnik, Mehdi Daakir, and Marc Pierrot Deseilligny. MicMac – a free, open-source solution for photogrammetry. Open Geospatial Data, Software and Standards, 2(1):1–9, 2017.

  17. Uncertainty maps for model-based global climate classification systems

    • portalcienciaytecnologia.jcyl.es
    • springernature.figshare.com
    Updated 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Navarro, Andrés; Merino, Andrés; García-Ortega, Eduardo; Tapiador, Francisco; Navarro, Andrés; Merino, Andrés; García-Ortega, Eduardo; Tapiador, Francisco (2025). Uncertainty maps for model-based global climate classification systems [Dataset]. https://portalcienciaytecnologia.jcyl.es/documentos/67a9c7c719544708f8c72463
    Explore at:
    Dataset updated
    2025
    Authors
    Navarro, Andrés; Merino, Andrés; García-Ortega, Eduardo; Tapiador, Francisco; Navarro, Andrés; Merino, Andrés; García-Ortega, Eduardo; Tapiador, Francisco
    Description

    The dataset includes climate classification maps and their associated consensus maps for the present climate and three future Shared Socio-economic Pathways (SSPs). These maps represent the Earth's climate zones according to four well-established classification schemes: Holdridge, Köppen, Thornthwaite, and Whittaker. The data are based on Global Climate Model (GCM) simulations from the Coupled Model Intercomparison Project Phase 6 (CMIP6). For further information on the source archives of these climate classification maps, please refer to the 'Data Records' section in Navarro et al. (2025). If you use these maps in any publications, please cite Navarro et al. (2025).

    code.tar.gz This tar.gz contains the source code used to generate the results presented in the manuscript.

    GeoTiff.tar.gz This tar.gz contains GeoTIFF files for four climate classification schemes (Holdridge, Köppen, Thornthwaite, and Whittaker) along with corresponding consensus maps for present climate and three future SSPs.

    NetCDF.tar.gz This tar.gz contains NetCDF files for four climate classification schemes (Holdridge, Köppen, Thornthwaite, and Whittaker) along with corresponding consensus maps for present climate and three future SSPs.

    BIL.tar.gz This tar.gz contains BIL and HDR files for four climate classification schemes (Holdridge, Köppen, Thornthwaite, and Whittaker) along with corresponding consensus maps for present climate and three future SSPs.
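    As an illustration of what a consensus map represents, the sketch below derives a modal-class map and a per-pixel agreement fraction from a stack of per-model classification maps (NumPy; a generic illustration, not the exact procedure of Navarro et al. (2025)):

```python
import numpy as np

def consensus(class_maps):
    """Given per-model class maps (integer class codes, same shape),
    return the modal class per pixel and the fraction of models
    that agree with it."""
    stack = np.stack(class_maps)          # shape: (n_models, H, W)
    n = stack.shape[0]
    modal = np.zeros(stack.shape[1:], dtype=stack.dtype)
    agreement = np.zeros(stack.shape[1:], dtype=float)
    for cls in np.unique(stack):
        votes = (stack == cls).sum(axis=0)   # models voting for this class
        better = votes > agreement * n       # beats current best class?
        modal[better] = cls
        agreement[better] = votes[better] / n
    return modal, agreement
```

    The GeoTIFF/NetCDF/BIL archives above would supply the per-model class maps; the agreement layer is what the uncertainty maps visualize.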


Markus Neteler; Markus Neteler; Julia Haas; Julia Haas; Markus Metz; Markus Metz (2024). Copernicus Digital Elevation Model (DEM) for Europe at 100 meter resolution (EU-LAEA) derived from Copernicus Global 30 meter DEM dataset [Dataset]. http://doi.org/10.5281/zenodo.6211990

Copernicus Digital Elevation Model (DEM) for Europe at 100 meter resolution (EU-LAEA) derived from Copernicus Global 30 meter DEM dataset

Explore at:
3 scholarly articles cite this dataset
png, tiff, xml, binAvailable download formats
Dataset updated
Jul 17, 2024
Dataset provided by
Zenodo (http://zenodo.org/)
Authors
Markus Neteler; Markus Neteler; Julia Haas; Julia Haas; Markus Metz; Markus Metz
License

Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically

Area covered
Europe
Description

Overview:
The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth, including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (approximately 1 arc second). Note that ocean areas have no tiles; there, height values can be assumed equal to zero. Data are provided as Cloud Optimized GeoTIFFs. Note that the vertical unit for elevation height is meters.

The Copernicus DEM for Europe at 100 meter resolution (EU-LAEA projection) in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, a dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).

Processing steps:
The original Copernicus GLO-30 DEM contains a significant percentage of tiles with non-square pixels. We created a mosaic map in VRT format and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e., when importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-to-width ratio of non-square pixels is up to 1:5; hence, artefacts between adjacent tiles in rugged terrain could be minimized:

gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt

In order to reproject the data to the EU-LAEA projection while reducing the spatial resolution to 100 m, bilinear resampling was performed in GRASS GIS (using r.proj), and the pixel values were scaled by 1000 (storing the pixels as Integer values) to reduce data volume. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.

Projection + EPSG code:
ETRS89-extended / LAEA Europe (EPSG: 3035)

Spatial extent:
north: 6874000
south: -485000
west: 869000
east: 8712000

Spatial resolution:
100 m

Pixel values:
meters * 1000 (scaled to Integer; example: value 23220 = 23.220 m a.s.l.)
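Decoding the scaled integers back to meters is a single division. A minimal sketch (assuming the COG has already been read into a NumPy array, e.g. via GDAL or rasterio):

```python
import numpy as np

# Elevation pixels are stored as Integer = meters * 1000,
# so dividing by 1000 recovers meters above sea level.
def decode_elevation(raw):
    return raw.astype(np.float64) / 1000.0

raw = np.array([23220, 150000], dtype=np.int32)
print(decode_elevation(raw))  # elevations in meters a.s.l.
```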

Software used:
GDAL 3.2.2 and GRASS GIS 8.0.0 (r.proj; r.relief)

Original dataset license:
https://spacedata.copernicus.eu/documents/20126/0/CSCDA_ESA_Mission-specific+Annex.pdf

Processed by:
mundialis GmbH & Co. KG, Germany (https://www.mundialis.de/)
