The Sentinel-1C satellite was launched December 5, 2024. Sentinel-1C is the latest satellite to be added to the Sentinel-1 constellation. The Sentinel-1 satellites (Sentinel-1A, Sentinel-1B, and Sentinel-1C) are sun-synchronous polar-orbiting satellites that operate day and night performing C-band synthetic aperture radar (SAR) imaging. The Sentinel-1 satellites operate in four imaging modes with different resolutions (down to 5 meters) and coverage (up to 400 kilometers). The Sentinel-1 satellites provide dual polarization capability and short revisit times.
Sentinel-1C Single Look Complex (SLC) data products consist of focused SAR data, geo-referenced using orbit and attitude data from the satellite, and are provided in slant-range geometry. Slant range is the natural radar range observation coordinate, defined as the line-of-sight from the radar to each reflecting object. The products are in zero-Doppler orientation, where each row of pixels represents points along a line perpendicular to the sub-satellite track.
The products include a single look in each dimension using the full available signal bandwidth and complex samples (real and imaginary) preserving the phase information. The products have been geo-referenced using the satellite’s orbit and attitude data and have been corrected for azimuth bi-static delay, elevation antenna pattern, and range spreading loss.
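As a simple illustration of what the complex samples carry, here is a minimal numpy sketch; the slc array is a hypothetical pixel block, not read from an actual product, and no radiometric calibration is applied:
import numpy as np

# Hypothetical complex-valued SLC pixel block (real and imaginary parts).
slc = np.array([[1.0 + 2.0j, -0.5 + 0.3j],
                [0.2 - 1.1j, 3.0 + 0.0j]], dtype=np.complex64)

amplitude = np.abs(slc)     # detected amplitude
intensity = amplitude ** 2  # intensity (radiometric calibration would still be required)
phase = np.angle(slc)       # phase in radians; only SLC products preserve this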
The data products in this collection mirror the Sentinel-1C products provided through the Copernicus Data Space Ecosystem.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets and codes used in the research article "Revealing European-wide ecosystem strategies to drought from space".
The LAI data presented here is derived from the GEOV2 LAI product, provided by the European Union’s Copernicus Land Monitoring Service. Original data accessed from: https://land.copernicus.eu/global/products/lai.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This archive contains remote sensing datasets computed as part of the study Lava Tube System Development Defined by Multispectral Imaging and InSAR: the Case of the 2024 Eruption of Fernandina Volcano (Galápagos), which has been submitted for possible publication in Journal of Geophysical Research - Solid Earth. The following datasets are available:
Landsat 8/9 images, cropped to the lava field. Two composite images are given: (1) a lava-flow composite image from bands 10, 7 and 6; and (2) a true-colour composite. The images cover a period from Mar. 2024 to Oct. 2024. The archive name is Fernandina2024_lavaflow_Landsat_images.zip.
Sentinel-2 images, cropped to the lava field. The composite images are created from the SWIR, NIR and Red bands. The images cover a period from Feb. 2024 to Oct. 2024. The archive name is Fernandina2024_lavaflow_Sentinel2_images.zip.
Sentinel-1 InSAR coherence images are given in Fernandina2024_lavaflow_Sentinel1_images.zip. In detail, the images are Sentinel-1 Ascending (relative orbit: 106) and Sentinel-1 Descending (relative orbit: 128) images between Feb. 2024 and Oct. 2024.
The maps of InSAR estimates (displacements, thickness, etc.) derived for the lava field are provided in the file Fernandina2024_lavaflow_Sentinel1_results.zip.
The file Fernandina2024_lavaflow_vector.zip includes the vector files used in the study.
The file TS_displacement_summit_2024.txt includes the InSAR vertical displacements between Feb. 2024 and Oct. 2024, for a point at the summit of Fernandina.
Raster and vector data are given in GeoTIFF and shapefile formats, with CRS codes EPSG:32615 and EPSG:32715. Further information can be found in the metadata.
Coherence images are given in uchar format. Please rescale the pixel values between 0 and 1 before use.
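A minimal sketch of that rescaling with rasterio and numpy follows; the file name is a placeholder, and a linear 0-255 to 0-1 mapping is assumed from the uchar encoding:
import numpy as np
import rasterio

# Placeholder file name for one of the provided coherence GeoTIFFs.
with rasterio.open("Fernandina2024_coherence_example.tif") as src:
    coherence_uchar = src.read(1).astype(np.float32)

# Assumed linear encoding: uchar values 0-255 map to coherence 0-1.
coherence = coherence_uchar / 255.0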
The datasets are derived from original satellite imagery as follows:
Copernicus Sentinel-1 data are retrieved from ASF DAAC and the Copernicus Data Space Ecosystem (2024), processed by ESA. Copernicus Sentinel-2 data are retrieved from the Copernicus Data Space Ecosystem (2024), processed by ESA. Landsat-8/9 and ICESat-2 data are retrieved from USGS Earth Explorer (2024), processed by NASA. Maxar WorldView-1 images acquired on 10 November 2024 were provided through the NASA Commercial Satellite Data Acquisition (CSDA) Program. Copernicus DEM (free license): ©DLR e.V. 2010-2014 and ©Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved (https://doi.org/10.5270/ESA-c5d3d65).
Sentinel-2, 10, 20, and 60m Multispectral, Multitemporal, 13-band imagery is rendered on-the-fly and available for visualization. This imagery layer pulls directly from the Sentinel-2 on AWS collection and is updated daily with new imagery.

This imagery layer can be applied across a number of industries, scientific disciplines, and management practices. Some applications include, but are not limited to, land cover and environmental monitoring, climate change, deforestation, disaster and emergency management, national security, plant health and precision agriculture, forest monitoring, watershed analysis and runoff predictions, land-use planning, tracking urban expansion, highlighting burned areas and estimating fire severity.

Geographic Coverage
Global: continental land masses from 65.4° South to 72.1° North, with these special guidelines: all coastal waters up to 20 km from the shore; all islands greater than 100 km²; all EU islands; all closed seas (e.g. Caspian Sea); the Mediterranean Sea.

Temporal Coverage
The revisit time for each point on Earth is every 5 days. This layer is updated daily with new imagery and includes a rolling collection of imagery acquired within the past 14 months. The number of images available will vary depending on location.

Product Level
This service provides Level-1C Top of Atmosphere imagery. Alternatively, Sentinel-2 Level-2A is also available.

Image Selection/Filtering
The most recent and cloud-free images are displayed by default. Any image available within the past 14 months can be displayed via custom filtering. Filtering can be done based on attributes such as Acquisition Date, Estimated Cloud Cover, and Tile ID. Tile_ID is computed as [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence].
Visual Rendering
Default rendering is Natural Color (bands 4,3,2) with Dynamic Range Adjustment (DRA). The DRA version of each layer enables visualization of the full dynamic range of the images. Rendering (or display) of band combinations and calculated indices is done on-the-fly from the source images via Raster Functions. Various pre-defined Raster Functions can be selected or custom functions created. Available renderings include: Agriculture with DRA, Bathymetric with DRA, Color-Infrared with DRA, Natural Color with DRA, Short-wave Infrared with DRA, Geology with DRA, NDMI Colorized, Normalized Difference Built-Up Index (NDBI), NDWI Raw, NDWI - with VRE Raw, NDVI - with VRE Raw (NDRE), NDVI - VRE only Raw, NDVI Raw, Normalized Burn Ratio, NDVI Colormap.

Multispectral Bands
Band | Description | Wavelength (µm) | Resolution (m)
1 | Coastal aerosol | 0.433 - 0.453 | 60
2 | Blue | 0.458 - 0.523 | 10
3 | Green | 0.543 - 0.578 | 10
4 | Red | 0.650 - 0.680 | 10
5 | Vegetation Red Edge | 0.698 - 0.713 | 20
6 | Vegetation Red Edge | 0.733 - 0.748 | 20
7 | Vegetation Red Edge | 0.773 - 0.793 | 20
8 | NIR | 0.785 - 0.900 | 10
8A | Narrow NIR | 0.855 - 0.875 | 20
9 | Water vapour | 0.935 - 0.955 | 60
10 | SWIR - Cirrus | 1.365 - 1.385 | 60
11 | SWIR-1 | 1.565 - 1.655 | 20
12 | SWIR-2 | 2.100 - 2.280 | 20

Additional Notes
Overviews exist with a spatial resolution of 150m and are updated every quarter based on the best and latest imagery available at that time. To work with source images at all scales, the 'Lock Raster' functionality is available. NOTE: 'Lock Raster' should only be used on the layer for short periods of time, as the imagery and associated record Object IDs may change daily. This ArcGIS Server dynamic imagery layer can be used in Web Maps and ArcGIS Desktop as well as Web and Mobile applications using the REST based Image services API. Images can be exported up to a maximum of 4,000 columns x 4,000 rows per request.

Data Source
Sentinel-2 imagery is the result of close collaboration between the European Space Agency (ESA), the European Commission and USGS. Data is hosted by Amazon Web Services as part of their Registry of Open Data. Users can access the imagery from Sentinel-2 on AWS, or alternatively access EarthExplorer or the Copernicus Data Space Ecosystem to download the scenes. For information on Sentinel-2 imagery, see Sentinel-2.
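As an aid to filtering, the following is a minimal Python sketch for parsing a Tile_ID of the form given above; the regular expression and the example ID are assumptions based on that pattern, not an official parser:
import re
from datetime import datetime

# Assumed pattern: [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence]
TILE_ID_RE = re.compile(
    r"^(?P<timestamp>\d{8}T\d{6})_(?P<utm>\d{2})(?P<latband>[A-Z])(?P<square>[A-Z]{2})_(?P<sequence>\d+)$"
)

def parse_tile_id(tile_id: str) -> dict:
    match = TILE_ID_RE.match(tile_id)
    if match is None:
        raise ValueError(f"Unrecognised Tile_ID: {tile_id}")
    parts = match.groupdict()
    parts["acquired"] = datetime.strptime(parts.pop("timestamp"), "%Y%m%dT%H%M%S")
    return parts

# Hypothetical example ID.
print(parse_tile_id("20220721T095041_35VLG_0"))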
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains annotated marine vessels from 15 different Sentinel-2 products, used for training object detection models for marine vessel detection. The vessels are annotated as bounding boxes that also cover some of the wake, if present.
Location | Product name |
Archipelago sea | S2B_MSIL1C_20220619T100029_N0400_R122_T34VEM_20220619T104419 |
S2A_MSIL1C_20220721T095041_N0400_R079_T34VEM_20220721T115325 | |
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEM_20220813T120233 | |
Gulf of Finland | S2B_MSIL1C_20220606T095029_N0400_R079_T35VLG_20220606T105944 |
S2B_MSIL1C_20220626T095039_N0400_R079_T35VLG_20220626T104321 | |
S2A_MSIL1C_20220721T095041_N0400_R079_T35VLG_20220721T115325 | |
Bothnian Bay | S2A_MSIL1C_20220627T100611_N0400_R022_T34WFT_20220627T134958 |
S2B_MSIL1C_20220712T100559_N0400_R022_T34WFT_20220712T121613 | |
S2B_MSIL1C_20220828T095549_N0400_R122_T34WFT_20220828T104748 | |
Bothnian Sea | S2B_MSIL1C_20210714T100029_N0500_R122_T34VEN_20230224T120043 |
S2A_MSIL1C_20220624T100041_N0400_R122_T34VEN_20220624T120211 | |
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEN_20220813T120233 | |
Kvarken | S2A_MSIL1C_20220617T100611_N0400_R022_T34VER_20220617T135008 |
S2B_MSIL1C_20220712T100559_N0400_R022_T34VER_20220712T121613 | |
S2A_MSIL1C_20220826T100611_N0400_R022_T34VER_20220826T135136 |
T34VEM
|-20220619
|-20220721
|-20220813
Product name | Number of annotations |
S2B_MSIL1C_20220619T100029_N0400_R122_T34VEM_20220619T104419 | 591 |
S2A_MSIL1C_20220721T095041_N0400_R079_T34VEM_20220721T115325 | 1518 |
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEM_20220813T120233 | 1368 |
S2B_MSIL1C_20220606T095029_N0400_R079_T35VLG_20220606T105944 | 248 |
S2B_MSIL1C_20220626T095039_N0400_R079_T35VLG_20220626T104321 | 1206 |
S2A_MSIL1C_20220721T095041_N0400_R079_T35VLG_20220721T115325 | 971 |
S2A_MSIL1C_20220627T100611_N0400_R022_T34WFT_20220627T134958 | 122 |
S2B_MSIL1C_20220712T100559_N0400_R022_T34WFT_20220712T121613 | 162 |
S2B_MSIL1C_20220828T095549_N0400_R122_T34WFT_20220828T104748 | 98 |
S2B_MSIL1C_20210714T100029_N0301_R122_T34VEN_20210714T121056 | 450 |
S2A_MSIL1C_20220624T100041_N0400_R122_T34VEN_20220624T120211 | 424 |
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEN_20220813T120233 | 399 |
S2A_MSIL1C_20220617T100611_N0400_R022_T34VER_20220617T135008 | 83 |
S2B_MSIL1C_20220712T100559_N0400_R022_T34VER_20220712T121613 | 183 |
S2A_MSIL1C_20220826T100611_N0400_R022_T34VER_20220826T135136 | 88 |
mean | min | 25% | 50% | 75% | max | |
Area (m²) | 5305.7 | 567.9 | 1629.9 | 2328.2 | 5176.3 | 414795.7 |
Diameter (m) | 92.5 | 33.9 | 57.9 | 69.4 | 108.3 | 913.9 |
Model | Fold | Precision | Recall | mAP50 | mAP |
yolov8n | 1 | 0.820806 | 0.838353 | 0.842 | 0.403
yolov8s | 4 | 0.843822 | 0.860479 | 0.865 | 0.422 |
yolov8m | 4 | 0.858263 | 0.874616 | 0.880 | 0.453 |
yolov8l | 1 | 0.840311 | 0.863553 | 0.862 | 0.443 |
yolov8x | 1 | 0.855134 | 0.859865 | 0.876 | 0.450 |
Post-processing: before evaluation, predictions are cleaned by discarding those whose centroid points are not located on water. The water mask contains the layers jarvi (Lakes), meri (Sea) and virtavesialue (Rivers as polygon geometry) from the Topographical database by the National Land Survey of Finland; unfortunately this also discards all points not within the Finnish borders. Predictions whose centroid points fall on water rock areas (layer vesikivikko from the Topographical database) are discarded, as are predictions containing an above-water rock (classes 38511, 38512 and 38513 from the layer vesikivi in the Topographical database), a lighthouse or a sector light (Väylävirasto data, ty_njr class ids 1, 2, 3, 4, 5, 8), or a wind turbine (Topographical database layer tuulivoimalat) within the bounding box.
To chip the rasters and convert the data to COCO format, geo2ml can be used:
from geo2ml.scripts.data import create_coco_dataset

raster_path = ''  # path to the GeoTiff mosaic
outpath = ''      # output directory
poly_path = ''    # path to the annotation geopackage
layer = ''        # layer name within the geopackage

create_coco_dataset(raster_path=raster_path, polygon_path=poly_path, target_column='id',
                    gpkg_layer=layer, outpath=outpath, save_grid=False, dataset_name='',
                    gridsize_x=320, gridsize_y=320, ann_format='box', min_bbox_area=0)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets consist of vectors in .shp format with attributes of the surface affected by wildfire or burned area. All data result from a deep learning U-Net approach based on Sentinel-2 MSI images. The satellite images are publicly available on the Copernicus Data Space Ecosystem website https://dataspace.copernicus.eu. The analysis covers the period from January to April 2024, identified as the early spring of 2024.
The supporting datasets consist of:
Metadata:
We hereby confirm that all vector data regarding the wildfire and burned area come from our analysis based on Sentinel-2 MSI imagery covering the time interval January to April 2024 for Romania.
© 2025 The authors
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a prototype product for a new Copernicus Service element which is being developed in the course of FOCCUS. The algorithm computes percent cover of emerging aquatic vegetation (seagrass and macroalgae) in intertidal regions of the German Bight based on temporal aggregates of the Normalised Difference Vegetation Index (NDVI). Sentinel-2 data are atmospherically corrected using sen2cor and pixels identified as water or clouds using IdePix are masked. Images are processed and aggregated using the Copernicus Data Space Ecosystem. Retrieved NDVI values are temporally aggregated per pixel around the time of maximum seagrass cover in July/August/September. Aggregated NDVI values are translated into vegetation percent cover based on an empirical relationship.
Data availability depends on the water level at the time of satellite acquisitions and on cloud cover, and may therefore vary spatially and temporally. The algorithm does not enable differentiation between different types of emerging aquatic vegetation (e.g., macroalgae, seagrass, diatoms) and does not consider vegetation health. This information would be the subject of further development based on the delivered products.
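A schematic numpy sketch of the chain described above (NDVI, temporal aggregation, empirical translation to percent cover); the input arrays, the median aggregation and the linear coefficients are placeholders, not the operational values:
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Normalised Difference Vegetation Index.
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical stacks of masked reflectance scenes (time, y, x) from July/August/September;
# water and cloud pixels are assumed to have been set to NaN after IdePix masking.
nir_stack = np.random.rand(12, 100, 100).astype(np.float32)
red_stack = np.random.rand(12, 100, 100).astype(np.float32)
ndvi_stack = ndvi(nir_stack, red_stack)

# Temporal aggregation per pixel around the time of maximum seagrass cover
# (median used here as a placeholder for the operational aggregate).
ndvi_aggregate = np.nanmedian(ndvi_stack, axis=0)

# Placeholder linear relationship between aggregated NDVI and vegetation percent cover;
# the product uses an empirical relationship that is not reproduced here.
a, b = 120.0, -10.0
percent_cover = np.clip(a * ndvi_aggregate + b, 0.0, 100.0)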
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a prototype product for a new Copernicus Service element which is being developed in the course of FOCCUS. The algorithm computes the presence/absence of submerged aquatic vegetation (SAV; seagrass and macroalgae) in subtidal regions of the Balearic Islands based on temporally aggregated Sentinel-2 imagery and EMODNET bathymetry. Sentinel-2 data are atmospherically corrected using ACOLITE and water pixels are identified using IdePix. Images are processed and aggregated using the Copernicus Data Space Ecosystem. Aggregate images are classified as SAV (0) or non-SAV (1) using machine learning methods. Results are clipped to water depths shallower than 25 m and cleaned up using a 3x3 pixel median filter.
Data availability depends on the number of usable observations (clouds, water clarity, glint) and may therefore vary spatially and temporally. The quality of the products depends on proper flagging by the IdePix classification and the other masks applied. The algorithm does not differentiate between different types of submerged aquatic vegetation (e.g., macroalgae or seagrass) and does not consider vegetation health. A known issue is the misclassification of optically deep water in some areas.
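For illustration, a minimal sketch of the final clipping and clean-up steps with numpy and scipy; the classification grid, bathymetry grid and no-data value are hypothetical:
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical inputs: binary classification (0 = SAV, 1 = non-SAV, as in the product)
# and a bathymetry grid in metres (positive depths), both on the same pixel grid.
classification = np.random.randint(0, 2, size=(500, 500)).astype(np.uint8)
depth = np.random.uniform(0.0, 50.0, size=(500, 500)).astype(np.float32)

# Clean up the classification with a 3x3 pixel median filter.
cleaned = median_filter(classification, size=3)

# Clip the result to water depths shallower than 25 m; deeper pixels get a no-data value.
NODATA = 255
result = np.where(depth < 25.0, cleaned, NODATA).astype(np.uint8)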
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains annotated marine vessels from 15 different Sentinel-2 products, used for training object detection models for marine vessel detection. The vessels are annotated as bounding boxes that also cover some of the wake, if present.
Source data
Individual products used to generate annotations are shown in the following table:
Location Product name
Archipelago sea S2A_MSIL1C_20220515T100031_N0400_R122_T34VEM_20220515T120450
S2B_MSIL1C_20220619T100029_N0400_R122_T34VEM_20220619T104419
S2A_MSIL1C_20220721T095041_N0400_R079_T34VEM_20220721T115325
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEM_20220813T120233
Gulf of Finland S2B_MSIL1C_20220606T095029_N0400_R079_T35VLG_20220606T105944
S2B_MSIL1C_20220626T095039_N0400_R079_T35VLG_20220626T104321
S2B_MSIL1C_20220703T094039_N0400_R036_T35VLG_20220703T103953
S2A_MSIL1C_20220721T095041_N0400_R079_T35VLG_20220721T115325
Bothnian Bay S2A_MSIL1C_20220627T100611_N0400_R022_T34WFT_20220627T134958
S2B_MSIL1C_20220712T100559_N0400_R022_T34WFT_20220712T121613
S2B_MSIL1C_20220828T095549_N0400_R122_T34WFT_20220828T104748
Bothnian Sea S2B_MSIL1C_20210714T100029_N0500_R122_T34VEN_20230224T120043
S2B_MSIL1C_20220619T100029_N0400_R122_T34VEN_20220619T104419
S2A_MSIL1C_20220624T100041_N0400_R122_T34VEN_20220624T120211
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEN_20220813T120233
Kvarken S2A_MSIL1C_20220617T100611_N0400_R022_T34VER_20220617T135008
S2B_MSIL1C_20220712T100559_N0400_R022_T34VER_20220712T121613
S2A_MSIL1C_20220826T100611_N0400_R022_T34VER_20220826T135136
Even though the reference data IDs are for L1C products, L2A products from the same acquisition dates can be used along with the annotations. However, Sen2Cor has been known to produce incorrect reflectance values for water bodies.
The raw products can be acquired from Copernicus Data Space Ecosystem.
Annotations
The annotations are bounding boxes drawn around marine vessels so that some amount of their wakes, if present, are also contained within the boxes. The data are distributed as geopackage files, so that one geopackage corresponds to a single Sentinel-2 tile, and each package has separate layers for individual products as shown below:
T34VEM
|-20220515
|-20220619
|-20220721
|-20220813
All layers have a column id, which has the value boat for all annotations.
The CRS is EPSG:32634 for all products except for the Gulf of Finland (35VLG), which is in EPSG:32635. This is done so that the bounding boxes are aligned with the pixels in the imagery.
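A minimal geopandas sketch for reading one product's annotations; the file and layer names below are illustrative, following the tile/date structure described above:
import geopandas as gpd

# Illustrative file and layer names following the structure above.
annotations = gpd.read_file("T34VEM.gpkg", layer="20220721")

print(annotations.crs)              # EPSG:32634 (EPSG:32635 for tile 35VLG)
print(annotations["id"].unique())   # every annotation has the value 'boat'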
As tiles 34VEM and 34VEN have an overlap of 9.5x100 km, 34VEN is not annotated from the overlapping part to prevent data leakage between splits.
Annotation process
The minimum size for an object to be considered a potential marine vessel was set to 2x2 pixels. Three separate acquisitions for each location were used when detecting the smallest objects, so that if an object was located at the same place in all images, it was left unannotated. The data were annotated by two experts.
Product name Number of annotations
S2A_MSIL1C_20220515T100031_N0400_R122_T34VEM_20220515T120450 183
S2B_MSIL1C_20220619T100029_N0400_R122_T34VEM_20220619T104419 519
S2A_MSIL1C_20220721T095041_N0400_R079_T34VEM_20220721T115325 1518
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEM_20220813T120233 1371
S2B_MSIL1C_20220606T095029_N0400_R079_T35VLG_20220606T105944 277
S2B_MSIL1C_20220626T095039_N0400_R079_T35VLG_20220626T104321 1205
S2B_MSIL1C_20220703T094039_N0400_R036_T35VLG_20220703T103953 746
S2A_MSIL1C_20220721T095041_N0400_R079_T35VLG_20220721T115325 971
S2A_MSIL1C_20220627T100611_N0400_R022_T34WFT_20220627T134958 122
S2B_MSIL1C_20220712T100559_N0400_R022_T34WFT_20220712T121613 162
S2B_MSIL1C_20220828T095549_N0400_R122_T34WFT_20220828T104748 98
S2B_MSIL1C_20210714T100029_N0301_R122_T34VEN_20210714T121056 450
S2B_MSIL1C_20220619T100029_N0400_R122_T34VEN_20220619T104419 66
S2A_MSIL1C_20220624T100041_N0400_R122_T34VEN_20220624T120211 424
S2A_MSIL1C_20220813T095601_N0400_R122_T34VEN_20220813T120233 399
S2A_MSIL1C_20220617T100611_N0400_R022_T34VER_20220617T135008 83
S2B_MSIL1C_20220712T100559_N0400_R022_T34VER_20220712T121613 184
S2A_MSIL1C_20220826T100611_N0400_R022_T34VER_20220826T135136 88
Annotation statistics
Sentinel-2 images have a spatial resolution of 10 m, so the statistics below can be converted to pixel sizes by dividing them by 10 (diameter) or 100 (area).
mean min 25% 50% 75% max
Area (m²) 5305.7 567.9 1629.9 2328.2 5176.3 414795.7
Diameter (m) 92.5 33.9 57.9 69.4 108.3 913.9
As most of the annotations also cover most of the wake of the marine vessel, the bounding boxes are significantly larger than a typical boat. There are a few annotations larger than 100 000 m², which are either cruise or cargo ships travelling along ordinal rather than cardinal directions, as opposed to e.g. smaller leisure boats.
Annotations typically have a diameter of less than 100 meters, and the largest diameters correspond to the same kinds of instances as the largest bounding box areas.
Train-test-split
We used tiles 34VEN and 34VER as the test dataset. For validation, we split the other three tile areas into 5x5 grids of equal-sized cells and used 20 % of the area (i.e. 5 cells) for validation. The same split also makes it possible to do cross-validation.
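A small sketch of such a split, assuming one 5x5 grid per tile area and a random choice of the five validation cells (the actual cell selection method is not reproduced here):
import numpy as np

# 5x5 grid of equal-sized cells over one tile area; 5 cells (20 % of the area)
# are held out for validation, the rest are used for training.
rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility of the sketch only
cells = np.arange(25)
val_cells = rng.choice(cells, size=5, replace=False)
train_cells = np.setdiff1d(cells, val_cells)
print(sorted(val_cells.tolist()), train_cells.size)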
Post-processing
Before evaluating, the predictions for the test set are cleaned using the following steps (a sketch of this filtering is given after the list):
All predictions whose centroid points are not located on water are discarded. The water mask used contains the layers jarvi (Lakes), meri (Sea) and virtavesialue (Rivers as polygon geometry) from the Topographical database by the National Land Survey of Finland. Unfortunately this also discards all points not within the Finnish borders.
All predictions whose centroid points are located on water rock areas are discarded. The mask is the layer vesikivikko (Water rock areas) from the Topographical database.
All predictions that contain an above-water rock within the bounding box are discarded. The mask contains classes 38511, 38512 and 38513 from the layer vesikivi in the Topographical database.
All predictions that contain a lighthouse or a sector light within the bounding box are discarded. Lighthouses and sector lights come from Väylävirasto data, with ty_njr class ids 1, 2, 3, 4, 5 and 8.
All predictions that are wind turbines, found in the Topographical database layer tuulivoimalat, are discarded.
All predictions that are obviously too large are discarded. A prediction is defined to be "too large" if either of its edges is longer than 750 meters.
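A minimal geopandas sketch of this filtering, assuming all layers are in the same CRS; the file and layer names are illustrative, not the actual distribution file names:
import geopandas as gpd

preds = gpd.read_file("predictions.gpkg")
water = gpd.read_file("masks.gpkg", layer="water")             # jarvi + meri + virtavesialue
rock_areas = gpd.read_file("masks.gpkg", layer="vesikivikko")  # water rock areas
rocks = gpd.read_file("masks.gpkg", layer="vesikivi")          # classes 38511, 38512, 38513
lights = gpd.read_file("masks.gpkg", layer="lights")           # ty_njr classes 1, 2, 3, 4, 5, 8
turbines = gpd.read_file("masks.gpkg", layer="tuulivoimalat")  # wind turbines

# Centroid-based checks: keep predictions on water, drop those on water rock areas.
centroids = preds.copy()
centroids["geometry"] = preds.geometry.centroid
on_water = gpd.sjoin(centroids, water, how="inner", predicate="within").index
on_rock_area = gpd.sjoin(centroids, rock_areas, how="inner", predicate="within").index
preds = preds[preds.index.isin(on_water) & ~preds.index.isin(on_rock_area)]

# Drop predictions whose bounding box contains a rock, lighthouse/sector light or wind turbine.
for mask in (rocks, lights, turbines):
    hits = gpd.sjoin(preds, mask, how="inner", predicate="contains").index.unique()
    preds = preds.drop(index=hits)

# Drop obviously too large predictions (either edge longer than 750 metres).
bounds = preds.geometry.bounds
too_large = ((bounds.maxx - bounds.minx) > 750) | ((bounds.maxy - bounds.miny) > 750)
preds = preds[~too_large]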
The model checkpoint for the best-performing model is available on the Hugging Face platform: https://huggingface.co/mayrajeo/marine-vessel-detection-yolo
Usage
The simplest way to chip the rasters into a suitable format and convert the data to COCO or YOLO format is to use geo2ml. First download the raw mosaics and convert them into GeoTiff files, then use the following to generate the datasets.
To generate COCO format dataset run
from geo2ml.scripts.data import create_coco_dataset

raster_path = ''  # path to the GeoTiff mosaic
outpath = ''      # output directory
poly_path = ''    # path to the annotation geopackage
layer = ''        # layer name within the geopackage

create_coco_dataset(raster_path=raster_path, polygon_path=poly_path, target_column='id',
                    gpkg_layer=layer, outpath=outpath, save_grid=False, dataset_name='',
                    gridsize_x=320, gridsize_y=320, ann_format='box', min_bbox_area=0)
To generate YOLO format dataset run
from geo2ml.scripts.data import create_yolo_dataset

raster_path = ''  # path to the GeoTiff mosaic
outpath = ''      # output directory
poly_path = ''    # path to the annotation geopackage
layer = ''        # layer name within the geopackage

create_yolo_dataset(raster_path=raster_path, polygon_path=poly_path, target_column='id',
                    gpkg_layer=layer, outpath=outpath, save_grid=False,
                    gridsize_x=320, gridsize_y=320, ann_format='box', min_bbox_area=0)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A map of 73 global biome clusters, geographic areas that were grouped to optimize the global 100m land cover processing.
In order to group Earth Observation data for faster processing or adaptation of algorithms to specific regions, the 100m global land cover (CGLS-LC100) algorithm uses a Global Biome Cluster layer. The term biome cluster here refers to a geographic area which has similar bio-geophysical parameters and can therefore be grouped for processing. In other words, the biome cluster layer can be seen as an ecological regionalisation which outlines areas of similar environmental conditions, ecological processes, and biotic communities (Coops et al., 2018). Several global regionalisation layers already exist, e.g. the Ecoregions 2017 global dataset (Dinerstein et al., 2017), the Geiger-Koeppen global ecozones after the Olofsson update (Olofsson et al., 2012), and the Global ecological zones for FAO forest reporting with the 2010 update (FAO, 2012). However, several tests in the CGLS-LC100 workflow have shown that the existing layers did not provide the required global and continental classification accuracy. These findings are in line with Coops et al. (2018), who stated that "Most regionalisations are made based on subjective criteria, and cannot be readily revised, leading to outstanding questions with respect to how to optimally develop and define them."
Therefore, we decided to develop a customized ecological regionalisation layer which performs best with the given PROBA-V remote sensing data and the specifications of the CGLS-LC100 product. It groups spectrally similar areas and helps to optimize the later classification/regression to regional patterns. Inputs into the layer creation were well-known existing datasets, which were combined, re-grouped and refined based on prior CGLS-LC100 classification results and the local mapping knowledge of the workflow developer. To ensure that this layer is clearly separable from other existing regionalisations and not mistakenly interpreted as an eco-region layer, we decided to call it the biome clusters layer.
The following steps outline the global biome clusters layer generation (a schematic geopandas sketch of the first steps is given after the list):
Spatial union of Ecoregions 2017 dataset (Dinerstein et al., 2017), Geiger-Koeppen dataset (Olofsson et al., 2012) and Global FAO eco-regions datasets (FAO, 2012);
Regrouping and dissolving by using experience from first global CGLS-LC100 mapping results and subjective mapping experience of the developer;
Refinement of the biome clusters in the High North latitudes via incorporation of a Global tree-line layer (Alaska Geobotany Center, 2003);
Manual improvement of borders between biome clusters to reduce classification artefacts by using a DEM and mapping experience from previous projects and continental test runs;
Usage of a global land/sea mask, the Sentinel-2 tiling grid and the PROBA-V imaging extent to extend the borders of the biome clusters into the sea, to ensure that small islands along the coastline are also correctly processed.
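A schematic geopandas sketch of the first two steps (spatial union followed by regrouping and dissolving); the file names and the biome_cluster column are placeholders, and in the actual workflow the regrouping was done manually by the developer:
import geopandas as gpd

# Placeholder input files for the three regionalisation datasets.
ecoregions = gpd.read_file("ecoregions_2017.gpkg")
geiger_koeppen = gpd.read_file("geiger_koeppen_ecozones.gpkg")
fao_zones = gpd.read_file("fao_ecological_zones.gpkg")

# Step 1: spatial union of the three datasets.
union = gpd.overlay(ecoregions, geiger_koeppen, how="union")
union = gpd.overlay(union, fao_zones, how="union")

# Step 2: regrouping and dissolving; each union polygon would be assigned to one of
# the 73 biome clusters (placeholder assignment here), then dissolved into clusters.
union["biome_cluster"] = 1
biome_clusters = union.dissolve(by="biome_cluster")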
When developing a regionalisation, the definition of the clusters and the boundaries that delineate them in time and space is the key challenge. Overall, the map distinguishes 73 global biome clusters.
Ground-based readings of temperature and rainfall, satellite imagery, aerial photographs, ground verification data and a Digital Elevation Model (DEM) were used in this study. Ground-based meteorological information was obtained from the Bangladesh Meteorological Department (BMD) for the period 1977 to 2015 and was used to determine the trends of rainfall and temperature in this thesis.

Satellite images obtained from the US Geological Survey (USGS) Center for Earth Resources Observation and Science (EROS) website (www.glovis.usgs.gov) in four time periods were analysed to assess the dynamics of the mangrove population at species level. Remote sensing techniques were used to monitor changes in mangrove species composition, as a solution to the lack of spatial data at a relevant scale and the difficulty of accessing the mangroves for field survey, and also as an alternative to traditional methods. To identify mangrove forests, a number of satellite sensors have been used, including Landsat TM/ETM/OLI, SPOT, CBERS, SIR, ASTER, IKONOS and QuickBird. The use of conventional medium-resolution remote sensor data (e.g., Landsat TM, ASTER, SPOT) in the identification of different mangrove species remains a challenging task. In many developing countries, the high cost of acquiring high-resolution satellite imagery excludes its routine use. The free availability of archived images enables the development of useful techniques for its use, and therefore Landsat imagery was used in this study for mangrove species classification. Satellite imagery used in this study includes: Landsat Multispectral Scanner (MSS) of 57 m resolution acquired on 1st February 1977, Landsat Thematic Mapper (TM) of 28.5 m resolution acquired on 5th February 1989, Landsat Enhanced Thematic Mapper (ETM+) of 28.5 m resolution acquired on 28th February 2000 and Landsat Operational Land Imager (OLI) of 30 m resolution acquired on 4th February 2015.

To study the tidal channel dynamics of the study area, aerial photographs from 1974 and 2011, and a satellite image from 2017 were used. Satellite images from 1974 with good spatial resolution of the area were not available, and therefore aerial photographs of comparatively high and fine resolution were considered adequate to obtain information on tidal channel dynamics. Although high-resolution satellite imagery was available for 2011, aerial photographs were used for this study due to their effectiveness in terms of cost and also the ease of comparison with the 1974 photographs. The aerial photographs were sourced from the Survey of Bangladesh (SOB). The Sentinel-2 satellite image from 2017 was downloaded from the European Space Agency (ESA) website (https://scihub.copernicus.eu/).

In this research, elevation data acts as the main parameter in the determination of the sea level rise (SLR) impacts on the spatial distribution of the future mangrove species of the Bangladesh Sundarbans. High-resolution elevation data is essential for this kind of research, where every centimeter counts due to the low-lying characteristics of the study area. The high-resolution (less than 1 m vertical error) DEM data used in this study was obtained from the Water Resources Planning Organization (WRPO), Bangladesh. The elevation information used to construct the DEM was originally collected by a Finnish consulting firm known as FINNMAP in 1991 for the Bangladesh government.
Roads are ubiquitous infrastructures that have detrimental effects on wildlife and contribute to increased mortality and fragmentation of animal populations. Although several mitigation measures are available to reduce road impacts, their planning rarely considers the dynamic nature of the environment, which may be reflected in temporal variation in habitat suitability and connectivity for animal species. Consequently, the effectiveness of such measures may fall short of expectations. By combining high-resolution satellite imagery and connectivity modelling, we propose a generalizable approach to identify the most probable crossing points across a barrier at different time snapshots. This information may be pivotal in planning mitigation measures that can take into account dynamic components. We collected occurrence data of three farmland-adapted anurans and high spatiotemporal definition Sentinel-2 multispectral images to build habitat suitability models that capture the relationship betw...

Modelling amphibian road crossing points in a dynamic environment
Dataset DOI: 10.5061/dryad.cz8w9gjg2
Reported data were the basis for Habitat Suitability Modelling and subsequent Connectivity Models.
Data includes the occurrence data of the three amphibian target species (Hyla intermedia, Pelophylax synkl. esculentus, Bufotes viridis), collected in 2021 and 2022. The values of the Sentinel-2 spectral bands we used as predictors are Copernicus Sentinel data [2025] and can be downloaded from the Copernicus Browser (https://browser.dataspace.copernicus.eu/).
Description:
Occurrence points of the three target species. Observations missing important information have already been removed from this dataset. Here you can find the data we used for all the modelling steps. They are the occurrence data of ...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the fiducial reference database developed under the Ground Reference Observations Underlying Novel Decametric Vegetation Data Products from Earth Observation (GROUNDED EO) project, which was initiated within the context of the European Space Agency’s Living Planet Fellowship programme.
The database contains over 16,000 fiducial reference measurements covering 81 National Ecological Observatory Network (NEON), Integrated Carbon Observation System (ICOS), and Terrestrial Ecosystem Research Network (TERN) sites between 2013 and 2022. The sites span cultivated crops, deciduous forest, evergreen forest, grasslands, mixed forest, pasture/hay, shrub/scrub and woody wetlands over the United States, Europe, and Australia.
Fiducial reference measurements were generated from over 280,000 digital hemispherical photography (DHP) and digital cover photography (DCP) images. These were acquired in elementary sampling units (ESUs) of 20 m x 20 m (NEON), 30 m x 30 m (ICOS), and 100 m x 100 m (TERN), containing 9 (ICOS), 12 (NEON), and 36 (TERN) sampling points per ESU.
DHP and DCP images were automatically processed using the HemiPy and CoverPy modules, respectively, which calculate estimates of gap fraction to derive plant area index (PAI), green area index (GAI), the fraction of vegetation cover (FCOVER), and, for HemiPy, the fraction of intercepted photosynthetically active radiation (FIPAR). HemiPy and CoverPy adopt the recommendations of the Fiducial Reference Measurements for Vegetation (FRM4VEG) project, propagating uncertainties (due to variability in gap fraction) through the derivation of the considered variables, in line with the International Standards Organisation (ISO) Guide to the Expression of Uncertainty in Measurement (GUM). Each value is therefore provided with an estimate of its standard uncertainty (delimited with +/-).
Corrections were applied for woody material, which is known to represent up to 35% of total plant area in forests and may lead to errors of up to 61%, as were corrections for missing understory data at ICOS sites using previously published information. By applying these corrections and combining overstory and understory measurements, total leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and FCOVER were derived from the values obtained from HemiPy and CoverPy.
CSV files are provided containing i) all available fiducial reference measurements, and ii) only those fiducial reference measurements matched with contemporaneous Sentinel-2 L2A surface reflectance observations (i.e. those acquired within one day).
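A minimal pandas sketch for splitting such "+/-"-delimited values into separate value and uncertainty columns; the file name and the LAI column name are placeholders, not the actual CSV schema:
import pandas as pd

# Placeholder file and column names; see the associated paper for the real schema.
df = pd.read_csv("grounded_eo_fiducial_reference_measurements.csv")

parts = df["LAI"].astype(str).str.split(r"\s*\+/-\s*", expand=True, regex=True)
df["LAI_value"] = pd.to_numeric(parts[0], errors="coerce")
df["LAI_uncertainty"] = pd.to_numeric(parts[1], errors="coerce")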
A more detailed description of the fiducial reference database is provided in the associated Remote Sensing of Environment paper. Please cite the paper in any work making use of these data:
Brown, L.A., Fernandes, R., Verrelst, J., Morris, H., Djamai, N., Reyes-Muñoz, P., Kovács, D., Meier, C. GROUNDED EO: Data-driven Sentinel-2 LAI and FAPAR retrieval using Gaussian processes trained with extensive fiducial reference measurements, Remote Sens. Environ.
http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/noLimitations
Corine Land Cover 2018 (CLC2018) is one of the Corine Land Cover (CLC) datasets produced within the frame of the Copernicus Land Monitoring Service, referring to the land cover / land use status of the year 2018.
The CLC service has a long heritage (formerly known as the "CORINE Land Cover Programme"), coordinated by the European Environment Agency (EEA). It provides consistent and thematically detailed information on land cover and land cover changes across Europe.
CLC datasets are based on the classification of satellite images produced by the national teams of the participating countries - the EEA members and cooperating countries (EEA39). National CLC inventories are then further integrated into a seamless land cover map of Europe. The resulting European database relies on a standard methodology and nomenclature with the following base parameters: 44 classes in the hierarchical 3-level CLC nomenclature; minimum mapping unit (MMU) for status layers is 25 hectares; minimum width of linear elements is 100 metres. Change layers have a higher resolution, i.e. the minimum mapping unit (MMU) is 5 hectares for Land Cover Changes (LCC), and the minimum width of linear elements is 100 metres. The CLC service delivers important data sets supporting the implementation of key priority areas of the Environment Action Programmes of the European Union, e.g. protecting ecosystems, halting the loss of biological diversity, tracking the impacts of climate change, monitoring urban land take, assessing developments in agriculture or dealing with water resources directives. The Copernicus Land Monitoring Service is part of the European Copernicus Programme coordinated by the European Environment Agency, providing environmental information from a combination of air- and space-based observation systems and in-situ monitoring.
https://creativecommons.org/publicdomain/zero/1.0/
I made this dataset while performing Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) models of wetlands in India.
This dataset is a collection of Geographic Information System (GIS) data sourced from various public domains. It includes shapefiles, raster image files, etc., which are primarily intended for use with GIS software such as ArcGIS Pro, QGIS, etc. Most of the datasets are global in nature, with some, like the OpenStreetMap data, pertaining to India only. The data are described below:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Humanity's role in changing the face of the earth is a long-standing concern, as is the human domination of ecosystems. Geologists are debating the introduction of a new geological epoch, the 'anthropocene', as humans are 'overwhelming the great forces of nature'. In this context, the accumulation of artefacts, i.e., human-made physical objects, is a pervasive phenomenon. Variously dubbed 'manufactured capital', 'technomass', 'human-made mass', 'in-use stocks' or 'socioeconomic material stocks', they have become a major focus of sustainability sciences in the last decade. Globally, the mass of socioeconomic material stocks now exceeds 10e14 kg, which is roughly equal to the dry-matter equivalent of all biomass on earth. It is doubling roughly every 20 years, almost perfectly in line with 'real' (i.e. inflation-adjusted) GDP. In terms of mass, buildings and infrastructures (here collectively called 'built structures') represent the overwhelming majority of all socioeconomic material stocks.
This dataset features a detailed map of material stocks in the CONUS on a 10m grid based on high resolution Earth Observation data (Sentinel-1 + Sentinel-2), crowd-sourced geodata (OSM) and material intensity factors.
Spatial extent
This subdataset covers the North East CONUS, i.e.
For the remaining CONUS, see the related identifiers.
Temporal extent
The map is representative for ca. 2018.
Data format
The data are organized by states. Within each state, data are split into 100km x 100km tiles (EQUI7 grid), and mosaics are provided.
Within each tile, images for area, volume, and mass at 10m spatial resolution are provided. Units are m², m³, and t, respectively. Each metric is split into buildings, other, rail and street (note: In the paper, other, rail, and street stocks are subsumed to mobility infrastructure). Each category is further split into subcategories (e.g. building types).
Additionally, a grand total of all stocks is provided at multiple spatial resolutions and units, i.e.
For each state, mosaics of all above-described data are provided in GDAL VRT format, which can readily be opened in most Geographic Information Systems. File paths are relative, i.e. DO NOT change the file structure or file naming.
Additionally, the grand total mass per state is tabulated for each county in mass_grand_total_t_10m2.tif.csv. County FIPS code and the ID in this table can be related via FIPS-dictionary_ENLOCALE.csv.
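As an illustration of how these files fit together, a hedged sketch with rasterio and pandas; the VRT file name and the ID join column are assumptions about the layout, not documented names:
import pandas as pd
import rasterio
from rasterio.windows import Window

# Assumed mosaic file name (GDAL VRT); keep the distributed file structure intact.
with rasterio.open("mass_grand_total_t_10m2.vrt") as src:
    block = src.read(1, window=Window(0, 0, 1024, 1024))  # grand total mass (t) per pixel

# Per-county totals and the FIPS dictionary; the join column name is an assumption.
counties = pd.read_csv("mass_grand_total_t_10m2.tif.csv")
fips = pd.read_csv("FIPS-dictionary_ENLOCALE.csv")
counties = counties.merge(fips, on="ID", how="left")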
Material layers
Note that material-specific layers are not included in this repository because of upload limits. Only the totals are provided (i.e. the sum over all materials). However, these can easily be derived by re-applying the material intensity factors from (see related identifiers):
A. Baumgart, D. Virág, D. Frantz, F. Schug, D. Wiedenhofer, Material intensity factors for buildings, roads and rail-based infrastructure in the United States. Zenodo (2022), doi:10.5281/zenodo.5045337.
Further information
For further information, please see the publication.
A web-visualization of this dataset is available here.
Visit our website to learn more about our project MAT_STOCKS - Understanding the Role of Material Stock Patterns for the Transformation to a Sustainable Society.
Publication
D. Frantz, F. Schug, D. Wiedenhofer, A. Baumgart, D. Virág, S. Cooper, C. Gómez-Medina, F. Lehmann, T. Udelhoven, S. van der Linden, P. Hostert, and H. Haberl (2023): Unveiling patterns in human dominated landscapes through mapping the mass of US built structures. Nature Communications 14, 8014. https://doi.org/10.1038/s41467-023-43755-5
Funding
This research was primarily funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (MAT_STOCKS, grant agreement No 741950). Workflow development was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 414984028-SFB 1404.
Acknowledgments
We thank the European Space Agency and the European Commission for freely and openly sharing Sentinel imagery; USGS for the National Land Cover Database; Microsoft for Building Footprints; Geofabrik and all contributors for OpenStreetMap. This dataset was partly produced on EODC - we thank Clement Atzberger for supporting the generation of this dataset by sharing disc space on EODC.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Humanity’s role in changing the face of the earth is a long-standing concern, as is the human domination of ecosystems. Geologists are debating the introduction of a new geological epoch, the ‘anthropocene’, as humans are ‘overwhelming the great forces of nature’. In this context, the accumulation of artefacts, i.e., human-made physical objects, is a pervasive phenomenon. Variously dubbed ‘manufactured capital’, ‘technomass’, ‘human-made mass’, ‘in-use stocks’ or ‘socioeconomic material stocks’, they have become a major focus of sustainability sciences in the last decade. Globally, the mass of socioeconomic material stocks now exceeds 10e14 kg, which is roughly equal to the dry-matter equivalent of all biomass on earth. It is doubling roughly every 20 years, almost perfectly in line with ‘real’ (i.e. inflation-adjusted) GDP. In terms of mass, buildings and infrastructures (here collectively called ‘built structures’) represent the overwhelming majority of all socioeconomic material stocks.
This dataset features a detailed map of material stocks in the CONUS on a 10m grid based on high resolution Earth Observation data (Sentinel-1 + Sentinel-2), crowd-sourced geodata (OSM) and material intensity factors.
Spatial extent
This subdataset covers the Rocky Mountains CONUS, i.e.
For the remaining CONUS, see the related identifiers.
Temporal extent
The map is representative for ca. 2018.
Data format
The data are organized by states. Within each state, data are split into 100km x 100km tiles (EQUI7 grid), and mosaics are provided.
Within each tile, images for area, volume, and mass at 10m spatial resolution are provided. Units are m², m³, and t, respectively. Each metric is split into buildings, other, rail and street (note: In the paper, other, rail, and street stocks are subsumed to mobility infrastructure). Each category is further split into subcategories (e.g. building types).
Additionally, a grand total of all stocks is provided at multiple spatial resolutions and units, i.e.
For each state, mosaics of all above-described data are provided in GDAL VRT format, which can readily be opened in most Geographic Information Systems. File paths are relative, i.e. DO NOT change the file structure or file naming.
Additionally, the grand total mass per state is tabulated for each county in mass_grand_total_t_10m2.tif.csv. County FIPS code and the ID in this table can be related via FIPS-dictionary_ENLOCALE.csv.
Material layers
Note that material-specific layers are not included in this repository because of upload limits. Only the totals are provided (i.e. the sum over all materials). However, these can easily be derived by re-applying the material intensity factors from (see related identifiers):
A. Baumgart, D. Virág, D. Frantz, F. Schug, D. Wiedenhofer, Material intensity factors for buildings, roads and rail-based infrastructure in the United States. Zenodo (2022), doi:10.5281/zenodo.5045337.
Further information
For further information, please see the publication.
A web-visualization of this dataset is available here.
Visit our website to learn more about our project MAT_STOCKS - Understanding the Role of Material Stock Patterns for the Transformation to a Sustainable Society.
Publication
D. Frantz, F. Schug, D. Wiedenhofer, A. Baumgart, D. Virág, S. Cooper, C. Gomez-Medina, F. Lehmann, T. Udelhoven, S. van der Linden, P. Hostert, H. Haberl. Weighing the US Economy: Map of Built Structures Unveils Patterns in Human-Dominated Landscapes. In prep
Funding
This research was primarily funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (MAT_STOCKS, grant agreement No 741950). Workflow development was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 414984028-SFB 1404.
Acknowledgments
We thank the European Space Agency and the European Commission for freely and openly sharing Sentinel imagery; USGS for the National Land Cover Database; Microsoft for Building Footprints; Geofabrik and all contributors for OpenStreetMap. This dataset was partly produced on EODC - we thank Clement Atzberger for supporting the generation of this dataset by sharing disc space on EODC.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Humanity’s role in changing the face of the earth is a long-standing concern, as is the human domination of ecosystems. Geologists are debating the introduction of a new geological epoch, the ‘anthropocene’, as humans are ‘overwhelming the great forces of nature’. In this context, the accumulation of artefacts, i.e., human-made physical objects, is a pervasive phenomenon. Variously dubbed ‘manufactured capital’, ‘technomass’, ‘human-made mass’, ‘in-use stocks’ or ‘socioeconomic material stocks’, they have become a major focus of sustainability sciences in the last decade. Globally, the mass of socioeconomic material stocks now exceeds 10e14 kg, which is roughly equal to the dry-matter equivalent of all biomass on earth. It is doubling roughly every 20 years, almost perfectly in line with ‘real’ (i.e. inflation-adjusted) GDP. In terms of mass, buildings and infrastructures (here collectively called ‘built structures’) represent the overwhelming majority of all socioeconomic material stocks.
This dataset features a detailed map of material stocks in the CONUS on a 10m grid based on high resolution Earth Observation data (Sentinel-1 + Sentinel-2), crowd-sourced geodata (OSM) and material intensity factors.
Spatial extent
This subdataset covers the Great Plains CONUS, i.e.
For the remaining CONUS, see the related identifiers.
Temporal extent
The map is representative for ca. 2018.
Data format
The data are organized by states. Within each state, data are split into 100km x 100km tiles (EQUI7 grid), and mosaics are provided.
Within each tile, images for area, volume, and mass at 10m spatial resolution are provided. Units are m², m³, and t, respectively. Each metric is split into buildings, other, rail and street (note: In the paper, other, rail, and street stocks are subsumed to mobility infrastructure). Each category is further split into subcategories (e.g. building types).
Additionally, a grand total of all stocks is provided at multiple spatial resolutions and units, i.e.
For each state, mosaics of all above-described data are provided in GDAL VRT format, which can readily be opened in most Geographic Information Systems. File paths are relative, i.e. DO NOT change the file structure or file naming.
Additionally, the grand total mass per state is tabulated for each county in mass_grand_total_t_10m2.tif.csv. County FIPS code and the ID in this table can be related via FIPS-dictionary_ENLOCALE.csv.
Material layers
Note that material-specific layers are not included in this repository because of upload limits. Only the totals are provided (i.e. the sum over all materials). However, these can easily be derived by re-applying the material intensity factors from (see related identifiers):
A. Baumgart, D. Virág, D. Frantz, F. Schug, D. Wiedenhofer, Material intensity factors for buildings, roads and rail-based infrastructure in the United States. Zenodo (2022), doi:10.5281/zenodo.5045337.
Further information
For further information, please see the publication.
A web-visualization of this dataset is available here.
Visit our website to learn more about our project MAT_STOCKS - Understanding the Role of Material Stock Patterns for the Transformation to a Sustainable Society.
Publication
D. Frantz, F. Schug, D. Wiedenhofer, A. Baumgart, D. Virág, S. Cooper, C. Gomez-Medina, F. Lehmann, T. Udelhoven, S. van der Linden, P. Hostert, H. Haberl. Weighing the US Economy: Map of Built Structures Unveils Patterns in Human-Dominated Landscapes. In prep
Funding
This research was primarily funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (MAT_STOCKS, grant agreement No 741950). Workflow development was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 414984028-SFB 1404.
Acknowledgments
We thank the European Space Agency and the European Commission for freely and openly sharing Sentinel imagery; USGS for the National Land Cover Database; Microsoft for Building Footprints; Geofabrik and all contributors for OpenStreetMap. This dataset was partly produced on EODC - we thank Clement Atzberger for supporting the generation of this dataset by sharing disc space on EODC.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Humanity's role in changing the face of the earth is a long-standing concern, as is the human domination of ecosystems. Geologists are debating the introduction of a new geological epoch, the 'anthropocene', as humans are 'overwhelming the great forces of nature'. In this context, the accumulation of artefacts, i.e., human-made physical objects, is a pervasive phenomenon. Variously dubbed 'manufactured capital', 'technomass', 'human-made mass', 'in-use stocks' or 'socioeconomic material stocks', they have become a major focus of sustainability sciences in the last decade. Globally, the mass of socioeconomic material stocks now exceeds 10e14 kg, which is roughly equal to the dry-matter equivalent of all biomass on earth. It is doubling roughly every 20 years, almost perfectly in line with 'real' (i.e. inflation-adjusted) GDP. In terms of mass, buildings and infrastructures (here collectively called 'built structures') represent the overwhelming majority of all socioeconomic material stocks.
This dataset features a detailed map of material stocks in the CONUS on a 10m grid based on high resolution Earth Observation data (Sentinel-1 + Sentinel-2), crowd-sourced geodata (OSM) and material intensity factors.
Spatial extent
This subdataset covers the South CONUS, i.e.
For the remaining CONUS, see the related identifiers.
Temporal extent
The map is representative for ca. 2018.
Data format
The data are organized by states. Within each state, data are split into 100km x 100km tiles (EQUI7 grid), and mosaics are provided.
Within each tile, images for area, volume, and mass at 10m spatial resolution are provided. Units are m², m³, and t, respectively. Each metric is split into buildings, other, rail and street (note: In the paper, other, rail, and street stocks are subsumed to mobility infrastructure). Each category is further split into subcategories (e.g. building types).
Additionally, a grand total of all stocks is provided at multiple spatial resolutions and units, i.e.
For each state, mosaics of all above-described data are provided in GDAL VRT format, which can readily be opened in most Geographic Information Systems. File paths are relative, i.e. DO NOT change the file structure or file naming.
Additionally, the grand total mass per state is tabulated for each county in mass_grand_total_t_10m2.tif.csv. County FIPS code and the ID in this table can be related via FIPS-dictionary_ENLOCALE.csv.
Material layers
Note that material-specific layers are not included in this repository because of upload limits. Only the totals are provided (i.e. the sum over all materials). However, these can easily be derived by re-applying the material intensity factors from (see related identifiers):
A. Baumgart, D. Virág, D. Frantz, F. Schug, D. Wiedenhofer, Material intensity factors for buildings, roads and rail-based infrastructure in the United States. Zenodo (2022), doi:10.5281/zenodo.5045337.
Further information
For further information, please see the publication.
A web-visualization of this dataset is available here.
Visit our website to learn more about our project MAT_STOCKS - Understanding the Role of Material Stock Patterns for the Transformation to a Sustainable Society.
Publication
D. Frantz, F. Schug, D. Wiedenhofer, A. Baumgart, D. Virág, S. Cooper, C. Gómez-Medina, F. Lehmann, T. Udelhoven, S. van der Linden, P. Hostert, and H. Haberl (2023): Unveiling patterns in human dominated landscapes through mapping the mass of US built structures. Nature Communications 14, 8014. https://doi.org/10.1038/s41467-023-43755-5
Funding
This research was primarily funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (MAT_STOCKS, grant agreement No 741950). Workflow development was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)—Project-ID 414984028-SFB 1404.
Acknowledgments
We thank the European Space Agency and the European Commission for freely and openly sharing Sentinel imagery; USGS for the National Land Cover Database; Microsoft for Building Footprints; Geofabrik and all contributors for OpenStreetMap. This dataset was partly produced on EODC - we thank Clement Atzberger for supporting the generation of this dataset by sharing disc space on EODC.
The Sentinel-1C satellite was launched December 5, 2024. Sentinel-1C is the latest satellite to be added to the Sentinel-1 constellation. The Sentinel-1 satellites (Sentinel-1A, Sentinel-1B, and Sentinel-1C) are sun-synchronous polar-orbiting satellites that operate day and night performing C-band synthetic aperture radar (SAR) imaging. The Sentinel-1 satellites operate in four imaging modes with different resolutions (down to 5 meters) and coverage (up to 400 kilometers). The Sentinel-1 satellites provide dual polarization capability and short revisit times.
Sentinel-1C Ground Range Detected (GRD) products consist of focused SAR data that has been detected, multi-looked and projected to ground range using the Earth ellipsoid model WGS84. The ellipsoid projection of the GRD products is corrected using the terrain height specified in the product's general annotation. The terrain height used varies in azimuth and is constant in range (only the terrain height of the first sub-swath is considered for the Interferometric Wide (IW) and Extra Wide (EW) swath modes).
Ground range coordinates are the slant range coordinates projected onto the ellipsoid of the Earth. Pixel values represent detected amplitude. Phase information is lost. The resulting product has approximately square resolution pixels and square pixel spacing with reduced speckle at a cost of reduced spatial resolution.
For IW and EW GRD products, multi-looking is performed on each burst individually. All bursts in all sub-swaths are then seamlessly merged to form a single, contiguous, ground range, detected image per polarization.
The dual polarization medium resolution GRD data products in this collection mirror the Sentinel-1C products provided through the Copernicus Data Space Ecosystem.