Usually, the information on the crop types present in a given territory is annual: we only know the main crop grown over a year, not the sequence of crops that may have followed one another during the year, nor when a particular crop is sown and when it is harvested. The main objective of this dataset is to create a basis for experimenting with solutions that answer these questions reliably, or for proposing models capable of producing dynamic segmentation maps that show when a crop begins to grow and when it is harvested, and consequently whether more than one crop has been grown on a territory within a year. The dataset provides 20 coverage classes as ground-truth values supplied by Regione Lombardia. The mapping of the class labels used (see file lombardia-classes/classes25pc.txt) merges some classes and provides the time intervals within which each category grows. The last two columns of the following table are, respectively, the start and end dates (month-day) of the interval in which the class is visible during the construction of our dataset.
Sentinel-1 performs systematic acquisition of bursts in both IW and EW modes. The bursts overlap almost perfectly between different passes and are always located at the same place. With the deployment of the SAR processor S1-IPF 3.4, a new element has been added to the product annotations: the Burst ID, which helps the end user identify a burst area of interest and facilitates searches. The Burst ID map is a complementary auxiliary product. The maps are valid over the entire time span of the mission and are global, i.e., they also include information where no SAR data is acquired. Each granule contains information about burst and sub-swath IDs, relative orbit and burst polygon, allowing an easier link between a given burst ID in a product and its corresponding geographic location.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Sentinel2GlobalLULC is a deep learning-ready dataset of RGB images from the Sentinel-2 satellites designed for global land use and land cover (LULC) mapping. Sentinel2GlobalLULC v2.1 contains 194,877 images in GeoTiff and JPEG format corresponding to 29 broad LULC classes. Each image has 224 x 224 pixels at 10 m spatial resolution and was produced by assigning to each pixel the 25th percentile of all available observations in the Sentinel-2 collection between June 2015 and October 2020, in order to remove atmospheric effects (e.g., clouds, aerosols, shadows, snow). A spatial purity value was assigned to each image based on the consensus across 15 different global LULC products available in Google Earth Engine (GEE).
Our dataset is structured into 3 main zip-compressed folders, an Excel file with a dictionary for class names and descriptive statistics per LULC class, and a Python script to convert RGB GeoTiff images into JPEG format. The first folder, "Sentinel2LULC_GeoTiff.zip", contains 29 zip-compressed subfolders, each corresponding to a specific LULC class with hundreds to thousands of GeoTiff Sentinel-2 RGB images. The second folder, "Sentinel2LULC_JPEG.zip", contains 29 zip-compressed subfolders with a JPEG-formatted version of the same images provided in the first main folder. The third folder, "Sentinel2LULC_CSV.zip", includes 29 zip-compressed CSV files with as many rows as provided images and with 12 columns containing the following metadata (this same metadata is provided in the image filenames):
For seven LULC classes, we could not export from GEE all images that fulfilled a spatial purity of 100%, since there were millions of them. In these cases, we exported a stratified random sample of 14,000 images and provide an additional CSV file listing the images actually contained in our dataset. That is, for these seven LULC classes, we provide these 2 CSV files:
To clearly state the geographical coverage of images available in this dataset, we included in version v2.1 a compressed folder called "Geographic_Representativeness.zip". This zip-compressed folder contains a CSV file for each LULC class providing the complete list of countries represented in that class. Each CSV file has two columns: the first gives the country code, and the second gives the number of images provided in that country for that LULC class. In addition to these 29 CSV files, we provide another CSV file that maps each ISO Alpha-2 country code to its original full country name.
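The dataset already bundles its own Python script for the GeoTiff-to-JPEG conversion mentioned above; purely as an illustration of that step, here is a minimal sketch using Pillow (the class folder names are hypothetical, not the dataset's):

from pathlib import Path
from PIL import Image  # Pillow

# Convert every RGB GeoTiff in one (hypothetical) class folder to JPEG,
# mirroring the GeoTiff/JPEG folder pairing described above.
src_dir = Path("Sentinel2LULC_GeoTiff/SomeClass")
dst_dir = Path("Sentinel2LULC_JPEG/SomeClass")
dst_dir.mkdir(parents=True, exist_ok=True)

for tif in sorted(src_dir.glob("*.tif")):
    with Image.open(tif) as img:
        # The metadata-rich filename is preserved; only the extension changes.
        img.convert("RGB").save(dst_dir / f"{tif.stem}.jpg", quality=95)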
© Sentinel2GlobalLULC Dataset by Yassir Benhammou, Domingo Alcaraz-Segura, Emilio Guirado, Rohaifa Khaldi, Boujemâa Achchab, Francisco Herrera & Siham Tabik is marked with Attribution 4.0 International (CC-BY 4.0)
U.S. Government Works: https://www.usa.gov/government-works
Three ET datasets were generated to evaluate the potential integration of Landsat and Sentinel-2 data for improved ET mapping. The first ET dataset was generated by linear interpolation (Lint) of Landsat-based ET fraction (ETf) images from before and after the selected image dates. The second ET dataset was generated using the regular SSEBop approach with Landsat imagery only (Lonly). The third ET dataset was generated with the proposed Landsat-Sentinel data fusion (L-S) approach by applying ETf images from both Landsat and Sentinel. The two scripts used to generate these three ET datasets are included: one script for processing the SSEBop model to generate ET maps from Lonly, and another for generating ET maps with the Lint and L-S approaches.
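For illustration only, here is a minimal sketch of the Lint idea described above: the ET fraction for an intermediate date is linearly interpolated between the two bracketing Landsat-based ETf values (function and variable names are our own, not taken from the included scripts):

from datetime import date

def interpolate_etf(day: date, d0: date, etf0: float, d1: date, etf1: float) -> float:
    """Linearly interpolate the ET fraction for 'day' between two Landsat dates."""
    weight = (day - d0).days / (d1 - d0).days
    return etf0 + weight * (etf1 - etf0)

# Example: ETf = 0.42 on June 1 and 0.58 on June 17 gives 0.50 on June 9.
print(interpolate_etf(date(2020, 6, 9), date(2020, 6, 1), 0.42,
                      date(2020, 6, 17), 0.58))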
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The Barest Earth Sentinel-2 Map Index dataset depicts the 1:250,000 map sheet tile frames that have been used to generate individual tile downloads of the Barest Earth Sentinel-2 product. This web service is designed to be used in conjunction with the Barest Earth Sentinel-2 web service to provide users with direct links for imagery download.
This web map is a subset of Sentinel-2 Views. Sentinel-2 10, 20, and 60 m multispectral, multitemporal, 13-band imagery is rendered on-the-fly and available for visualization and analytics. This imagery layer pulls directly from the Sentinel-2 on AWS collection and is updated daily with new imagery.

This imagery layer can be applied across a number of industries, scientific disciplines, and management practices. Some applications include, but are not limited to: land cover and environmental monitoring, climate change, deforestation, disaster and emergency management, national security, plant health and precision agriculture, forest monitoring, watershed analysis and runoff predictions, land-use planning, tracking urban expansion, highlighting burned areas and estimating fire severity.

Geographic Coverage
Global: continental land masses from 65.4° South to 72.1° North, with these special guidelines:
- All coastal waters up to 20 km from the shore
- All islands greater than 100 km2
- All EU islands
- All closed seas (e.g. Caspian Sea)
- The Mediterranean Sea
Note: Areas of interest going beyond the Mission baseline (as laid out in the Mission Requirements Document) will be assessed, and may be added to the baseline if sufficient resources are identified.

Temporal Coverage
The revisit time for each point on Earth is every 5 days. This layer is updated daily with new imagery and is designed to include imagery collected within the past 14 months. Custom Image Services can be created for access to images older than 14 months. The number of images available will vary depending on location.

Image Selection/Filtering
The most recent and cloud-free images are displayed by default. Any image available within the past 14 months can be displayed via custom filtering. Filtering can be done based on attributes such as Acquisition Date, Estimated Cloud Cover, and Tile ID. Tile_ID is computed as [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence]; see the parsing sketch after this entry. NOTE: Not using filters, and loading the entire archive, may affect performance.

Analysis Ready
This imagery layer is analysis ready with TOA correction applied.

Visual Rendering
Default rendering is Natural Color (bands 4,3,2) with Dynamic Range Adjustment (DRA). The DRA version of each layer enables visualization of the full dynamic range of the images. Rendering (or display) of band combinations and calculated indices is done on-the-fly from the source images via Raster Functions. Various pre-defined Raster Functions can be selected or custom functions created. Available renderings include: Agriculture with DRA, Bathymetric with DRA, Color-Infrared with DRA, Natural Color with DRA, Short-wave Infrared with DRA, Geology with DRA, NDMI Colorized, Normalized Difference Built-Up Index (NDBI), NDWI Raw, NDWI with VRE Raw, NDVI with VRE Raw (NDRE), NDVI VRE-only Raw, NDVI Raw, Normalized Burn Ratio, NDVI Colormap.

Multispectral Bands (Band: Description, Wavelength (µm), Resolution (m))
1: Coastal aerosol, 0.433-0.453, 60
2: Blue, 0.458-0.523, 10
3: Green, 0.543-0.578, 10
4: Red, 0.650-0.680, 10
5: Vegetation Red Edge, 0.698-0.713, 20
6: Vegetation Red Edge, 0.733-0.748, 20
7: Vegetation Red Edge, 0.773-0.793, 20
8: NIR, 0.785-0.900, 10
8A: Narrow NIR, 0.855-0.875, 20
9: Water vapour, 0.935-0.955, 60
10: SWIR - Cirrus, 1.365-1.385, 60
11: SWIR-1, 1.565-1.655, 20
12: SWIR-2, 2.100-2.280, 20

Additional Notes
Overviews exist with a spatial resolution of 150 m and are updated every quarter based on the best and latest imagery available at that time. To work with source images at all scales, the 'Lock Raster' functionality is available. NOTE: 'Lock Raster' should only be used on the layer for short periods of time, as the imagery and associated record Object IDs may change daily. This ArcGIS Server dynamic imagery layer can be used in Web Maps and ArcGIS Desktop as well as Web and Mobile applications using the REST-based Image Services API. Images can be exported up to a maximum of 4,000 columns x 4,000 rows per request.

Data Source
Sentinel-2 imagery is the result of close collaboration between the European Space Agency (ESA), the European Commission and USGS. Data is hosted by Amazon Web Services as part of their Registry of Open Data. Users can access the imagery from Sentinel-2 on AWS, or alternatively access the Sentinel2Look Viewer, EarthExplorer or the Copernicus Open Access Hub to download the scenes. For information on Sentinel-2 imagery, see Sentinel-2.
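As a small illustration of the Tile_ID convention described above, the fields can be pulled apart with a regular expression (the example ID below is made up):

import re

# [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence]
TILE_ID = re.compile(r"(\d{8})T(\d{6})_(\d{2})([C-X])([A-Z]{2})_(\d+)")

m = TILE_ID.match("20230514T103031_32TQM_0")  # hypothetical Tile_ID
print(m.groups())
# ('20230514', '103031', '32', 'T', 'QM', '0')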
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The SEN12 Global Urban Mapping (SEN12_GUM) dataset consists of Sentinel-1 SAR (VV + VH bands) and Sentinel-2 MSI (10 spectral bands) satellite images acquired over the same areas for 96 training and validation sites and an additional 60 test sites covering unique geographies across the globe. The satellite imagery was acquired as part of the European Space Agency's Earth observation program Copernicus and was preprocessed in Google Earth Engine. Built-up area labels for the 30 training and validation sites located in the United States, Canada, and Australia were obtained from Microsoft's open-access building footprints. The other 66 training sites located outside of the United States, Canada, and Australia are unlabeled but can be used for semi-supervised learning. Labels obtained from the SpaceNet7 dataset are provided for all 60 test sites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset was generated by the Remote Sensing Group of the TU Wien Department of Geodesy and Geoinformation, within Framework Contract (No. 939866-IPR-2020) as part of the provision of an automated, global, satellite-based flood monitoring product for the Copernicus Emergency Management Service (CEMS) managed by the European Commission. The Global Flood Monitoring (GFM) product is integrated within the user interface of the Global Flood Awareness System (GloFAS) of the CEMS. Open use of the dataset is granted under the CC BY 4.0 license.
The Copernicus Sentinel-1 constellation is a highly capable monitoring mission and provides one of the most comprehensive global archives of satellite imagery. The satellite sensors acquire Synthetic Aperture Radar (SAR) images and, as such, observe regardless of weather conditions and daylight. The regular and systematic observations generate rich information on the global land surface and its dynamics, which is used for, but not limited to, terrestrial applications such as land cover mapping, flood detection, or drought monitoring.
The complete Sentinel-1 time-series dataset is challenging to analyze, primarily due to its sheer data volume, which at the global scale reaches petabytes. As a user-friendly alternative, this dataset provides a harmonic (Fourier) series model that reduces the SAR backscatter seasonality to a relatively small number of GeoTIFF files holding the harmonic coefficient values.
This dataset publication provides a temporal Sentinel-1 model for most of the world's land masses. Seven coefficients computed using (harmonic) least squares regression, along with the standard deviation of residuals and the number of observations, comprise the harmonic parameter set. The parameters are operationally used to determine the expected SAR backscatter signal for any day of the year as part of TU Wien's method contributing to GFM's ensemble flood monitoring effort (Bauer-Marschallinger et al., 2022). The Global Harmonic Parameters (HPARs) were derived from the whole Sentinel-1 VV temporal stack for the period 2019-2020 by least squares regression with a harmonic model formulation, using three sinusoidal iterations (k=3).
The model describes the typical seasonal Sentinel-1 backscatter variation on a 20 m pixel level. It was designed as a smoothed time-series approximation, removing short-term perturbations, such as speckle and transient events (like floods for instance). Hence, the model is suited to discern the seasonal changes brought about by varying water content, e.g., inundation or soil moisture, and progression of vegetation structure.
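For illustration, here is a minimal sketch of such a harmonic least-squares fit (our own simplification with a 365-day period, not the operational TU Wien processor): with k=3, the design matrix holds a constant plus three sine/cosine pairs, giving the seven coefficients, while the residual standard deviation and number of observations complete the parameter set.

import numpy as np

def fit_harmonics(doy, sig0, k=3, period=365.0):
    """Fit sig0(t) ~ c0 + sum_i [a_i*cos(2*pi*i*t/period) + b_i*sin(...)]."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    y = np.asarray(sig0, dtype=float)
    design = np.column_stack(
        [np.ones_like(t)]
        + [f(i * t) for i in range(1, k + 1) for f in (np.cos, np.sin)])
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    residuals = y - design @ coeffs
    return coeffs, residuals.std(), y.size   # 7 coefficients, STD, NOBS

def expected_sig0(doy, coeffs, period=365.0):
    """Evaluate the fitted model for any day of year (the GFM use case)."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    k = (len(coeffs) - 1) // 2
    design = np.column_stack(
        [np.ones_like(t)]
        + [f(i * t) for i in range(1, k + 1) for f in (np.cos, np.sin)])
    return design @ coeffs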
We encourage developers from the broader user community to exploit this extensive and functional data resource. In particular, we promote the use of these Sentinel-1 HPARs in models for various applications dealing with land cover, seasonal water mapping, or vegetation phenology.
For the dataset's theoretical formulation and primary use case as a non-flooded backscatter reference model, please refer to our peer-reviewed article. Additionally, the software used, computation process, and outlook are discussed in this conference paper.
The parameter sets are provided per Sentinel-1 relative orbit to account for geometric effects. The parameter files are sampled at 20 m pixel spacing, georeferenced to the Equi7Grid, and divided into six continental zones (Africa, Asia, Europe, North America, Oceania, and South America). For portability and easier downloads, further sub-divisions into continental parts are made, resulting in 12 compressed bundles (please refer to the coverage map).
The data itself is organised as square tiles of 300 km extent ("T3" tiles). Note that the parameters are generated for each orbit, resulting in several orbit-sets per tile. Given this structure, there are a total of 98,910 files for the 10,990 tiled orbit-sets, comprising an overall compressed disk size of 3.7 TB.
The datasets follow the Yeoda filenaming convention (documentation here) where the core meta information is embedded. Notably, the file name is prefaced by the product name 'SIG0-HPAR-' and the particular parameter codes:
Orbit sets are distinguishable by orbit direction (A for ascending, D for descending) and relative orbit number, for example 'A175' or 'D080'.
The file naming scheme is as follows:
SIG0-HPAR-NNN_YYYYMMDD1_YYYYMMDD2_VV_OOOO_TTTTTTTTTT_GGGG_V02R01_S1IWGRDH.tif
* Bold-faced items are fixed for this product version.
For example:
'SIG0-HPAR-STD_20190101_20210101_VV_D111_E102N066T3_SA020M_V02R01_S1IWGRDH.tif'
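To illustrate the scheme, the meta fields can be recovered from a filename with a short parser (a sketch; the regex fields are our reading of the convention above):

import re

PATTERN = re.compile(
    r"SIG0-HPAR-(?P<param>[A-Z0-9]+)_"   # parameter code (the NNN part), e.g. STD
    r"(?P<start>\d{8})_(?P<end>\d{8})_"  # time span of the input stack
    r"(?P<pol>VV)_"                      # polarization, fixed to VV
    r"(?P<orbit>[AD]\d{3})_"             # orbit direction + relative orbit number
    r"(?P<tile>E\d{3}N\d{3}T3)_"         # Equi7Grid T3 tile
    r"(?P<grid>\w{2}020M)_"              # continental zone code + 20 m sampling
    r"V02R01_S1IWGRDH\.tif$")

name = "SIG0-HPAR-STD_20190101_20210101_VV_D111_E102N066T3_SA020M_V02R01_S1IWGRDH.tif"
print(PATTERN.match(name).groupdict())
# {'param': 'STD', 'start': '20190101', 'end': '20210101', 'pol': 'VV',
#  'orbit': 'D111', 'tile': 'E102N066T3', 'grid': 'SA020M'}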
The parameter files are LZW-compressed GeoTIFFs holding 16-bit integer values, with tagged metadata on encoding and georeference. They are compatible with common geographic information systems such as QGIS or ArcGIS, and with geodata libraries such as GDAL.
This repository provides all parameter sets per orbit for each tile, organized in a folder structure per (sub-)continent. With this, twelve zipped dataset collections, one per (sub-)continent, are available for download.
We suggest that users use the open-source Python package yeoda, a datacube storage access layer that offers functions to read, write, search, filter, split and load data from this repository as an HPAR datacube. The yeoda package is openly accessible on GitHub at https://github.com/TUW-GEO/yeoda.
Furthermore, for the usage of the Equi7Grid we provide data and tools via the Python package available on GitHub at https://github.com/TUW-GEO/Equi7Grid. More details on the grid reference can be found in this publication.
A day-of-year estimate reader tool based on the packages above is likewise available on GitHub at https://github.com/TUW-GEO/hpar-reader.
The authors would like to thank our colleagues: Thomas Melzer of TU Wien for his invaluable insights on the parameter formulation, and Senmao Cao of Earth Observation Data Centre GmbH (EODC) for his contributions to the code base used to process the dataset.
This work was partly funded by TU Wien, with co-funding from the project "Provision of an Automated, Global, Satellite-based Flood Monitoring Product for the Copernicus Emergency Management Service" (GFM), Contract No. 939866-IPR-2020 for the European Commission's Joint Research Centre (EC-JRC), and the project "Flood Event Monitoring and Documentation enabled by the Austrian Sentinel Data Cube" (ACube4Floods), Contract No. 878946 for the Austrian Research Promotion Agency (FFG, ASAP16).
The computational results presented have been achieved using the Vienna Scientific Cluster (VSC). We further would like to thank our colleagues at TU Wien and EODC for supporting us on technical tasks to cope with such a large and complex dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset is linked to the publication "Recursive classification of satellite imaging time-series: An application to land cover mapping". In this paper, we introduce the recursive Bayesian classifier (RBC), which converts any instantaneous classifier into a robust online method through a probabilistic framework that is resilient to non-informative image variations. To reproduce the results presented in the paper, the RBC-SatImg folder and the code in the GitHub repository RBC-SatImg are required.
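To convey the recursive idea in code form (a generic sketch under our own assumptions, not the paper's exact formulation), each new image updates a per-pixel class posterior that is carried forward in time:

import numpy as np

def rbc_step(prior, instantaneous_probs, transition=0.05):
    """One recursive Bayesian update for a single pixel.

    prior: class probabilities carried over from previous images.
    instantaneous_probs: class probabilities from any per-image classifier.
    transition: mixing toward uniform, so the filter can track real change
                while damping non-informative image-to-image variation.
    """
    n = prior.size
    predicted = (1.0 - transition) * prior + transition / n
    posterior = predicted * instantaneous_probs
    return posterior / posterior.sum()

# Example: a confident 'forest' prior barely moves on one noisy observation.
prior = np.array([0.9, 0.1])                  # [forest, deforested]
print(rbc_step(prior, np.array([0.4, 0.6])))  # still leaning forest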
The RBC-SatImg folder contains:
The Sentinel-2 images and forest labels used in the deforestation detection experiment for the Amazon Rainforest have been obtained from the MultiEarth Challenge dataset.
The following paths can be changed in the configuration file from the GitHub repository as desired. The RBC-SatImg repository is organized as follows:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10 m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year.

The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2024 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.

Key Properties
- Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
- Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
- Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
- Extent: Global
- Source imagery: Sentinel-2 L2A
- Cell Size: 10 meters
- Type: Thematic
- Attribution: Esri, Impact Observatory
- Analysis: Optimized for analysis

Class Definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built-up features like docks. Examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy. Examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground. Examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height. Examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human-made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing. Examples: houses, dense villages/towns/cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation. Examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes. Examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

NOTE: The land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

Usage Information and Best Practices
Processing Templates: This layer includes a number of preconfigured processing templates (raster function templates) to provide on-the-fly data rendering and class isolation for visualization and analysis. Each processing template includes labels and descriptions to characterize the intended usage: for visualization, for analysis, or for both.
Visualization: The default rendering on this layer displays all classes. There are a number of on-the-fly renderings/processing templates designed specifically for data visualization. By default, the most recent year is displayed. To discover and isolate specific years for visualization in Map Viewer, try using the Image Collection Explorer.
Analysis: In order to leverage the optimization for analysis, the capability must be enabled by your ArcGIS organization administrator. More information on enabling this feature can be found in the 'Regional data hosting' section of this help doc. Optimized for analysis means this layer does not have size constraints for analysis and is recommended for multisource analysis with other layers optimized for analysis. See this group for a complete list of imagery layers optimized for analysis. Prior to running analysis, users should always provide some form of data selection, either with a layer filter (e.g. for a specific date range, cloud cover percent, mission, etc.) or by selecting specific images. To discover and isolate specific images for analysis in Map Viewer, try using the Image Collection Explorer. Zonal Statistics is a common tool used for understanding the composition of a specified area by reporting the total estimates for each of the classes.
General: If you are new to Sentinel-2 LULC, the Sentinel-2 Land Cover Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide. Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth.

Classification Process
These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
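Since the layer is thematic, the zonal composition mentioned above reduces to counting class codes inside a zone; a toy sketch (class codes as in the definitions above, with the zone given as a plain array):

import numpy as np

# Toy 3 x 4 window of 10 m LULC pixels: Water=1, Trees=2, Crops=5, Built=7.
zone = np.array([[1, 2, 2, 5],
                 [2, 2, 5, 5],
                 [7, 7, 5, 5]])

values, counts = np.unique(zone, return_counts=True)
for value, count in zip(values, counts):
    # Each 10 m x 10 m pixel covers 100 m^2.
    print(f"class {value}: {count * 100} m^2 ({100 * count / zone.size:.1f}%)")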
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This web map displays the land use/land cover (LULC) time series layer derived from ESA Sentinel-2 imagery at 10 m resolution. The visualization uses blend modes and is best used in the new Map Viewer. The time slider can be used to advance through the five years of data from 2017-2021. There are also a series of bookmarks for the locations below:

Urban growth examples: Ouagadougou; Cairo/Giza; Dubai, UAE; Katy, Texas, USA; Loudoun County, Virginia
Infrastructure: Istanbul International Airport, Turkey; Grand Ethiopian Renaissance Dam, Ethiopia
Deforestation: Border of Acre and Rondonia states, Brazil; Harz Mountains, Germany
Wetlands loss: Pantanal, Brazil; Parana river, Argentina
Vegetation changing after fire: Northern California (Paradise, Redding, Clear Lake, Santa Rosa, Mendocino National Forest); Kangaroo Island, Australia; Victoria and NSW, Australia; Yakutia, Russia
Hurricane impact: Abaco Island, Bahamas
Recent lava flow: Hawaii Island
Surface mining: Brown coal, Cottbus, Germany
Land reclamation: Markermeer, Netherlands
Economic development: North vs South Korea
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Dataset for "Deep Learning with remote sensing data for image segmentation: example of rice crop mapping using Sentinel-2 images".
image_prediction_pt1 and _pt2 have the same content as image_prediction.zip, but split into two parts for faster downloading with Google Colab (to avoid timeouts).
Contact
Ricardo Dalagnol
ricds@hotmail.com
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset was generated by the Remote Sensing Group of the TU Wien Department of Geodesy and Geoinformation (https://mrs.geo.tuwien.ac.at/), within a dedicated project by the European Space Agency (ESA). Rights are reserved with ESA. Open use is granted under the CC BY 4.0 license.

With this dataset publication, we open up a new perspective on Earth's land surface, providing a normalised microwave backscatter map from spaceborne Synthetic Aperture Radar (SAR) observations. The Sentinel-1 Global Backscatter Model (S1GBM) describes Earth for the period 2016-17 by the mean C-band radar cross section in VV- and VH-polarization at a 10 m sampling, giving a high-quality impression of surface structures and patterns.

At TU Wien, we processed 0.5 million Sentinel-1 scenes totaling 1.1 PB and performed semi-automatic quality curation and backscatter harmonisation related to orbit geometry effects. The overall mosaic quality exceeds that of the (few) existing datasets, with minimised imprinting from orbit discontinuities and successful angle normalisation in large parts of the world. Besides supporting the design and verification of upcoming radar sensors, the obtained S1GBM data potentially also serve land cover classification, determination of vegetation and soil states, and water body mapping.

We invite developers from the broader user community to exploit this novel data resource and to integrate S1GBM parameters in models for various variables of land cover, soil composition, or vegetation structure.

Please refer to our peer-reviewed article at TODO: LINK TO BE PROVIDED for details, generation methods, and an in-depth dataset analysis. In this publication, we demonstrate, as an example of the S1GBM's potential use, the mapping of permanent water bodies and evaluate the results against the Global Surface Water (GSW) benchmark.

Dataset Record
The VV and VH mosaics are sampled at 10 m pixel spacing, georeferenced to the Equi7Grid and divided into six continental zones (Africa, Asia, Europe, North America, Oceania, South America), which are further divided into square tiles of 100 km extent ("T1" tiles). With this setup, the S1GBM consists of 16071 tiles over six continents, for VV and VH each, totaling a compressed data volume of 2.67 TB. The tiles' file format is LZW-compressed GeoTIFF holding 16-bit integer values, with tagged metadata on encoding and georeference. Compatibility with common geographic information systems such as QGIS or ArcGIS, and geodata libraries such as GDAL, is given. In this repository, we provide each mosaic as tiles organised in a folder structure per continent. With this, twelve zipped dataset collections per continent are available for download.

Web-Based Data Viewer
In addition to the data provision here, a web-based data viewer is set up at the facilities of the Earth Observation Data Centre (EODC) under http://s1map.eodc.eu/. It offers an intuitive pan-and-zoom exploration of the full S1GBM VV and VH mosaics and has been designed to quickly browse the S1GBM, providing an easy and direct visual impression of the mosaics.

Code Availability
We encourage users to use the open-source Python package yeoda, a datacube storage access layer that offers functions to read, write, search, filter, split and load data from the S1GBM datacube. The yeoda package is openly accessible on GitHub at https://github.com/TUW-GEO/yeoda. Furthermore, for the usage of the Equi7Grid we provide data and tools via the Python package available on GitHub at https://github.com/TUW-GEO/Equi7Grid. More details on the grid reference can be found in https://www.sciencedirect.com/science/article/pii/S0098300414001629.

Acknowledgements
This study was partly funded by the project "Development of a Global Sentinel-1 Land Surface Backscatter Model", ESA Contract No. 4000122681/17/NL/MP for the European Union Copernicus Programme. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC). We further would like to thank our colleagues at TU Wien and EODC for supporting us on technical tasks to cope with such a large and complex data set. Last but not least, we appreciate the kind assistance and swift support of the colleagues from the TU Wien Center for Research Data Management.
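As a minimal usage sketch (the tile file name is hypothetical; we assume the rasterio package and that the scale/offset tags noted above decode the 16-bit integers), one tile could be read like this:

import rasterio

# Open one S1GBM T1 tile and decode the 16-bit integers to backscatter
# using the scale/offset tagged in the GeoTIFF metadata.
with rasterio.open("S1GBM_VV_E048N012T1.tif") as src:   # hypothetical tile name
    raw = src.read(1, masked=True)                      # masked 16-bit integers
    scale = src.scales[0]
    offset = src.offsets[0]
    sig0 = raw * scale + offset
    print(src.crs, float(sig0.mean()))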
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The dataset contains maps of the main classes of agricultural land use (dominant crop types and other land use types) in Germany, which have been produced annually at the Thünen Institute, beginning with the year 2017, on the basis of satellite data. The maps cover the entire open landscape, i.e., the agriculturally used area (UAA) as well as, for example, uncultivated areas. The maps were derived from time series of Sentinel-1, Sentinel-2, Landsat 8 and additional environmental data. Map production is based on the methods described in Blickensdörfer et al. (2022).
All optical satellite data were managed, pre-processed and structured in an analysis-ready data (ARD) cube using the open-source software FORCE - Framework for Operational Radiometric Correction for Environmental monitoring (Frantz, 2019), into which SAR and environmental data were integrated.
The map extent covers all areas in Germany that are defined as agricultural land, grassland, small woody features, heathland, peatland or unvegetated areas according to ATKIS Basis-DLM (Geobasisdaten: © GeoBasis-DE / BKG, 2020).
Version v201: Post-processing of the maps included a sieve filter as well as a ruleset for the reduction of non-plausible areas using the Basis-DLM and the digital terrain model of Germany (Geobasisdaten: © GeoBasis-DE / BKG, 2015). The final post-processing step comprises the aggregation of the gridded data to homogeneous objects (fields) based on the approach described in Tetteh et al. (2021) and Tetteh et al. (2023).
The maps are available in FlatGeobuf format, which makes downloading the full dataset optional. All data can be accessed directly in QGIS, R, Python or any supported software of your choice using the provided URL to the datasets (right click on the respective data set --> "copy link address"). In this way, either the entire map area or only regions of interest can be accessed (see the sketch below). QGIS legend files for data visualization can be downloaded separately.
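For example, a spatial subset can be read straight from the remote FlatGeobuf with GeoPandas (a sketch; the URL is a placeholder for the link copied as described above, and the bbox must be given in the layer's coordinate reference system):

import geopandas as gpd

url = "https://example.org/CropTypeMap_2021.fgb"   # placeholder for the copied link
bbox = (13.0, 52.3, 13.8, 52.7)                    # window of interest, in the layer's CRS
fields = gpd.read_file(url, bbox=bbox)             # fetches only intersecting features
print(fields.head())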
Class-specific accuracies for each year are provided in the respective tables. We provide this dataset "as is", without any warranty regarding accuracy or completeness, and exclude all liability.
Mailing list
If you do not want to miss the latest updates, please subscribe to our mailing list.
References:Blickensdörfer, L., Schwieder, M., Pflugmacher, D., Nendel, C., Erasmi, S., & Hostert, P. (2022). Mapping of crop types and crop sequences with combined time series of Sentinel-1, Sentinel-2 and Landsat 8 data for Germany. Remote Sensing of Environment, 269, 112831.
BKG, Bundesamt für Kartographie und Geodäsie (2015). Digitales Geländemodell Gitterweite 10 m. DGM10. https://sg.geodatenzentrum.de/web_public/gdz/dokumentation/deu/dgm10.pdf (last accessed: 28 April 2022).
BKG, Bundesamt für Kartographie und Geodäsie (2020). Digitales Basis-Landschaftsmodell. https://sg.geodatenzentrum.de/web_public/gdz/dokumentation/deu/basis-dlm.pdf (last accessed: 28 April 2022).
Frantz, D. (2019). FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing, 11, 1124.
Tetteh, G.O., Gocht, A., Erasmi, S., Schwieder, M., & Conrad, C. (2021). Evaluation of Sentinel-1 and Sentinel-2 Feature Sets for Delineating Agricultural Fields in Heterogeneous Landscapes. IEEE Access, 9, 116702-116719.
Tetteh, G.O., Schwieder, M., Erasmi, S., Conrad, C., & Gocht, A. (2023). Comparison of an Optimised Multiresolution Segmentation Approach with Deep Neural Networks for Delineating Agricultural Fields from Sentinel-2 Images. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science.
National-scale crop type maps for Germany from combined time series of Sentinel-1, Sentinel-2 and Landsat data (2017 to 2021) © 2024 by Schwieder, Marcel; Tetteh, Gideon Okpoti; Blickensdörfer, Lukas; Gocht, Alexander; Erasmi, Stefan; licensed under CC BY 4.0.
Funding was provided by the German Federal Ministry of Food and Agriculture as part of the joint project “Monitoring der biologischen Vielfalt in Agrarlandschaften” (MonViA, Monitoring of biodiversity in agricultural landscapes).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The monitoring of tropical forests has benefited from the increased availability of high-resolution Earth observation data. However, the seasonality and openness of the canopy of dry tropical forests remain a challenge for optical sensors. The availability of time series of remote sensing images at 10 m resolution is changing this paradigm.
In the context of REDD+ national reporting requirements, we investigated a methodology that is reproducible and adaptable in order to ensure user appropriation. The overall methodology consists of three main steps: (i) the generation of Sentinel-1 (S1) and Sentinel-2 (S2) layers, (ii) the collection of an ad-hoc training/validation dataset, and (iii) the classification of the satellite data. Three different classification workflows are compared in terms of their capability to capture the canopy cover of forests in East Africa. Two types of maps are derived from these mapping approaches: (i) binary tree cover/no tree cover (TC/NTC) maps, and (ii) maps of canopy cover classes. The method is applied at scale over Tanzania, and one final map for each workflow is shared. Two big-data computing platforms are combined to exploit the large volume of satellite data available over a yearly period.
The reference dataset (training and validation), the three best maps and the codes to produce the S1 and S2 composites on Google Earth Engine are shared here.
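The shared scripts are the authoritative reference; merely to sketch what a yearly S2 composite in Google Earth Engine can look like, here is our own minimal example (arbitrary cloud threshold, not the authors' code):

import ee
ee.Initialize()

# Yearly Sentinel-2 surface-reflectance median composite over Tanzania.
tanzania = ee.FeatureCollection("FAO/GAUL/2015/level0") \
             .filter(ee.Filter.eq("ADM0_NAME", "United Republic of Tanzania"))
s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(tanzania)
      .filterDate("2019-01-01", "2020-01-01")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 30)))
composite = s2.median().clip(tanzania.geometry())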
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
About the data
Land use land cover (LULC) maps are an increasingly important tool for decision-makers in many industry sectors and developing nations around the world. The information provided by these maps helps inform policy and land management decisions by better understanding and quantifying the impacts of earth processes and human activity. ArcGIS Living Atlas of the World provides a detailed, accurate, and timely LULC map of the world. The data is the result of a three-way collaboration among Esri, Impact Observatory, and Microsoft. For more information about the data, see Sentinel-2 10m Land Use/Land Cover Time Series.

About the app
One of the foremost capabilities of this app is the dynamic change analysis. The app provides dynamic visual and statistical change by comparing annual slices of the Sentinel-2 10m Land Use/Land Cover data as you explore the map.

Overview of capabilities:
- Visual change analysis with either 'Step Mode' or 'Swipe Mode'
- Dynamic statistical change analysis by year, map extent, and class
- Filter by selected land cover class
- Regional class statistics summarized by administrative boundaries
- Imagery mode for visual investigation and validation of land cover
- Select imagery renderings (e.g. SWIR to visualize forest burn scars)
- Data download for offline use
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10 m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year.

The algorithm generates LULC predictions for nine classes. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2023 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2023.

Key Properties
- Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023
- Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
- Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
- Extent: Global
- Source imagery: Sentinel-2 L2A
- Cell Size: 10 meters
- Type: Thematic
- Attribution: Esri, Impact Observatory

What can you do with this layer?
Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth. This layer can also be used in analyses that require land use/land cover input. For example, the Zonal toolset allows a user to understand the composition of a specified area by reporting the total estimates for each of the classes. NOTE: The land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

Class definitions, classification process, citation, and acknowledgements are identical to those documented above for the 2017-2024 Sentinel-2 10m Land Use/Land Cover time series layer.
This dataset shows the tiling grid and tile IDs for Sentinel-2 satellite imagery. The tiling grid IDs are useful for selecting imagery of an area of interest. Sentinel-2 is an Earth observation satellite mission developed and operated by the European Space Agency (ESA). Its imagery has 13 bands in the visible, near infrared and short wave infrared parts of the spectrum, with a spatial resolution of 10 m, 20 m or 60 m depending on the spectral band. Sentinel-2 has a 290 km field of view when capturing its imagery. This imagery is projected onto a UTM grid and made publicly available in 100 x 100 km tiles. Each tile has a unique ID, and this ID scheme allows all imagery for a given tile to be located.

Provenance: The ESA makes the tiling grid available as a KML file (see links). We were, however, unable to convert this KML into a shapefile for deployment on the eAtlas. The shapefile used for this layer was sourced from the Git repository developed by Justin Meyers (https://github.com/justinelliotmeyers/Sentinel-2-Shapefile-Index).

Why is this dataset in the eAtlas?: Sentinel-2 imagery is very useful for studying and mapping reef systems. Selecting imagery for study often requires knowing the tile grid IDs for the area of interest. This dataset is intended as a reference layer. The eAtlas is not a custodian of this dataset and copies of the data should be obtained from the original sources.

Data Dictionary: Name: UTM code associated with each tile, for example 55KDV.
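For instance, the tiles covering a point of interest can be looked up from the shapefile with GeoPandas (a sketch; the shapefile path is whatever you saved from the repository above, 'Name' is the ID field documented in the Data Dictionary, and we assume the grid is in WGS84 lon/lat):

import geopandas as gpd
from shapely.geometry import Point

grid = gpd.read_file("Sentinel-2-Shapefile-Index/sentinel_2_index_shapefile.shp")
poi = Point(146.82, -19.26)              # lon/lat near Townsville, Australia
tiles = grid[grid.geometry.intersects(poi)]
print(tiles["Name"].tolist())            # IDs of the tiles covering the point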
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Important Note: This item is in mature support as of February 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.

This layer displays change in pixels of the Sentinel-2 10m Land Use/Land Cover product developed by Esri, Impact Observatory, and Microsoft. Available years to compare with 2021 are 2018, 2019 and 2020. By default, the layer shows all comparisons together, in effect showing what changed 2018-2021. But the layer may be changed to show one of three specific pairs of years: 2018-2021, 2019-2021, or 2020-2021.

Showing just one pair of years in ArcGIS Online Map Viewer
To show just one pair of years in ArcGIS Online Map Viewer, create a filter. 1. Click the filter button. 2. Next, click add expression. 3. In the expression dialogue, specify a pair of years with the ProductName attribute; for example, to show only places that changed between 2020 and 2021, use: ProductName is 2020-2021. By default, places that do not change appear as a transparent symbol in ArcGIS Pro, but in ArcGIS Online Map Viewer a transparent symbol may need to be set for these places after a filter is chosen. To do this: 4. Click the styles button. 5. Under unique values click style options. 6. Click the symbol next to No Change at the bottom of the legend. 7. Click the slider next to "enable fill" to turn the symbol off.

Showing just one pair of years in ArcGIS Pro
To show just one pair of years in ArcGIS Pro, choose one of the layer's processing templates to single out a particular pair of years. The processing template applies a definition query that works in ArcGIS Pro. 1. To choose a processing template, right click the layer in the table of contents for ArcGIS Pro and choose properties. 2. In the dialogue that comes up, choose the tab that says processing templates. 3. On the right, where it says processing template, choose the pair of years you would like to display. The processing template will stay applied for any analysis you may want to perform as well.

How the change layer was created, combining LULC classes from two years
Impact Observatory, Esri, and Microsoft used artificial intelligence to classify the world in 10 Land Use/Land Cover (LULC) classes for the years 2017-2021. Mosaics serve the following sets of change rasters in a single global layer: change between 2018 and 2021, change between 2019 and 2021, and change between 2020 and 2021. To make this change layer, Esri used an arithmetic operation combining the cells from a source year and 2021 to make a change index value: ((from year * 16) + to year). In the example of the change between 2020 and 2021, the class value from the source year (2020) was multiplied by 16, then added to the class value from the to year (2021). The combined number is then served as an index in an 8-bit unsigned mosaic with an attribute table that describes what changed or did not change in that timeframe (see the sketch below).

Variable mapped: Change in land cover between 2018, 2019, or 2020 and 2021
Data Projection: Universal Transverse Mercator (UTM)
Mosaic Projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell Size: 10 m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: January 2022
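The arithmetic above packs both class values into one byte; a sketch of the encoding and its inverse (function names are ours, for illustration only):

def encode_change(from_class: int, to_class: int) -> int:
    """Pack the LULC class of the source year and of 2021 into one 8-bit index."""
    return from_class * 16 + to_class

def decode_change(index: int) -> tuple:
    """Recover (from_class, to_class) from a change-index value."""
    return index // 16, index % 16

# Example: Crops (5) in 2020 that became Built Area (7) in 2021 -> index 87.
print(encode_change(5, 7), decode_change(87))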
What can you do with this layer?
Global LULC maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land cover anywhere on Earth. This layer can also be used in analyses that require land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes.

Land Cover processing
This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map.

Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

Class definitions
The class values and definitions (Water, Trees, Flooded vegetation, Crops, Built Area, Bare ground, Snow/Ice, Clouds, Rangeland) are identical to those documented above for the Sentinel-2 10m Land Use/Land Cover layers.

Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.

For questions please email environment@esri.com