The C2S-MS Floods Dataset is a dataset of global flood events with labeled Sentinel-1 & Sentinel-2 pairs. There are 900 sets (1800 total) of near-coincident Sentinel-1 and Sentinel-2 chips (512 x 512 pixels) from 18 global flood events. Each chip contains a water label for both Sentinel-1 and Sentinel-2, as well as a cloud/cloud shadow mask for Sentinel-2. The dataset was constructed by Cloud to Street in collaboration with and funded by the Microsoft Planetary Computer team.
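A minimal sketch (not the dataset's official loader) of how one chip and its labels might be read and combined; the file names and band encodings below are hypothetical placeholders, so check the dataset documentation for the actual structure.

```python
import numpy as np
import rasterio

def read_chip(path):
    """Read a single-band 512 x 512 GeoTIFF chip into a NumPy array."""
    with rasterio.open(path) as src:
        return src.read(1)

# Hypothetical file names for one Sentinel-2 chip and its labels.
s2_water = read_chip("s2_water_label.tif")        # 1 = water, 0 = not water (assumed)
s2_cloud = read_chip("s2_cloud_shadow_mask.tif")  # 1 = cloud/shadow (assumed)

# Ignore pixels obscured by cloud or cloud shadow when using the water label.
valid_water = np.where(s2_cloud == 0, s2_water, np.nan)
print("Water fraction over clear pixels:", np.nanmean(valid_water))
```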
Dataset Card for S2-100K
The S2-100K dataset is a dataset of 100,000 multi-spectral satellite images sampled from Sentinel-2 via the Microsoft Planetary Computer. The Copernicus Sentinel data was captured between January 1, 2021 and May 17, 2023. The dataset is sampled approximately uniformly over landmass and only includes images without cloud coverage. The dataset is available for research purposes only. If you use the dataset, please cite our paper. More information on the dataset can… See the full description on the dataset page: https://huggingface.co/datasets/davanstrien/satclip.
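A minimal sketch of pulling S2-100K samples with the Hugging Face `datasets` library, using the repository id from the URL above; the split name, streaming behaviour, and available fields are assumptions, so consult the dataset card.

```python
from datasets import load_dataset

# Repository id taken from the dataset page URL; split name is assumed.
ds = load_dataset("davanstrien/satclip", split="train", streaming=True)
sample = next(iter(ds))
print(sample.keys())  # inspect the available fields (image, coordinates, etc.)
```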
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year. The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2023 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2023.
Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023
Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
Extent: Global
Source imagery: Sentinel-2 L2A
Cell Size: 10 meters
Type: Thematic
Attribution: Esri, Impact Observatory
What can you do with this layer?
Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth. This layer can also be used in analyses that require land use/land cover input. For example, the Zonal toolset allows a user to understand the composition of a specified area by reporting the total estimates for each of the classes. NOTE: Land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.
Class definitions (value, name, description):
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.
Classification Process
These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.
Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.
Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
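To illustrate the "composition of a specified area" use mentioned above, here is a minimal sketch that tallies the share of each LULC class in an array of class values; the class codes follow the table above, and the small input array is a stand-in for a real raster read.

```python
import numpy as np

# Class codes and names from the class definition table above.
CLASS_NAMES = {
    1: "Water", 2: "Trees", 4: "Flooded vegetation", 5: "Crops",
    7: "Built Area", 8: "Bare ground", 9: "Snow/Ice", 10: "Clouds",
    11: "Rangeland",
}

def class_composition(lulc: np.ndarray) -> dict:
    """Return the fraction of pixels in each LULC class."""
    values, counts = np.unique(lulc, return_counts=True)
    total = counts.sum()
    return {CLASS_NAMES.get(int(v), f"class {int(v)}"): c / total
            for v, c in zip(values, counts)}

# Example with a small synthetic patch standing in for a clipped raster.
patch = np.array([[1, 1, 2], [2, 11, 11], [7, 7, 7]])
print(class_composition(patch))
```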
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year. The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2024 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.
Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
Extent: Global
Source imagery: Sentinel-2 L2A
Cell Size: 10 meters
Type: Thematic
Attribution: Esri, Impact Observatory
What can you do with this layer?
Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth. This layer can also be used in analyses that require land use/land cover input. For example, the Zonal toolset allows a user to understand the composition of a specified area by reporting the total estimates for each of the classes. NOTE: Land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.
Class definitions (value, name, description):
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.
Classification Process
These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.
Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.
Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
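Since the product is built from Sentinel-2 L2A scenes accessed through the Microsoft Planetary Computer, a minimal sketch of querying that source imagery via the Planetary Computer STAC API is shown below; the bounding box and date range are arbitrary examples, and the `sentinel-2-l2a` collection id and the URL-signing helper are the ones documented for the Planetary Computer, but verify them against the current catalog before relying on them.

```python
import planetary_computer
import pystac_client

# Open the Planetary Computer STAC API and sign asset URLs for download.
catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,
)
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-122.3, 47.5, -122.2, 47.6],       # example area of interest
    datetime="2023-06-01/2023-06-30",
    query={"eo:cloud_cover": {"lt": 10}},    # keep mostly cloud-free scenes
)
items = list(search.items())
print(len(items), "scenes found")
```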
Sentinel-5 Precursor (5P) is a low Earth orbit polar satellite mission dedicated to monitoring air pollution. The satellite carries the state-of-the-art TROPOspheric Monitoring Instrument (TROPOMI). With TROPOMI, Sentinel-5P images air pollutants more accurately and at a higher spatial resolution than any other spaceborne instrument. This layer provides daily global composite images of atmospheric methane measurements. Methane (CH4) is the second most important contributor to the anthropogenically enhanced greenhouse effect. Roughly three-quarters of methane emissions are anthropogenic, and as such it is important to continue the record of satellite-based measurements. TROPOMI aims at providing CH4 column concentrations with high sensitivity to the Earth's surface, good spatio-temporal coverage, and sufficient accuracy to facilitate inverse modelling of sources and sinks.
Key Properties
Geographic Coverage: Global
Temporal Coverage: 01-Jan-2019 to Present
Spatial Resolution: 5 x 5 km
Temporal Resolution: Daily*
Product Level: Level 3
Units/Physical Quantity: Parts per billion, column-averaged dry air mixing ratio
Typical Value Range: 1,600 - 2,000 ppb
*While imagery is collected globally each day, the time between data collection and product availability from ESA can vary. This layer is updated as imagery products are made publicly available. See Data Collection Notes below for additional details on data availability.
Layer Configuration
The default rendering is a colorized rolling 7-day mean. This is applied with the "Colorized Methane (CH4) in Parts Per Billion" processing template, plus a layer definition query to select the most recent 7 days available ('Best' < 8). The default mosaic operator is "mean", or "average of all pixels". In practice, this means that unless a user has locked on to a single day/image, the values returned will be the mean for all images displayed. Layer filtering/definition queries can be used to customize your timeframe of interest. If no definition query is included on the layer, a mean for the latest 31 days will be displayed. It is important to note that display performance correlates with the number of images: it will take longer to render a 31-day mean than it does for a 7-day mean. Users can temporally select/group data by either the 'AcquisitionDate' or 'Best' fields/attributes. The 'Best' field grades each item in the Image Service based on its recentness within the product's record. The most recent daily data is given the lowest values while the oldest daily data gets the highest values.
Level 3 Processing Overview
The original Sentinel-5P Level 2 data from the European Space Agency, hosted on the Microsoft Planetary Computer, has been re-gridded and merged to create a single Level 3 Cloud Optimized GeoTIFF for each day of collection for each product. The software package HARP-Convert has been used to merge and re-grid the data in order to keep a single grid per orbit (that is, no aggregation across products is performed). The complete conversion to Level 3 using HARP-Convert utilizes the "bin_spatial" operation, which spatially averages pixel values between overlapping scenes for a given day of collection. In addition to the merging operation, HARP-Convert is used to filter pixel values for quality as well as other variables, depending on product. After the merging and regridding process, OptimizeRasters is invoked to create a Level 3 Cloud Optimized GeoTIFF for a given day's worth of data for each product.
Data Collection Notes
Sentinel-5P Mission Status Reports are available from the Sentinel-5P team at the European Space Agency. These reports provide information on the status of the satellite and the instrument, the associated ground segment, and any mission milestones. From time to time, temporal data gaps may be present due to ground station outages, satellite repositioning, or recalibration of satellite instrumentation. In all cases, any disruptions to the Sentinel-5P collection pattern are documented in the Mission Status Reports. Daily collections with any data gaps are omitted from this layer.
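The default rendering described above is a rolling 7-day mean with a "mean of all pixels" mosaic operator. A minimal NumPy sketch of that compositing step follows, assuming the daily Level-3 CH4 grids have already been read into one stack (shape: days x rows x cols) with NaN where a day has no valid retrieval.

```python
import numpy as np

def rolling_mean_composite(daily_stack: np.ndarray, window: int = 7) -> np.ndarray:
    """Mean of the most recent `window` daily grids, ignoring missing pixels."""
    recent = daily_stack[-window:]
    return np.nanmean(recent, axis=0)  # per-pixel average over the window

# Example: 31 synthetic daily grids in ppb, with one missing pixel.
rng = np.random.default_rng(0)
stack = rng.uniform(1600, 2000, size=(31, 5, 5))
stack[3, 0, 0] = np.nan
weekly = rolling_mean_composite(stack)
print(weekly.shape, float(weekly.mean()))
```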
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Important Note: This item is in mature support as of February 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.
This layer displays change in pixels of the Sentinel-2 10m Land Use/Land Cover product developed by Esri, Impact Observatory, and Microsoft. Available years to compare with 2021 are 2018, 2019 and 2020. By default, the layer shows all comparisons together, in effect showing what changed 2018-2021, but the layer may be changed to show one of three specific pairs of years: 2018-2021, 2019-2021, or 2020-2021.
Showing just one pair of years in ArcGIS Online Map Viewer
To show just one pair of years in ArcGIS Online Map Viewer, create a filter.
1. Click the filter button.
2. Next, click add expression.
3. In the expression dialogue, specify a pair of years with the ProductName attribute. For example, to show only places that changed between 2020 and 2021, use the expression: ProductName is 2020-2021.
By default, places that do not change appear as a transparent symbol in ArcGIS Pro, but in ArcGIS Online Map Viewer a transparent symbol may need to be set for these places after a filter is chosen. To do this:
4. Click the styles button.
5. Under unique values, click style options.
6. Click the symbol next to No Change at the bottom of the legend.
7. Click the slider next to "enable fill" to turn the symbol off.
Showing just one pair of years in ArcGIS Pro
To show just one pair of years in ArcGIS Pro, choose one of the layer's processing templates to single out a particular pair of years. The processing template applies a definition query that works in ArcGIS Pro.
1. To choose a processing template, right-click the layer in the table of contents for ArcGIS Pro and choose properties.
2. In the dialogue that comes up, choose the tab that says processing templates.
3. On the right, where it says processing template, choose the pair of years you would like to display.
The processing template will stay applied for any analysis you may want to perform as well.
How the change layer was created, combining LULC classes from two years
Impact Observatory, Esri, and Microsoft used artificial intelligence to classify the world in 10 Land Use/Land Cover (LULC) classes for the years 2017-2021. Mosaics serve the following sets of change rasters in a single global layer: change between 2018 and 2021, change between 2019 and 2021, and change between 2020 and 2021. To make this change layer, Esri used an arithmetic operation combining the cell values from a source year and 2021 to make a change index value: ((from-year class value * 16) + to-year class value). In the example of the change between 2020 and 2021, the class value from 2020 is multiplied by 16, then added to the class value from 2021. The combined number is served as an index in an 8-bit unsigned mosaic with an attribute table that describes what changed or did not change in that timeframe.
Variable mapped: Change in land cover between 2018, 2019, or 2020 and 2021
Data Projection: Universal Transverse Mercator (UTM)
Mosaic Projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell Size: 10m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: January 2022
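A small sketch of the change-index arithmetic described above, under the reading that the encoded value combines the LULC class codes (1-11) from the two years, which keeps the result within an 8-bit unsigned range; the helper names are illustrative.

```python
def encode_change(from_class: int, to_class: int) -> int:
    """Combine two class codes into one change-index value."""
    return from_class * 16 + to_class

def decode_change(index: int) -> tuple[int, int]:
    """Recover the (from-year, to-year) class codes from an index value."""
    return index // 16, index % 16

# Example: a pixel that was Crops (5) in 2020 and Built Area (7) in 2021.
idx = encode_change(5, 7)
print(idx, decode_change(idx))  # 87 (5, 7)
```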
What can you do with this layer?
Global LULC maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land cover anywhere on Earth. This layer can also be used in analyses that require land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes.
Land Cover processing
This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map.
Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.
Class definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15-m or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.
Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021 - 2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.
Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
For questions please email environment@esri.com
Jointly managed by NASA and the USGS, Landsat is the longest-running spaceborne Earth imaging and observation program in history. Landsat Collection 2 Level-2 science products, comprising imagery from 1982 to present, are made publicly available by the USGS. The continuity in this scientific record allows for critical and reliable observation and analysis of Earth processes and changes over time.
This imagery layer provides global Analysis-Optimized Landsat 4, 5, 7, 8, and 9 imagery. This layer is time-enabled and includes a number of predefined band combinations and indices for visualization and analysis.
Key Properties
Geographic Coverage: Global landmasses
Temporal Coverage: August 22, 1982 to present
Spatial Resolution: 30-meter
Revisit Time: ~8-days
Product Level*: Collection 2 Level-2 Science Products (Surface Reflectance and Surface Temperature)
Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
*NOTE: The appropriate scale and offset, as provided by USGS, are dynamically applied to this imagery to provide scientific floating point values for both Surface Reflectance and Surface Temperature.
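The note above says the USGS-provided scale and offset are applied on the fly. For reference, a minimal sketch of applying that rescaling yourself to raw Collection 2 Level-2 digital numbers; the numeric factors are the ones USGS publishes for these products, but confirm them against the current product guide before use.

```python
import numpy as np

# USGS Collection 2 Level-2 rescaling factors (verify against the product guide).
SR_SCALE, SR_OFFSET = 0.0000275, -0.2      # Surface Reflectance (unitless)
ST_SCALE, ST_OFFSET = 0.00341802, 149.0    # Surface Temperature (Kelvin)

def to_reflectance(dn: np.ndarray) -> np.ndarray:
    """Convert Surface Reflectance digital numbers to reflectance values."""
    return dn * SR_SCALE + SR_OFFSET

def to_kelvin(dn: np.ndarray) -> np.ndarray:
    """Convert Surface Temperature digital numbers to Kelvin."""
    return dn * ST_SCALE + ST_OFFSET

print(to_reflectance(np.array([7273])))   # roughly 0.0 reflectance
print(to_kelvin(np.array([43000])))       # roughly 296 K
```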
Usage Tips
The default rendering on this layer is Natural Color (bands 4,3,2) for Visualization.
Visualization templates should not be used as input to analysis tools.
To discover and isolate specific images in Map Viewer, try using the Image Collection Explorer to create custom layers in your maps.
The Landsat Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide or this Detailed Tutorial.
Multispectral Bands

Band | Description | Wavelength* (µm) | Spatial Resolution (m)
1 | Coastal aerosol** | 0.43 - 0.45 | 30
2 | Blue | 0.45 - 0.52 | 30
3 | Green | 0.52 - 0.60 | 30
4 | Red | 0.63 - 0.69 | 30
5 | NIR | 0.76 - 0.90 | 30
6 | SWIR 1 | 1.55 - 1.75 | 30
7 | SWIR 2 | 2.08 - 2.35 | 30
8 | Pixel QA | NA | 30
9 | Surface Temperature (Kelvin) | 10.4 - 12.5 | 30***
10 | Surface Temperature QA | NA | 30

*This is the max range for each band based on the combined missions. For reference to the distinct ranges for each mission see this document.
**Coastal Aerosol is only available from Landsat 8 and 9. This band is simply a place holder and does not contain data for the other missions.
***The thermal band is acquired at 100 or 120 meter resolution and resampled to 30 meters.
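With the band layout above (band 4 = Red, band 5 = NIR in this layer), a common derived index is NDVI. A minimal sketch on rescaled surface reflectance arrays follows; the small input arrays are placeholders for real band 4 and band 5 rasters.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype("float64")
    nir = nir.astype("float64")
    denom = nir + red
    denom = np.where(denom == 0, np.nan, denom)  # avoid division by zero
    return (nir - red) / denom

# Placeholder reflectance values standing in for real Red / NIR rasters.
red = np.array([[0.05, 0.10], [0.20, 0.30]])
nir = np.array([[0.40, 0.35], [0.25, 0.30]])
print(ndvi(red, nir))
```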
Learn more about the Quality Assessment (QA) Bands
Data Source
Landsat imagery is sourced from the U.S. Geological Survey (USGS) and the National Aeronautics and Space Administration (NASA). The imagery in this layer is hosted in Azure as part of the Microsoft Planetary Computer Data Catalog.
Sentinel-1 Radiometric Terrain Corrected (RTC) 10-meter C-band synthetic aperture radar (SAR) imagery with on-the-fly functions for visualization and unit conversions for analysis. The Sentinel-1 RTC data in this collection is an analysis-ready product derived from the Ground Range Detected (GRD) Level-1 products produced by the European Space Agency. Radiometric Terrain Correction accounts for terrain variations that affect both the position of a given point on the Earth's surface and the brightness of the radar return. For more information on the source data, see Sentinel-1 Radiometric Terrain Corrected (RTC) in the Microsoft Planetary Computer data catalog.
With the ability to see through cloud and smoke cover, and because it does not rely on solar illumination of the Earth's surface, Sentinel-1 is able to collect useful imagery in most weather conditions, during both day and night. This data is good for a wide range of land and maritime applications, from deforestation monitoring to oil spill mapping.
Key Properties
Geographic Coverage: Global - approximately 80° North to 80° South
Temporal Coverage: 10/10/2014 to Present
Spatial Resolution: 10 x 10 meter
Revisit Time*: ~6 days from 10/10/2014 to 12/23/2021; ~12 days from 12/23/2021 to Present
Product Type: Ground Range Detected (GRD)
Product Level: Radiometrically terrain corrected (RTC) and analysis ready
Frequency Band: C-band
Instrument Mode: Interferometric Wide Swath Mode (IW)
Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
*Prior to Dec 23, 2021, the mission included two satellites, Sentinel-1A and Sentinel-1B. On Dec 23, 2021, Sentinel-1B experienced a power anomaly resulting in permanent loss of data transmission. The mission is currently comprised of a single satellite, Sentinel-1A.
Applications
The RTC product can be used for a wide range of applications, including:
Land cover classification such as forests, wetlands, water bodies, urban areas, and agricultural land
Change detection such as deforestation and urban growth
Natural hazard monitoring such as floods
Oceanography such as oil spill monitoring and ship detection
Available Bands/Polarizations, Dynamic Renderings and Data Transformations
The default rendering is False Color (VV, VH, VV-VH) in dB scale with Dynamic Range Adjustment (DRA). The DRA version of each layer enables visualization of the full dynamic range of the images. Various pre-defined on-the-fly Raster Functions can be selected, or custom functions created.
Name | Description
Sentinel-1 RGB dB with DRA | RGB color composite of VV, VH, VV-VH in dB scale with a dynamic stretch applied for visualization only
Sentinel-1 RGB dB | RGB color composite of VV, VH, VV-VH in dB scale for visualization and some numerical analysis
Sentinel-1 RTC VV Power | VV data in Power scale for numerical analysis
Sentinel-1 RTC VH Power | VH data in Power scale for numerical analysis
Sentinel-1 RTC VV Amplitude | VV data in Amplitude scale for numerical analysis
Sentinel-1 RTC VH Amplitude | VH data in Amplitude scale for numerical analysis
Sentinel-1 RTC VV dB | VV data in dB scale for visualization and some numerical analysis
Sentinel-1 RTC VV dB with DRA | VV data in dB scale with a dynamic stretch applied for visualization only
Sentinel-1 RTC VH dB | VH data in dB scale for visualization and some numerical analysis
Sentinel-1 RTC VH dB with DRA | VH data in dB scale with a dynamic stretch applied for visualization only
Image Selection/Filtering
A number of fields are available for filtering, including Polarization Type, Sensor, Orbit Direction, Acquisition Date, and Numdate. To isolate and work with specific images, either use the "Image Filter" to create custom layers or add a "Query Filter" to restrict the default layer display to a specified image or group of images. NOTE: Image Filter is currently only available in Map Viewer Classic.
Additional Usage Notes
Image exports and Raster Analysis are limited to 4000 columns x 4000 rows per request. This dynamic imagery layer can be used in Web Maps and ArcGIS Pro as well as web and mobile applications using the ArcGIS REST APIs.
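The renderings above expose the same backscatter in power, amplitude, and dB scales. A minimal sketch of the standard conversions between those scales, useful when combining layers that use different units:

```python
import numpy as np

def power_to_db(power: np.ndarray) -> np.ndarray:
    """Backscatter power to decibels."""
    return 10.0 * np.log10(power)

def db_to_power(db: np.ndarray) -> np.ndarray:
    """Decibels back to power scale."""
    return np.power(10.0, db / 10.0)

def power_to_amplitude(power: np.ndarray) -> np.ndarray:
    """Power scale to amplitude scale."""
    return np.sqrt(power)

vv_power = np.array([0.0316, 0.1, 0.5])
print(power_to_db(vv_power))        # roughly [-15, -10, -3] dB
print(power_to_amplitude(vv_power))
```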
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset results from a prototype change alert system (Bunting et al., 2023) developed to identify mangrove losses on a monthly basis. Implemented on the Microsoft Planetary Computer, the Global Mangrove Watch v3.0 mangrove baseline extent map (Bunting et al., 2022) for 2018 was refined and used to define the mangrove extent mask under which potential losses would be identified. The study period was 2018-2022 due to the availability of the Copernicus Sentinel-2 imagery used for the study. The alert system is based on optimised NDVI thresholds used to identify mangrove losses and a temporal scoring system used to filter false positives. The alert system was found to have an estimated overall accuracy of 92.1 %, with the alert commission and omission estimated to be 10.4 % and 20.6 %, respectively. The alert system is presently limited to Africa, where significant losses were identified in the study period, with 90 % of the loss alerts identified in Nigeria, Guinea-Bissau, Madagascar, Mozambique and Guinea. The drivers of those losses vary: in West Africa they are primarily economic activities such as agricultural conversion and infrastructure development, while in East Africa they are dominated by climatic drivers, primarily storm frequency and intensity. Production of the monthly loss alerts for Africa will be continued as part of the wider Global Mangrove Watch project, and the spatial coverage is expected to be expanded over the coming months and years. Future updates of the mangrove loss alerts will be via the Global Mangrove Watch portal: https://www.globalmangrovewatch.org
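An illustrative sketch (not the published Global Mangrove Watch algorithm or its parameters) of the idea described above: flag a mangrove pixel when its monthly NDVI drops below a threshold, and only raise an alert once several consecutive months agree, which suppresses one-off false positives. The threshold and the required score are hypothetical values.

```python
import numpy as np

NDVI_LOSS_THRESHOLD = 0.4   # hypothetical value, not the optimised GMW threshold
SCORE_TO_CONFIRM = 3        # hypothetical number of consecutive months required

def loss_alerts(monthly_ndvi: np.ndarray, mangrove_mask: np.ndarray) -> np.ndarray:
    """monthly_ndvi: months x rows x cols; returns a boolean alert map."""
    score = np.zeros(monthly_ndvi.shape[1:], dtype=int)
    confirmed = np.zeros_like(score, dtype=bool)
    for month in monthly_ndvi:
        below = (month < NDVI_LOSS_THRESHOLD) & mangrove_mask
        score = np.where(below, score + 1, 0)   # reset the score when NDVI recovers
        confirmed |= score >= SCORE_TO_CONFIRM
    return confirmed

# Example with synthetic monthly NDVI over a small mangrove mask.
ndvi = np.random.default_rng(1).uniform(0.2, 0.8, size=(6, 4, 4))
mask = np.ones((4, 4), dtype=bool)
print(loss_alerts(ndvi, mask))
```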