24 datasets found
  1. Dynamic World V1

    • developers.google.com
    Cite
    Google, Dynamic World V1 [Dataset]. http://doi.org/10.1038/s41597-022-01307-4
    Explore at:
    Dataset provided by
    Google (http://google.com/)
    World Resources Institute
    Time period covered
    Jun 27, 2015 - Mar 28, 2026
    Area covered
    Earth
    Description

    Dynamic World is a 10 m near-real-time (NRT) Land Use/Land Cover (LULC) dataset that includes class probabilities and label information for nine classes. Dynamic World predictions are available for the Sentinel-2 L1C collection from 2015-06-27 to present. The revisit frequency of Sentinel-2 is between 2 and 5 days depending on latitude. Dynamic World predictions are generated for Sentinel-2 L1C images with CLOUDY_PIXEL_PERCENTAGE <= 35%. Predictions are masked to remove clouds and cloud shadows using a combination of S2 Cloud Probability, the Cloud Displacement Index, and the Directional Distance Transform.

    Images in the Dynamic World collection have names matching the individual Sentinel-2 L1C asset names from which they were derived; e.g., ee.Image('COPERNICUS/S2/20160711T084022_20160711T084751_T35PKT') has a matching Dynamic World image named ee.Image('GOOGLE/DYNAMICWORLD/V1/20160711T084022_20160711T084751_T35PKT'). All probability bands except the "label" band collectively sum to 1. To learn more about the Dynamic World dataset and see examples for generating composites, calculating regional statistics, and working with the time series, see the Introduction to Dynamic World tutorial series.

    Because Dynamic World class estimates are derived from single images using spatial context from a small moving window, top-1 "probabilities" for predicted land covers that are in part defined by cover over time, such as crops, can be comparatively low in the absence of obvious distinguishing features. High-return surfaces in arid climates, sand, sunglint, etc., may also exhibit this phenomenon. To select only pixels that confidently belong to a Dynamic World class, it is recommended to mask Dynamic World outputs by thresholding the estimated "probability" of the top-1 prediction.
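    The thresholding recommendation above can be sketched outside Earth Engine. A minimal numpy illustration, assuming a toy probability stack and an arbitrary 0.5 threshold (neither is prescribed by the dataset):

```python
import numpy as np

# Toy stack of probability bands with shape (band, row, col); three toy
# "classes" stand in for Dynamic World's nine. Per pixel, values sum to ~1.
probs = np.array([[[0.70, 0.20, 0.40]],
                  [[0.10, 0.60, 0.35]],
                  [[0.20, 0.20, 0.25]]])
probs = probs / probs.sum(axis=0)  # normalize defensively

top1_class = probs.argmax(axis=0)  # analogue of the 'label' band
top1_prob = probs.max(axis=0)      # estimated probability of the top-1 class

# Keep only confidently classified pixels; -1 marks masked-out pixels.
THRESH = 0.5  # assumed threshold; tune per application
label_masked = np.where(top1_prob >= THRESH, top1_class, -1)
```

    The third pixel's best class only reaches probability 0.4, so it is masked out while the two confident pixels keep their labels.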

  2. Dynamic World training dataset for global land use and land cover...

    • doi.pangaea.de
    html, tsv
    Updated Jul 7, 2021
    + more versions
    Cite
    Alexander M Tait; Steven P Brumby; Samantha Brooks Hyde; Joseph Mazzariello; Melanie Corcoran (2021). Dynamic World training dataset for global land use and land cover categorization of satellite imagery [Dataset]. http://doi.org/10.1594/PANGAEA.933475
    Explore at:
    Available download formats: tsv, html
    Dataset updated
    Jul 7, 2021
    Dataset provided by
    PANGAEA
    Authors
    Alexander M Tait; Steven P Brumby; Samantha Brooks Hyde; Joseph Mazzariello; Melanie Corcoran
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 28, 2017 - Dec 12, 2019
    Area covered
    Variables measured
    File content, Binary Object, Binary Object (File Size)
    Description

    The Dynamic World Training Data is a dataset of over 5 billion pixels of human-labeled ESA Sentinel-2 satellite imagery, distributed over 24,000 tiles collected from all over the world. The dataset is designed to train and validate automated land use and land cover mapping algorithms. The 10 m resolution, 5.1 km-by-5.1 km tiles are densely labeled using a ten-category classification schema indicating general land use/land cover categories. The dataset was created between 2019-08-01 and 2020-02-28, using satellite imagery observations from 2019, with approximately 10% of observations extending back to 2017 in very cloudy regions of the world. This dataset is a component of the National Geographic Society - Google - World Resources Institute Dynamic World project. […]

  3. Dynamic World Expert Consensus Validation Tiles

    • zenodo.org
    zip
    Updated May 17, 2021
    Cite
    Christopher Forrest Brown; Steven P. Brumby; Brookie Guzder-Williams; Tanya Birch; Samantha Brooks Hyde; Joseph Mazzariello; Wanda Czerwinski; Valerie J. Pasquarella; Robert Haertel; Simon Ilyushchenko; Kurt Schwehr; Mikaela Weisse; Fred Stolle; Craig Hanson; Oliver Guinan; Rebecca Moore; Alexander M. Tait (2021). Dynamic World Expert Consensus Validation Tiles [Dataset]. http://doi.org/10.5281/zenodo.4766451
    Explore at:
    Available download formats: zip
    Dataset updated
    May 17, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christopher Forrest Brown; Steven P. Brumby; Brookie Guzder-Williams; Tanya Birch; Samantha Brooks Hyde; Joseph Mazzariello; Wanda Czerwinski; Valerie J. Pasquarella; Robert Haertel; Simon Ilyushchenko; Kurt Schwehr; Mikaela Weisse; Fred Stolle; Craig Hanson; Oliver Guinan; Rebecca Moore; Alexander M. Tait
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    World
    Description

    From the Dynamic World dataset, DOI: PENDING DYNAMICWORLD DOI

    These data comprise the expert consensus set of GeoTIFF test tiles used for validating Dynamic World and other LULC maps. A metadata CSV is also included to enable cross-walking between these tiles and the Sentinel-2 L2A imagery used to annotate each tile.

    The Dynamic World abstract:

    We developed a new automated approach for globally consistent, high-resolution, near real-time (NRT) land use land cover (LULC) mapping, leveraging deep learning on 10 m Sentinel-2 imagery. When compared to other global LULC datasets, our data exceeded the next-best global product's agreement with an expert consensus test set by 7.5%. We utilize a highly scalable, cloud-based system for generating LULC maps and provide an open, continuous feed of LULC in parallel with Sentinel-2 acquisitions. This NRT product accommodates a variety of user needs, ranging from extremely up-to-date LULC data to annual global maps. Furthermore, the continuous nature of the product's outputs enables refinement, extension, and even redefinition of the LULC classification. In combination, these unique attributes enable unprecedented flexibility for a diverse community of users across a variety of disciplines.

  4. Wetland Land-Cover Segmentation and Classification in the Netherlands...

    • data.niaid.nih.gov
    Updated Apr 2, 2025
    Cite
    Gmelich Meijling, Eva (2025). Wetland Land-Cover Segmentation and Classification in the Netherlands (Sentinel-2 satellite imagery and Dynamic World labels) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_15125548
    Explore at:
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    University of Amsterdam
    Authors
    Gmelich Meijling, Eva
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    World, Netherlands
    Description

    This dataset contains preprocessed Sentinel-2 imagery and corresponding Dynamic World land-cover labels for six wetland areas in the Netherlands. It was created to support land-cover classification and segmentation tasks in ecologically dynamic floodplain environments. The data covers the period January 2017 to November 2024 and includes only scenes with less than 5% cloud cover.

    Sentinel-2 imagery was retrieved using the Google Earth Engine (GEE) API from the COPERNICUS/S2_SR_HARMONIZED collection, which provides harmonized Level-2A data at 10 m spatial resolution. From the 26 available bands, 9 were selected based on their relevance for wetland delineation: RGB, Red Edge 1–3, Near-Infrared (NIR), and Shortwave Infrared (SWIR 1–2). The imagery was tiled into 256×256 pixel patches and filtered for quality (e.g., excluding patches with >10% black pixels).

    Dynamic World land-cover labels (Brown et al., 2022) were used to generate pixel-wise semantic segmentation masks by selecting the most probable class (out of 9 land-cover types) for each pixel. The resulting masks are single-band images where pixel values 0–8 represent land-cover classes as follows:

    0: Water
    1: Trees
    2: Grass
    3: Flooded Vegetation
    4: Crops
    5: Shrub & Scrub
    6: Built
    7: Bare
    8: Snow & Ice
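    A quick sanity check on such single-band masks is to tally per-class pixel counts against the mapping above. A minimal numpy sketch (the toy mask is an assumption; class names follow the listed indices):

```python
import numpy as np

# Class names in index order 0-8, as listed above.
DW_CLASSES = ["Water", "Trees", "Grass", "Flooded Vegetation", "Crops",
              "Shrub & Scrub", "Built", "Bare", "Snow & Ice"]

mask = np.array([[0, 1, 1],
                 [4, 6, 0]])  # toy single-band segmentation mask

# Count pixels per class index, then keep only the classes present.
counts = np.bincount(mask.ravel(), minlength=len(DW_CLASSES))
summary = {DW_CLASSES[i]: int(n) for i, n in enumerate(counts) if n}
```

    For the toy mask this yields two Water, two Trees, one Crops, and one Built pixel.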

    The dataset includes the following splits:

    Training set: Gelderse Poort, Oostvaardersplassen, Loosdrechtse Plassen, Land van Saeftinghe (1,701 images)

    Validation set: Lauwersmeer (948 images)

    Test set: Biesbosch (1,140 images)

    This resource enables benchmarking of supervised and self-supervised learning methods for wetland classification in medium-resolution optical satellite data.

    Reference: Brown, C.F., Brumby, S.P., Guzder-Williams, B., Birch, T., Hyde, S.B., Mazzariello, J., Czerwinski, W., Pasquarella, V.J., Haertel, R., Ilyushchenko, S., Schwehr, K., Weisse, M., Stolle, F., Hanson, C., Guinan, O., Moore, R., & Tait, A.M. (2022). Dynamic World, Near real-time global 10 m land use land cover mapping. Scientific Data, 9(1). https://doi.org/10.1038/s41597-022-01307-4

  5. google/dynamicworld — oosmetrics

    • oosmetrics.com
    Updated Mar 24, 2026
    Cite
    oosmetrics (2026). google/dynamicworld — oosmetrics [Dataset]. https://oosmetrics.com/repo/google/dynamicworld
    Explore at:
    Dataset updated
    Mar 24, 2026
    Dataset authored and provided by
    oosmetrics
    Description

    google/dynamicworld is growing at +0/day (Grade B). Track its momentum, acceleration, and originality score on oosmetrics.

  6. MAV Forest Cover Classification

    • gis-fws.opendata.arcgis.com
    Updated Jul 30, 2024
    Cite
    U.S. Fish & Wildlife Service (2024). MAV Forest Cover Classification [Dataset]. https://gis-fws.opendata.arcgis.com/datasets/mav-forest-cover-classification-
    Explore at:
    Dataset updated
    Jul 30, 2024
    Dataset provided by
    U.S. Fish and Wildlife Service (http://www.fws.gov/)
    Authors
    U.S. Fish & Wildlife Service
    Area covered
    Description

    This image classification of forest cover in the MAV was created using Google Dynamic World (https://www.nature.com/articles/s41597-022-01307-4 - https://dynamicworld.app/) to determine what was classified as forest. This dataset is the result of an automated land classification for every Sentinel image that is released. The code used for this process is as follows:

    ee.ImageCollection('GOOGLE/DYNAMICWORLD/V1')
      .filterBounds(geometry)
      .filterDate(oldstartDate, oldendDate)
      .select('label')
      .mode()
      .eq(1)
      .updateMask(urban)

    We selected the Dynamic World dataset and filtered it to our area of interest using the extent of the Lower Mississippi Valley Joint Venture boundary (i.e. the Mississippi Alluvial Valley and West Gulf Coastal Plain ecological bird conservation regions (BCRs)). We filtered the dataset to a start and end date spanning the first day of 2021 to the last day of 2021. In this dataset, each class has a band that represents the probability of that pixel having complete coverage of that class (https://developers.google.com/earth-engine/datasets/catalog/GOOGLE_DYNAMICWORLD_V1#bands). Data accuracy was assessed at ~82% and data resolution is 10 m. Each image has a ‘label’ band with a discrete classification of LULC, plus 9 probability bands with class-specific probability scores generated by the deep learning model on the basis of the pixel’s spatial context. To generate an annual LULC composite comparable with WC and Esri, we calculated the mode of the predicted LULC class in the ‘label’ band of all DW images for 2020. Michael Mitchell with Ducks Unlimited Southern Regional Office led the development of this effort, in coordination and collaboration with Lower Mississippi Valley Joint Venture staff.
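    The mode-compositing step in the Earth Engine snippet above can be mimicked locally. A minimal numpy sketch, assuming a toy stack of 'label' values (class 1 is Trees in Dynamic World, matching the .eq(1) call):

```python
import numpy as np

# Toy time series of Dynamic World 'label' values: (time, row, col).
labels = np.array([[[1, 2]],
                   [[1, 0]],
                   [[2, 2]]])

# Per-pixel mode over the time axis, analogous to .mode() on 'label'.
mode = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, labels)

# Forest mask, analogous to .eq(1): class 1 is 'trees'.
forest = (mode == 1)
```

    The first pixel is labeled Trees in two of three dates, so the composite keeps class 1 there; the second pixel's most frequent class is 2.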

  7. Lake McConaughy Lake Dynamic Tracking Data

    • figshare.com
    xlsx
    Updated Feb 2, 2022
    Cite
    David Weekley (2022). Lake McConaughy Lake Dynamic Tracking Data [Dataset]. http://doi.org/10.6084/m9.figshare.8080664.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Feb 2, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    David Weekley
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    McConaughy Lake
    Description

    This dataset contains long-term lake dynamics (surface elevation, area, and volume) derived from topographic datasets and Landsat imagery in Google Earth Engine.

  8. The Dynamic World Global Surface Water Data: 2015-2023 (version 1)

    • search-orc-1.dataone.org
    • dataone.org
    • +1more
    Updated Dec 13, 2025
    + more versions
    Cite
    Adnan Rajib; Arushi Khare (2025). The Dynamic World Global Surface Water Data: 2015-2023 (version 1) [Dataset]. http://doi.org/10.4211/hs.9d60389f55b648149a788a2ff7bc3766
    Explore at:
    Dataset updated
    Dec 13, 2025
    Dataset provided by
    Hydroshare
    Authors
    Adnan Rajib; Arushi Khare
    Time period covered
    Jan 1, 2015 - Dec 31, 2023
    Area covered
    Description

    Advances in data availability, Earth observation technologies, and geospatial sciences have transformed our ability to map Global Surface Water Extents (GSWE). However, traditional GSWE mapping has been limited to static estimates, with more recent efforts focusing on annual averages and temporal attributes like frequency and occurrence of long-term variations. We harnessed the remotely sensed, Sentinel-2-based near-real-time Dynamic World land cover product to produce the first public, routinely available 10-meter resolution global surface water datasets. Our key contribution is an Open Science operational framework to rapidly extract the latest available Dynamic World products every 2-5 days, run geospatial analytics, and create actionable water information for educators, researchers, and stakeholders at any scale of practical interest.
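    Extracting surface water extent from a Dynamic World 'label' patch reduces to selecting the water class (index 0) and scaling by the 10 m pixel size. A minimal sketch with a toy array, not the project's actual workflow code:

```python
import numpy as np

labels = np.array([[0, 0, 4],
                   [0, 7, 0]])  # toy Dynamic World 'label' patch; 0 = water

water = labels == 0                # boolean water mask
water_pixels = int(water.sum())
area_m2 = water_pixels * 10 * 10   # each 10 m pixel covers 100 m^2
area_km2 = area_m2 / 1e6
```

    Repeating this on each new Dynamic World scene (every 2-5 days) yields the kind of routinely updated water-extent time series the dataset describes.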

    This dataset was developed by the Hydrology & Hydroinformatics Innovation Lab at the University of Texas at Arlington, United States.

  9. Dynamic World Private Limited Import Export Data & Shipment Details

    • eximpedia.app
    Updated Mar 28, 2025
    Cite
    (2025). Dynamic World Private Limited Import Export Data & Shipment Details [Dataset]. https://www.eximpedia.app/companies/dynamic-world-private-limited/65260955
    Explore at:
    Dataset updated
    Mar 28, 2025
    Description

    View Dynamic World Private Limited import export trade data, including shipment records, HS codes, top buyers, suppliers, trade values, and global market insights.

  10. Data from: Our Dynamic World

    • storymaps-k12.hub.arcgis.com
    Updated Aug 6, 2021
    Cite
    Esri K12 GIS Organization (2021). Our Dynamic World [Dataset]. https://storymaps-k12.hub.arcgis.com/datasets/our-dynamic-world
    Explore at:
    Dataset updated
    Aug 6, 2021
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri K12 GIS Organization
    Description

    Summary: Creating the world’s first open-source, high-resolution land cover map of the world
    Storymap metadata page: URL forthcoming
    Possible K-12 Next Generation Science standards addressed:
    Grade level(s) K: Standard K-ESS3-1 - Earth and Human Activity - Use a model to represent the relationship between the needs of different plants or animals (including humans) and the places they live
    Grade level(s) K: Standard K-ESS3-3 - Earth and Human Activity - Communicate solutions that will reduce the impact of humans on the land, water, air, and/or other living things in the local environment
    Grade level(s) 2: Standard 2-ESS2-1 - Earth’s Systems - Compare multiple solutions designed to slow or prevent wind or water from changing the shape of the land
    Grade level(s) 2: Standard 2-ESS2-2 - Earth’s Systems - Develop a model to represent the shapes and kinds of land and bodies of water in an area
    Grade level(s) 3: Standard 3-LS4-1 - Biological Evolution: Unity and Diversity - Analyze and interpret data from fossils to provide evidence of the organisms and the environments in which they lived long ago
    Grade level(s) 3: Standard 3-LS4-4 - Biological Evolution: Unity and Diversity - Make a claim about the merit of a solution to a problem caused when the environment changes and the types of plants and animals that live there may change
    Grade level(s) 4: Standard 4-ESS1-1 - Earth’s Place in the Universe - Identify evidence from patterns in rock formations and fossils in rock layers to support an explanation for changes in a landscape over time
    Grade level(s) 4: Standard 4-ESS2-2 - Earth’s Systems - Analyze and interpret data from maps to describe patterns of Earth’s features
    Grade level(s) 5: Standard 5-ESS2-1 - Earth’s Systems - Develop a model using an example to describe ways the geosphere, biosphere, hydrosphere, and/or atmosphere interact
    Grade level(s) 6-8: Standard MS-ESS2-2 - Earth’s Systems - Construct an explanation based on evidence for how geoscience processes have changed Earth’s surface at varying time and spatial scales
    Grade level(s) 6-8: Standard MS-ESS2-6 - Earth’s Systems - Develop and use a model to describe how unequal heating and rotation of the Earth cause patterns of atmospheric and oceanic circulation that determine regional climates
    Grade level(s) 6-8: Standard MS-ESS3-3 - Earth and Human Activity - Apply scientific principles to design a method for monitoring and minimizing a human impact on the environment
    Grade level(s) 9-12: Standard HS-ESS2-1 - Earth’s Systems - Develop a model to illustrate how Earth’s internal and surface processes operate at different spatial and temporal scales to form continental and ocean-floor features
    Grade level(s) 9-12: Standard HS-ESS2-7 - Earth’s Systems - Construct an argument based on evidence about the simultaneous coevolution of Earth’s systems and life on Earth
    Grade level(s) 9-12: Standard HS-ESS3-4 - Earth and Human Activity - Evaluate or refine a technological solution that reduces impacts of human activities on natural systems
    Grade level(s) 9-12: Standard HS-ESS3-6 - Earth and Human Activity - Use a computational representation to illustrate the relationships among Earth systems and how those relationships are being modified due to human activity
    Most frequently used words: areas, land, classes
    Approximate Flesch-Kincaid reading grade level: 9.7. The FK reading grade level should be considered carefully against the grade level(s) in the NGSS content standards above.

  11. Sentinel-2 10m Land Use/Land Cover Time Series

    • cacgeoportal.com
    • climat.esri.ca
    • +8more
    Updated Oct 19, 2022
    + more versions
    Cite
    Esri (2022). Sentinel-2 10m Land Use/Land Cover Time Series [Dataset]. https://www.cacgeoportal.com/datasets/cfcb7609de5f478eb7666240902d4d3d
    Explore at:
    Dataset updated
    Oct 19, 2022
    Dataset authored and provided by
    Esri (http://esri.com/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory’s deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft’s Planetary Computer, processing over 400,000 Earth observations per year.

    The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2024 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.

    Key Properties
    Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
    Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
    Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
    Extent: Global
    Source imagery: Sentinel-2 L2A
    Cell Size: 10 meters
    Type: Thematic
    Attribution: Esri, Impact Observatory
    Analysis: Optimized for analysis

    Class Definitions
    1 Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
    2 Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
    4 Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
    5 Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
    7 Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
    8 Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
    9 Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
    10 Clouds: No land cover information due to persistent cloud cover.
    11 Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

    NOTE: Land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

    Usage Information and Best Practices
    Processing Templates: This layer includes a number of preconfigured processing templates (raster function templates) to provide on-the-fly data rendering and class isolation for visualization and analysis. Each processing template includes labels and descriptions to characterize the intended usage. This may include for visualization, for analysis, or for both visualization and analysis.
    Visualization: The default rendering on this layer displays all classes. There are a number of on-the-fly renderings/processing templates designed specifically for data visualization. By default, the most recent year is displayed. To discover and isolate specific years for visualization in Map Viewer, try using the Image Collection Explorer.
    Analysis: In order to leverage the optimization for analysis, the capability must be enabled by your ArcGIS organization administrator. More information on enabling this feature can be found in the ‘Regional data hosting’ section of this help doc. Optimized for analysis means this layer does not have size constraints for analysis and it is recommended for multisource analysis with other layers optimized for analysis. See this group for a complete list of imagery layers optimized for analysis. Prior to running analysis, users should always provide some form of data selection with either a layer filter (e.g. for a specific date range, cloud cover percent, mission, etc.) or by selecting specific images. To discover and isolate specific images for analysis in Map Viewer, try using the Image Collection Explorer. Zonal Statistics is a common tool used for understanding the composition of a specified area by reporting the total estimates for each of the classes.
    General: If you are new to Sentinel-2 LULC, the Sentinel-2 Land Cover Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide. Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth.

    Classification Process
    These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft’s Planetary Computer and scaled using Microsoft Azure Batch.

    Citation: Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements: Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
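    The zonal-statistics idea mentioned above (reporting class composition within an area of interest) can be sketched with numpy; the toy class raster and zone mask below are assumptions, using the class values from the definitions table:

```python
import numpy as np

lulc = np.array([[1,  2,  2],
                 [11, 11, 7]])            # toy class raster (1 Water, 2 Trees, 7 Built, 11 Rangeland)
zone = np.array([[True, True, False],
                 [True, True, False]])    # pixels inside the zone of interest

# Per-class share of the zone's pixels.
vals, counts = np.unique(lulc[zone], return_counts=True)
composition = {int(v): float(n) / counts.sum() for v, n in zip(vals, counts)}
```

    Multiplying each share by the zone's area (pixel count times 100 m² at 10 m resolution) turns these proportions into area estimates per class.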

  12. Copernicus Global Land Cover Layers: CGLS-LC100 Collection 3

    • developers.google.com
    Cite
    Copernicus, Copernicus Global Land Cover Layers: CGLS-LC100 Collection 3 [Dataset]. http://doi.org/10.5281/ZENODO.3518036
    Explore at:
    Dataset provided by
    Copernicus
    Time period covered
    Jan 1, 2015 - Dec 31, 2019
    Area covered
    Earth
    Description

    The Copernicus Global Land Service (CGLS) is earmarked as a component of the Land service to operate a multi-purpose service component that provides a series of bio-geophysical products on the status and evolution of land surface at global scale. The Dynamic Land Cover map at 100 m resolution (CGLS-LC100) is …

  13. Map of built-up expansion ("nedbygging") over Norway 2017-2022 version 2

    • zenodo.org
    • resodate.org
    • +1more
    zip
    Updated Jun 27, 2024
    Cite
    Zander Venter; Nyborg Støstad Mads; Solvang Ruben; Kumano-Ensby Anne Linn; Mon Su Thet (2024). Map of built-up expansion ("nedbygging") over Norway 2017-2022 version 2 [Dataset]. http://doi.org/10.5281/zenodo.12566926
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 27, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zander Venter; Nyborg Støstad Mads; Solvang Ruben; Kumano-Ensby Anne Linn; Mon Su Thet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Norway
    Description

    Version 2 of the dataset https://zenodo.org/records/10566644

    Changes from first version include:

    Data can be viewed interactively here: https://nina.earthengine.app/view/nedbygging

    (see Norwegian description below)

    1. Dataset Information

    - Title: Map of built-up expansion over Norway 2017-2022

    - Author(s): Zander Venter (NINA), Mads Nyborg Støstad (NRK), Ruben Solvang (NRK), Anne Linn Kumano-Ensby (NRK), Su Thet Mon (NRK)

    - Contact Information: zander.venter@nina.no

    - Date of Data Generation: 06.01.2024

    - Version: 1

    - Description: This is the dataset used in the NRK article published on 06.01.2024. The data contains polygons outlining potential “nedbygging” (hereafter translated to “built-up expansion” in English) events between 2017 and 2022 over Norway. The built-up expansion polygons were identified using a combination of Sentinel-2 satellite imagery, a fully convolutional neural network (a type of AI model) from Google called Dynamic World, and NINA’s time series analysis thereof. The method used to create the map will be published by NINA at a later date. The original map was created by NINA, but NRK performed some post-processing, which included joining polygons that were part of the same built-up expansion event (e.g. a long road). It is important to note that the map is a result of AI and contains errors. Therefore, users are encouraged to read the sections on data quality and usage information below. Users can refer to Venter et al. (2024) for details on the scientific best practice which the NRK journalists followed to ensure that the area estimates reported in the article were not biased. In summary, the map is wrong 18% of the time: users should expect that, on average, 1 in 5 square meters is incorrectly identified as built-up expansion. There are also many instances of built-up expansion that the map will miss, such as forestry road development, the building of small cabins, etc.

    2. File Details

    - Format: Shapefile (.shp, .shx, .dbf, .prj)

    - Size: 13.27 MB

    3. Geospatial Information

    - Coordinate System: EPSG:32632, UTM zone 32N

    - Spatial Resolution: 10m

    - Geographical Coverage: Norway mainland (excludes Svalbard)

    - Temporal Coverage: 2017 to 2022

    4. Data Content

    - Attributes Included:

    - *id*: unique identity number for each polygon

    - *undersøkt*: whether the polygon has been investigated manually using visual interpretation of orthophotos. “ja” = “yes” and “nei” = “no”

    - *undersøkt_source*: whether the data was collected by the NRK team or the crowdsourcing effort

    - *kategori_1*: the type of built-up expansion labelled by the NRK team - see Google Translate for translations

    - *year*: the year in which the built-up expansion occurred as defined by the crowdsourcing volunteers

    - *ai_feil*: whether the AI model method correctly (“riktig”) or incorrectly (“feil”) identified natural habitat conversion to built-up surface. Values where *undersøkt* == “nei” are labelled as “ikke_verifisert”

    5. Data Quality

    - Accuracy: As described above, the false positive rate of the map was 18%, based on 500 locations used for map validation and accuracy assessment. We did not quantify a false negative rate or balanced accuracy estimates because this would have required a denser sample for manual verification. It is therefore likely that there are many instances of built-up expansion that our map does not capture. After the formal accuracy assessment using the 500 stratified random points, NRK verified additional polygons (3875 in total) in the dataset during their investigative journalism workflow. Although these were not collected in a systematic manner, they can still be useful for some downstream tasks, such as exploring what causes the AI model to misidentify built-up expansion.

    - Validation Methods: A design-based approach was used to quantify map accuracy and estimate uncertainty around the resulting area estimate reported in the NRK article. The details of this method are reported in Venter et al. (2024). This approach quantifies the error in the AI-derived map and corrects for it using a stratified area estimator. The total built-up expansion of 208 km² reported in the NRK article has therefore been bias-corrected. We also quantified a 95% confidence interval of 9.8 km² around this area estimate. It is important to note that the validation was conducted on individual Sentinel-2 pixels of 10x10 m and not at the polygon level. We therefore did not quantify how precisely the polygon shapes capture the full extent of a given built-up expansion event.

    6. Usage Information

    - Use Limitations: Considering the map error described above, users should proceed with caution when analysing the map to derive area statistics or overlays with other maps. As described in Venter et al. (2024), simply adding the areas of the polygons (or “pixel counting” with maps formatted as images) without accounting for the error in the map will lead to incorrect area statistics. We recommend that users validate the map for their municipality or study area before proceeding with analysis. It is likely that the margin of error is highly variable between municipalities. For example, although we have not quantified it, we noticed many AI mistakes in mountainous regions due to snow and ice interference and therefore high-altitude municipalities might have more errors than low-altitude ones.
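    The pixel-counting pitfall and the stratified correction described above can be illustrated with a small numerical sketch. All numbers below are invented for illustration (they are not the NRK/NINA validation data); the method follows the general stratified area estimator of Olofsson et al. (2013):

    ```python
    # Hypothetical sketch of a design-based (stratified) area correction.
    # Numbers are made up; they are NOT the NRK/NINA validation data.
    import numpy as np

    # Map strata: [mapped "expansion", mapped "no expansion"], areas in km^2.
    mapped_area = np.array([250.0, 323500.0])
    area_prop = mapped_area / mapped_area.sum()  # stratum weights W_i

    # Validation sample counts: rows = map stratum, cols = reference class
    # (expansion, no expansion). Row 0 reflects an 18% false positive rate.
    n = np.array([[410, 90],
                  [5, 495]])
    row_prop = n / n.sum(axis=1, keepdims=True)  # p(reference class | stratum)

    # Bias-corrected area per reference class: sum over strata of W_i * p_ik,
    # rescaled back to km^2. Naive pixel counting would report mapped_area.
    adjusted_area = (area_prop[:, None] * row_prop).sum(axis=0) * mapped_area.sum()
    print(adjusted_area)  # corrected km^2 for [expansion, no expansion]
    ```

    The correction redistributes area between strata according to the confusion matrix, which is why summing polygon areas directly (pixel counting) gives a biased estimate.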

    Norwegian description:

    1. Datasettinformasjon

    - Tittel: Kart over nedbygging over Norge 2017-2022

    - Forfatter(e): Zander Venter (NINA), Mads Nyborg Støstad (NRK), Ruben Solvang (NRK), Anne Linn Kumano-Ensby (NRK), Su Thet Mon (NRK)

    - Kontaktinformasjon: zander.venter@nina.no

    - Dato for datagenerering: 06.01.2024

    - Versjon: 1

    - Beskrivelse: Dette er datasettet som brukes i NRK-artikkelen publisert 06.01.2024. Dataene inneholder polygoner som avgrenser potensiell nedbygging mellom 2017 og 2022 over Norge. Nedbyggingsområdene ble identifisert ved hjelp av en kombinasjon av Sentinel-2-satellittbilder, et fullstendig konvolusjonelt nevralt nettverk (en type KI-modell) fra Google kalt Dynamic World, og NINAs tidsserieanalyse av dette. Metoden for å lage kartet vil bli publisert av NINA på et senere tidspunkt. Det originale kartet ble laget av NINA, men NRK utførte en del etterbehandling som inkluderte sammenføyning av noen polygoner som var en del av den samme nedbyggingshendelsen (f.eks. en lang vei). Det er viktig å merke seg at kartet er produsert ved hjelp av kunstig intelligens og inneholder feil. Derfor oppfordres brukere til å lese avsnittene om datakvalitet og bruksinformasjon nedenfor. Brukere kan referere til Venter et al. (2024) for detaljer om den vitenskapelige beste praksisen som NRK-journalistene fulgte for å sikre at deres rapporterte arealstatistikk i artikkelen er korrekt. Oppsummert er 18 % av arealet i kartet feil. Brukere bør forvente at i gjennomsnitt 1 av 5 kvadratmeter er feilaktig identifisert som nedbygging. Det er også mange tilfeller av nedbygging som ikke vil vises i kartet, som skogsveiutbygging, bygging av småhytter mm.

    2. Fildetaljer

    - Format: Shapefil (.shp, .shx, .dbf, .prj)

    - Størrelse: 13,27 MB

    3. Geospatial informasjon

    - Koordinatsystem: EPSG:32632, UTM-sone 32N

    - Romlig oppløsning: 10m

    - Geografisk dekning: Norges fastland (ekskluderer Svalbard)

    - Tidsmessig dekning: 2017 til 2022

    4. Datainnhold

    - Attributter inkludert:

    - *id*: unikt identitetsnummer for hver polygon

    - *undersøkt*: om polygonet er undersøkt manuelt ved bruk av visuell tolkning av ortofoto.

    - *undersøkt_source*: om dataene er samlet inn av NRK-teamet eller crowdsourcing-innsatsen

    - *kategori_1*: typen nedbygging merket av NRK-teamet

    - *year*: året hvor nedbygging skjedde som definert av crowdsourcing

    - *ai_feil*: om AI-modellmetoden var “riktig” eller “feil”. Verdier der *undersøkt* == «nei» er merket som «ikke_verifisert»

    5. Datakvalitet

    - Nøyaktighet: Som beskrevet ovenfor var andelen falske positive punkter i kartet 18 % basert på 500 steder (prøveflater) brukt for kartvalidering og nøyaktighetsvurdering. Vi kvantifiserte ikke andelen falske negative punkter.

  14. Land Cover 2024 & Time Series

    • opendata.rcmrd.org
    Updated Mar 29, 2025
    Cite
    MapMaker (2025). Land Cover 2024 & Time Series [Dataset]. https://opendata.rcmrd.org/datasets/2f73767ada1e4e2cb3041f56abb1b314
    Explore at:
    Dataset updated
    Mar 29, 2025
    Dataset authored and provided by
    MapMaker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Land cover describes what is visible on Earth’s surface, such as forests, grasslands, cropland, and built places. Knowing what land cover is in each location can help us understand a landscape. Tracking land cover over many years can tell us how the landscape is changing over time. It allows us to track urban growth, measure the loss of wetlands and wild places to prioritize conservation and preservation efforts, and plan for the potential impacts of climate change. For example, a study published in Nature (https://doi.org/10.1038/s41467-021-22702-2) found that from 1960-2019, humans changed the use of approximately 32 percent of the land worldwide, an estimate four times higher than previously thought.

    This map layer was created using data from the Copernicus Sentinel-1 and Sentinel-2 Earth observation satellites, collected by the European Space Agency from 2017-2024. This map displays data from 2024. Each value in this map layer applies to a single grid cell of 10 meters by 10 meters (about 33 feet by 33 feet), covering roughly 100 square meters. All these grid cells combined form a raster: a dataset made of rows and columns of cells that include regularly spaced data for all the land area on Earth. If you zoom in, the dataset appears to pixelate, with each pixel being one such cell. Within each cell, the predominant land category is assigned. For example, if a cell overlaps a park with many trees but catches part of the parking lot next to it where hikers leave their cars or bikes, the cell would be classified as tree cover. If you select an imagery basemap and adjust the transparency of the map layer, you will be able to find such examples.

    The data on this map has been sorted into nine categories (water, trees, flooded vegetation, crops, built area, bare ground, snow and ice, clouds, and rangeland) by a computer using machine learning. Machine learning is part of artificial intelligence. It uses computer systems designed to process large amounts of information and learn and adapt based on the data without specific instructions or code from a programmer. This technology allows us to use data that would otherwise take so long to examine that it would no longer be useful, as it would be out of date. What is the most common land cover near you?

    Data Citation
    Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements
    Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
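    Turning a categorical 10 m raster like this into per-class area totals is a simple counting exercise. A minimal sketch with a toy NumPy array standing in for a real tile (a real workflow would read the GeoTIFF with a library such as rasterio):

    ```python
    # Hypothetical sketch: per-class areas from a categorical 10 m raster.
    import numpy as np

    CELL_AREA_M2 = 10 * 10  # each cell is 10 m x 10 m = 100 m^2

    # Toy "raster" tile; class codes follow the nine-category scheme above
    # (1 = water, 2 = trees, 7 = built area).
    tile = np.array([[1, 1, 2],
                     [2, 2, 7],
                     [7, 7, 7]])

    # Count cells per class and convert counts to square meters.
    classes, counts = np.unique(tile, return_counts=True)
    areas_m2 = dict(zip(classes.tolist(), (counts * CELL_AREA_M2).tolist()))
    print(areas_m2)  # {1: 200, 2: 300, 7: 400}
    ```

    Note that raw pixel counting like this inherits any classification error in the map; for unbiased area statistics a design-based correction (as discussed elsewhere in this catalog) is preferable.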

  15. Sentinel-2 10m Land Use Land Cover Time Series

    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    • opendata.rcmrd.org
    Updated Oct 2, 2024
    Cite
    Geospatial Analysis Lab (GsAL) at USF (2024). Sentinel-2 10m Land Use Land Cover Time Series [Dataset]. https://arc-gis-hub-home-arcgishub.hub.arcgis.com/content/42945cf091f84444ab43c9850959edc3
    Explore at:
    Dataset updated
    Oct 2, 2024
    Dataset authored and provided by
    Geospatial Analysis Lab (GsAL) at USF
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory’s deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft’s Planetary Computer, processing over 400,000 Earth observations per year.

    The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2023 are based upon a more complete set of imagery, so the year 2017 may have less accurate land cover class assignments than the years 2018-2023.

    Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023
    Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
    Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
    Extent: Global
    Source imagery: Sentinel-2 L2A
    Cell Size: 10 meters
    Type: Thematic
    Attribution: Esri, Impact Observatory

    What can you do with this layer?
    Global land use/land cover maps provide information for conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth. This layer can also be used in analyses that require land use/land cover input. For example, the Zonal toolset allows a user to understand the composition of a specified area by reporting the total estimates for each of the classes. NOTE: A land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

    Class definitions
    1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
    2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
    4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
    5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
    7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
    8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
    9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
    10. Clouds: No land cover information due to persistent cloud cover.
    11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Also includes mixes of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock, and scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

    Classification Process
    These maps include Version 003 of the global Sentinel-2 land use/land cover data product. It is produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft’s Planetary Computer and scaled using Microsoft Azure Batch.

    Citation
    Karra, Kontgis, et al. “Global land use/land cover with Sentinel-2 and deep learning.” IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

    Acknowledgements
    Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
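    The integer class values above can be decoded to names and summarized programmatically. A minimal sketch; the `composition` helper is hypothetical, and note that values 3 and 6 are unused in this nine-class product:

    ```python
    # Hypothetical sketch: decoding class values and summarizing composition.
    # Class names follow the table above; values 3 and 6 are unused here.
    CLASS_NAMES = {
        1: "Water", 2: "Trees", 4: "Flooded vegetation", 5: "Crops",
        7: "Built Area", 8: "Bare ground", 9: "Snow/Ice",
        10: "Clouds", 11: "Rangeland",
    }

    def composition(pixels):
        """Return the share of each named class in an iterable of pixel values."""
        total = len(pixels)
        counts = {}
        for v in pixels:
            name = CLASS_NAMES.get(v, "Unknown")
            counts[name] = counts.get(name, 0) + 1
        return {name: c / total for name, c in counts.items()}

    print(composition([1, 2, 2, 11]))  # {'Water': 0.25, 'Trees': 0.5, 'Rangeland': 0.25}
    ```

    This is the kind of per-zone summary the Zonal toolset mentioned above produces, here reduced to plain Python for clarity.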

  16. Pixel class probabilities investigations data

    • data.mendeley.com
    Updated Nov 6, 2023
    Cite
    Daniel Myers (2023). Pixel class probabilities investigations data [Dataset]. http://doi.org/10.17632/zyds7t4pst.1
    Explore at:
    Dataset updated
    Nov 6, 2023
    Authors
    Daniel Myers
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pixel class probabilities investigations data

    These data are associated with the manuscript:

    Land cover pixel class probabilities create customizable layers for forested and urban landscapes
    Daniel T. Myers1* (ORCID 0000-0002-1932-5775), Diana Oviedo-Vargas1, Melinda Daniels1, Yog Aryal2

    1 Stroud Water Research Center, 970 Spencer Road, Avondale, Pennsylvania 19311, USA
    2 Department of Geography, Indiana University Bloomington, Student Building 120, 701 E. Kirkwood Avenue, Bloomington, IN 47405, USA
    * Corresponding author (dmyers@stroudcenter.org)

    They can be analyzed using the following scripts: https://github.com/Danmyers901/Calibration/tree/master/Pixel_class_probabilities

    Our data includes water quality measurements from the United States National Park Service, and remotely sensed landcover images from Dynamic World. Water quality data were downloaded from the Water Quality Portal at https://www.waterqualitydata.us/ using the Project ID search term “NCRNWQ01”.
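    As a sketch, the water quality download described above can be expressed as a Water Quality Portal query URL. The endpoint path and parameter names (`project`, `mimeType`) are assumptions about the portal's REST interface and should be checked against current WQP documentation before use:

    ```python
    # Hypothetical sketch: building a Water Quality Portal query URL for the
    # project ID used in this dataset. Endpoint path and parameter names are
    # assumptions; verify against the WQP web-services documentation.
    from urllib.parse import urlencode

    base = "https://www.waterqualitydata.us/data/Result/search"
    params = {"project": "NCRNWQ01", "mimeType": "csv"}
    url = f"{base}?{urlencode(params)}"
    print(url)
    ```

    The resulting URL can then be fetched with any HTTP client to retrieve the results as CSV.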

    Brown, C. F. et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Scientific Data 2022 9:1 9, 1–17 (2022).

    Norris, M., Pieper, J., Watts, T. & Cattani, A. National Capital Region Network Inventory and Monitoring Program Water Chemistry and Quantity Monitoring Protocol Version 2.0 Water chemistry, nutrient dynamics, and surface water dynamics vital signs. Natural Resource Report NPS/NCRN/NRR—2011/423 (2011).

    References for other data sources and packages we used for model development and analyses are below:

    Ries, K. G., III et al. StreamStats, version 4. Fact Sheet (2017). doi:10.3133/FS20173046.

    Jin, S. et al. Overall Methodology Design for the United States National Land Cover Database 2016 Products. Remote Sensing 2019, Vol. 11, Page 2971 11, 2971 (2019).

    Lindsay, J. B. The Whitebox Geospatial Analysis Tools Project and Open-Access GIS. (2022).

    United States Department of Agriculture. National Agriculture Imagery Program (NAIP) - Catalog. https://catalog.data.gov/dataset/national-agriculture-imagery-program-naip.

    For more information contact:

    Dan Myers, PhD Postdoctoral Associate Stroud Water Research Center 970 Spencer Road, Avondale, PA 19311 610-268-2153 ext. 1274 dmyers@stroudcenter.org www.stroudcenter.org

  17. Data from: Integrated Approach to Global Land Use and Land Cover Reference...

    • zenodo.org
    bin, zip
    Updated Oct 18, 2024
    Cite
    Bernard Silva de Oliveira; Nathália Monteiro Teles; Vinícius Vieira Mesquita; Leandro Leal Parente; Laerte Guimarães Ferreira (2024). Integrated Approach to Global Land Use and Land Cover Reference Data Harmonization [Dataset]. http://doi.org/10.5281/zenodo.13951976
    Explore at:
    Available download formats: bin, zip
    Dataset updated
    Oct 18, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Bernard Silva de Oliveira; Nathália Monteiro Teles; Vinícius Vieira Mesquita; Leandro Leal Parente; Laerte Guimarães Ferreira
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    INTRODUCTION

    This document outlines the creation of a global inventory of reference samples and Earth Observation (EO) / gridded datasets for the Global Pasture Watch (GPW) initiative. This inventory supports the training and validation of machine-learning models for GPW grassland mapping. This documentation outlines methodology, data sources, workflow, and results.

    Keywords: Grassland, Land Use, Land Cover, Gridded Datasets, Harmonization

    OBJECTIVES

    • Create a global inventory of existing reference samples for land use and land cover (LULC);

    • Compile global EO / gridded datasets that capture LULC classes and harmonize them to match the GPW classes;

    • Develop automated scripts for data harmonization and integration.

    DATA COLLECTION

    Datasets incorporated:

    Dataset | Spatial distribution | Time period | Number of individual samples
    WorldCereal | Global | 2016-2021 | 38,267,911
    Global Land Cover Mapping and Estimation (GLanCE) | Global | 1985-2021 | 31,061,694
    EuroCrops | Europe | 2015-2022 | 14,742,648
    GeoWiki G-GLOPS training dataset | Global | 2021 | 11,394,623
    MapBiomas Brazil | Brazil | 1985-2018 | 3,234,370
    Land Use/Land Cover Area Frame Survey (LUCAS) | Europe | 2006-2018 | 1,351,293
    Dynamic World | Global | 2019-2020 | 1,249,983
    Land Change Monitoring, Assessment, and Projection (LCMap) | U.S. (CONUS) | 1984-2018 | 874,836
    GeoWiki 2012 | Global | 2011-2012 | 151,942
    PREDICTS | Global | 1984-2013 | 16,627
    CropHarvest | Global | 2018-2021 | 9,714

    Total: 102,355,642 samples

    WORKFLOW

    Harmonization Process

    We harmonized global reference samples and EO/gridded datasets to align with GPW classes, optimizing their integration into the GPW machine-learning workflow.

    We considered reference samples derived by visual interpretation with spatial support of at least 30 m (Landsat and Sentinel), that could represent LULC classes for a point or region.

    Each dataset was processed using automated Python scripts to download vector files and convert the original LULC classes into the following GPW classes:

    0. Other land cover

    1. Natural and Semi-natural grassland

    2. Cultivated grassland

    3. Crops and other related agricultural practices

    We empirically assigned a weight to each sample based on the original dataset's class description, reflecting the level of mixture within the class. The weights range from 1 (Low) to 3 (High), with higher weights indicating greater mixture. Samples with low mixture levels are more accurate and effective for differentiating typologies and for validation purposes.

    The harmonized dataset includes these columns:

    Attribute Name | Definition
    dataset_name | Original dataset name
    reference_year | Reference year of samples from the original dataset
    original_lulc_class | LULC class from the original dataset
    gpw_lulc_class | Global Pasture Watch LULC class
    sample_weight | Sample's weight based on the mixture level within the original LULC class
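    The harmonization step described above (original label to GPW class, plus a mixture weight) can be sketched as a simple lookup. The source labels and their mappings below are invented for illustration; only the four GPW classes and the 1-3 weight scale come from the text:

    ```python
    # Hypothetical sketch of the class-harmonization step. Source labels and
    # their mappings are invented; the GPW classes and weight scale are from
    # the documentation above.
    GPW_CLASSES = {
        0: "Other land cover",
        1: "Natural and Semi-natural grassland",
        2: "Cultivated grassland",
        3: "Crops and other related agricultural practices",
    }

    # (gpw_lulc_class, sample_weight: 1 = low mixture ... 3 = high mixture)
    HARMONIZATION = {
        "natural_grassland": (1, 1),
        "pasture": (2, 2),
        "cropland": (3, 1),
        "urban": (0, 1),
    }

    def harmonize(records):
        """Attach gpw_lulc_class and sample_weight to raw sample records."""
        out = []
        for rec in records:
            gpw_class, weight = HARMONIZATION[rec["original_lulc_class"]]
            out.append({**rec, "gpw_lulc_class": gpw_class, "sample_weight": weight})
        return out

    samples = harmonize([{"dataset_name": "demo", "reference_year": 2020,
                          "original_lulc_class": "pasture"}])
    print(samples[0]["gpw_lulc_class"], samples[0]["sample_weight"])  # 2 2
    ```

    In the actual workflow each source dataset would carry its own lookup table of this shape, applied by the automated Python scripts mentioned above.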

    ACKNOWLEDGMENTS

    The development of this global inventory of reference samples and EO/gridded datasets relied on valuable contributions from various sources. We would like to express our sincere gratitude to the creators and maintainers of all datasets used in this project.

    REFERENCES

    • Brown, C.F., Brumby, S.P., Guzder-Williams, B. et al. Dynamic World, Near real-time global 10 m land use land cover mapping. Sci Data 9, 251 (2022). https://doi.org/10.1038/s41597-022-01307-4

    • Van Tricht, K. et al. WorldCereal: a dynamic open-source system for global-scale, seasonal, and reproducible crop and irrigation mapping. Earth Syst. Sci. Data 15, 5491–5515 (2023). 10.5194/essd-15-5491-2023

    • Buchhorn, M.; Smets, B.; Bertels, L.; De Roo, B.; Lesiv, M.; Tsendbazar, N.E., Linlin, L., Tarko, A. (2020): Copernicus Global Land Service: Land Cover 100m: Version 3 Globe 2015-2019: Product User Manual; Zenodo, Geneve, Switzerland, September 2020; doi: 10.5281/zenodo.3938963

    • d’Andrimont, R. et al. Harmonised LUCAS in-situ land cover and use database for field surveys from 2006 to 2018 in the European Union. Sci. Data 7, 352, 10.1038/s41597-019-0340-y (2020)

    • Fritz, S. et al. Geo-Wiki: An online platform for improving global land cover, Environmental Modelling & Software, 31, https://doi.org/10.1016/j.envsoft.2011.11.015 (2012)

    • Fritz, S., See, L., Perger, C. et al. A global dataset of crowdsourced land cover and land use reference data. Sci Data 4, 170075 https://doi.org/10.1038/sdata.2017.75 (2017)

    • Schneider, M., Schelte, T., Schmitz, F. & Körner, M. EuroCrops: The largest harmonized open crop dataset across the European Union. Sci. Data 10, 612, 10.1038/s41597-023-02517-0 (2023)

    • Souza, C. M. et al. Reconstructing Three Decades of Land Use and Land Cover Changes in Brazilian Biomes with Landsat Archive and Earth Engine. Remote. Sens. 12, 2735, 10.3390/rs12172735 (2020)

    • Stanimirova, R. et al. A global land cover training dataset from 1984 to 2020. Sci. Data 10, 879 (2023)

    • Stehman, S. V., Pengra, B. W., Horton, J. A. & Wellington, D. F. Validation of the U.S. Geological Survey’s Land Change Monitoring, Assessment and Projection (LCMAP) Collection 1.0 annual land cover products 1985–2017. Remote Sensing of Environment 265, 112646, 10.1016/j.rse.2021.112646 (2021).
    • Tsendbazar, N. et al. Product validation report (d12-pvr) v 1.1 (2021).

    • Tseng, G., Zvonkov, I., Nakalembe, C. L., & Kerner, H. (2021). CropHarvest: A global dataset for crop-type classification. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
  18. Mapping forests and deforestation activities in Madagascar using satellite...

    • zenodo.org
    bin, tiff, zip
    Updated Jul 31, 2024
    Cite
    Oladimeji Mudele; Marissa Childs; Jayden Personnat; Christopher Golden (2024). Mapping forests and deforestation activities in Madagascar using satellite imagery [Dataset]. http://doi.org/10.5281/zenodo.12775424
    Explore at:
    Available download formats: tiff, bin, zip
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Oladimeji Mudele; Marissa Childs; Jayden Personnat; Christopher Golden
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jul 30, 2024
    Area covered
    Madagascar
    Description

    This dataset includes output data from the following article:

    Oladimeji Mudele, Marissa Childs, Jayden Personnat, and Christopher Golden. 2024. Mapping forests and deforestation activities in Madagascar using satellite imagery. Under Review.

    • Forest agreement maps covering 2016 to 2020, obtained by comparing seven remote sensing data products, can be found in files with names in this format: agreement_map_{YEAR}_30m.tif
    • Forest extent maps for the year 2020 from all data available in that year can be found in file names with the format: {DATA_NAME}_2020_30m.tif, e.g. esri_2020_30M.tif
    • District-aggregated forested area data extracted and used for temporal analyses can be found in files with names formatted as: forested_area_dustrict_year_{DATA_NAME}.geojson, e.g. forested_area_dustrict_year_{DW}.geojson for Dynamic World data.
    • District-aggregated deforested area data extracted and used for temporal analyses can be found in files with names formatted as: deforested_area_dustrict_year_{DATA_NAME}.geojson, e.g. deforested_area_dustrict_year_{DW}.geojson for Dynamic World data.
    • Madagascar_admin_boundaries.zip contains the administrative boundaries of Madagascar, obtained from here.

    All raster data are available as GeoTIFF raster files at 30m resolution and coordinate reference system EPSG:4326.

  19. Google Global Landsat-based CCDC Segments (1999-2019)

    • developers.google.com
    Cite
    Google, Google Global Landsat-based CCDC Segments (1999-2019) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/GOOGLE_GLOBAL_CCDC_V1
    Explore at:
    Dataset provided by
    Google (http://google.com/)
    Time period covered
    Jan 1, 1999 - Jan 1, 2020
    Description

    This collection contains precomputed results from running the Continuous Change Detection and Classification (CCDC) algorithm on 20 years of Landsat surface reflectance data. CCDC is a break-point finding algorithm that uses harmonic fitting with a dynamic RMSE threshold to detect breakpoints in time-series data. The dataset was created from the Landsat 5, 7, and 8 Collection-1, Tier-1, surface reflectance time series, using all daytime images between 1999-01-01 and 2019-12-31. Each image was preprocessed to mask pixels identified as cloud, shadow, or snow (according to the 'pixel_qa' band), saturated pixels, and pixels with an atmospheric opacity > 300 (as identified by the 'sr_atmos_opacity' and 'sr_aerosol' bands). Pixels repeated in north/south scene overlap were deduplicated. The results were output in 2-degree tiles for all landmasses between -60° and +85° latitude. The images are suitable to simply mosaic() into one global image.

    The CCDC algorithm was run with the default algorithm parameters except for the following:

    tmaskBands: ['green', 'swir']
    minObservations: 6
    chiSquareProbability: 0.99
    minNumOfYearsScaler: 1.33
    dateFormat: 1 (fractional year)
    lambda: 20
    maxIterations: 25000

    Each pixel in the output is encoded using variable-length arrays. The outer length of each array (axis 0) corresponds to the number of breakpoints found at that location. The coefs bands contain 2-D arrays, where each inner array contains the scaling factors for the 8 terms in the linear harmonic model, in the order: [offset, t, cos(ωt), sin(ωt), cos(2ωt), sin(2ωt), cos(3ωt), sin(3ωt)], where ω = 2π. The models are scaled to produce reflectance units (0.0 - 1.0) for the optical bands and degrees (K) / 100.0 for the thermal band.

    Note that because the output bands are arrays, they can only be downsampled using a SAMPLE pyramiding policy. At lower zoom levels, the results are usually no longer representative of the full-resolution data, and, for instance, tile boundaries can be seen due to the downsampled masks. It is therefore not recommended to use this dataset at resolutions coarser than 240m/pixel. There are no current plans to add post-2019 assets to this dataset.
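    Given one segment's coefficient array, the 8-term harmonic model described above can be evaluated directly. A minimal sketch, assuming t is in fractional years (dateFormat: 1) and using made-up coefficients; the actual dataset may apply additional scaling, so treat this as illustrative:

    ```python
    # Hypothetical sketch: evaluating an 8-term CCDC harmonic segment model.
    # Coefficient order follows the description above:
    # [offset, t, cos(wt), sin(wt), cos(2wt), sin(2wt), cos(3wt), sin(3wt)].
    import math

    OMEGA = 2 * math.pi  # one cycle per year, with t in fractional years

    def ccdc_predict(coefs, t):
        """Evaluate offset + trend + three harmonic pairs at fractional year t."""
        c0, c1, c2, c3, c4, c5, c6, c7 = coefs
        return (c0 + c1 * t
                + c2 * math.cos(OMEGA * t) + c3 * math.sin(OMEGA * t)
                + c4 * math.cos(2 * OMEGA * t) + c5 * math.sin(2 * OMEGA * t)
                + c6 * math.cos(3 * OMEGA * t) + c7 * math.sin(3 * OMEGA * t))

    # A flat segment: reflectance 0.1 with no trend or seasonality (made-up values).
    print(ccdc_predict([0.1, 0, 0, 0, 0, 0, 0, 0], t=2015.5))  # 0.1
    ```

    In practice one would pick the segment whose start/end dates bracket t before evaluating, since each pixel can carry several segments separated by breakpoints.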

  20. India: Esri 2020 Land Cover (Mature Support)

    • hub.arcgis.com
    Updated Jun 25, 2021
    Cite
    GIS Online (2021). India: Esri 2020 Land Cover (Mature Support) [Dataset]. https://hub.arcgis.com/maps/1a1257b264bc4a6dbfc64a72f7d7b9ab
    Explore at:
    Dataset updated
    Jun 25, 2021
    Dataset authored and provided by
    GIS Online
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Important Note: This item is in mature support as of March 2022 and will be retired in December 2024. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.

This layer displays a global map of land use/land cover (LULC). The map is derived from ESA Sentinel-2 imagery at 10m resolution. It is a composite of LULC predictions for 10 classes throughout the year, generated to provide a representative snapshot of 2020.

Variable mapped: 2020 land use/land cover
Data Projection: Universal Transverse Mercator (UTM)
Mosaic Projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell Size: 10m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: July 2021

What can you do with this layer?
Global LULC maps provide information for conservation planning, food security, and hydrologic modeling, among other applications. This dataset can be used to visualize land use/land cover (LULC) anywhere on Earth, and can serve as input to analyses that require LULC data. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes. Individual GeoTIFF scenes can be downloaded here.

LULC processing
This map was produced by a deep learning model trained on over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, and red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map of 2020.

Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

Class definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, rock outcrop, or built-up features like docks. Examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 m or higher) dense vegetation, typically with a closed or dense canopy. Examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamps or mangroves (dense/tall vegetation with ephemeral water or a canopy too thick to detect water underneath).
3. Grass: Open areas covered in homogeneous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field). Examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures.
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded areas that are a mix of grass/shrub/trees/bare ground. Examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height. Examples: corn, wheat, soy, fallow plots of structured land.
6. Scrub/shrub: Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees. Examples: moderate to sparse cover of bushes, shrubs, and tufts of grass; savannas with very sparse grasses, trees, or other plants.
7. Built Area: Human-made structures; major road and rail networks; large homogeneous impervious surfaces, including parking structures, office buildings, and residential housing. Examples: houses, dense villages/towns/cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with little to no vegetation. Examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogeneous areas of permanent snow or ice, typically only in mountain areas or at the highest latitudes. Examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.

Global 2020 map accuracy assessment by Impact Observatory
Following best practices for accuracy assessment, Impact Observatory adjusted the acreage estimates for each class using its respective user's accuracy as computed from the comparison to the validation set. This approach also allowed Impact Observatory to produce a 95% confidence interval for each acreage estimate, providing users with a clearer picture of the accuracy and total area for each class. For more information, please see Olofsson et al., RSE (2013). The confusion matrix of pixel counts was evaluated against "three expert strict" gold-standard validation tiles; the model achieves an overall accuracy of 86% on the validation set.

Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.

Note: This dataset was produced by Impact Observatory for Esri. © 2021 Esri. This dataset is available under a Creative Commons BY 4.0 license, and any copy of or work based on this dataset requires the following attribution: This dataset is based on the dataset produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute. For questions, please email environment@esri.com.
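The acreage adjustment described in the accuracy assessment above can be sketched numerically. The following is a minimal illustration of the stratified area estimator from Olofsson et al. (2013) applied to a hypothetical two-class confusion matrix — the counts, areas, and class setup are invented for illustration and are not Impact Observatory's actual validation data:

```python
import math

# Minimal sketch of the Olofsson et al. (2013) stratified area estimator.
# All numbers below are hypothetical, not the real validation counts.

# Mapped area proportions W_i for two classes (e.g. 900 ha and 100 ha of 1000 ha).
W = [0.9, 0.1]
total_area = 1000.0  # hectares

# Confusion matrix n[i][j]: rows = mapped class, columns = reference class.
n = [[45, 5],
     [10, 40]]
row_totals = [sum(row) for row in n]
k = len(W)

# Error-adjusted area proportion of reference class j: sum_i W_i * n_ij / n_i.
p = [sum(W[i] * n[i][j] / row_totals[i] for i in range(k)) for j in range(k)]
adjusted_area = [pj * total_area for pj in p]

# Standard error of each adjusted proportion, then a 95% confidence half-width.
se = [
    math.sqrt(sum(
        W[i] ** 2 * (n[i][j] / row_totals[i]) * (1 - n[i][j] / row_totals[i])
        / (row_totals[i] - 1)
        for i in range(k)
    ))
    for j in range(k)
]
ci95 = [1.96 * s * total_area for s in se]

print(adjusted_area)  # class areas after adjustment by the confusion matrix
print(ci95)           # 95% confidence half-widths in hectares
```

With these invented counts, the 900 ha mapped class shrinks to an error-adjusted 830 ha, and the confidence half-width quantifies the remaining uncertainty — the same logic behind the per-class acreage intervals reported for the global map.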

Cite
Google, Dynamic World V1 [Dataset]. http://doi.org/10.1038/s41597-022-01307-4

Dynamic World V1

Dataset provided by
Google (http://google.com/)
World Resources Institute
Time period covered
Jun 27, 2015 - Mar 28, 2026
Area covered
Earth
Description

Dynamic World is a 10m near-real-time (NRT) Land Use/Land Cover (LULC) dataset that includes class probabilities and label information for nine classes. Dynamic World predictions are available for the Sentinel-2 L1C collection from 2015-06-27 to present. The revisit frequency of Sentinel-2 is between 2 and 5 days depending on latitude.

Dynamic World predictions are generated for Sentinel-2 L1C images with CLOUDY_PIXEL_PERCENTAGE <= 35%. Predictions are masked to remove clouds and cloud shadows using a combination of S2 Cloud Probability, Cloud Displacement Index, and Directional Distance Transform. Images in the Dynamic World collection have names matching the individual Sentinel-2 L1C asset names from which they were derived; e.g., ee.Image('COPERNICUS/S2/20160711T084022_20160711T084751_T35PKT') has a matching Dynamic World image named ee.Image('GOOGLE/DYNAMICWORLD/V1/20160711T084022_20160711T084751_T35PKT'). The probability bands (all bands except the "label" band) collectively sum to 1.

To learn more about the Dynamic World dataset and see examples for generating composites, calculating regional statistics, and working with the time series, see the Introduction to Dynamic World tutorial series.

Because Dynamic World class estimates are derived from single images, using spatial context from a small moving window, top-1 "probabilities" for predicted land covers that are in part defined by cover over time, such as crops, can be comparatively low in the absence of obvious distinguishing features. High-return surfaces in arid climates, sand, sunglint, etc., may also exhibit this phenomenon. To select only pixels that confidently belong to a Dynamic World class, it is recommended to mask Dynamic World outputs by thresholding the estimated probability of the top-1 prediction.
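The asset-name correspondence and the recommended top-1 probability masking can be sketched without an Earth Engine session. A real workflow would use the earthengine-api against the GOOGLE/DYNAMICWORLD/V1 collection; here the ID mapping is plain string manipulation, and the thresholding runs on a hypothetical per-pixel probability vector:

```python
# Sketch of (a) mapping a Sentinel-2 L1C asset ID to its Dynamic World
# counterpart and (b) thresholding the top-1 class probability.
# The probability values below are hypothetical illustration data.

S2_PREFIX = "COPERNICUS/S2/"
DW_PREFIX = "GOOGLE/DYNAMICWORLD/V1/"

def dw_asset_id(s2_asset_id: str) -> str:
    """Dynamic World images share the scene ID of the source S2 L1C asset."""
    assert s2_asset_id.startswith(S2_PREFIX)
    return DW_PREFIX + s2_asset_id[len(S2_PREFIX):]

# The nine Dynamic World probability bands (the tenth band, "label",
# stores the top-1 class index and is not a probability).
BANDS = ["water", "trees", "grass", "flooded_vegetation", "crops",
         "shrub_and_scrub", "built", "bare", "snow_and_ice"]

def confident_label(probs: dict, threshold: float = 0.5):
    """Return the top-1 class name, or None if its probability is too low."""
    band = max(BANDS, key=lambda b: probs[b])
    return band if probs[band] >= threshold else None

# Example: a pixel where "crops" is top-1 but not confidently so, matching
# the low-confidence crops case described above. Probabilities sum to 1.
pixel = {b: 0.0 for b in BANDS}
pixel.update({"crops": 0.35, "grass": 0.30, "bare": 0.35})
print(dw_asset_id("COPERNICUS/S2/20160711T084022_20160711T084751_T35PKT"))
print(confident_label(pixel))  # None: top-1 probability is below 0.5
```

The 0.5 threshold is an illustrative choice, not a value mandated by the dataset; in Earth Engine the same masking would be done per-pixel with image operations rather than Python dictionaries.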
