89 datasets found
  1. Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR)

    • developers.google.com
    Updated Jan 30, 2020
    + more versions
    Cite
    European Union/ESA/Copernicus (2020). Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED
    Explore at:
    Dataset updated
    Jan 30, 2020
    Dataset provided by
    European Space Agency (http://www.esa.int/)
    Time period covered
    Mar 28, 2017 - Mar 27, 2025
    Area covered
    Description

    After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the …
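
    As a rough illustration (not part of the dataset description), a minimal Earth Engine Python sketch of loading this collection; the point, dates and cloud threshold are placeholder assumptions:

      import ee

      ee.Initialize()

      # Placeholder area of interest and date range (for illustration only).
      aoi = ee.Geometry.Point(7.59, 47.56)

      s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
            .filterBounds(aoi)
            .filterDate('2023-01-01', '2023-12-31')
            .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20)))

      # Because the collection is harmonized, the same 0.0001 scale factor applies
      # to scenes before and after the 2022-01-25 processing-baseline change.
      reflectance = s2.select(['B4', 'B3', 'B2']).median().divide(10000)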

  2. Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) (Polish-language catalog page)

    • developers.google.com
    + more versions
    Cite
    European Union/ESA/Copernicus, Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED?hl=pl
    Explore at:
    Dataset provided by
    European Union/ESA/Copernicus
    Time period covered
    Mar 28, 2017 - Mar 13, 2025
    Area covered
    Description

    After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the monitoring of vegetation, soil and water cover, as well as observation of inland waterways and coastal areas. The Sentinel-2 L2 data are downloaded from CDSE and were computed by running sen2cor. Warning: 2017-2018 L2 coverage in the EE collection is not yet global. The assets contain 12 UINT16 spectral bands representing SR scaled by 10000 (unlike the L1 data, there is no B10). There are also several more L2-specific bands (see the band list for details). See the Sentinel-2 User Handbook for details. QA60 is a bitmask band that contained rasterized cloud mask polygons until 25 January 2022, when the production of those polygons was discontinued. Starting 28 February 2024, legacy-consistent QA60 bands are constructed from the MSK_CLASSI cloud classification bands. For more details, see the full explanation of how cloud masks are computed. EE asset IDs for Sentinel-2 L2 assets have the following format: COPERNICUS/S2_SR/20151128T002653_20151128T102149_T56MNN. The first numeric part represents the sensing date and time, the second numeric part represents the product generation date and time, and the final 6-character string is a unique granule identifier indicating its UTM grid reference (see MGRS). The COPERNICUS/S2_CLOUD_PROBABILITY and GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED datasets can help with cloud and cloud-shadow detection. For more details on Sentinel-2 radiometric resolution, see this page.

  3. Sentinel-2: Cloud Probability

    • developers.google.com
    Cite
    European Union/ESA/Copernicus/SentinelHub, Sentinel-2: Cloud Probability [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_CLOUD_PROBABILITY
    Explore at:
    Dataset provided by
    European Space Agency (http://www.esa.int/)
    Time period covered
    Jun 27, 2015 - Mar 26, 2025
    Area covered
    Description

    The S2 cloud probability is created with the sentinel2-cloud-detector library (using LightGBM). All bands are upsampled using bilinear interpolation to 10m resolution before the gradient boost base algorithm is applied. The resulting 0..1 floating point probability is scaled to 0..100 and stored as a UINT8. Areas missing any or all …

  4. Sentinel-2: Cloud Probability in Earth Engine

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 15, 2024
    Cite
    Kurt Schwehr (2024). Sentinel-2: Cloud Probability in Earth Engine [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7410995
    Explore at:
    Dataset updated
    Jul 15, 2024
    Dataset authored and provided by
    Kurt Schwehr
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Earth
    Description

    Links:

    Sentinel-2: Cloud Probability in Earth Engine's Public Data Catalog

    Sentinel-2: Cloud Probability in Earth Engine STAC viewed with STAC Browser

    The S2 cloud probability is created with the sentinel2-cloud-detector library (using LightGBM). All bands are upsampled using bilinear interpolation to 10m resolution before the gradient boost base algorithm is applied. The resulting 0..1 floating point probability is scaled to 0..100 and stored as a UINT8. Areas missing any or all of the bands are masked out. Higher values are more likely to be clouds or highly reflective surfaces (e.g. roof tops or snow).

    Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the monitoring of vegetation, soil and water cover, as well as observation of inland waterways and coastal areas.

    The Level-2 data can be found in the collection COPERNICUS/S2_SR. The Level-1B data can be found in the collection COPERNICUS/S2. Additional metadata is available on assets in those collections.

    See this tutorial explaining how to apply the cloud mask.
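
    The tutorial itself is not reproduced here; a minimal sketch of one common pattern it describes, joining the SR collection to S2_CLOUD_PROBABILITY on system:index and masking by a probability threshold (the location, dates and 65% threshold are assumptions):

      import ee

      ee.Initialize()

      region = ee.Geometry.Point(145.77, -16.92)   # placeholder point (assumption)
      sr = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
            .filterBounds(region).filterDate('2022-06-01', '2022-09-01'))
      prob = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
              .filterBounds(region).filterDate('2022-06-01', '2022-09-01'))

      # Attach the matching cloud-probability image to each SR image.
      joined = ee.Join.saveFirst('cloud_prob').apply(
          primary=sr, secondary=prob,
          condition=ee.Filter.equals(leftField='system:index', rightField='system:index'))

      def mask_clouds(img):
          img = ee.Image(img)
          cloud = ee.Image(img.get('cloud_prob')).select('probability')
          return img.updateMask(cloud.lt(65))   # 65% threshold is an assumption

      composite = ee.ImageCollection(joined).map(mask_clouds).median()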

  5. Data from: Sentinel2GlobalLULC: A dataset of Sentinel-2 georeferenced RGB...

    • observatorio-cientifico.ua.es
    • zenodo.org
    Updated 2022
    + more versions
    Cite
    Benhammou, Yassir; Alcaraz-Segura, Domingo; Guirado, Emilio; Khaldi, Rohaifa; Tabik, Siham (2022). Sentinel2GlobalLULC: A dataset of Sentinel-2 georeferenced RGB imagery annotated for global land use/land cover mapping with deep learning (License CC BY 4.0) [Dataset]. https://observatorio-cientifico.ua.es/documentos/668fc45eb9e7c03b01bdb38a
    Explore at:
    Dataset updated
    2022
    Authors
    Benhammou, Yassir; Alcaraz-Segura, Domingo; Guirado, Emilio; Khaldi, Rohaifa; Tabik, Siham
    Description

    Sentinel2GlobalLULC is a deep learning-ready dataset of RGB images from the Sentinel-2 satellites designed for global land use and land cover (LULC) mapping. Sentinel2GlobalLULC v2.1 contains 194,877 images in GeoTiff and JPEG format corresponding to 29 broad LULC classes. Each image has 224 x 224 pixels at 10 m spatial resolution and was produced by assigning the 25th percentile of all available observations in the Sentinel-2 collection between June 2015 and October 2020 in order to remove atmospheric effects (i.e., clouds, aerosols, shadows, snow, etc.). A spatial purity value was assigned to each image based on the consensus across 15 different global LULC products available in Google Earth Engine (GEE).

    Our dataset is structured into 3 main zip-compressed folders, an Excel file with a dictionary for class names and descriptive statistics per LULC class, and a python script to convert RGB GeoTiff images into JPEG format. The first folder, "Sentinel2LULC_GeoTiff.zip", contains 29 zip-compressed subfolders, each corresponding to a specific LULC class with hundreds to thousands of GeoTiff Sentinel-2 RGB images. The second folder, "Sentinel2LULC_JPEG.zip", contains 29 zip-compressed subfolders with a JPEG-formatted version of the same images provided in the first main folder. The third folder, "Sentinel2LULC_CSV.zip", includes 29 zip-compressed CSV files with as many rows as provided images and with 12 columns containing the following metadata (the same metadata is provided in the image filenames):
    - Land Cover Class ID: the identification number of each LULC class
    - Land Cover Class Short Name: the short name of each LULC class
    - Image ID: the identification number of each image within its corresponding LULC class
    - Pixel Purity Value: the spatial purity of each pixel for its corresponding LULC class, calculated as the spatial consensus across up to 15 land-cover products
    - GHM Value: the spatial average of the Global Human Modification index (gHM) for each image
    - Latitude: the latitude of the center point of each image
    - Longitude: the longitude of the center point of each image
    - Country Code: the Alpha-2 country code of each image as described in the ISO 3166 international standard (see https://www.iban.com/country-codes for the Alpha-2 code of each country)
    - Administrative Department Level1: the administrative level 1 name to which each image belongs
    - Administrative Department Level2: the administrative level 2 name to which each image belongs
    - Locality: the name of the locality to which each image belongs
    - Number of S2 images: the number of instances found in the corresponding Sentinel-2 image collection between June 2015 and October 2020 when compositing and exporting its corresponding image tile

    For seven LULC classes, we could not export from GEE all images that fulfilled a spatial purity of 100% since there were millions of them. In this case, we exported a stratified random sample of 14,000 images and provided an additional CSV file listing the images actually contained in our dataset. That is, for these seven LULC classes, we provide these 2 CSV files:
    - A CSV file that contains all exported images for this class
    - A CSV file that contains all images available for this class at a spatial purity of 100%, both the ones exported and the ones not exported, in case the user wants to export them. These CSV filenames end with "including_non_downloaded_images".

    To clearly state the geographical coverage of images available in this dataset, version v2.1 includes a compressed folder called "Geographic_Representativeness.zip". This folder contains a CSV file for each LULC class that provides the complete list of countries represented in that class. Each CSV file has two columns: the first gives the country code and the second gives the number of images provided in that country for that LULC class. In addition to these 29 CSV files, we provide another CSV file that maps each ISO Alpha-2 country code to its original full country name. © Sentinel2GlobalLULC Dataset by Yassir Benhammou, Domingo Alcaraz-Segura, Emilio Guirado, Rohaifa Khaldi, Boujemâa Achchab, Francisco Herrera & Siham Tabik is marked with Attribution 4.0 International (CC BY 4.0)
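
    The dataset ships its own conversion script, which is not reproduced here; a minimal sketch of the same idea (reading a 3-band RGB GeoTiff with rasterio and writing a JPEG with Pillow; the file paths are placeholders):

      import rasterio
      import numpy as np
      from PIL import Image

      def geotiff_to_jpeg(tif_path, jpg_path):
          """Convert a 3-band RGB GeoTiff chip to JPEG (placeholder paths)."""
          with rasterio.open(tif_path) as src:
              rgb = src.read([1, 2, 3])              # (bands, rows, cols)
          rgb = np.transpose(rgb, (1, 2, 0))         # (rows, cols, bands)
          # Scale to 0-255 if the chip is not already 8-bit.
          if rgb.dtype != np.uint8:
              rgb = np.clip(255.0 * rgb / rgb.max(), 0, 255).astype(np.uint8)
          Image.fromarray(rgb).save(jpg_path, quality=95)

      geotiff_to_jpeg('Sentinel2LULC_GeoTiff/example_chip.tif', 'example_chip.jpg')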

  6. SEN12TP - Sentinel-1 and -2 images, timely paired

    • zenodo.org
    • data.niaid.nih.gov
    json, txt, zip
    Updated Apr 20, 2023
    Cite
    Thomas Roßberg; Michael Schmitt (2023). SEN12TP - Sentinel-1 and -2 images, timely paired [Dataset]. http://doi.org/10.5281/zenodo.7342060
    Explore at:
    json, zip, txt
    Dataset updated
    Apr 20, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Thomas Roßberg; Michael Schmitt
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The SEN12TP dataset (Sentinel-1 and -2 imagery, timely paired) contains 2319 scenes of Sentinel-1 radar and Sentinel-2 optical imagery, together with elevation and land cover information, for 1236 distinct ROIs taken between 28 March 2017 and 31 December 2020. Each scene has a size of 20 km x 20 km at 10 m pixel spacing. The time difference between optical and radar images is at most 12 h, but for almost all scenes it is around 6 h because of the offset between the Sentinel-1 and -2 orbits. In addition to the \(\sigma^\circ\) radar backscatter, the radiometric terrain corrected \(\gamma^\circ\) radar backscatter is also calculated and included. \(\gamma^\circ\) values are calculated using the volumetric model presented by Vollrath et al. 2020.

    The uncompressed dataset has a size of 222 GB and is split spatially into a train (~90%) and a test set (~10%). For easier download the train set is split into four separate zip archives.

    Please cite the following paper when using the dataset, in which the design and creation is detailed:
    T. Roßberg and M. Schmitt. A globally applicable method for NDVI estimation from Sentinel-1 SAR backscatter using a deep neural network and the SEN12TP dataset. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 2023. https://doi.org/10.1007/s41064-023-00238-y.

    The file sen12tp-metadata.json includes metadata of the selected scenes. It includes for each scene the geometry, an ID for the ROI and the scene, the climate and land cover information used when sampling the central point, the timestamps (in ms) when the Sentinel-1 and -2 image was taken, the month of the year, and the EPSG code of the local UTM Grid (e.g. EPSG:32643 - WGS 84 / UTM zone 43N).

    Naming scheme: The images are contained in directories called {roi_id}_{scene_id}, as for some unique regions image pairs of multiple dates are included. In each directory are six files for the different modalities with the naming {scene_id}_{modality}.tif. Multiple modalities are included: radar backscatter and multispectral optical images, the elevation as DSM (digital surface model) and different land cover maps.

    Data modalities
    name       | Modality                                                                                                      | GEE collection
    s1         | Sentinel-1 radar backscatter                                                                                  | COPERNICUS/S1_GRD
    s2         | Sentinel-2 Level-2A (bottom of atmosphere, BOA) multispectral optical data with added cloud probability band | COPERNICUS/S2_SR, COPERNICUS/S2_CLOUD_PROBABILITY
    dsm        | 30 m digital surface model                                                                                    | JAXA/ALOS/AW3D30/V3_2
    worldcover | land cover, 10 m resolution                                                                                   | ESA/WorldCover/v100

    The following bands are included in the tif files; for a further explanation see the documentation on GEE. All bands are resampled to 10 m resolution and reprojected to the coordinate reference system of the Sentinel-2 image.

    Modality bands
    Modality   | Band count | Band names in tif file                                                | Notes
    s1         | 5          | VV_sigma0, VH_sigma0, VV_gamma0flat, VH_gamma0flat, incAngle          | VV/VH_sigma0 are the \(\sigma^\circ\) values; VV/VH_gamma0flat are the radiometric terrain corrected \(\gamma^\circ\) backscatter values; incAngle is the incidence angle
    s2         | 13         | B1, B2, B3, B4, B5, B6, B7, B8, B8A, B9, B11, B12, cloud_probability  | multispectral optical bands and the probability that a pixel is cloudy, calculated with the sentinel2-cloud-detector library; optical reflectances are bottom of atmosphere (BOA) reflectances calculated using sen2cor
    dsm        | 1          | DSM                                                                   | Height above sea level. Signed 16 bits. Elevation (in metres) converted from the ellipsoidal height based on ITRF97 and GRS80, using the EGM96 geoid model.
    worldcover | 1          | Map                                                                   | Land cover class
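
    As a small illustration of the naming scheme above, one way to read the modalities of a single scene with rasterio (the directory name is a placeholder, and whether band names are stored as GeoTiff band descriptions is an assumption):

      from pathlib import Path
      import rasterio

      scene_dir = Path('sen12tp/train/123_456')     # {roi_id}_{scene_id}, placeholder
      scene_id = scene_dir.name.split('_')[1]

      def read_modality(modality):
          """Read one modality tif as a (bands, rows, cols) array plus band descriptions."""
          with rasterio.open(scene_dir / f'{scene_id}_{modality}.tif') as src:
              return src.read(), src.descriptions   # descriptions may be None

      s1, s1_bands = read_modality('s1')            # 5 bands: sigma0, gamma0flat, incAngle
      s2, s2_bands = read_modality('s2')            # 13 bands incl. cloud_probability
      dsm, _ = read_modality('dsm')
      worldcover, _ = read_modality('worldcover')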

    Checking the file integrity
    After downloading and decompression the file integrity can be checked using the provided file of md5 checksum.
    Under Linux: md5sum --check --quiet md5sums.txt

    References:

    Vollrath, Andreas, Adugna Mullissa, Johannes Reiche (2020). "Angular-Based Radiometric Slope Correction for Sentinel-1 on Google Earth Engine". In: Remote Sensing 12.1, Art no. 1867. https://doi.org/10.3390/rs12111867.

  7. Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) (Portuguese-language catalog page)

    • developers.google.com
    + more versions
    Cite
    European Union/ESA/Copernicus, Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED?hl=pt-br
    Explore at:
    Dataset provided by
    European Union/ESA/Copernicus
    Time period covered
    Mar 28, 2017 - Mar 26, 2025
    Area covered
    Description

    After 25 January 2022, Sentinel-2 scenes with PROCESSING_BASELINE "04.00" or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the …

  8. Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-1C (TOA) (Italian-language catalog page)

    • developers.google.com
    + more versions
    Cite
    European Union/ESA/Copernicus, Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-1C (TOA) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_HARMONIZED?hl=it
    Explore at:
    Dataset provided by
    European Space Agency (http://www.esa.int/)
    Time period covered
    Jun 27, 2015 - Mar 22, 2025
    Area covered
    Description

    After 25 January 2022, the DN (value) range of Sentinel-2 scenes with PROCESSING_BASELINE "04.00" or above is shifted by 1000. The HARMONIZED collection shifts data in newer scenes to the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus land monitoring studies, including the monitoring of vegetation, soil and water cover, as well as observation of inland waterways and coastal areas. The Sentinel-2 data contain 13 UINT16 spectral bands representing TOA reflectance scaled by 10000. See the Sentinel-2 User Handbook for details. QA60 is a bitmask band that contained rasterized cloud mask polygons until February 2022, when the production of those polygons was discontinued. As of February 2024, legacy-consistent QA60 bands are constructed from the MSK_CLASSI cloud classification bands. For more details, see the full explanation of how cloud masks are computed. Each Sentinel-2 product (ZIP archive) may contain multiple granules. Each granule becomes a separate Earth Engine asset. EE asset IDs for Sentinel-2 assets have the following format: COPERNICUS/S2/20151128T002653_20151128T102149_T56MNN. The first numeric part represents the sensing date and time, the second numeric part represents the product generation date and time, and the final 6-character string is a unique granule identifier indicating its UTM grid reference (see MGRS). The Level-2 data produced by ESA can be found in the collection COPERNICUS/S2_SR. For datasets useful for cloud and/or cloud shadow detection, see COPERNICUS/S2_CLOUD_PROBABILITY and GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED. For further details on Sentinel-2 radiometric resolution, see this page.

  9. Leveraging machine learning and remote sensing to improve grassland...

    • dataone.org
    • borealisdata.ca
    • +1more
    Updated Dec 28, 2023
    Cite
    Ng, Tsz Wing (2023). Leveraging machine learning and remote sensing to improve grassland inventory in British Columbia [Dataset]. http://doi.org/10.5683/SP3/LYIKH3
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Ng, Tsz Wing
    Time period covered
    Apr 1, 2022 - Aug 30, 2022
    Description

    Machine learning algorithms have been widely adopted in ecosystem monitoring. British Columbia suffers from grassland degradation, but the province does not have an accurate spatial database for effective grassland management. Moreover, computational power and storage space remain two of the limiting factors in developing such a database. In this study, we leverage supervised machine learning algorithms in the Google Earth Engine to improve the annual grassland inventory through an automated process. The pilot study was conducted over the Rocky Mountain district. We compared two different classification algorithms: Random Forest and Support Vector Machine. Training data were sampled through stratified and gridded sampling. 19 predictor variables were chosen from Sentinel-1 and Sentinel-2 imagery and relevant topographic derivatives, spectral indices, and textural indices using a wrapper-based feature selection method. The resultant map was post-processed to remove land features that were confounded with grasslands. Random Forest was chosen as the prototype because the algorithm predicted features relevant to the project's scope at relatively higher accuracy (67% - 86%) than its counterpart (50% - 76%). The prototype was good at delineating the boundaries between treed and non-treed areas and at ferreting out open patches among closed forests. These open patches are usually disregarded by the VRI, but they are deemed essential to grassland stewardship and wildlife ecologists. The prototype demonstrated the feasibility of automating grassland delineation with a Random Forest classifier using the Google Earth Engine. Furthermore, grassland stewards can use the product to identify monitoring and restoration areas strategically in the future.
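
    Not the study's code; a bare-bones Earth Engine Python sketch of the kind of workflow described, training a smileRandomForest classifier on labelled points (the asset IDs, predictor stack and the 'class' property are placeholders):

      import ee

      ee.Initialize()

      # Placeholder inputs: a labelled point collection and a predictor stack.
      points = ee.FeatureCollection('users/example/grassland_training_points')
      predictors = ee.Image('users/example/s1_s2_terrain_stack')
      bands = predictors.bandNames()

      # Sample the predictor stack at the training points.
      training = predictors.sampleRegions(
          collection=points, properties=['class'], scale=10)

      classifier = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
          features=training, classProperty='class', inputProperties=bands)

      classified = predictors.classify(classifier)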

  10. Using Google Earth Engine to Evaluate Spatial Extent Changes of Bear Lake

    • hydroshare.org
    • beta.hydroshare.org
    • +1more
    zip
    Updated Apr 14, 2023
    Cite
    Motasem Abualqumboz (2023). Using Google Earth Engine to Evaluate Spatial Extent Changes of Bear Lake [Dataset]. https://www.hydroshare.org/resource/fec47a05c2d94e68aef39f33ae07165d
    Explore at:
    zip (72.3 MB)
    Dataset updated
    Apr 14, 2023
    Dataset provided by
    HydroShare
    Authors
    Motasem Abualqumboz
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 9, 2023 - Apr 28, 2023
    Area covered
    Description

    This project aims to use remote sensing data from the Landsat collections in Google Earth Engine to evaluate spatial extent changes in Bear Lake, located between the US states of Utah and Idaho. This work is part of a term project submitted to Dr Alfonso Torres-Rua as a requirement to pass the Remote Sensing of Land Surfaces class (CEE6003). More information about the course is provided below. This project uses the geemap Python package (https://github.com/giswqs/geemap) for working with Google Earth Engine datasets. The content of this notebook can be used to:

    - Learn how to retrieve Landsat 8 remotely sensed data. The same functions and methodology can also be used to get data from other Landsat satellites and other satellites such as Sentinel-2, Sentinel-3 and many others; however, slight changes might be required when dealing with satellites other than Landsat.
    - Learn how to create time lapse images that visualize changes in some parameters over time.
    - Learn how to use supervised classification to track changes in the spatial extent of water bodies such as Bear Lake, located between the US states of Utah and Idaho.
    - Learn how to use different functions and tools that are part of the geemap Python package. More information about the geemap Python package can be found at https://github.com/giswqs/geemap and https://github.com/diviningwater/RS_of_Land_Surfaces_laboratory

    Course information:

    Name: Remote Sensing of Land Surfaces (CEE6003)
    Instructor: Alfonso Torres-Rua (alfonso.torres@usu.edu)
    School: Utah State University
    Semester: Spring semester 2023
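
    Not the notebook itself; a small geemap sketch in the same spirit, using Landsat 8 Collection 2 surface reflectance and a simple NDWI threshold as a crude stand-in for the supervised classification (the location, dates and threshold are assumptions):

      import ee
      import geemap

      ee.Initialize()

      bear_lake = ee.Geometry.Point(-111.33, 41.98)   # approximate lake centre (assumption)

      l8 = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
            .filterBounds(bear_lake)
            .filterDate('2023-06-01', '2023-09-01')
            .filter(ee.Filter.lt('CLOUD_COVER', 20))
            .median())

      # NDWI from green (SR_B3) and NIR (SR_B5); water is roughly NDWI > 0.
      ndwi = l8.normalizedDifference(['SR_B3', 'SR_B5'])
      water = ndwi.gt(0).selfMask()

      Map = geemap.Map(center=[41.98, -111.33], zoom=10)
      Map.addLayer(water, {'palette': ['blue']}, 'Bear Lake extent (NDWI > 0)')
      Map   # in a notebook, displaying Map renders the interactive map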

  11. Sentinel2 RGB chips over BENELUX with JRC GHSL Population Density 2015 for...

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 18, 2023
    Cite
    Fabio A. González (2023). Sentinel2 RGB chips over BENELUX with JRC GHSL Population Density 2015 for Learning with Label Proportions [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7939347
    Explore at:
    Dataset updated
    May 18, 2023
    Dataset provided by
    Raúl Ramos-Pollan
    Fabio A. González
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Benelux
    Description

    The Region of Interest (ROI) comprises Belgium, the Netherlands and Luxembourg.

    We use the communes administrative division, which is standardized across Europe by EUROSTAT at https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units. This is roughly equivalent to the notion of municipalities in most countries.

    From the link above, the commune definitions are taken from COMM_RG_01M_2016_4326.shp and country borders are taken from NUTS_RG_01M_2021_3035.shp.

    images: Sentinel2 RGB from 2020-01-01 to 2020-12-31, with cloudy pixels filtered out according to the QA60 band following the example given on the GEE dataset info page: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED

      see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/sentinel2rgbmedian2020.py
    

    labels: Global Human Settlement Layers, Population Grid 2015

      labels range from 0 to 31, with the following meaning:
        label value   original value in GEE dataset
        0        0
        1        1-10
        2        11-20
        3        21-30
        ...
        31       >=291 
    
    
      see https://developers.google.com/earth-engine/datasets/catalog/JRC_GHSL_P2016_POP_GPW_GLOBE_V1
    
    
      see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/humanpop2015.py
    

    _aschips.geojson the image chips geometries along with label proportions for easy visualization with QGIS, GeoPandas, etc.

    _communes.geojson the communes geometries with their label proportions for easy visualization with QGIS, GeoPandas, etc.

    splits.csv contains two splits of image chips in train, test, val - with geographical bands at 45° angles in nw-se direction - the same as above reorganized so that all chips within the same commune fall within the same split.

    data/ a pickle file for each image chip containing a dict with - the 100x100 RGB sentinel 2 chip image - the 100x100 chip level labels - the label proportions of the chip - the aggregated label proportions of the commune the chip belongs to
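
    A minimal sketch of reading one of these pickle files; the dictionary key names used below are assumptions, not documented above, so inspect the file to find the real ones:

      import pickle
      from pathlib import Path

      chip_path = Path('data/example_chip.pkl')        # placeholder file name

      with open(chip_path, 'rb') as f:
          chip = pickle.load(f)                        # a dict, per the description above

      print(sorted(chip.keys()))                       # inspect the actual key names
      rgb = chip.get('chip_rgb')                       # 100x100 RGB chip (assumed key)
      labels = chip.get('chip_labels')                 # 100x100 chip-level labels (assumed key)
      chip_props = chip.get('label_proportions')       # per-chip proportions (assumed key)
      commune_props = chip.get('commune_proportions')  # aggregated commune proportions (assumed key)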

  12. North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile...

    • researchdata.edu.au
    Updated Nov 30, 2021
    + more versions
    Cite
    Hammerton, Marc (2021). North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile true colour (NESP MaC 3.17, AIMS) [Dataset]. http://doi.org/10.26274/HD2Z-KM55
    Explore at:
    Dataset updated
    Nov 30, 2021
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 27, 2015 - May 31, 2024
    Area covered
    Description

    This dataset contains cloud free composite satellite images for the northern Australia region based on 10 m resolution Sentinel 2 imagery from 2015 – 2024. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in northern Australia. A new, improved version (version 2, published July 2024) has succeeded the draft version (published March 2024).

    This collection contains composite imagery for 333 Sentinel 2 tiles around the northern coastline of Australia, including the Great Barrier Reef. This dataset uses a true colour contrast and colour enhancement style using the bands B2 (blue), B3 (green), and B4 (red). This is useful for interpreting what shallow features are, for mapping the vegetation on cays, and for identifying beach rock.

    Changelog:

    This dataset will be progressively improved and made available for download. These additions will be noted in this change log.
    2024-07-22 - Version 2 composites using an improved contrast enhancement and a noise prediction algorithm to only include low noise images in composite (Git tag: "composites_v2")
    2024-03-07 - Initial release draft composites using 15th percentile (Git tag: "composites_v1")

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
       - tile ID
       - maximum cloud cover 20%
       - date between '2015-06-27' and '2024-05-31'
       - asset_size > 100000000 (remove small fragments of tiles)
       Note: A maximum cloud cover of 20% was used to improve processing times. In most cases this filtering does not affect the final composite, as images with higher cloud coverage mostly result in higher noise levels and are not used in the final composite.
    2. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
    3. For each SENSING_ORBIT_NUMBER collection, filter out all noise-adding images:
       3.1 Calculate the image noise level for each image in the collection (see "Image noise level calculation" for more information) and sort the collection by noise level.
       3.2 Remove all images with a very high noise index (>15).
       3.3 Calculate a baseline noise level using a minimum number of images (min_images_in_collection=30). This minimum number of images is needed to ensure a smooth composite where cloud "holes" in one image are covered by other images.
       3.4 Iterate over the remaining images (images not used in the base noise level calculation) and check whether adding each image to the composite adds to or reduces the noise. If it reduces the noise, add it to the composite. If it increases the noise, stop iterating over images.
    4. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
    5. Remove sun glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
    6. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
    7. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
    8. Combine the two composite images (no-cloud-mask composite and cloud-mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking. (Lawrey et al. 2022)
    9. Export the final composite as a cloud optimized 8 bit GeoTIFF.

    Note: The following tiles were generated with no "maximum cloud cover" filter as they did not have enough images to create a composite with the standard settings: 46LGM, 46LGN, 46LHM, 50KKD, 50KPG, 53LMH, 53LMJ, 53LNH, 53LPH, 53LPJ, 54LVP, 57JVH, 59JKJ.

    Image noise level calculation:

    The noise level for each image in this dataset is calculated to ensure high-quality composites by minimizing the inclusion of noisy images. This process begins by creating a water mask using the Normalized Difference Water Index (NDWI) derived from the NIR and Green bands. High reflectance areas in the NIR and SWIR bands, indicative of sun-glint, are identified and masked by the water mask to focus on water areas affected by sun-glint. The proportion of high sun-glint pixels within these water areas is calculated and amplified to compute a noise index. If no water pixels are detected, a high noise index value is assigned.

    Sun glint removal and atmospheric correction:

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the visible channels (B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.

    Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions.

    This algorithm is an adjustment of the algorithm already used in Lawrey et al. 2022
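
    A schematic Earth Engine Python sketch of this idea (each visible band minus a scaled copy of B8); the per-band scale factors below are placeholders, not the tuned values described above:

      import ee

      def remove_sun_glint(img):
          """Subtract a scaled copy of B8 from each visible band (illustrative only)."""
          # Placeholder scale factors; the dataset used values tuned per band.
          scales = {'B2': 0.85, 'B3': 0.90, 'B4': 0.95}
          corrected = [img.select(b).subtract(img.select('B8').multiply(s)).rename(b)
                       for b, s in scales.items()]
          return ee.Image.cat(corrected)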

    Cloud Masking:

    Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve computational speed. The resolution was adjusted so that these operations were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. Even with 4-pixel filter resolutions these operations still used over 90% of the total processing time, resulting in each image taking approximately 10 min to compute on the Google Earth Engine. (Lawrey et al. 2022)
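
    A simplified sketch of the low-cloud part of this masking (threshold the S2_CLOUD_PROBABILITY band at 35% and buffer it by roughly 150 m); the directional shadow projection and the separate high-cloud mask are omitted:

      import ee

      def mask_low_clouds(img, cloud_prob):
          """Mask pixels under a buffered low-cloud mask (simplified sketch)."""
          cloud = cloud_prob.select('probability').gt(35)    # 35% threshold, from the text above
          # Approximate the 150 m buffer with a morphological dilation.
          buffered = cloud.focalMax(radius=150, kernelType='circle', units='meters')
          return img.updateMask(buffered.Not())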

    Format:

    GeoTiff - LZW compressed, 8 bit channels, 0 as NoData, Imagery as values 1 - 255. Internal tiling and overviews. Average size: 12500 x 11300 pixels and 300 MB per image.

    The images in this dataset are all named using a naming convention. An example file name is AU_AIMS_MARB-S2-comp_p15_TrueColour_51KTV_v2_2015-2024.tif. The name is made up from:
    - Dataset name (AU_AIMS_MARB-S2-comp)
    - An algorithm descriptor (p15 for 15th percentile)
    - Colour and contrast enhancement applied (TrueColour)
    - Sentinel 2 tile (example: 54LZP)
    - Version (v2)
    - Date range (2015 to 2024 for version 2)

    References:

    Google (n.d.) Sentinel-2: Cloud Probability. Earth Engine Data Catalog. Accessed 10 April 2021 from https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_CLOUD_PROBABILITY

    Zupanc, A., (2017) Improving Cloud Detection with Machine Learning. Medium. Accessed 10 April 2021 from https://medium.com/sentinel-hub/improving-cloud-detection-with-machine-learning-c09dc5d7cf13

    Lawrey, E., & Hammerton, M. (2022). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Data set]. eAtlas. https://doi.org/10.26274/NH77-ZW79

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\custodian\2023-2026-NESP-MaC-3\3.17_Northern-Aus-reef-mapping. The source code is available on GitHub.

  13. Sentinel2 RGB chips over Colombia (NE) with JRC GHSL Population Density 2015...

    • zenodo.org
    zip
    Updated May 18, 2023
    + more versions
    Cite
    Raúl Ramos-Pollan; Fabio A. González (2023). Sentinel2 RGB chips over Colombia (NE) with JRC GHSL Population Density 2015 for Learning with Label Proportions [Dataset]. http://doi.org/10.5281/zenodo.7939365
    Explore at:
    zip
    Dataset updated
    May 18, 2023
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Raúl Ramos-Pollan; Fabio A. González
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Region of Interest (ROI) comprises the east-northeast region of Colombia, covering
    parts of Santander, Norte de Santander, Boyacá, Bolívar, Antioquia and Cundinamarca.

    We use the communes administrative division defined by DANE (Departamento Administrativo
    Nacional de Estadística) under "municipios" in the MGN2021 at
    https://geoportal.dane.gov.co/geovisores/territorio/mgn-marco-geoestadistico-nacional/

    images: Sentinel2 RGB from 2020-01-01 to 2020-12-31, with cloudy pixels over the observation
    period filtered out according to the QA60 band following the example given on the GEE dataset
    info page, then taking the median of the resulting pixels (a minimal sketch of this QA60
    filtering follows the links below)

    see https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED

    see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/sentinel2rgbmedian2020.py
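
    The linked GEE example is not copied here; a minimal sketch of QA60-based cloud masking (clouds on bit 10, cirrus on bit 11) followed by a median RGB composite, with a placeholder point geometry:

      import ee

      ee.Initialize()

      def mask_qa60(img):
          """Keep pixels where the QA60 cloud (bit 10) and cirrus (bit 11) flags are clear."""
          qa = img.select('QA60')
          mask = (qa.bitwiseAnd(1 << 10).eq(0)
                    .And(qa.bitwiseAnd(1 << 11).eq(0)))
          return img.updateMask(mask)

      roi = ee.Geometry.Point(-73.1, 7.1)     # placeholder point in NE Colombia (assumption)
      rgb_median = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
                    .filterBounds(roi)
                    .filterDate('2020-01-01', '2020-12-31')
                    .map(mask_qa60)
                    .select(['B4', 'B3', 'B2'])
                    .median())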

    labels: Global Human Settlement Layers, Population Grid 2015

    labels range from 0 to 31, with the following meaning:
    label value original value in GEE dataset
    0 0
    1 1-10
    2 11-20
    3 21-30
    ...
    31 >=291

    see https://developers.google.com/earth-engine/datasets/catalog/JRC_GHSL_P2016_POP_GPW_GLOBE_V1

    see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/humanpop2015.py

    _aschips.geojson  the image chips geometries along with label proportions
              for easy visualization with QGIS, GeoPandas, etc.
    
    _communes.geojson  the communes geometries with their label proportions
              for easy visualization with QGIS, GeoPandas, etc.
    
    splits.csv     contains two splits of image chips in train, test, val
              - with geographical bands at 45° angles in nw-se direction
              - the same as above reorganized so that all chips within the same
               commune fall within the same split.
    
    data/        a pickle file for each image chip containing a dict with
              - the 100x100 RGB sentinel 2 chip image
              - the 100x100 chip level labels
              - the label proportions of the chip
              - the aggregated label proportions of the commune the chip belongs to
    

  14. Specific information of selected Sentinel-2 images.

    • figshare.com
    xls
    Updated May 31, 2023
    Cite
    Jianfeng Li; Biao Peng; Yulu Wei; Huping Ye (2023). Specific information of selected Sentinel-2 images. [Dataset]. http://doi.org/10.1371/journal.pone.0253209.t001
    Explore at:
    xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Jianfeng Li; Biao Peng; Yulu Wei; Huping Ye
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Specific information of selected Sentinel-2 images.

  15. A Google Earth Engine code to analyze residential buildings' real estate...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 14, 2022
    Cite
    Guerri Giulia (2022). A Google Earth Engine code to analyze residential buildings' real estate values, summer surface thermal anomaly patterns and urban features: a Florence (Italy) case study [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6831531
    Explore at:
    Dataset updated
    Jul 14, 2022
    Dataset provided by
    Crisci, Alfonso
    Morabito, Marco
    Guerri Giulia
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Italy, Florence
    Description

    The layers included in the code were from the study conducted by the research group of CNR-IBE (Institute of BioEconomy of the National Research Council of Italy) and ISPRA (Italian National Institute for Environmental Protection and Research), published by the Sustainability journal (https://doi.org/10.3390/su14148412).

    Link to the Google Earth Engine (GEE) code: https://code.earthengine.google.com/715aa44e13b3640b5f6370165edd3002

    You can analyze and visualize the following spatial layers by accessing the GEE link:

    Daytime summer land surface temperature (raster data, horizontal resolution 30 m, from Landsat-8 remote sensing data, years 2015-2019)

    Surface thermal hot-spot (raster data, horizontal resolution 30 m) was obtained by using a statistical-spatial method based on the Getis-Ord Gi* approach through the ArcGIS Pro tool.

    Surface albedo (raster data, horizontal resolution 10 m, Sentinel-2A remote sensing data, year 2017)

    Impervious area (raster data, horizontal resolution 10 m, ISPRA data, year 2017)

    Tree cover (raster data, horizontal resolution 10 m, ISPRA data, year 2018)

    Grassland area (raster data, horizontal resolution 10 m, ISPRA data, year 2017)

    Water bodies (raster data, horizontal resolution 2 m, Geoscopio Platform of Tuscany, year 2016)

    Sky View Factor (raster data, horizontal resolution 1 m, lidar data from the OpenData platform of Florence, year 2016)

    Buildings' units of Florence (shapefile from the OpenData platform of Florence) include data on the residential real estate value from the Real Estate Market Observatory (OMI) of the National Revenue Agency of Italy (source: https://www1.agenziaentrate.gov.it/servizi/Consultazione/ricerca.htm, accessed on 14 July 2022). Data on the characterization of the buffer area (50 m) surrounding the buildings are included in this shapefile [the names of table attributes are reported in the square brackets]: averaged values of the daytime summer land surface temperature [LST_media], thermal hot-spot pattern [Thermal_cl], mean values of sky view factor [SVF_medio], surface albedo [alb_medio], and average percentage areas of imperviousness [ImperArea%], tree cover [TreeArea%], grassland [GrassArea%] and water bodies [WaterArea%].

    The .txt file of the GEE code is attached here.

    E-mail

    Giulia Guerri, CNR-IBE, giulia.guerri@ibe.cnr.it

    Marco Morabito, CNR-IBE, marco.morabito@cnr.it

    Alfonso Crisci, CNR-IBE, alfonso.crisci@ibe.cnr.it

  16. Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0...

    • catalogue.eatlas.org.au
    • researchdata.edu.au
    Updated Nov 21, 2021
    Cite
    Australian Institute of Marine Science (AIMS) (2021). Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0 (AIMS) [Dataset]. https://catalogue.eatlas.org.au/geonetwork/srv/api/records/2932dc63-9c9b-465f-80bf-09073aacaf1c
    Explore at:
    www:link-1.0-http--related, www:link-1.0-http--downloaddata
    Dataset updated
    Nov 21, 2021
    Dataset provided by
    Australian Institute of Marine Science (http://www.aims.gov.au/)
    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Coral Sea
    Description

    This dataset contains composite satellite images for the Coral Sea region based on 10 m resolution Sentinel 2 imagery from 2015 – 2021. This image collection is intended to allow mapping of the reef and island features of the Coral Sea. This is a draft version of the dataset prepared from approximately 60% of the available Sentinel 2 imagery. An improved version of this dataset has been released at https://doi.org/10.26274/NH77-ZW79.

    This collection contains composite imagery for 31 Sentinel 2 tiles in the Coral Sea. For each tile there are 5 different colour and contrast enhancement styles intended to highlight different features:
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour imagery that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This technique doesn't work where the water is less clear, as the ultraviolet band gets scattered easily.
    - DeepMarine - Bands: B2 (blue), B3 (green), B4 (red): A contrast enhanced version of the true colour imagery, focused on better seeing the deeper features. Shallow features are over exposed due to the increased contrast.
    - ReefTop - Bands: B4 (red): This imagery is contrast enhanced to create a mask (black and white) of reef tops, delineating areas that are shallower or deeper than approximately 4 - 5 m. This mask is intended to assist in the creation of a GIS layer equivalent to the 'GBR Dry Reefs' dataset. The depth mapping exploits the limited water penetration of the red channel; in clear water the red channel can only see features to approximately 6 m regardless of the substrate type.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands progressively penetrate the water less: B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 < 0.1 m. Features less than a couple of metres deep appear dark blue, dry areas are white.
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for interpreting what shallow features are, for mapping the vegetation on cays, and for identifying beach rock.

    For most Sentinel tiles there are two versions of the DeepFalse and DeepMarine imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery so that mapped features could be checked against two images. Typically the R2 imagery will have more artefacts from clouds.

    The satellite imagery was processed in tiles (approximately 100 x 100 km) to keep each final image small enough to manage. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs.

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, the set of Sentinel images from 2015 – 2021 was reviewed manually. In some tiles the cloud cover threshold was raised to gather more images, particularly if there were fewer than 20 images available. The Google Earth Engine image IDs of the best images were recorded. These were the images with the clearest water, lowest waves, lowest cloud, and lowest sun glint.
    2. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
    3. The contrast of the images was enhanced to create a series of products for different uses. The true colour image retained the full range of visible tones, so that bright sand cays still retained some detail. The marine enhanced version stretched the blue, green and red channels so that they focused on the deeper, darker marine features. This stretching was done to ensure that, when converted to 8-bit colour imagery, all the dark detail in the deeper areas was visible. This contrast enhancement resulted in bright areas of the imagery clipping, leading to loss of detail in shallow reef areas and to land colours looking off. A reef top estimate was produced from the red channel (B4), where the contrast was stretched so that the imagery contains almost a binary mask. The threshold was chosen to approximate the 5 m depth contour for the clear waters of the Coral Sea. Lastly a false colour image was produced to allow mapping of shallow water features such as cays and islands. This image was produced from B5 (far red), B8 (NIR) and B11 (SWIR), where blue represents depths from approximately 0.5 – 5 m, green represents areas with 0 – 0.5 m depth, and brown and white correspond to dry land.
    4. The various contrast enhanced composite images were exported from Google Earth Engine (default of 32 bit GeoTiff) and reprocessed into smaller LZW compressed 8 bit GeoTiff images using GDAL.
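
    A sketch of step 2 above, building a composite as the per-pixel median of a hand-picked list of Sentinel-2 scenes; the image IDs shown are placeholders, not the IDs actually selected:

      import ee

      ee.Initialize()

      # Placeholder IDs; the real selection was recorded per tile during manual review.
      best_ids = [
          'COPERNICUS/S2/20180812T002711_20180812T002710_T55LCD',
          'COPERNICUS/S2/20190704T002711_20190704T002710_T55LCD',
      ]
      best = ee.ImageCollection.fromImages([ee.Image(i) for i in best_ids])

      # Cloud and shadow masking (described below) would be mapped over the collection here.
      composite = best.median()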

    Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve computational speed. The resolution was adjusted so that these operations were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. Even with 4-pixel filter resolutions these operations still used over 90% of the total processing time, resulting in each image taking approximately 10 min to compute on the Google Earth Engine.

    Sun glint removal and atmospheric correction.

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the ultra violet and visible channels (B1, B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.

    This simple sun glint correction fails in very shallow and land areas. On land areas B8 is very bright and thus subtracting it from the other channels results in black land. In shallow areas (< 0.5 m) the B8 channel detects the substrate, resulting in too much sun glint correction. To resolve these issues the sun glint correction was adjusted by transitioning to B11 for shallow areas as it penetrates the water even less than B8. We don't use B11 everywhere because it is half the resolution of B8.

    Land areas need their tonal levels to be adjusted to match the water areas after sun glint correction. Ideally this would be achieved using an atmospheric correction that compensates for the contrast loss due to haze in the atmosphere. Complex models for atmospheric correction involve considering the elevation of the surface (higher areas have less atmosphere to pass through) and the weather conditions. Since this dataset is focused on coral reef areas, elevation compensation is unnecessary due to the very low and flat land features being imaged. Additionally, the focus of the dataset is on marine features and so only a basic atmospheric correction is needed. Land areas (as determined by very bright B8 areas) were assigned a fixed smaller correction factor to approximate atmospheric correction. This fixed atmospheric correction was determined iteratively so that land areas matched the tonal value of shallow and water areas.

    Image selection

    Available Sentinel 2 images with a cloud cover of less than 0.5% were manually reviewed using a Google Earth Engine app, 01-select-sentinel2-images.js. Where there were few images available (fewer than 30 images) the cloud cover threshold was raised to increase the set of images that were reviewed.

    Images were excluded from the composites primarily due to two main factors: sun glint and fine scattered clouds. The images were excluded if there was any significant uncorrected sun glint in the image, i.e. the brightness of the sun glint exceeded the sun glint correction. Fine

  17. Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and...

    • researchdata.edu.au
    Updated Feb 29, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric; eAtlas Data Manager; Wolfe, Kennedy (2024). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Dataset]. http://doi.org/10.26274/NH77-ZW79
    Explore at:
    Dataset updated
    Feb 29, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy, Dr
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.

    This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths, based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes of the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the Satellite Derived Bathymetry algorithm in its handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check for the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.

    The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).

    This collection contains composite imagery for Sentinel 2 tiles (59 in the Coral Sea, 8 in the GBR) and Landsat 8 tiles (12 in the Coral Sea, 4 in the GBR and 1 in WA). For each Sentinel 2 tile there are 3 different colour and contrast enhancement styles intended to highlight different features:
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red). True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green). False colour imagery that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, so it appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared). This false colour imagery focuses on identifying very shallow and dry regions. It exploits the property that the longer wavelength bands penetrate the water progressively less: B5 penetrates approximately 3 - 5 m, B8 approximately 0.5 m and B11 less than 0.1 m. Features shallower than a couple of metres appear dark blue, and dry areas appear white. This imagery is intended to help identify coral cay boundaries.

    For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.
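    The styles above are band selections with contrast stretches applied. The snippet below is only a rough sketch of how such renderings can be produced with the Earth Engine Python API; the stand-in composite, the channel ordering of the DeepFalse style and the min/max stretch values are illustrative assumptions, not the published enhancement settings.

```python
import ee

ee.Initialize()

# Stand-in composite: a plain median over one year near a placeholder point.
composite = (
    ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
    .filterBounds(ee.Geometry.Point(152.0, -21.0))
    .filterDate('2019-01-01', '2020-01-01')
    .median())

# Band combinations follow the style descriptions above; channel order and
# min/max stretch values are illustrative assumptions only.
true_colour = composite.visualize(bands=['B4', 'B3', 'B2'], min=0, max=3000)
deep_false = composite.visualize(bands=['B1', 'B2', 'B3'], min=500, max=1800)
shallow = composite.visualize(bands=['B5', 'B8', 'B11'], min=0, max=3000)
```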

    All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours:
    - Depth5m - An estimate of the area above 5 m depth (Mean Sea Level).
    - Depth10m - An estimate of the area above 10 m depth (Mean Sea Level).
    - Depth20m - An estimate of the area above 20 m depth (Mean Sea Level).

    For most Sentinel 2 and some Landsat 8 tiles there are two versions of the DeepFalse imagery based on different collections of images (dates). The R1 imagery is composited from the best available imagery, while the R2 imagery uses the next best set. Splitting the imagery in this way allows two composites to be created from the pool of available imagery, so that any mapped features can be checked against two images. Typically the R2 imagery has more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.

    The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic because keeping them separate allowed better contrast enhancement of individual images when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to what is available through the Google Earth Engine.

    Methods:

    The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:

    1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images with no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images: the best were added to the R1 composite, and the next best images to the R2 composite. Consideration was made as to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed, while adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.

    2. Sun glint was removed from the imagery using estimates of the sun glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).

    3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).

    4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so it does not correct for finer spatial brightness variations. (Steps 3 and 4 are sketched in code after this list.)

    5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of visible tones, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far red and infrared bands for assessing shallow features, such as cays and islands.

    6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole area, one for each depth.
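    The sketch below illustrates steps 3 and 4 in the Earth Engine Python API. It is a minimal sketch, not the published processing code: the COPERNICUS/S2_HARMONIZED collection, the placeholder point and date range, the simple QA60-based mask (standing in for the cloud and shadow mask described under Cloud Masking) and the TARGET_DEEP_DN reference brightness are all illustrative assumptions.

```python
import ee

ee.Initialize()

AOI = ee.Geometry.Point(152.0, -21.0)   # placeholder point in the Coral Sea

# Stand-in for the manually curated scene list: in the real workflow the
# image IDs recorded during review are used instead of this automatic pick.
selected = (
    ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
    .filterBounds(AOI)
    .filterDate('2015-06-23', '2022-01-01')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 1))
    .sort('CLOUDY_PIXEL_PERCENTAGE')
    .limit(20))

def mask_clouds(img):
    # Placeholder QA60 mask; the real processing uses the cloud and shadow
    # mask built from COPERNICUS/S2_CLOUD_PROBABILITY (see Cloud Masking).
    qa = img.select('QA60')
    clear = (qa.bitwiseAnd(1 << 10).eq(0)
             .And(qa.bitwiseAnd(1 << 11).eq(0)))
    return img.updateMask(clear)

# Step 3: median of the masked stack.
composite = selected.map(mask_clouds).median()

# Step 4: shift all pixels by a single offset so that the average deep-water
# brightness matches a common reference level across tiles.
DEEP_WATER_AOI = AOI.buffer(2000)   # assumed open-water region
TARGET_DEEP_DN = 500                # illustrative reference brightness
deep_mean = composite.select('B2').reduceRegion(
    reducer=ee.Reducer.mean(), geometry=DEEP_WATER_AOI, scale=60).get('B2')
normalised = composite.add(ee.Number(TARGET_DEEP_DN).subtract(ee.Number(deep_mean)))
```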

    Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows.

    The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction away from the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds cast short shadows. Small clouds were detected using a 40% cloud probability threshold, projected over 400 m, then buffered by 150 m to expand the final mask.

    A high cloud mask was created to cover the longer shadows cast by taller, larger clouds. These clouds were detected using an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. The result was projected over a 1.5 km distance, followed by a 300 m buffer.

    The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking, and slightly better than the raw COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it also masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
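    A consolidated sketch of the cloud and shadow masking in the Earth Engine Python API is shown below. The thresholds, projection distances and buffer radii follow the values quoted above, but this is a sketch rather than the project's exact code: the use of directionalDistanceTransform to cast shadows away from the sun follows the public s2cloudless tutorial approach, and the pairing of each scene with its COPERNICUS/S2_CLOUD_PROBABILITY image is omitted for brevity.

```python
import ee

ee.Initialize()

def cloud_and_shadow_mask(img, prob_img):
    """Apply the two-level cloud/shadow mask to a Sentinel 2 image.

    prob_img is the matching COPERNICUS/S2_CLOUD_PROBABILITY image.
    Distances are in metres, following the values quoted in the text.
    """
    prob = prob_img.select('probability')
    proj = img.select('B2').projection()

    # Shadows are cast away from the sun. directionalDistanceTransform takes
    # an azimuth and a maximum distance in pixels at the reprojection scale.
    shadow_az = ee.Number(90).subtract(
        ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE')))

    def cast_shadow(cloud, dist_m, scale=100):
        return (cloud.directionalDistanceTransform(shadow_az, dist_m / scale)
                .reproject(crs=proj, scale=scale)
                .select('distance')
                .mask())

    # Low clouds: 40% probability, 400 m shadow projection, 150 m buffer.
    low = prob.gt(40)
    low_mask = (low.Or(cast_shadow(low, 400))
                .focalMax(radius=150, units='meters'))

    # High clouds: 80% probability, 300 m erosion then dilation to drop
    # small clouds, 1.5 km shadow projection, 300 m buffer.
    high = (prob.gt(80)
            .focalMin(radius=300, units='meters')
            .focalMax(radius=300, units='meters'))
    high_mask = (high.Or(cast_shadow(high, 1500))
                 .focalMax(radius=300, units='meters'))

    return img.updateMask(low_mask.Or(high_mask).Not())
```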

    Erosion, dilation and buffer operations were performed at a lower resolution than the native satellite image resolution to improve computational speed. These operations were run at approximately a 4 pixel resolution, making the cloud mask significantly more spatially coarse than the 10 m Sentinel 2 imagery. This resolution was chosen as a trade-off between the coarseness of the mask and the processing time of these operations.

  18. G

    Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR)

    • developers.google.com
    + more versions
    Cite
    Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A (SR) [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED?hl=ko
    Explore at:
    Dataset provided by
    European Union/ESA/Copernicus
    Time period covered
    Mar 28, 2017 - Mar 26, 2025
    Area covered
    Description

    After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the monitoring of vegetation, soil and water cover, as well as observation of inland waterways and coastal areas. Sentinel-2 L2 data are downloaded from CDSE. They were computed by running sen2cor. WARNING: 2017-2018 L2 coverage in the EE collection is not yet global. Assets contain 12 UINT16 spectral bands representing SR scaled by 10000 (unlike the L1 data, there is no B10). There are also several L2-specific bands (see the band list for details). See the Sentinel-2 User Handbook for details. QA60 is a bitmask band that contained rasterised cloud mask polygons until 2022-01-25, when the production of these polygons was discontinued. Starting 2024-02-28, legacy-consistent QA60 bands are constructed from the MSK_CLASSI cloud classification bands. For more details, see the full explanation of how cloud masks are computed. EE asset IDs for Sentinel-2 L2 assets have the following format: COPERNICUS/S2_SR/20151128T002653_20151128T102149_T56MNN. The first numeric part represents the sensing date and time, the second numeric part represents the product generation date and time, and the final 6-character string is a unique granule identifier indicating its UTM grid reference (see MGRS). For help with cloud and/or cloud shadow detection, see the COPERNICUS/S2_CLOUD_PROBABILITY and GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED datasets. For more details on Sentinel-2 radiometric resolution, see this page.

  19. D

    Google Earth Engine Burnt Area Map (GEEBAM)

    • data.nsw.gov.au
    • researchdata.edu.au
    pdf, wms, zip
    Updated Sep 16, 2024
    + more versions
    Cite
    NSW Department of Climate Change, Energy, the Environment and Water (2024). Google Earth Engine Burnt Area Map (GEEBAM) [Dataset]. https://data.nsw.gov.au/data/dataset/google-earth-engine-burnt-area-map-geebam
    Explore at:
    Available download formats: pdf, wms, zip
    Dataset updated
    Sep 16, 2024
    Dataset provided by
    NSW Department of Climate Change, Energy, the Environment and Water
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PLEASE NOTE:

    GEEBAM is an interim product and there is no ground truthing or assessment of accuracy. Fire Extent and Severity Mapping (FESM) data should be used for accurate information on fire severity and loss of biomass in relation to bushfires.

    The intention of this dataset was to provide a rapid assessment of fire impact.

    In collaboration with the University of NSW, the NSW Department of Planning, Industry and Environment (DPIE) Remote Sensing and Landscape Science team has developed a rapid mapping approach to find out where wildfires in NSW have affected vegetation. We call it the Google Earth Engine Burnt Area Map (GEEBAM) and it relies on Sentinel 2 satellite imagery. The product output is a TIFF image with a resolution of 15 m. Burnt Area Classes:

    1. Little change observed between pre and post fire

    2. Canopy unburnt - A green canopy within the fire ground that may act as refugia for native fauna, may be affected by fire

    3. Canopy partially affected - A mix of burnt and unburnt canopy vegetation

    4. Canopy fully affected - The canopy and understorey are most likely burnt

    Using GEEBAM at a local scale requires visual interpretation with reference to satellite imagery. This will ensure the best results for each fire or vegetation class.

    Important Note: GEEBAM is an interim product and there is no ground truthing or assessment of accuracy. It is updated fortnightly.

    Please see Google Earth Engine Burnt Area Factsheet

  20. Map of islands and shallow water areas in the Spermonde Archipelago...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Aug 21, 2023
    Cite
    Alessio Rovere; Alessio Rovere (2023). Map of islands and shallow water areas in the Spermonde Archipelago (Indonesia) [Dataset]. http://doi.org/10.5281/zenodo.4407106
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 21, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alessio Rovere; Alessio Rovere
    Area covered
    Indonesia, Spermonde Archipelago
    Description

    This repository contains data and code used to make a map of islands and shallow water areas for the Spermonde Archipelago, Indonesia. The map was obtained using a two-step classification approach, described below, and simple statistics and graphs were then calculated in Python.

    This work was inspired by the "Geoscientific Project" of Mr. Dennis Flenner, University of Bremen, who classified the same area with SENTINEL2 and QGIS tools. This work was supported through grant SEASCHANGE (RO-5245/1-1) from the Deutsche Forschungsgemeinschaft (DFG) as part of the Special Priority Program (SPP)-1889 “Regional Sea Level Change and Society”.
