Sentinel-1 performs systematic acquisition of bursts in both IW and EW modes. The bursts overlap almost perfectly between different passes and are always located at the same place. With the deployment of the SAR processor S1-IPF 3.4, a new element has been added to the product annotations: the Burst ID, which should help the end user identify a burst area of interest and facilitate searches. The Burst ID map is a complementary auxiliary product. The maps have a validity that covers the entire time span of the mission and they are global, i.e., they also include information for areas where no SAR data is acquired. Each granule contains information about burst and sub-swath IDs, relative orbit and burst polygon, and should allow for an easier link between a certain burst ID in a product and its corresponding geographic location.
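The burst footprints and IDs can be used to locate the burst covering an area of interest. A minimal sketch, assuming the Burst ID map has been exported to a vector file readable by geopandas; the file name and attribute names are assumptions, not the official product schema:

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical export of the Burst ID map as a vector file.
bursts = gpd.read_file("s1_burst_id_map.geojson")

# Point of interest (longitude, latitude).
aoi = Point(11.40, 47.27)

# Bursts whose footprint polygon contains the point, with their IDs,
# sub-swaths and relative orbits (hypothetical column names).
hits = bursts[bursts.contains(aoi)]
print(hits[["burst_id", "subswath", "relative_orbit"]])
```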
Multispectral imagery captured by Sentinel-2 satellites, featuring 13 spectral bands (visible, near-infrared, and short-wave infrared). Available globally since 2018 (Europe since 2017) with 10-60 m spatial resolution and revisit times of 2-3 days at mid-latitudes. Accessible through the EOSDA LandViewer platform for visualization, analysis, and download.
The main objective of this study is to determine whether the first (sentinel) lymph nodes in the drainage pathway of a colonic tumour can be detected at the time of surgery using a new technique. The detection method is to inject a fluorescent dye (indocyanine green) adjacent to the tumour. The dye can then be seen as it fluoresces in light from the near-infrared spectrum, which can be used at the time of laparoscopic (keyhole) surgery. An endoscope is placed in the colon (colonoscopy) during surgery and the fluorescent tracer is injected around the tumour. The mesentery in which the lymph nodes draining the tumour are located will then be examined by laparoscopy, as it is expected that fluorescence will be identified within approximately 5 minutes of the injection. The first lymph node or nodes that take up the fluorescent dye will then be marked by placing a clip or a stitch next to them. After the surgery has been completed and the colon removed, all lymph nodes can be examined microscopically by the pathologist, paying particular attention to whether any tumour cells are present in the sentinel lymph nodes and whether the presence or absence of tumour cells in those nodes accurately reflects the tumour status of the rest of the specimen. Separately, we have developed a technique that allows us to remove only a small area (less than 5 cm) of the colon; using this procedure should reduce the complications that follow traditional surgery, but it requires a method that allows accurate assessment of the lymph nodes draining the tumour. This pilot trial will therefore examine our ability to detect such 'sentinel' lymph nodes so that we can use their status (positive or negative for cancer cells) to determine whether a smaller operation, such as full-thickness localised excision, is adequate treatment for the patient, allowing them to avoid a larger operation.
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was curated for binary classification of Ragi and non-Ragi crop regions using satellite time series data. It comprises three major folders: Ground Data, Satellite Extracted Data, and Time Series Data. This dataset is designed to support agricultural research, crop mapping, and deep learning model development for precision farming in the Tumkuru District, Karnataka, India.
The Ground Data folder contains coordinate information of crop fields in the Tumkuru District, Karnataka, India, sourced from data.gov.in for the years 2018–2023.
Each record includes longitude, latitude, year, and crop_type (Ragi or non-Ragi).
The Satellite Extracted Data folder contains processed data from the Sentinel-1 and Sentinel-2 satellites for the ground coordinates in the Tumkuru District, covering July to December of each year (2018–2023).
For each coordinate in the Ground Data, six months of satellite data were extracted for each year.
Sentinel-1 files are named S1_[Month]_[Year]_Batch[Number].csv (e.g., S1_September_2021_Batch5.csv), and Sentinel-2 files are named S2_[Month]_[Year]_Batch[Number].csv (e.g., S2_September_2021_Batch5.csv); a loading sketch is given after the table below.
The Time Series Data folder contains the input data used for model training. Each record carries an ID, Batch, Month, and location (Longitude, Latitude).
| Folder | Example File Name | Description |
|---|---|---|
| Satellite Extracted Data | S1_September_2021_Batch5.csv or S2_September_2021_Batch5.csv | Batch number indicates sequential processing groups (e.g., Batch5 = 5th group of coordinates). |
| Time Series Data | TimeSeries_July_2021_Batch5_Ragi.csv | Structured naming based on month, year, batch, and Ragi class. |
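A minimal sketch for loading and stacking the monthly batch CSVs with pandas, assuming the naming scheme above; the added month/year/batch columns are illustrative:

```python
import glob
import pandas as pd

frames = []
for path in sorted(glob.glob("Satellite Extracted Data/S1_*_Batch*.csv")):
    # File names follow S1_[Month]_[Year]_Batch[Number].csv
    stem = path.split("/")[-1].removesuffix(".csv")
    _, month, year, batch = stem.split("_")
    df = pd.read_csv(path)
    df["month"], df["year"], df["batch"] = month, int(year), batch
    frames.append(df)

s1 = pd.concat(frames, ignore_index=True)
print(s1.shape)
```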
If you use this dataset in your research, please cite:
Hanif S, Bansal P, S S, N S. Finger millet crop mapping Sentinel dataset. Kaggle. (2025). Available from: https://www.kaggle.com/datasets/safwanmohammed19/ragi-crop-mapping-sentinel-dataset-2025
Additionally, cite the original data sources:
- Ground Data: data.gov.in, Open Government Data License – India.
- Satellite Data: Sentinel-1 and Sentinel-2 data via the Google Earth Engine API, [specify source terms if applicable].
This dataset includes Ground Data obtained from data.gov.in under the Open Government Data License – India, which allow...
Objective: Newer technologies such as near-infrared (NIR) imaging of the fluorescent dye indocyanine green (ICG) and the da Vinci Xi Surgical System have become promising tools for sentinel lymph node (SLN) mapping. This meta-analysis was conducted to comprehensively evaluate the diagnostic value of SLN in assessing lymph nodal metastasis in pelvic malignancies, using ICG with NIR imaging in robotic-assisted surgery.
Materials and Methods: A literature search was conducted using PubMed for studies in English before April 2019. The detection rate, sensitivity of SLN detection of metastatic disease, and factors associated with successful mapping (sample size, study design, mean age, mean body mass index, type of cancer) were synthesized for meta-analysis.
Results: A total of 17 articles including 1,059 patients were included. The reported detection rates of SLN ranged from 76 to 100%, with a pooled average rate of 95% (95% CI: 93–97; 17 studies). The sensitivity of SLN detection of metastatic disease ranged from 50 to 100%, and the pooled sensitivity was 86% (95% CI: 75–94; 8 studies). No complications related to ICG administration were reported.
Conclusions: An NIR imaging system using ICG in robotic-assisted surgery is a feasible and safe method for SLN mapping. Given its promising performance, it can be considered an alternative to complete pelvic lymph node dissection.
Three ET datasets were generated to evaluate the potential integration of Landsat and Sentinel-2 data for improved ET mapping. The first ET dataset was generated by linear interpolation (Lint) of Landsat-based ET fraction (ETf) images acquired before and after the selected image dates. The second ET dataset was generated using the regular SSEBop approach with Landsat imagery only (Lonly). The third ET dataset was generated with the proposed Landsat-Sentinel data fusion (L-S) approach by applying ETf images from both Landsat and Sentinel-2. Two scripts used to generate these three ET datasets are included: one for running the SSEBop model to generate ET maps from Lonly, and another for generating ET maps with the Lint and L-S approaches.
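The Lint approach amounts to weighting the before/after ETf images by their temporal distance to the target date. A minimal numpy sketch of that idea, with illustrative array shapes, dates, and reference ET values that are not the project's actual inputs:

```python
import numpy as np

etf_before = np.random.rand(100, 100)   # ETf from Landsat on day-of-year 180 (illustrative)
etf_after = np.random.rand(100, 100)    # ETf from Landsat on day-of-year 196 (illustrative)
doy_before, doy_after, doy_target = 180, 196, 188

# Linear interpolation weight for the target date.
w = (doy_target - doy_before) / (doy_after - doy_before)
etf_interp = (1 - w) * etf_before + w * etf_after

# Actual ET is then obtained by multiplying the ETf estimate with reference ET.
ref_et = np.full((100, 100), 5.0)       # mm/day, illustrative
et = etf_interp * ref_et
```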
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional potential factors influencing the SLN detection rate (n = 767).
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sentinel2GlobalLULC is a deep learning-ready dataset of RGB images from the Sentinel-2 satellites designed for global land use and land cover (LULC) mapping. Sentinel2GlobalLULC v2.1 contains 194,877 images in GeoTiff and JPEG format corresponding to 29 broad LULC classes. Each image has 224 x 224 pixels at 10 m spatial resolution and was produced by assigning the 25th percentile of all available observations in the Sentinel-2 collection between June 2015 and October 2020 in order to remove atmospheric effects (i.e., clouds, aerosols, shadows, snow, etc.). A spatial purity value was assigned to each image based on the consensus across 15 different global LULC products available in Google Earth Engine (GEE).
Our dataset is structured into 3 main zip-compressed folders, an Excel file with a dictionary for class names and descriptive statistics per LULC class, and a python script to convert RGB GeoTiff images into JPEG format. The first folder called "Sentinel2LULC_GeoTiff.zip" contains 29 zip-compressed subfolders where each one corresponds to a specific LULC class with hundreds to thousands of GeoTiff Sentinel-2 RGB images. The second folder called "Sentinel2LULC_JPEG.zip" contains 29 zip-compressed subfolders with a JPEG formatted version of the same images provided in the first main folder. The third folder called "Sentinel2LULC_CSV.zip" includes 29 zip-compressed CSV files with as many rows as provided images and with 12 columns containing the following metadata (this same metadata is provided in the image filenames):
For seven LULC classes, we could not export from GEE all images that fulfilled a spatial purity of 100% since there were millions of them. In this case, we exported a stratified random sample of 14,000 images and provided an additional CSV file with the images actually contained in our dataset. That is, for these seven LULC classes, we provide these 2 CSV files:
To clearly state the geographical coverage of the images available in this dataset, version v2.1 includes a compressed folder called "Geographic_Representativeness.zip". This folder contains a CSV file for each LULC class that provides the complete list of countries represented in that class. Each CSV file has two columns: the first gives the country code and the second gives the number of images provided for that country and LULC class. In addition to these 29 CSV files, we provide another CSV file that maps each ISO Alpha-2 country code to its full country name.
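As a small example of using these files, the per-class country counts can be aggregated into global per-country totals. A minimal sketch, assuming the 29 per-class CSVs have been extracted to a local folder; the file pattern and the presence of a header row are assumptions:

```python
import glob
import pandas as pd

totals = {}
for path in glob.glob("Geographic_Representativeness/*.csv"):
    # Each per-class file has two columns: country code and number of images.
    # (The extra ISO code-to-name mapping file should be excluded from this glob.)
    df = pd.read_csv(path, header=0, names=["country_code", "n_images"])
    for code, n in zip(df["country_code"], df["n_images"]):
        totals[code] = totals.get(code, 0) + int(n)

# Countries with the most images across all LULC classes.
print(sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10])
```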
© Sentinel2GlobalLULC Dataset by Yassir Benhammou, Domingo Alcaraz-Segura, Emilio Guirado, Rohaifa Khaldi, Boujemâa Achchab, Francisco Herrera & Siham Tabik is marked with Attribution 4.0 International (CC-BY 4.0)
The Barest Earth Sentinel-2 Map Index dataset depicts the 1:250 000 map sheet tile frames that have been used to generate individual tile downloads of the Barest Earth Sentinel-2 product. This web service is designed to be used in conjunction with the Barest Earth Sentinel-2 web service to provide users with direct links for imagery download.
The Sentinel-2 10m Land Use/Land Cover Time Series displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10 m resolution. The World Imagery (Firefly) map is designed to be used as a neutral imagery basemap, with de-saturated colors, that is useful for overlaying other brightly styled layers.
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SEN12 Global Urban Mapping (SEN12_GUM) dataset consists of Sentinel-1 SAR (VV + VH band) and Sentinel-2 MSI (10 spectral bands) satellite images acquired over the same area for 96 training and validation sites and an additional 60 test sites covering unique geographies across the globe. The satellite imagery was acquired as part of the European Space Agency's Earth observation program Copernicus and was preprocessed in Google Earth Engine. Built-up area labels for the 30 training and validation sites located in the United States, Canada, and Australia were obtained from Microsoft's open-access building footprints. The other 66 training sites located outside of the United States, Canada, and Australia are unlabeled but can be used for semi-supervised learning. Labels obtained from the SpaceNet7 dataset are provided for all 60 test sites.
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was generated by the Remote Sensing Group of the TU Wien Department of Geodesy and Geoinformation (https://mrs.geo.tuwien.ac.at/), within a dedicated project by the European Space Agency (ESA). Rights are reserved with ESA. Open use is granted under the CC BY 4.0 license.With this dataset publication, we open up a new perspective on Earth's land surface, providing a normalised microwave backscatter map from spaceborne Synthetic Aperture Radar (SAR) observations. The Sentinel-1 Global Backscatter Model (S1GBM) describes Earth for the period 2016-17 by the mean C-band radar cross section in VV- and VH-polarization at a 10 m sampling, giving a high-quality impression on surface- structures and -patterns.At TU Wien, we processed 0.5 million Sentinel-1 scenes totaling 1.1 PB and performed semi-automatic quality curation and backscatter harmonisation related to orbit geometry effects. The overall mosaic quality excels (the few) existing datasets, with minimised imprinting from orbit discontinuities and successful angle normalisation in large parts of the world. Supporting the designand verification of upcoming radar sensors, the obtained S1GBM data potentially also serve land cover classification and determination of vegetation and soil states, as well as water body mapping.We invite developers from the broader user community to exploit this novel data resource and to integrate S1GBM parameters in models for various variables of land cover, soil composition, or vegetation structure.Please be referred to our peer-reviewed article at TODO: LINK TO BE PROVIDED for details, generation methods, and an in-depth dataset analysis. In this publication, we demonstrate – as an example of the S1GBM's potential use – the mapping of permanent water bodies and evaluate the results against the Global Surface Water (GSW) benchmark.Dataset RecordThe VV and VH mosaics are sampled at 10 m pixel spacing, georeferenced to the Equi7Grid and divided into six continental zones (Africa, Asia, Europe, North America, Oceania, South America), which are further divided into square tiles of 100 km extent ("T1"-tiles). With this setup, the S1GBM consists of 16071 tiles over six continents, for VV and VH each, totaling to a compressed data volume of 2.67 TB.The tiles' file-format is a LZW-compressed GeoTIFF holding 16-bit integer values, with tagged metadata on encoding and georeference. Compatibility with common geographic information systems as QGIS or ArcGIS, and geodata libraries as GDAL is given.In this repository, we provide each mosaic as tiles that are organised in a folder structure per continent. With this, twelve zipped dataset-collections per continent are available for download.Web-Based Data ViewerIn addition to this data provision here, there is a web-based data viewer set up at the facilities of the Earth Observation Data Centre (EODC) under http://s1map.eodc.eu/. It offers an intuitive pan-and-zoom exploration of the full S1GBM VV and VH mosaics. It has been designed to quickly browse the S1GBM, providing an easy and direct visual impression of the mosaics.Code AvailabilityWe encourage users to use the open-source Python package yeoda, a datacube storage access layer that offers functions to read, write, search, filter, split and load data from the S1GBM datacube. The yeoda package is openly accessible on GitHub at https://github.com/TUW-GEO/yeoda.Furthermore, for the usage of the Equi7Grid we provide data and tools via the python package available on GitHub at https://github.com/TUW-GEO/Equi7Grid. 
More details on the grid reference can be found at https://www.sciencedirect.com/science/article/pii/S0098300414001629.
Acknowledgements
This study was partly funded by the project "Development of a Global Sentinel-1 Land Surface Backscatter Model", ESA Contract No. 4000122681/17/NL/MP for the European Union Copernicus Programme. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC). We would further like to thank our colleagues at TU Wien and EODC for supporting us on technical tasks to cope with such a large and complex data set. Last but not least, we appreciate the kind assistance and swift support of the colleagues from the TU Wien Center for Research Data Management.
Objective: To evaluate the utility of sentinel lymph node (SLN) mapping in endometrial cancer (EC) patients in comparison with lymphadenectomy (LND).
Methods: A comprehensive search was performed in the MEDLINE, EMBASE, CENTRAL, OVID and Web of Science databases, and in three clinical trials registration websites, from database inception to September 2020. The primary outcomes covered operative outcomes, nodal assessment, and oncological outcomes. The software RevMan 5.3 was used. Trial sequential analysis (TSA) and Grading of Recommendations Assessment, Development, and Evaluation (GRADE) were performed.
Results: Overall, 5,820 EC patients from 15 studies were pooled in the meta-analysis: SLN group (N = 2,152, 37.0%), LND group (N = 3,668, 63.0%). In the meta-analysis of blood loss, SLN offered an advantage over LND in reducing operative bleeding (I2 = 74%, P < 0.01). The Z-curve of blood loss crossed the trial sequential monitoring boundaries, though it did not reach the TSA sample size. There was no difference between SLN and LND in intra-operative complications (I2 = 7%, P = 0.12). SLN was superior to LND in detecting positive pelvic nodes (P-LN) (I2 = 36%, P < 0.001), even in high-risk patients (I2 = 36%, P = 0.001), while no difference was observed in the detection of positive para-aortic nodes (PA-LN) (I2 = 47%, P = 0.76), even in high-risk patients (I2 = 62%, P = 0.34). The analysis showed no difference between the two groups in the number of resected pelvic nodes (I2 = 99%, P = 0.26). SLN was not associated with a statistically significant difference in overall survival (I2 = 79%, P = 0.94). There was no difference in progression-free survival between SLN and LND (I2 = 52%, P = 0.31), and no difference was observed in recurrence. Based on the GRADE assessment, we considered the quality of the current evidence to be moderate for P-LN biopsy and low for items such as blood loss and PA-LN positivity.
Conclusion: The present meta-analysis underlines that SLN is capable of reducing blood loss during the operation regardless of surgical approach, with firm evidence from TSA. SLN mapping is more targeted, with less node dissection and more detection of positive lymph nodes even in high-risk patients, with conclusive evidence from TSA. Use of SLN yields no survival detriment in EC patients.
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains a map of the main classes of agricultural land use (dominant crop types and other land use types) in Germany for the year 2022. It complements a series of maps that have been produced annually at the Thünen Institute since 2017 on the basis of satellite data. The maps cover the entire open landscape, i.e., the agriculturally used area (UAA) and, e.g., uncultivated areas. The map was derived from time series of Sentinel-1, Sentinel-2, Landsat 8 and additional environmental data. Map production is based on the methods described in Blickensdörfer et al. (2022). All optical satellite data were managed, pre-processed and structured in an analysis-ready data (ARD) cube using the open-source software FORCE - Framework for Operational Radiometric Correction for Environmental monitoring (Frantz, D., 2019), into which SAR and environmental data were integrated. The map extent covers all areas in Germany that are defined as agricultural land, grassland, small woody features, heathland, peatland or unvegetated areas according to ATKIS Basis-DLM (Geobasisdaten: © GeoBasis-DE / BKG, 2020).
Version v201: Post-processing of the maps included a sieve filter as well as a ruleset for the reduction of non-plausible areas using the Basis-DLM and the digital terrain model of Germany (Geobasisdaten: © GeoBasis-DE / BKG, 2015). The final post-processing step comprises the aggregation of the gridded data to homogeneous objects (fields) based on the approach described in Tetteh et al. (2021) and Tetteh et al. (2023).
The maps are available in FlatGeobuf format, which makes downloading the full dataset optional. All data can be accessed directly in QGIS, R, Python or any other supporting software using the URL to the datasets, which will be provided on request. In this way, either the entire map area or only regions of interest can be accessed. QGIS legend files for data visualization can be downloaded separately. Class-specific accuracies for each year are provided in the respective tables. We provide this dataset "as is" without any warranty regarding accuracy or completeness and exclude all liability.
References:
Blickensdörfer, L., Schwieder, M., Pflugmacher, D., Nendel, C., Erasmi, S., & Hostert, P. (2022). Mapping of crop types and crop sequences with combined time series of Sentinel-1, Sentinel-2 and Landsat 8 data for Germany. Remote Sensing of Environment, 269, 112831.
BKG, Bundesamt für Kartographie und Geodäsie (2015). Digitales Geländemodell Gitterweite 10 m. DGM10. https://sg.geodatenzentrum.de/web_public/gdz/dokumentation/deu/dgm10.pdf (last accessed: 28 April 2022).
BKG, Bundesamt für Kartographie und Geodäsie (2020). Digitales Basis-Landschaftsmodell. https://sg.geodatenzentrum.de/web_public/gdz/dokumentation/deu/basis-dlm.pdf (last accessed: 28 April 2022).
Frantz, D. (2019). FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing, 11, 1124.
Tetteh, G.O., Gocht, A., Erasmi, S., Schwieder, M., & Conrad, C. (2021). Evaluation of Sentinel-1 and Sentinel-2 Feature Sets for Delineating Agricultural Fields in Heterogeneous Landscapes. IEEE Access, 9, 116702-116719.
Tetteh, G.O., Schwieder, M., Erasmi, S., Conrad, C., & Gocht, A. (2023). Comparison of an Optimised Multiresolution Segmentation Approach with Deep Neural Networks for Delineating Agricultural Fields from Sentinel-2 Images. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science.
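Since the map is served as FlatGeobuf, a region of interest can be read without downloading the full dataset. A minimal sketch with geopandas; the URL below is a placeholder for the dataset URL provided on request, and the bounding box values are illustrative and must be given in the dataset's coordinate reference system:

```python
import geopandas as gpd

# Placeholder for the dataset URL provided on request.
url = "https://example.org/thuenen_crop_map_germany_2022.fgb"

# FlatGeobuf supports spatial filtering, so only features intersecting the
# bounding box are fetched (values illustrative, in the dataset's CRS).
bbox = (13.0, 52.3, 13.8, 52.7)
fields = gpd.read_file(url, bbox=bbox)
print(len(fields), fields.columns.tolist())
```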
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10 m resolution. Each year is generated from Impact Observatory's deep learning AI land classification model, trained on a massive dataset of billions of human-labeled image pixels developed by the National Geographic Society. The global maps were produced by applying this model to the Sentinel-2 scene collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year.
The algorithm generates LULC predictions for 10 classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years. The years 2018-2021 are based upon a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2021.
Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021
Data Projection: Universal Transverse Mercator (UTM)
Mosaic Projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell Size: 10 m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: January 2022
What can you do with this layer?
Global land use/land cover maps provide information for conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth. Note that because of the land use focus, the layer does not provide the spatial detail of a land cover map for the built area classification: yards, parks, and small groves will appear as built area rather than as trees or rangeland classes. This layer can also be used in analyses that require land use/land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes (a small code sketch of this idea is given at the end of this layer description).
Land cover processing
This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year.
Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.
Class definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built-up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human-made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Also a mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock, and scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.
Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.
Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
For questions please email environment@esri.com
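A minimal sketch of the per-class composition idea mentioned above, using rasterio and numpy; it assumes a local, projected (metre-based) extract of the layer clipped to the area of interest, and the file name is a placeholder:

```python
import numpy as np
import rasterio

# Hypothetical extract of the LULC layer clipped to an area of interest,
# in a projected CRS with 10 m cells.
with rasterio.open("lulc_2021_clip.tif") as src:
    lulc = src.read(1)
    cell_area_ha = abs(src.res[0] * src.res[1]) / 10_000  # 10 m x 10 m -> 0.01 ha

# Report the total area per class code (codes as in the legend above).
classes, counts = np.unique(lulc, return_counts=True)
for code, count in zip(classes, counts):
    print(f"class {code}: {count * cell_area_ha:.1f} ha")
```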
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this upload we share processed crop type datasets from both France and Kenya. These datasets can be helpful for testing and comparing various domain adaptation methods. The datasets are processed, used, and described in this paper: https://doi.org/10.1016/j.rse.2021.112488 (arXiv version: https://arxiv.org/pdf/2109.01246.pdf).
In summary, each point in the uploaded datasets corresponds to a particular location. The label is the crop type grown at that location in 2017. The 70 processed features are based on Sentinel-2 satellite measurements at that location in 2017. The points in the France dataset come from 11 different departments (regions) in Occitanie, France, and the points in the Kenya dataset come from 3 different regions in Western Province, Kenya. Within each dataset there are notable shifts in the distribution of the labels and in the distribution of the features between regions. Therefore, these datasets can be helpful for testing and comparing methods that are designed to address such distributional shifts.
More details on the dataset and processing steps can be found in Kluger et al. (2021). Many of the processing steps were taken to deal with Sentinel-2 measurements that were corrupted by cloud cover. For users interested in the raw multi-spectral time series data who want to deal with cloud cover issues on their own (rather than using the 70 processed features provided here), the raw dataset from Kenya can be found in Yeh et al. (2021), and the raw dataset from France can be made available upon request from the authors of this Zenodo upload.
All of the data uploaded here can be found in "CropTypeDatasetProcessed.RData". We also post the dataframes and tables within that .RData file as separate .csv files for users who do not have R. The contents of each R object (or .csv file) are described in the file "Metadata.rtf".
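For users working from the .csv exports, a minimal pandas sketch for a first look at the label and feature shifts between regions; the file name and column names used here are hypothetical, and the actual ones are described in "Metadata.rtf":

```python
import pandas as pd

# Hypothetical export of one of the processed dataframes.
df = pd.read_csv("france_croptype_processed.csv")

# Hypothetical column names: "region", "crop_type", and 70 feature columns.
feature_cols = [c for c in df.columns if c.startswith("feature_")]

print(df["crop_type"].value_counts(normalize=True))       # label distribution
print(df.groupby("region")[feature_cols].mean().head())   # feature shift across regions
```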
Preferred Citation:
- Kluger, D.M., Wang, S., Lobell, D.B., 2021. Two shifts for crop mapping: Leveraging aggregate crop statistics to improve satellite-based maps in new regions. Remote Sens. Environ. 262, 112488. https://doi.org/10.1016/j.rse.2021.112488
- URL to this Zenodo post: https://zenodo.org/record/6376160
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Patients and tumor characteristics (n = 767).
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is linked to the publication "Recursive classification of satellite imaging time-series: An application to land cover mapping". In this paper, we introduce the recursive Bayesian classifier (RBC), which converts any instantaneous classifier into a robust online method through a probabilistic framework that is resilient to non-informative image variations. To reproduce the results presented in the paper, the RBC-SatImg folder and the code in the GitHub repository RBC-SatImg are required.
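The recursive idea can be illustrated independently of the repository layout: per-image class probabilities from any instantaneous classifier are folded into a running posterior. The following is a generic, minimal sketch of such a recursive Bayesian update, not the exact RBC formulation from the paper:

```python
import numpy as np

def recursive_update(prior, likelihood, epsilon=0.05):
    """Fold the current image's per-pixel class likelihoods into the running posterior.

    Mixing the prior with a uniform distribution (epsilon) keeps the posterior from
    collapsing, so it can still react to genuine land cover change; this forgetting
    mechanism is an assumption of the sketch, not taken from the paper.
    """
    n_classes = prior.shape[-1]
    prior = (1 - epsilon) * prior + epsilon / n_classes
    posterior = prior * likelihood
    return posterior / posterior.sum(axis=-1, keepdims=True)

# Per-pixel class probabilities for a 64x64 image and 3 classes, produced by any
# instantaneous classifier (random placeholders here).
posterior = np.full((64, 64, 3), 1 / 3)            # uninformative initial prior
for t in range(10):
    instantaneous = np.random.dirichlet([1.0, 1.0, 1.0], size=(64, 64))
    posterior = recursive_update(posterior, instantaneous)

labels = posterior.argmax(axis=-1)                  # online land cover map at the latest time step
```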
The RBC-SatImg folder contains:
The Sentinel-2 images and forest labels used in the deforestation detection experiment for the Amazon Rainforest have been obtained from the MultiEarth Challenge dataset.
The following paths can be changed in the configuration file from the GitHub repository as desired. The RBC-SatImg repository is organized as follows:
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The TU Wien flood mapping algorithm is a Sentinel-1-based workflow using Bayes Inference at the pixel level. The algorithm is currently deployed in global operations under the Copernicus GFM project and have been shown to work generally well. However, the current approach has overestimation issues related to imperfect no-flood probability modeling. In a recent study, we proposed and compared an Exponential Filter derived from no-flood references versus the original Harmonic Model. We have conducted experiments on seven study sites for flooded and no-flood scenarios. A full description and discussion are found in the paper: Assessment of Time-Series-Derived No-Flood Reference for SAR-based Bayesian Flood Mapping.
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here we produced the first 10 m resolution urban green space (UGS) map for the main urban clusters across 371 major Latin American cities as of 2017. Our approach applied a supervised classification of Sentinel-2 satellite imagery and UGS samples derived from OpenStreetMap (OSM). The overall accuracy of this UGS map in 11 randomly selected cities was 0.87, evaluated by independently collected validation samples (‘ground truth’). We further improved mapping quality through a visual inspection and additional sample collection. The resulting UGS map enables studies to measure area, spatial configuration, and human exposures to UGS, facilitating studies about the relationship between UGS and human exposures to environmental hazards, public health outcomes, and environmental justice issues in Latin American cities.UGS in this map series includes grass, shrub, forest, and farmland, and non-UGS included buildings, pavement, roads, barren land, and dry vegetation.The UGS map series includes three sets of files:(1) binary UGS maps at 10 m spatial resolution in GEOTIFF format (UGS.zip), with each of the 371 cities being an individual map. Mapped value of 1 indicates UGS, 0 indicates non-UGS, and no data (with value of -32768) indicates areas outside the mapped boundary or water bodies;(2) a shapefile of mapped boundaries (Boundaries.zip). The boundary file contains city name, country name and its ISO-2 country code, and an ID field linking each city's boundary to the corresponding UGS map.(3) .prj files containing projection information for the binary UGS maps and boundary shapefile. The binary UGS maps are projected with World Geodetic System (WGS) 84 / Pseudo-Mercator projected coordinate system (EPSG: 3857), and the boundary shapefile is projected with WGS 1984 geographic coordinate system (EPSG: 4326)Reference: A 10 m resolution urban green space map for major Latin American cities from Sentinel-2 remote sensing images and OpenStreetMap, published by Scientific Data [link].Citation: Ju, Y., Dronova, I., & Delclòs-Alió, X. (2022). A 10 m resolution urban green space map for major Latin American cities from Sentinel-2 remote sensing images and OpenStreetMap. Scientific Data, 9, Article 1. https://doi.org/10.1038/s41597-022-01701-y