Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset is a global resource for machine learning applications in mining area detection and semantic segmentation on satellite imagery. It contains Sentinel-2 satellite images and corresponding mining area masks and bounding boxes for 1,210 sites worldwide. Ground-truth masks are derived from Maus et al. (2022) and Tang et al. (2023), and validated through manual verification to ensure accurate alignment with Sentinel-2 imagery from specific timestamps.
The dataset includes three mask variants: masks from Maus et al. (2022), masks from Tang et al. (2023), and a combined set drawing on both sources (compare the quality table below).
Each tile corresponds to a 2048x2048 pixel Sentinel-2 image, with metadata on mine type (surface, placer, underground, brine & evaporation) and scale (artisanal, industrial). For convenience, the preferred mask dataset is already split into training (75%), validation (15%), and test (10%) sets.
Furthermore, dataset quality was assessed by manually re-validating the test set tiles and correcting any mismatches between the mining polygons and the visually observed true mining area in the images, resulting in the following estimated quality metrics:
Metric (%) | Combined | Maus | Tang |
Accuracy | 99.78 | 99.74 | 99.83 |
Precision | 99.22 | 99.20 | 99.24 |
Recall | 95.71 | 96.34 | 95.10 |
Note that the dataset does not contain the Sentinel-2 images themselves; it references specific Sentinel-2 images. Thus, for any ML application, the images must first be downloaded and stored. For example, Sentinel-2 imagery is available from Microsoft's Planetary Computer and filterable via the STAC API: https://planetarycomputer.microsoft.com/dataset/sentinel-2-l2a. Additionally, the temporal specificity of the data allows integration with other imagery sources from the indicated timestamp, such as Landsat or other high-resolution imagery.
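As an illustration, the referenced scenes can be queried and downloaded from the Planetary Computer STAC endpoint with the pystac-client and planetary-computer packages; the bounding box, date range, and cloud-cover limit below are placeholder values, not parameters taken from this dataset.

```python
# Minimal sketch: query Sentinel-2 L2A scenes on the Planetary Computer.
# The bbox, date range, and cloud-cover limit are illustrative placeholders.
import planetary_computer
from pystac_client import Client

catalog = Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,  # signs asset URLs for download
)
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[12.4, 41.8, 12.6, 42.0],           # placeholder area of interest
    datetime="2020-06-01/2020-06-30",         # placeholder acquisition window
    query={"eo:cloud_cover": {"lt": 10}},
)
for item in search.items():
    print(item.id, item.assets["B04"].href)   # red band asset, ready to download
```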
Source code used to generate this dataset and to use it for ML model training is available at https://github.com/SimonJasansky/mine-segmentation. It includes useful Python scripts, e.g. to download Sentinel-2 images via the STAC API, or to divide tile images (2048x2048 px) into smaller chips (e.g. 512x512 px), as sketched below.
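The chipping step itself reduces to strided slicing; a minimal sketch (not the repository's script) is:

```python
# Minimal chipping sketch: split a (bands, 2048, 2048) tile into 512x512 chips.
import numpy as np

def make_chips(tile: np.ndarray, chip_size: int = 512):
    """Yield non-overlapping (bands, chip_size, chip_size) chips."""
    _, height, width = tile.shape
    for row in range(0, height - chip_size + 1, chip_size):
        for col in range(0, width - chip_size + 1, chip_size):
            yield tile[:, row:row + chip_size, col:col + chip_size]

tile = np.zeros((4, 2048, 2048), dtype=np.uint16)  # placeholder tile array
chips = list(make_chips(tile))                     # 16 chips of (4, 512, 512)
```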
A database schema, a schematic depiction of the dataset generation process, and a map of the global distribution of tiles are provided in the accompanying images.
The C2S-MS Floods Dataset is a dataset of global flood events with labeled Sentinel-1 and Sentinel-2 pairs. It contains 900 pairs (1,800 chips in total) of near-coincident Sentinel-1 and Sentinel-2 chips (512 x 512 pixels) from 18 global flood events. Each chip contains a water label for both Sentinel-1 and Sentinel-2, as well as a cloud/cloud-shadow mask for Sentinel-2. The dataset was constructed by Cloud to Street in collaboration with, and funded by, the Microsoft Planetary Computer team.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This layer displays a global map of land use/land cover (LULC) derived from ESA Sentinel-2 imagery at 10m resolution. Each year is generated with Impact Observatory's deep learning AI land classification model, trained using billions of human-labeled image pixels from the National Geographic Society. The global maps are produced by applying this model to the Sentinel-2 Level-2A image collection on Microsoft's Planetary Computer, processing over 400,000 Earth observations per year.

The algorithm generates LULC predictions for nine classes, described in detail below. The year 2017 has a land cover class assigned for every pixel, but its class is based upon fewer images than the other years, which draw on a more complete set of imagery. For this reason, the year 2017 may have less accurate land cover class assignments than the years 2018-2024.

Key Properties
Variable mapped: Land use/land cover in 2017, 2018, 2019, 2020, 2021, 2022, 2023, 2024
Source Data Coordinate System: Universal Transverse Mercator (UTM) WGS84
Service Coordinate System: Web Mercator Auxiliary Sphere WGS84 (EPSG:3857)
Extent: Global
Source imagery: Sentinel-2 L2A
Cell Size: 10 meters
Type: Thematic
Attribution: Esri, Impact Observatory
Analysis: Optimized for analysis

Class Definitions
Value | Name | Description |
1 | Water | Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains. |
2 | Trees | Any significant clustering of tall (~15 feet or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath). |
4 | Flooded vegetation | Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture. |
5 | Crops | Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land. |
7 | Built Area | Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt. |
8 | Bare ground | Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines. |
9 | Snow/Ice | Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields. |
10 | Clouds | No land cover information due to persistent cloud cover. |
11 | Rangeland | Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants. |

NOTE: The land use focus does not provide the spatial detail of a land cover map. As such, for the built area classification, yards, parks, and groves will appear as built area rather than trees or rangeland classes.

Usage Information and Best Practices

Processing Templates
This layer includes a number of preconfigured processing templates (raster function templates) to provide on-the-fly data rendering and class isolation for visualization and analysis. Each processing template includes labels and descriptions to characterize the intended usage, which may be visualization, analysis, or both.

Visualization
The default rendering on this layer displays all classes. There are a number of on-the-fly renderings/processing templates designed specifically for data visualization. By default, the most recent year is displayed. To discover and isolate specific years for visualization in Map Viewer, try using the Image Collection Explorer.

Analysis
In order to leverage the optimization for analysis, the capability must be enabled by your ArcGIS organization administrator. More information on enabling this feature can be found in the 'Regional data hosting' section of this help doc. Optimized for analysis means this layer does not have size constraints for analysis and is recommended for multisource analysis with other layers optimized for analysis. See this group for a complete list of imagery layers optimized for analysis. Prior to running analysis, users should always provide some form of data selection, either with a layer filter (e.g. for a specific date range, cloud cover percent, mission, etc.) or by selecting specific images. To discover and isolate specific images for analysis in Map Viewer, try using the Image Collection Explorer. Zonal Statistics is a common tool used for understanding the composition of a specified area by reporting the total estimates for each of the classes.

General
If you are new to Sentinel-2 LULC, the Sentinel-2 Land Cover Explorer provides a good introductory user experience for working with this imagery layer. For more information, see this Quick Start Guide. Global land use/land cover maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land use/land cover anywhere on Earth.

Classification Process
These maps include Version 003 of the global Sentinel-2 land use/land cover data product, produced by a deep learning model trained using over five billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses six bands of Sentinel-2 L2A surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map for each year. The input Sentinel-2 L2A data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.
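As a minimal offline analogue of the Zonal Statistics workflow mentioned above, class composition can be tallied from an exported GeoTIFF of this layer; the file name below is a placeholder, not a real product path.

```python
# Offline sketch: tally LULC class composition from an exported GeoTIFF of
# this layer; the file name is an illustrative placeholder.
import numpy as np
import rasterio

CLASS_NAMES = {1: "Water", 2: "Trees", 4: "Flooded vegetation", 5: "Crops",
               7: "Built Area", 8: "Bare ground", 9: "Snow/Ice",
               10: "Clouds", 11: "Rangeland"}

with rasterio.open("io_lulc_2023_export.tif") as src:
    classes = src.read(1)
    pixel_area_m2 = abs(src.transform.a * src.transform.e)  # ~100 m^2 at 10 m

values, counts = np.unique(classes, return_counts=True)
for value, count in zip(values, counts):
    if value in CLASS_NAMES:
        print(f"{CLASS_NAMES[value]:>18}: {count * pixel_area_m2 / 1e6:.2f} km^2")
```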
The Planetary Rover Driving Status Anomaly Detection Dataset is generated and provided by the Moon Wreckers Team from the Carnegie Mellon University Robotics Institute as part of the Roverside Assistance project. This dataset includes five ROS bag files collected from the AK1 rover in different driving statuses and one ROS bag file collected from the AK2 rover in the stopped status. Topic messages exported to .csv files are also provided, including (1) the raw rover odometry estimated from wheel encoders, (2) the filtered rover odometry from both wheel encoders and IMUs, and (3) the odometry from VIVE trackers.
Dataset Card for S2-100K
The S2-100K dataset is a dataset of 100,000 multi-spectral satellite images sampled from Sentinel-2 via the Microsoft Planetary Computer. Copernicus Sentinel data is captured between Jan 1, 2021 and May 17, 2023. The dataset is sampled approximately uniformly over landmass and only includes images without cloud coverage. The dataset is available for research purposes only. If you use the dataset, please cite our paper. The full description is available on the dataset page: https://huggingface.co/datasets/davanstrien/satclip.
Initial training dataset for DeepLandforms: A Deep Learning Computer Vision toolset applied to a prime use case for mapping planetary skylights
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This image dataset contains images of all eight planets in our solar system. Each image is labeled with metadata identifying the planet or object shown.
The goal of this dataset is to train a computer vision model for object detection, specifically for detecting planets from our solar system.
By using this dataset for training, the model will be able to identify planets in images taken from telescopes or other space-based instruments. This has important applications in astronomy and space exploration, as it can help scientists identify and study planets in our solar system and beyond.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
The purpose of this dataset is to train a classifier to detect "dusty" versus "not dusty" patches within browse-resolution HiRISE observations of the Martian surface. Here, "dusty" refers to images in which the view of the surface has been obscured heavily by atmospheric dust.
The dataset contains two sets of 20,000 image patches each from EDR (full resolution) and RDR ("browse" resolution) non-map-projected ("nomap") HiRISE images, with balanced classes. The patches have been split into train (n = 10,000), validation (n = 5,000), and test (n = 5,000) sets such that no two patches from the same HiRISE observation appear in more than one of these subsets. There could be some noise in the labels, but a subset of the validation images have been manually vetted so that label noise rates can be estimated. More details on the dataset creation process are described below.
Generating Candidate Images and Patches
To begin constructing the dataset, the paper "The origin, evolution, and trajectory of large dust storms on Mars during Mars years 24–30 (1999–2011)" by Wang and Richardson (2015) was used to compile a set of time ranges during which global or regional dust storms were known to be occurring on Mars. All HiRISE RDR nomap browse images acquired within these time ranges were then inspected manually to determine sets of images that were (1) almost entirely obscured by dust and (2) almost entirely clear of dust. Then, 10,000 patches were extracted from each of the two subsets of images to form the "dusty" and "not dusty" classes. The extracted patches are 100-by-100 pixels, which roughly corresponds to the width of one CCD channel within the browse image (the width of the raw EDR data products that are stitched together to form a full RDR image). Some small amount of label noise is introduced in this process, since a patch from a mostly dusty image might happen to contain a clear view of the ground, and a patch from a mostly non-dusty image might contain some dust or regions on the surface that are featureless and appear like dusty patches. A set of "vetting labels" is included, comprising human annotations by the author for a subset of the validation patches. These labels can be used to estimate the apparent label noise in the dataset.
Corresponding to the RDR patch dataset, a set of patches was extracted from the same set of EDR images for the "dusty" and "not dusty" classes. EDRs are raw images from the instrument that have not been calibrated or stitched together. To provide some form of normalization, EDR patches are only extracted from the lower half of each EDR, with the upper half being used to perform a basic calibration of the lower half. Basic calibration is done by subtracting the sample (image column) averages of the upper half to remove "striping," then computing the 0.1th and 99.9th percentiles of the remaining values in the upper half and stretching the image patch to 8-bit integer values [0, 255] within that range. The calibration is meant to implement a process that could be performed onboard the spacecraft as the data is being observed (hence, using the top half of the image, acquired first, to calibrate the lower half, which is acquired later). The full-resolution EDR patches, which are 1024 pixels wide, are resized down to 100-by-100 pixels after being extracted so that they roughly match the resolution of the patches from the RDR browse images.
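As a rough illustration of that calibration, the sketch below follows the description above (column means from the upper half, then a percentile stretch to 8 bits); it is an approximation, not the author's exact pipeline.

```python
# Approximate sketch of the EDR calibration described above; not the
# author's exact pipeline.
import numpy as np

def calibrate_edr_patch(edr: np.ndarray) -> np.ndarray:
    """Use the upper half of an EDR to calibrate its lower half."""
    half = edr.shape[0] // 2
    upper = edr[:half].astype(np.float64)
    lower = edr[half:].astype(np.float64)
    column_means = upper.mean(axis=0)
    lower -= column_means                    # remove column "striping"
    residual = upper - column_means
    lo, hi = np.percentile(residual, [0.1, 99.9])
    stretched = np.clip((lower - lo) / (hi - lo), 0.0, 1.0) * 255.0
    return stretched.astype(np.uint8)        # 8-bit values in [0, 255]
```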
Archive Contents
The compressed archive file contains two top-level directories with similar contents, "edr_nomap_full_resized" and "rdr_nomap_browse." The first directory contains the dataset constructed from EDR data and the second contains the dataset constructed from RDR data.
Within each directory, there are "dusty" and "not_dusty" directories containing the image patches from each class, "manifest.csv," and "vetting_labels.csv." The vetting labels file contains a list of manually labeled examples, along with the original labels to make it easier to compute label noise rates. The "manifest.csv" file contains a list of every example, its label, and whether it belongs to the train, validation, or test set.
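For example, the apparent label noise can be estimated by comparing the vetted subset against the original labels; the column names below are assumptions about the CSV layout, not documented fields.

```python
# Sketch: estimate apparent label noise from the vetted subset. The column
# names ("original_label", "vetted_label", "split") are assumptions; adjust
# to the actual headers in the archive.
import pandas as pd

manifest = pd.read_csv("rdr_nomap_browse/manifest.csv")
vetted = pd.read_csv("rdr_nomap_browse/vetting_labels.csv")

noise_rate = (vetted["original_label"] != vetted["vetted_label"]).mean()
print(f"Estimated label noise on the vetted subset: {noise_rate:.1%}")
print(manifest["split"].value_counts())  # train/validation/test counts
```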
An example ID encodes information about where the patch was sampled from the original HiRISE image. As an example from the RDR dataset, the ID "003100_PSP_004440_2125_r4805_c512" can be broken into several parts:
For the EDR dataset, the ID "200000_PSP_004530_1030_RED7_1_r9153" is broken down as follows:
Original Data
The original HiRISE EDR and RDR data is available via the Planetary Data System (PDS), hosted at https://hirise-pds.lpl.arizona.edu/PDS/
The S2-100K dataset is a dataset of 100,000 multi-spectral satellite images and their corresponding locations (latitude / longitude coordinates of the image centroid) sampled from Sentinel-2 via the Microsoft Planetary Computer. Copernicus Sentinel data is captured between Jan 1, 2021 and May 17, 2023. The dataset is sampled approximately uniformly over landmass and only includes images without cloud coverage.
https://dataintelo.com/privacy-and-policy
The global high precision planetary gear reducers market size was valued at approximately USD 2.5 billion in 2023 and is expected to reach USD 4.8 billion by 2032, growing at a compound annual growth rate (CAGR) of 7.5%. The rising demand for automation across industry sectors, combined with technological advances in gear manufacturing, is significantly propelling this market's growth. The precision and efficiency offered by planetary gear reducers make them indispensable in high-performance applications, further driving market appeal.
One of the primary growth factors of the high precision planetary gear reducers market is the increasing adoption of automation technologies across diverse industrial sectors. As industries like automotive, aerospace, and manufacturing strive for greater efficiency, precision, and reliability, the demand for advanced gear reducers is on the rise. High precision planetary gear reducers provide the necessary torque and speed control, making them ideal for modern automation systems. Additionally, the proliferation of robotics in manufacturing and other sectors is significantly boosting the market. Robotics applications demand precise motion control, which these gear reducers efficiently deliver, thereby enhancing operational efficiency and productivity.
Another critical growth factor is the ongoing advancements in gear manufacturing technologies. Innovations such as 3D printing, advanced materials, and computer-aided design (CAD) are transforming the production of planetary gear reducers, making them more efficient and durable. These technological advancements are not only enhancing the performance of gear reducers but also reducing manufacturing costs, which, in turn, is making them more accessible to a broader range of industries. Furthermore, the integration of the Internet of Things (IoT) in machinery and equipment is fostering the development of smart gear reducers, which can monitor and optimize their performance in real-time, thereby contributing to market growth.
The burgeoning aerospace industry is also playing a pivotal role in the market's expansion. High precision planetary gear reducers are extensively used in aerospace applications for their high torque density and reliability. The increasing number of aircraft deliveries and the growing emphasis on improving fuel efficiency and reducing maintenance costs are driving the demand for advanced gear technologies in this sector. Moreover, the increasing focus on space exploration and satellite development is creating new opportunities for the market, as these applications require highly reliable and efficient gear systems.
Precision Speed Reducers are becoming increasingly important in the realm of high-performance machinery, where exactitude and reliability are paramount. These reducers are designed to provide precise control over speed and torque, making them ideal for applications that demand high accuracy and efficiency. In industries such as robotics and aerospace, where precise motion control is critical, the role of precision speed reducers cannot be overstated. They help in reducing the speed of an input shaft, thereby increasing the torque output, which is essential for the smooth operation of sophisticated machinery. As technology continues to evolve, the demand for precision speed reducers is expected to grow, driven by their ability to enhance the performance and efficiency of various mechanical systems.
Regionally, Asia Pacific is expected to dominate the high precision planetary gear reducers market over the forecast period. The rapid industrialization and urbanization in countries like China, India, and Japan are major drivers of market growth in this region. Additionally, the increasing investments in manufacturing and automotive sectors in these countries are further fueling the demand for high precision gear technologies. North America and Europe are also significant markets, driven by the presence of well-established automotive and aerospace industries. The Middle East & Africa and Latin America, while smaller in market size, are expected to witness steady growth due to increasing industrial activities and infrastructure development.
The high precision planetary gear reducers market can be segmented by product type into inline planetary gear reducers, right angle planetary gear reducers, and others. Inline planetary gear reduce
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Earth's stratosphere is similar to the surface of Mars: rarified air that is dry, cold, and irradiated. E-MIST is a balloon payload with four independently rotating skewers that hold known quantities of spore-forming bacteria isolated from spacecraft assembly facilities at NASA. Knowing the survival profile of microbes in the stratosphere can uniquely contribute to NASA Planetary Protection for Mars.
Objectives
1. Collect environmental data in the stratosphere to understand factors impacting microbial survival.
2. Determine the percentage of surviving microbes (compared to starting quantities).
3. Examine microbial DNA mutations induced by stratosphere exposure.
Introduction: We designed, built and flew a self-contained payload, Exposing Microorganisms in the Stratosphere (E-MIST), on a large scientific balloon launched from New Mexico on 24 Aug 2014 [1]. The payload carried Bacillus pumilus SAFR-032, a highly-resilient spore-forming bacterial strain originally isolated from a NASA spacecraft assembly facility. Our balloon test flight evaluated microbiological procedures and overall performance of the novel payload. Measuring the endurance of spacecraft-associated microbes at extreme altitudes may help predict their response on the surface of Mars since the upper atmosphere also exerts a harsh combination of stresses on microbes (e.g., lower pressure, higher irradiation, desiccation and oxidation) [2].
Materials and Methods: Our payload (83.3 cm x 53.3 cm x 25.4 cm; mass 36 kg) was mounted onto the exterior of a high altitude balloon gondola. Four independent "skewers" rotated 180° to expose samples to the stratosphere. During ascent and descent, the samples remained enclosed within dark cylinders at ~25 °C. Each skewer had a base plate holding ten separate aluminum coupons with Bacillus pumilus spores deposited on the surface. Before and after the flight, B. pumilus was sporulated, enumerated and harvested using previously described techniques [3–5].
Major payload components were a lithium-ion battery, an ultraviolet (UV) radiometer (230 to 400 nm), humidity and temperature sensors, and a flight computer. During the test flight, samples remained in a sealed position until the payload reached the lower stratosphere (~20 km above sea level). Next, the flight computer rotated the skewers into the outside air. After a short rotation demonstration (2 seconds), all skewers reverted to the closed position for the remainder of the flight. The payload continued floating at an altitude of 37.6 km for 4 hours before beginning a 23-minute descent on parachute.
Results and Discussion: Our first test flight examined unknowns associated with sample transportation, gondola installation, balloon ascent/descent, and time lingering in the New Mexico desert awaiting payload launch and recovery. We created a batch of experimental control coupons (each containing approximately 1 × 10⁶ spores) used throughout the investigation for ground and flight test purposes. Several treatment categories were evaluated: Lab Ground Coupons (kept in the KSC laboratory); Transported Ground Coupons (traveled to New Mexico and back but not installed in the payload); and Flight Coupons (flown). A subset of coupons from each treatment category was processed, resulting in statistically equivalent viability (Kruskal–Wallis rank-sum test at a 95% confidence level). Taken together, the nearly identical viability across all coupons indicates that balloon flight operations and payload procedures did not influence spore survival. A negative control (blank, sterile coupon) was also flown to verify that payload seals prevented outside contamination.
A species-specific inactivation model that predicts the persistence of microbes on the surface of Mars is one of many possible outcomes from balloon experiments in the stratosphere. The simplicity of the payload design lends itself to customization. Future investigators can easily reconfigure the sample base plate to accommodate other categories of microorganisms or molecules relevant to the Planetary Protection community. If future flights exposed microbes for hours, we would expect to see a rapid inactivation. Smith et al. [6] simulated stratospheric conditions and measured a 99.9% loss of viable Bacillus subtilis spores after only 6 hours of direct UV irradiation. Earth’s stratosphere is extremely dry, cold, irradiated, and hypobaric, and it may be useful for microorganisms isolated from NASA spacecraft assembly facilities to be evaluated in this accessible and robust Mars analog environment.
A second, science-focused test flight, launching from Ft. Sumner, NM, is scheduled for September 2015.
References: [1] D. J. Smith et al. (2014) Gravitational and Space Research, 2, 70–80. [2] D. J. Smith (2013) Astrobiology, 13, 981–990. [3] P. A. Vaishampayan et al. (2012) Astrobiology, 12, 487–497. [4] R. L. Mancinelli and M. Klovstad (2000) Planetary and Space Science, 48, 1093–1097. [5] R. Moeller et al. (2012) Astrobiology, 12, 457–468. [6] D. J. Smith et al. (2011) Aerobiologia, 27, 319–332.
Given the difficulty of handling planetary data, we provide downloadable files in PNG format from the Chang'E-3 and Chang'E-4 missions, along with a set of scripts to perform the conversion for other PDS4 datasets.
This set of images constitutes one of the first available datasets for tackling problems of computer vision and learning in the context of space exploration.
Sentinel-5 Precursor (5P) is a low Earth orbit polar satellite mission dedicated to monitoring air pollution. The satellite carries the state-of-the-art TROPOspheric Monitoring Instrument (TROPOMI). With TROPOMI, Sentinel-5P images air pollutants more accurately and at a higher spatial resolution than any other spaceborne instrument.

This layer provides daily global composite images of atmospheric methane measurements. Methane (CH4) is the second most important contributor to the anthropogenically enhanced greenhouse effect. Roughly three-quarters of methane emissions are anthropogenic, and as such it is important to continue the record of satellite-based measurements. TROPOMI aims at providing CH4 column concentrations with high sensitivity to the Earth's surface, good spatio-temporal coverage, and sufficient accuracy to facilitate inverse modelling of sources and sinks.

Key Properties
Geographic Coverage: Global
Temporal Coverage: 01-Jan-2019 to Present
Spatial Resolution: 5 x 5 km
Temporal Resolution: Daily*
Product Level: Level 3
Units/Physical Quantity: Parts per billion, column averaged dry air mixing ratio
Typical Value Range: 1,600 - 2,000 ppb

*While imagery is collected globally each day, the time between data collection and product availability from ESA can vary. This layer is updated as imagery products are made publicly available. See Data Collection Notes below for additional details on data availability.

Layer Configuration
The default rendering is a colorized rolling 7-day mean. This is applied with the "Colorized Methane (CH4) in Parts Per Billion" processing template, plus a layer definition query to select the most recent 7 days available ('Best' < 8). The default mosaic operator is "mean" (average of all pixels). In practice, this means that unless a user has locked on to a single day/image, the values returned will be the mean for all images displayed. Layer filtering/definition queries can be used to customize your timeframe of interest; if no definition query is included on the layer, a mean for the latest 31 days is displayed. Note that display performance correlates with the number of images: a 31-day mean takes longer to render than a 7-day mean. Users can temporally select/group data by either the 'AcquisitionDate' or 'Best' fields/attributes. The 'Best' field grades each item in the Image Service based on its recentness within the product's record: the most recent daily data is given the lowest values, while the oldest daily data gets the highest values.

Level 3 Processing Overview
The original Sentinel-5P Level 2 data from the European Space Agency, hosted on the Microsoft Planetary Computer, has been re-gridded and merged to create a single Level 3 Cloud Optimized GeoTIFF for each day of collection for each product. The HARP-Convert software package has been used to merge and re-grid the data in order to keep a single grid per orbit (that is, no aggregation across products is performed). The conversion to Level 3 using HARP-Convert utilizes the "bin_spatial" operation, which spatially averages pixel values between overlapping scenes for a given day of collection. In addition to the merging operation, HARP-Convert is used to filter pixel values for quality as well as other variables depending on product.

After the merging and regridding process, OptimizeRasters is invoked to create a Level 3 Cloud Optimized GeoTIFF for a given day's worth of data for each product.

Data Collection Notes
Sentinel-5 Mission Status Reports are available from the Sentinel-5P team at the European Space Agency. These reports provide information on the status of the satellite and the instrument, the associated ground segment, and any mission milestones. From time to time, temporal data gaps may be present due to ground station outages, satellite repositioning, or recalibration of satellite instrumentation. In all cases, any disruption to the Sentinel-5 collection pattern is documented in the Mission Status Reports. Daily collections with any data gaps are omitted from this layer.
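For reference, a Level-2 granule can be regridded in this style with the HARP Python bindings; the quality filter and the ~5 km (0.05 degree) grid spacing below are assumptions consistent with the product description, not the exact production recipe.

```python
# Hedged sketch of Level-2 to Level-3 regridding with the HARP Python
# bindings; QA filter and grid spacing are assumptions, and the input
# file name is a simplified placeholder for an S5P L2 CH4 granule.
import harp

operations = ";".join([
    "CH4_column_volume_mixing_ratio_dry_air_validity>50",  # assumed QA filter
    "bin_spatial(3601, -90, 0.05, 7201, -180, 0.05)",      # global 0.05-degree grid
])
product = harp.import_product("S5P_OFFL_L2__CH4___20190101.nc", operations=operations)
harp.export_product(product, "S5P_L3_CH4_20190101.nc")
```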
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Urania trajectory for the three flybys of Umbriel, in ASCII format. The columns are, in increasing order: Coordinated Universal Time (UTC); seconds past J2000; spacecraft position (x, y, and z) with respect to the center of Uranus in the IAU Uranus frame, in units of Uranus radii (25,559 km); and spacecraft position (x, y, and z) with respect to the center of Umbriel in the IAU Umbriel frame, in units of Umbriel radii (584.7 km).
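A minimal sketch for reading the table, assuming whitespace-delimited columns and a hypothetical file name:

```python
# Sketch: read the ASCII trajectory table; the file name is a placeholder
# and the column order follows the description above.
import numpy as np

cols = ["utc", "sec_past_j2000",
        "x_uranus", "y_uranus", "z_uranus",     # units: Uranus radii (25,559 km)
        "x_umbriel", "y_umbriel", "z_umbriel"]  # units: Umbriel radii (584.7 km)
data = np.genfromtxt("urania_umbriel_flybys.txt", dtype=None,
                     names=cols, encoding="utf-8")

# Convert the Umbriel-relative positions to kilometers
xyz_km = np.column_stack([data["x_umbriel"],
                          data["y_umbriel"],
                          data["z_umbriel"]]) * 584.7
```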
https://whoisdatacenter.com/terms-of-use/
Uncover historical ownership history and changes over time by performing a reverse Whois lookup for the company planet-computer.
Sentinel-2 multispectral and multitemporal atmospherically corrected imagery with on-the-fly visual renderings and indices for visualization and analysis. This includes thirteen multispectral bands at spatial resolutions of 10, 20, and 60 meters. This Imagery Layer is sourced from the Planetary Computer Sentinel-2 Level-2A data catalog on Azure, which is updated daily with new imagery.
Geographic Coverage
Temporal Coverage
Product Level
Image Selection/Filtering
Visual Rendering
Multispectral Bands
Band | Description | Wavelength (µm) | Resolution (m) |
1 | Coastal aerosol | 0.433 - 0.453 | 60 |
2 | Blue | 0.458 - 0.523 | 10 |
3 | Green | 0.543 - 0.578 | 10 |
4 | Red | 0.650 - 0.680 | 10 |
5 | Vegetation Red Edge | 0.698 - 0.713 | 20 |
6 | Vegetation Red Edge | 0.733 - 0.748 | 20 |
7 | Vegetation Red Edge | 0.773 - 0.793 | 20 |
8 | NIR | 0.785 - 0.900 | 10 |
8A | Narrow NIR | 0.855 - 0.875 | 20 |
9 | Water Vapour | 0.935 - 0.955 | 60 |
10 | SWIR - Cirrus | 1.360 - 1.390 | 60 |
11 | SWIR | 1.565 - 1.655 | 20 |
12 | SWIR | 2.100 - 2.280 | 20 |
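As an example of the indices this layer supports, NDVI combines band 4 (Red) and band 8 (NIR); the sketch below assumes the two 10 m bands have been exported locally under placeholder file names.

```python
# Example index: NDVI from band 4 (Red, 10 m) and band 8 (NIR, 10 m).
# The local file names are placeholders for exported band rasters.
import numpy as np
import rasterio

with rasterio.open("S2_B04_10m.tif") as red_src, \
     rasterio.open("S2_B08_10m.tif") as nir_src:
    red = red_src.read(1).astype(np.float32)
    nir = nir_src.read(1).astype(np.float32)

ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # guard against divide-by-zero
```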
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset results from a prototype change alert system (Bunting et al., 2023) developed to identify mangrove losses on a monthly basis. Implemented on the Microsoft Planetary Computer, the Global Mangrove Watch v3.0 mangrove baseline extent map for 2018 (Bunting et al., 2022) was refined and used to define the mangrove extent mask under which potential losses would be identified. The study period was 2018-2022, dictated by the availability of the Copernicus Sentinel-2 imagery used for the study. The alert system is based on optimised NDVI thresholds used to identify mangrove losses, with a temporal scoring system used to filter false positives. The alert system was found to have an estimated overall accuracy of 92.1%, with alert commission and omission estimated at 10.4% and 20.6%, respectively. The alert system is presently limited to Africa, where significant losses were identified in the study period; 90% of the loss alerts were identified in Nigeria, Guinea-Bissau, Madagascar, Mozambique and Guinea. The drivers of those losses vary: in West Africa they are primarily economic activities such as agricultural conversion and infrastructure development, while in East Africa climatic drivers dominate, primarily storm frequency and intensity. Production of the monthly loss alerts for Africa will continue as part of the wider Global Mangrove Watch project, and the spatial coverage is expected to expand over the coming months and years. Future updates of the mangrove loss alerts will be via the Global Mangrove Watch portal: https://www.globalmangrovewatch.org
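Conceptually, the core alert test reduces to an NDVI threshold applied under the mangrove baseline mask; the sketch below is schematic, with an illustrative threshold rather than the optimised values derived in Bunting et al. (2023).

```python
# Schematic sketch of the alert test: flag mangrove-mask pixels whose NDVI
# falls below a threshold. The threshold here is illustrative; the system
# uses optimised thresholds plus temporal scoring to filter false positives.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / np.maximum(nir + red, 1e-6)

def loss_alert_candidates(nir: np.ndarray, red: np.ndarray,
                          mangrove_mask: np.ndarray,
                          threshold: float = 0.4) -> np.ndarray:
    return mangrove_mask & (ndvi(nir, red) < threshold)
```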
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Important Note: This item is in mature support as of February 2023 and will be retired in December 2025. A new version of this item is available for your use. Esri recommends updating your maps and apps to use the new version.

This layer displays change in pixels of the Sentinel-2 10m Land Use/Land Cover product developed by Esri, Impact Observatory, and Microsoft. Available years to compare with 2021 are 2018, 2019, and 2020. By default, the layer shows all comparisons together, in effect showing what changed 2018-2021, but the layer may be changed to show one of three specific pairs of years: 2018-2021, 2019-2021, or 2020-2021.

Showing just one pair of years in ArcGIS Online Map Viewer
To show just one pair of years in ArcGIS Online Map Viewer, create a filter.
1. Click the filter button.
2. Next, click add expression.
3. In the expression dialogue, specify a pair of years with the ProductName attribute. For example, to show only places that changed between 2020 and 2021, use: ProductName is 2020-2021
By default, places that do not change appear as a transparent symbol in ArcGIS Pro, but in ArcGIS Online Map Viewer a transparent symbol may need to be set for these places after a filter is chosen. To do this:
4. Click the styles button.
5. Under unique values, click style options.
6. Click the symbol next to No Change at the bottom of the legend.
7. Click the slider next to "enable fill" to turn the symbol off.

Showing just one pair of years in ArcGIS Pro
To show just one pair of years in ArcGIS Pro, choose one of the layer's processing templates to single out a particular pair of years. The processing template applies a definition query that works in ArcGIS Pro.
1. To choose a processing template, right-click the layer in the table of contents for ArcGIS Pro and choose properties.
2. In the dialogue that comes up, choose the tab that says processing templates.
3. On the right, where it says processing template, choose the pair of years you would like to display. The processing template will stay applied for any analysis you may want to perform as well.

How the change layer was created, combining LULC classes from two years
Impact Observatory, Esri, and Microsoft used artificial intelligence to classify the world in 10 Land Use/Land Cover (LULC) classes for the years 2017-2021. Mosaics serve the following sets of change rasters in a single global layer:
- Change between 2018 and 2021
- Change between 2019 and 2021
- Change between 2020 and 2021
To make this change layer, Esri used an arithmetic operation combining the cells from a source year and 2021 to make a change index value: ((from year * 16) + to year). In the example of the change between 2020 and 2021, the from-year value (2020) was multiplied by 16, then added to the to-year value (2021). The combined number is then served as an index in an 8-bit unsigned mosaic with an attribute table which describes what changed or did not change in that timeframe.
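Since the index must fit in an 8-bit mosaic, "from year" and "to year" are read here as each cell's LULC class value (1-11) in the earlier year and in 2021 respectively; a hedged sketch of the encoding and its inverse:

```python
# Hedged sketch of the change-index encoding described above, interpreting
# the formula per cell: earlier-year class value in the high nibble,
# 2021 class value in the low nibble, so the result fits in 8 bits.
import numpy as np

def change_index(from_classes: np.ndarray, to_classes: np.ndarray) -> np.ndarray:
    return from_classes.astype(np.uint8) * 16 + to_classes.astype(np.uint8)

def decode_index(index: np.ndarray):
    return index // 16, index % 16  # (earlier-year class, 2021 class)

# Example: a cell that changed from Trees (2) in 2020 to Built Area (7) in 2021
idx = change_index(np.array([2]), np.array([7]))  # -> array([39], dtype=uint8)
print(decode_index(idx))                          # -> (array([2]), array([7]))
```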
Variable mapped: Change in land cover between 2018, 2019, or 2020 and 2021
Data Projection: Universal Transverse Mercator (UTM)
Mosaic Projection: WGS84
Extent: Global
Source imagery: Sentinel-2
Cell Size: 10m (0.00008983152098239751 degrees)
Type: Thematic
Source: Esri Inc.
Publication date: January 2022

What can you do with this layer?
Global LULC maps provide information on conservation planning, food security, and hydrologic modeling, among other things. This dataset can be used to visualize land cover anywhere on Earth. This layer can also be used in analyses that require land cover input. For example, the Zonal Statistics tools allow a user to understand the composition of a specified area by reporting the total estimates for each of the classes.

Land Cover processing
This map was produced by a deep learning model trained using over 5 billion hand-labeled Sentinel-2 pixels, sampled from over 20,000 sites distributed across all major biomes of the world. The underlying deep learning model uses 6 bands of Sentinel-2 surface reflectance data: visible blue, green, red, near infrared, and two shortwave infrared bands. To create the final map, the model is run on multiple dates of imagery throughout the year, and the outputs are composited into a final representative map.

Processing platform
Sentinel-2 L2A/B data was accessed via Microsoft's Planetary Computer and scaled using Microsoft Azure Batch.

Class definitions
1. Water: Areas where water was predominantly present throughout the year; may not cover areas with sporadic or ephemeral water; contains little to no sparse vegetation, no rock outcrop nor built up features like docks; examples: rivers, ponds, lakes, oceans, flooded salt plains.
2. Trees: Any significant clustering of tall (~15-m or higher) dense vegetation, typically with a closed or dense canopy; examples: wooded vegetation, clusters of dense tall vegetation within savannas, plantations, swamp or mangroves (dense/tall vegetation with ephemeral water or canopy too thick to detect water underneath).
4. Flooded vegetation: Areas of any type of vegetation with obvious intermixing of water throughout a majority of the year; seasonally flooded area that is a mix of grass/shrub/trees/bare ground; examples: flooded mangroves, emergent vegetation, rice paddies and other heavily irrigated and inundated agriculture.
5. Crops: Human planted/plotted cereals, grasses, and crops not at tree height; examples: corn, wheat, soy, fallow plots of structured land.
7. Built Area: Human made structures; major road and rail networks; large homogenous impervious surfaces including parking structures, office buildings and residential housing; examples: houses, dense villages / towns / cities, paved roads, asphalt.
8. Bare ground: Areas of rock or soil with very sparse to no vegetation for the entire year; large areas of sand and deserts with no to little vegetation; examples: exposed rock or soil, desert and sand dunes, dry salt flats/pans, dried lake beds, mines.
9. Snow/Ice: Large homogenous areas of permanent snow or ice, typically only in mountain areas or highest latitudes; examples: glaciers, permanent snowpack, snow fields.
10. Clouds: No land cover information due to persistent cloud cover.
11. Rangeland: Open areas covered in homogenous grasses with little to no taller vegetation; wild cereals and grasses with no obvious human plotting (i.e., not a plotted field); examples: natural meadows and fields with sparse to no tree cover, open savanna with few to no trees, parks/golf courses/lawns, pastures. Mix of small clusters of plants or single plants dispersed on a landscape that shows exposed soil or rock; scrub-filled clearings within dense forests that are clearly not taller than trees; examples: moderate to sparse cover of bushes, shrubs and tufts of grass, savannas with very sparse grasses, trees or other plants.

Citation
Karra, Kontgis, et al. "Global land use/land cover with Sentinel-2 and deep learning." IGARSS 2021-2021 IEEE International Geoscience and Remote Sensing Symposium. IEEE, 2021.

Acknowledgements
Training data for this project makes use of the National Geographic Society Dynamic World training dataset, produced for the Dynamic World Project by National Geographic Society in partnership with Google and the World Resources Institute.

For questions please email environment@esri.com
Fast flood extent monitoring with SAR change detection using Google Earth Engine

This dataset develops a tool for near real-time flood monitoring through a novel combination of multi-temporal and multi-source remote sensing data. We use a SAR change detection and thresholding method, and apply sensitivity analytics and thresholding calibration, using SAR-based and optical-based indices in a format that is streamlined, reproducible, and geographically agile. We leverage the massive repository of satellite imagery and planetary-scale geospatial analysis tools of GEE to devise a flood inundation extent model that is both scalable and replicable. The flood extents from the 2021 Hurricane Ida and the 2017 Hurricane Harvey were selected to test the approach. The methodology provides a fast, automatable, and geographically reliable tool for assisting decision-makers and emergency planners using near real-time multi-temporal satellite SAR data sets. GEE code was developed by Ebrahim Hamidi and reviewed by Brad G. Peter; figures were created by Brad G. Peter. This tool accompanies a publication: E. Hamidi, B. G. Peter, D. F. Muñoz, H. Moftakhari and H. Moradkhani, "Fast Flood Extent Monitoring with SAR Change Detection Using Google Earth Engine," IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2023.3240097.

(Accompanying figures: GEE input datasets, methodology flowchart, and sensitivity analysis.)

GEE code (multi-source and multi-temporal flood monitoring): https://code.earthengine.google.com/7f4942ab0c73503e88287ad7e9187150
The threshold sensitivity analysis is automated in this GEE code: https://code.earthengine.google.com/a3fbfe338c69232a75cbcd0eb6bc0c8e
The above scripts can be run independently. The threshold automation code identifies the optimal threshold values for use in the flood monitoring procedure.
GEE code for Hurricane Harvey, east of Houston (JavaScript):

```javascript
// Study Area Boundaries
var bounds = /* color: #d63000 */ ee.Geometry.Polygon(
    [[[-94.5214452285728, 30.165244882083663],
      [-94.5214452285728, 29.56024879238989],
      [-93.36650748443218, 29.56024879238989],
      [-93.36650748443218, 30.165244882083663]]], null, false);

// [before_start, before_end, after_start, after_end, k_ndfi, k_ri, k_diff, mndwi_threshold]
var params = ['2017-06-01', '2017-06-15', '2017-08-01', '2017-09-10', 1.0, 0.25, 0.8, 0.4];

// SAR Input Data
var before_start = params[0];
var before_end = params[1];
var after_start = params[2];
var after_end = params[3];
var polarization = "VH";
var pass_direction = "ASCENDING";

// k Coefficient Values for NDFI, RI and DII SAR Indices
// (Flooded Pixel Thresholding; Equation 4)
var k_ndfi = params[4];
var k_ri = params[5];
var k_diff = params[6];

// MNDWI Flooded Pixel Threshold Criteria
var mndwi_threshold = params[7];

// Datasets
var dem = ee.Image("USGS/3DEP/10m").select('elevation');
var slope = ee.Terrain.slope(dem);
var swater = ee.Image('JRC/GSW1_0/GlobalSurfaceWater').select('seasonality');
var collection = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', polarization))
  .filter(ee.Filter.eq('orbitProperties_pass', pass_direction))
  .filter(ee.Filter.eq('resolution_meters', 10))
  .filterBounds(bounds)
  .select(polarization);

var before = collection.filterDate(before_start, before_end);
var after = collection.filterDate(after_start, after_end);
print("before", before);
print("after", after);

// Generating Reference and Flood Multi-temporal SAR Data:
// Mean Before and Min After
var mean_before = before.mean().clip(bounds);
var min_after = after.min().clip(bounds);
var max_after = after.max().clip(bounds);
var mean_after = after.mean().clip(bounds);

Map.addLayer(mean_before, {min: -29.264204107025904, max: -8.938093778644141, palette: []}, "mean_before", 0);
Map.addLayer(min_after, {min: -29.29334290990966, max: -11.928313976797138, palette: []}, "min_after", 1);

// Flood Identification: NDFI
var ndfi = mean_before.abs().subtract(min_after.abs())
  .divide(mean_before.abs().add(min_after.abs()));
var ndfi_filtered = ndfi.focal_mean({radius: 50, kernelType: 'circle', units: 'meters'});

// NDFI Normalization
var ndfi_min = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.min(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
});
var ndfi_max = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.max(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
});
var ndfi_rang = ee.Number(ndfi_max.get('VH')).subtract(ee.Number(ndfi_min.get('VH')));
var ndfi_subtctMin = ndfi_filtered.subtract(ee.Number(ndfi_min.get('VH')));
var ndfi_norm = ndfi_subtctMin.divide(ndfi_rang);

Map.addLayer(ndfi_norm, {min: 0.3862747346632676, max: ...
// [script truncated in the source listing]
```

Visit https://dataone.org/datasets/sha256%3A5a49b694a219afd20f5b3b730302b6d76b7acb1cc888f47d63648df8acd4d97e for complete metadata about this dataset.