Multispectral imagery captured by Sentinel-2 satellites, featuring 13 spectral bands (visible, near-infrared, and short-wave infrared). Available globally since 2018 (Europe since 2017) with 10-60 m spatial resolution and revisit times of 2-3 days at mid-latitudes. Accessible through the EOSDA LandViewer platform for visualization, analysis, and download.
505 Economics is on a mission to make academic economics accessible. We've developed the first monthly sub-national GDP data for EU and UK regions from January 2015 onwards.
Our GDP dataset uses luminosity as a proxy for GDP. The brighter a place, the more economic activity that place tends to have.
We produce the data using high-resolution night time satellite imagery and Artificial Intelligence.
This builds on our academic research at the London School of Economics, and we're producing the dataset in collaboration with the European Space Agency BIC UK.
We have published peer-reviewed academic articles on the use of luminosity as an accurate proxy for GDP.
Key features:
The dataset can be used by:
We have created this dataset for all UK sub-national regions, the 28 EU countries, and Switzerland.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The complete coverage of the Canadian territory is gradually being established. It includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, the derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps; for DSM datasets, the derived data available are shaded relief, color relief and color shaded relief maps.

The productive forest line is used to separate the northern and the southern parts of the country. This line is approximate and may change based on requirements. In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. The datasets at a 1 m resolution cover an area of 10 km x 10 km while datasets at a 2 m resolution cover an area of 20 km by 20 km. In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum or the UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km by 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data. In this case, these products will be generated with the same specifications as those generated from airborne LiDAR in the southern part of the country.

The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada. Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is acquired by project, there is no integration or edge-matching done between projects; the tiles are aligned within each project.

The High Resolution Digital Elevation Model (HRDEM) product is part of the CanElevation Series created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of the National Elevation Data Strategy. Refer to the "Supporting Document" section to access the list of the different partners, including links to their respective data.
Open Government Licence – Ontario: https://www.ontario.ca/page/open-government-licence-ontario
The Ontario Imagery Web Map Service (OIWMS) is an open data service available to everyone free of charge. It provides instant online access to the most recent, highest quality, province-wide imagery. GEOspatial Ontario (GEO) makes this data available as an Open Geospatial Consortium (OGC) compliant web map service or as an ArcGIS map service. Imagery was compiled from many different acquisitions, which are detailed in the Ontario Imagery Web Map Service Metadata Guide linked below. Instructions on how to use the service can also be found in the Imagery User Guide linked below.

Note: This map displays the Ontario Imagery Web Map Service Source, a companion ArcGIS web map service to the Ontario Imagery Web Map Service. It provides an overlay that can be used to identify acquisition-relevant information such as sensor source and acquisition date. OIWMS contains several hierarchical layers of imagery, with coarser, less detailed imagery that draws at broad scales, such as province-wide zooms, and finer, more detailed imagery that draws when zoomed in, such as city-wide zooms. The attributes associated with this data describe at what scales (based on a computer screen) the specific imagery datasets are visible.

Available Products
- Ontario Imagery OGC Web Map Service – public link
- Ontario Imagery ArcGIS Map Service – public link
- Ontario Imagery Web Map Service Source – public link
- Ontario Imagery ArcGIS Map Service – OPS internal link
- Ontario Imagery Web Map Service Source – OPS internal link

Additional Documentation
- Ontario Imagery Web Map Service Metadata Guide (PDF)
- Ontario Imagery Web Map Service Copyright Document (PDF)
- Imagery User Guide (Word)

Status: Completed – production of the data has been completed
Maintenance and Update Frequency: Annually – data is updated every year
Contact: Ontario Ministry of Natural Resources, Geospatial Ontario, imagery@ontario.ca
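For programmatic access, an OGC-compliant service such as OIWMS can be queried with a standard WMS client. The sketch below uses OWSLib; the endpoint URL and layer name are placeholders rather than the actual OIWMS values, which are listed in the Metadata Guide above.

```python
# Minimal sketch: fetching an image from an OGC WMS endpoint with OWSLib.
# The endpoint URL and layer name are placeholders -- substitute the actual
# values from the Ontario Imagery Web Map Service documentation.
from owslib.wms import WebMapService

wms = WebMapService("https://example.ontario.ca/imagery/wms", version="1.3.0")  # placeholder URL
print(list(wms.contents))  # list the layers advertised by the service

img = wms.getmap(
    layers=["ontario_imagery"],        # placeholder layer name
    srs="EPSG:4326",
    bbox=(-80.0, 43.5, -79.0, 44.5),   # lon/lat bounding box
    size=(1024, 1024),
    format="image/png",
)
with open("ontario_imagery_sample.png", "wb") as f:
    f.write(img.read())
```

The same GetMap request can also be issued by pointing a desktop GIS client such as QGIS or ArcGIS at the service endpoint.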
Creative Commons Attribution 4.0 International: https://data.linz.govt.nz/license/attribution-4-0-international/
This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.
The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2023 and April 2024.
Data comprises:
• 450 ortho-rectified RGB GeoTIFF images in NZTM projection, tiled into the LINZ Standard 1:50000 tile layout
• Satellite sensors: ESA Sentinel-2A and Sentinel-2B
• Acquisition dates: September 2023 - April 2024
• Spectral resolution: R, G, B
• Spatial resolution: 10 meters
• Radiometric resolution: 8-bits (downsampled from 12-bits)
This is a visual product only. The data has been downsampled from 12-bits to 8-bits, and the original values of the images have been modified for visualisation purposes.
If you require the 12-bit imagery (R, G, B, NIR bands), send your request to imagery@linz.govt.nz
CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains 40,000 computer-generated RGB image and mask pairs of hypothetical floods in 100 areas around the globe. The dataset was generated automatically using the Unity flood simulator I created as part of my master's thesis.
The Unity flood simulator uses the Mapbox Unity API to query real-world aerial and satellite imagery from any location worldwide. Additionally, Mapbox provides elevation data that allows the imagery to take the shape of the corresponding terrain. Realistic floods are procedurally added by placing simulated water objects in the simulated landscape. The goal of this dataset is to provide a large number of training examples for training semantic segmentation models on aerial data.
The data is organized into three folders x, y, and y_clean. For most applications using x and y_clean is recommended. The folder x contains 40,000 RGB training images. The images were collected at 100 locations, with 400 images being collected per geographic location. Each image follows the naming convention city_number. For example, the 20th image collected over Tulsa would be labeled Tulsa_20.png. The folders y and y_clean contain RGB segmentation masks produced by Unity. The masks share the same naming convention so that a training pair could be loaded at generatedData1/x/Tulsa_20.png and generatedData1/y/Tulsa_20.png. The class mappings can be seen in the table below.
Class mapping table: https://user-images.githubusercontent.com/24756984/175069414-ddde9371-fa66-4eb1-bb6a-2f3d36b786f0.png
In some cases, the masks produced by Unity blend RGB values near the borders of different classifications. The y_clean folder provides masks where a post-processing script assigns blended values to the nearest valid neighbor. Scripts to load and clean the data can be found here; example Keras loaders can also be found here.
It is important to note that images taken over the same city will have overlapping portions (displacement between images is around 137 pixels, but it depends on terrain). It is therefore recommended that the data is split by city during training: for example, place Tulsa and Tokyo in training and Akita in testing, rather than placing the first 300 images of each city in training and the last 100 in testing. This avoids contamination of the test and validation sets.
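A minimal sketch of the recommended city-wise split, assuming the generatedData1/x and generatedData1/y_clean layout and the city_number.png naming convention described above:

```python
# Minimal sketch of a city-wise train/test split for the flood dataset.
# Assumes the generatedData1/x and generatedData1/y_clean folder layout and
# the city_number.png naming convention described in the dataset description.
import glob
import os
import random

x_paths = glob.glob("generatedData1/x/*.png")
cities = sorted({os.path.basename(p).rsplit("_", 1)[0] for p in x_paths})

random.seed(0)
random.shuffle(cities)
n_test = max(1, int(0.2 * len(cities)))   # hold out ~20% of the cities
test_cities = set(cities[:n_test])

def pair(path):
    """Return the (image, mask) path pair for one sample."""
    return path, path.replace("/x/", "/y_clean/")

def city_of(path):
    return os.path.basename(path).rsplit("_", 1)[0]

train = [pair(p) for p in x_paths if city_of(p) not in test_cities]
test = [pair(p) for p in x_paths if city_of(p) in test_cities]
print(len(train), "training pairs,", len(test), "test pairs")
```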
Code used to train DeepLabV3+ and VGG models using this dataset can be found at the related repository here. Additionally, the GitHub page for this dataset can be found here.
In Puerto Rico, tens of thousands of landslides, slumps, debris flows, rock falls, and other slope failures were triggered by Hurricane María, which made landfall on 20 September 2017. “Landslide” is used here and below to represent all types of slope failures. This dataset is a point shapefile of landslide headscarps identified across Puerto Rico using georeferenced aerial and satellite imagery recorded following the hurricane. The imagery used includes publicly available aerial imagery obtained by the Federal Emergency Management Agency (FEMA; Quantum Spatial, Inc., 2017), aerial imagery obtained by the National Oceanic and Atmospheric Administration (NOAA; NOAA, 2017), and several WorldView satellite imagery datasets available from DigitalGlobe, Inc. The FEMA imagery was recorded by Sanborn and Quantum Spatial, Inc. between 25 September and 27 October 2017, has a pixel resolution of approximately 15 cm, and includes over 6,000 image tiles that cover approximately 97% of the large island and 100% of Vieques. The NOAA imagery was recorded 22-26 September 2017, also has a resolution of approximately 15 cm, and covers about 10% of the large island, 60% of Vieques, and 100% of Culebra. The DigitalGlobe imagery used in this project was recorded during September-November 2017, has a pixel resolution of approximately 50 cm, and covers approximately 99% of the large island and 35% of Vieques. DigitalGlobe images were acquired via the DigitalGlobe Open Data Program, the DigitalGlobe Foundation imagery grant, and via partnership with the U.S. Geological Survey. No imagery was examined for Desecheo, Mona, Monito, Caja de Muertos, or other smaller islands.
The FEMA imagery was usually used first for landslide mapping due to its high resolution and more accurate georeferencing. For almost every location, there were multiple images available due to overlap in each dataset and overlap between different datasets. This overlap was helpful when clouds or shadows obscured the view of the ground surface in one or more images for a given location. Additional oblique and un-georeferenced aerial imagery recorded by the Civil Air Patrol (ArcGIS, 2017) was consulted, if needed. Comparing the post-event imagery with pre-event imagery available through the ESRI ArcGIS basemap layer and/or Google Earth was useful to accurately identify sites that failed during September 2017; such comparisons were made for landslides that appeared potentially older. Some landslides in our inventory may have occurred prior to Hurricane María—potentially triggered by Hurricane Irma which passed northeast of Puerto Rico two weeks earlier—or between the time of the hurricane and when photographs were taken. UTM Zone 19N projection with WGS 84 datum was used throughout the mapping process.
The inventory process began with creation of a first draft by a team of 15 people. This draft was subsequently checked for quality and revised by the three leaders of the mapping effort. Each identified landslide is represented by a point located at the center of its headscarp. The horizontal position of headscarp points was carefully selected using multiple overlapping images (usually available) and other geospatial datasets including lidar acquired during 2015 and available from the U.S. Geological Survey 3DEP program, the U.S. Census Bureau TIGER road shapefile, and the National Hydrology Dataset flowline shapefile. Mapping was generally performed at 1:1000 scale. Given errors in georeferencing and landslides poorly resolved in imagery, we conclude that headscarp point locations are generally accurate within 3 m.
Municipality (municipio) and barrio names in which each landslide occurred are included in the attribute table of the shapefile, as are the geographic coordinates of each point in decimal degrees (WGS 84 datum). Landslides were identified in 72 of the 78 municipalities of Puerto Rico. No landslides were documented on the island municipalities of Culebra or Vieques. On the main island of Puerto Rico, 64% of land experienced 0-3 landslides per square kilometer, 26% experienced 3-25 landslides per square kilometer, and 10% experienced more than 25 landslides per square kilometer. Concentrated zones of more than 100 landslides per square kilometer are in the municipalities of Maricao, Utuado, Jayuya, and Corozal. Of the ten barrios where more than 100 landslides per square kilometer were catalogued, eight are in Utuado. The drainage basins with the highest density of landslides are the Rio Grande de Arecibo and Rio Grande de Añasco watersheds, each with over 30 landslides per square kilometer. Six out of the seven sub-basins with more than 50 landslides per square kilometer are in the Rio Grande de Arecibo basin. We identified and mapped 71,431 landslides in total.
The College of Arts and Sciences at the University of Puerto Rico in Mayagüez is thanked for providing release time to K.S. Hughes to permit partial development of this dataset.
References
ArcGIS, 2017, CAP Imagery – Hurricane Maria: https://www.arcgis.com/home/webmap/viewer.html?webmap=3218d1cb022d4534be0c7d6833c0adf1. Last accessed 18 June 2019.
NOAA, 2017, Hurricane MARIA Imagery: https://storms.ngs.noaa.gov/storms/maria/index.html. Last accessed 18 June 2019.
Quantum Spatial, Inc., 2017, FEMA PR Imagery: https://s3.amazonaws.com/fema-cap-imagery/Others/Maria. Last accessed 18 June 2019.
This dataset contains cloud-free, low-tide composite satellite images for the tropical Australia region based on 10 m resolution Sentinel 2 imagery from 2018–2023. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in tropical Australia. This collection contains composite imagery for 200 Sentinel 2 tiles around the tropical Australian coast.

This dataset uses two styles:
1. a true colour contrast and colour enhancement style (TrueColour) using the bands B2 (blue), B3 (green), and B4 (red)
2. a near infrared false colour style (Shallow) using the bands B5 (red edge), B8 (near infrared), and B12 (short wave infrared).

These styles are useful for identifying shallow features along the coastline. The Shallow false colour styling is optimised for viewing the first 3 m of the water column, providing an indication of water depth. This is because the different far red and near infrared bands used in this styling have limited penetration of the water column. In clear waters the maximum penetration of each of the bands is 3-5 m for B5, 0.5-1 m for B8 and < 0.05 m for B12. As a result, the image changes in colour with the depth of the water, with the following colours indicating the following depths:
- White, brown, bright green, red, light blue: dry land
- Grey brown: damp intertidal sediment
- Turquoise: 0.05-0.5 m of water
- Blue: 0.5-3 m of water
- Black: deeper than 3 m
In very turbid areas the visible limit will be slightly reduced.

Change log: Changes to this dataset and metadata will be noted here:
- 2024-07-24 - Add tiles for the Great Barrier Reef
- 2024-05-22 - Initial release for low-tide composites using 30th percentile (Git tag: "low_tide_composites_v1")

Methods: The satellite image composites were created by combining multiple Sentinel 2 images using Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
   - tile ID
   - maximum cloud cover 0.1%
   - date between '2018-01-01' and '2023-12-31'
   - asset_size > 100000000 (remove small fragments of tiles)
2. Remove high sun-glint images (see "High sun-glint image detection" for more information).
3. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
4. Iterate over all images in the split collections to predict the tide elevation for each image from the image timestamp (see "Tide prediction" for more information).
5. Remove images where the tide elevation is above mean sea level, to make sure no high-tide images are included.
6. Select the 10 images with the lowest tide elevation.
7. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
8. Remove sun-glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
9. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
10. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
11. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking (Lawrey et al. 2022).
12. The final composite was exported as a cloud-optimised 8-bit GeoTIFF.

Note: The following tiles were generated with different settings as they did not have enough images to create a composite with the standard settings:
- 51KWA: no high sun-glint filter
- 54LXP: maximum cloud cover set to 1%
- 54LYK: maximum cloud cover set to 2%
- 54LYM: maximum cloud cover set to 5%
- 54LYN: maximum cloud cover set to 1%
- 54LYQ: maximum cloud cover set to 5%
- 54LYP: maximum cloud cover set to 1%
- 54LZL: maximum cloud cover set to 1%
- 54LZM: maximum cloud cover set to 1%
- 54LZN: maximum cloud cover set to 1%
- 54LZQ: maximum cloud cover set to 5%
- 54LZP: maximum cloud cover set to 1%
- 55LBD: maximum cloud cover set to 2%
- 55LBE: maximum cloud cover set to 1%
- 55LCC: maximum cloud cover set to 5%
- 55LCD: maximum cloud cover set to 1%

High sun-glint image detection: Images with high sun-glint can lead to lower quality composite images. To detect high sun-glint images, a mask is created for all pixels above a high reflectance threshold in the near-infrared and short-wave infrared bands. The proportion of masked pixels is then calculated and compared against a sun-glint threshold; if the image exceeds this threshold, it is filtered out of the image collection. As we are only interested in sun-glint on water pixels, a water mask is created using NDWI before creating the sun-glint mask.

Sun-glint removal and atmospheric correction: Sun-glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun-glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun-glint detected by B8 correlates very highly with the sun-glint experienced by the visible channels (B2, B3 and B4), and so the sun-glint in these channels can be removed by subtracting B8 from them. Eric Lawrey developed this algorithm by fine-tuning the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun-glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions. This algorithm is an adjustment of the algorithm already used in Lawrey et al. 2022.

Tide prediction: To determine the tide elevation in a specific satellite image, we used a tide prediction model to predict the tide elevation for the image timestamp. After investigating and comparing a number of models, it was decided to use the empirical ocean tide model EOT20 (Hart-Davis et al., 2021). The model data can be freely accessed at https://doi.org/10.17882/79489 and works with the Python library pyTMD (https://github.com/tsutterley/pyTMD). In our comparison we found this model was able to accurately predict the tide elevation across multiple points along the study coastline when compared to historic Bureau of Meteorology and AusTide data. To determine the tide elevation of the satellite images, we manually created a point dataset with a central point on the water for each Sentinel tile in the study area. We used these points as centroids in the ocean models and calculated the tide elevation from the image timestamp.

Using "SENSING_ORBIT_NUMBER" for a more balanced composite: Some of the Sentinel 2 tiles are made up of different sections depending on the "SENSING_ORBIT_NUMBER". For example, a tile could have a small triangle on the left side and a bigger section on the right side. If we filter an image collection and use a subset to create a composite, we could end up with a high number of images for one section (e.g. the left side triangle) and only a few images for the other section(s). This would result in a composite image in which one section is well covered while the other sections have very little input data. To avoid this issue, the initial unfiltered image collection is divided into multiple image collections using the image property "SENSING_ORBIT_NUMBER". The filtering and limiting (maximum number of images in the collection) is then performed on each "SENSING_ORBIT_NUMBER" image collection, and finally they are combined back into one image collection to generate the final composite.

Cloud Masking: Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts. A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold, projected over 400 m, and followed by a 150 m buffer to expand the final mask. A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer. The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes, so there are probably significant improvements that could still be made to this algorithm. Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution, which made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. With 4-pixel filter resolutions these operations were still using over 90% of the total
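For orientation, the sketch below reproduces a small part of this workflow (the initial collection filtering and the 30th percentile reduction) with the Google Earth Engine Python API. It omits the tide, sun-glint, orbit-splitting and cloud-masking steps, and the tile ID is just an example: it is a simplified illustration of the method described above, not the project's production script.

```python
# Simplified sketch of the initial filtering (step 1) and the 30th percentile
# composite, using the Earth Engine Python API. Tide prediction, sun-glint
# screening, SENSING_ORBIT_NUMBER splitting and cloud masking are omitted.
import ee

ee.Initialize()

tile_id = "54LZP"  # example Sentinel 2 tile

collection = (
    ee.ImageCollection("COPERNICUS/S2_HARMONIZED")
    .filter(ee.Filter.eq("MGRS_TILE", tile_id))
    .filter(ee.Filter.lte("CLOUDY_PIXEL_PERCENTAGE", 0.1))
    .filterDate("2018-01-01", "2023-12-31")
    .filter(ee.Filter.gt("system:asset_size", 100000000))  # drop tile fragments
)

# In the real workflow the 10 lowest-tide images are selected before this
# step; the sketch simply reduces the whole filtered collection.
composite = collection.reduce(ee.Reducer.percentile([30]))
print(composite.bandNames().getInfo())
```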
CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the next generation of geostationary weather satellites. The GOES-R series will significantly improve the detection and observation of environmental phenomena that directly affect public safety, protection of property and our nation’s economic health and prosperity.
The GOES-16 satellite, known as GOES-R prior to launch, is the first satellite in the series. It will provide images of weather patterns and severe storms as frequently as every 30 seconds, which will contribute to more accurate and reliable weather forecasts and severe weather outlooks.
The raw dataset includes a feed of the Advanced Baseline Imager (ABI) radiance data (Level 1b) and Cloud and Moisture Imagery (CMI) products (Level 2), which are freely available through the NOAA Big Data Project.
You can use the BigQuery Python client library to query tables in this dataset in Kernels. Note that methods available in Kernels are limited to querying data. Tables are at bigquery-public-data.noaa_goes16.[TABLENAME]. Fork this kernel to get started and learn how to safely manage analyzing large BigQuery datasets.
The NOAA Big Data Project (BDP) is an experimental collaboration between NOAA and infrastructure-as-a-service (IaaS) providers to explore methods of expanding the accessibility of NOAA's data in order to facilitate innovation and collaboration. The goal of this approach is to help form new lines of business and economic growth while making NOAA's data more discoverable for the American public.
Sample images: https://storage.googleapis.com/public-dataset-images/noaa-goes-16-sample.png
Key metadata for this dataset has been extracted into convenient BigQuery tables (one each for L1b radiance, L2 CMIP, and L2 MCMIP). These tables can be used to query metadata in order to filter the data down to only a subset of raw netcdf4 files available in Google Cloud Storage.
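A minimal sketch of such a metadata query with the BigQuery Python client is shown below. The table name abi_l1b_radiance is assumed from the dataset description (L1b radiance metadata); check the dataset's table list and schema before filtering on specific columns.

```python
# Minimal sketch: listing a few rows from the GOES-16 metadata tables in
# BigQuery. The table name "abi_l1b_radiance" is an assumption based on the
# dataset description -- verify it against the dataset's table list first.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT *
    FROM `bigquery-public-data.noaa_goes16.abi_l1b_radiance`
    LIMIT 10
"""
for row in client.query(sql).result():
    print(dict(row))  # each row describes one raw NetCDF4 file in Cloud Storage
```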
Origin: USDA National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL): https://www.nass.usda.gov/Research_and_Science/Cropland/SARS1a.php
Data Access: https://nassgeodata.gmu.edu/CropScape/

The Crop Frequency Layers identify crop-specific planting frequency and are based on land cover information derived from every year of available CDL data, beginning with the 2008 CDL, the first year of full Continental U.S. coverage. The Cultivated Layer and Crop Frequency Data Layers, with accompanying metadata detailing the methodology, are available for download at /Research_and_Science/Cropland/Release/.

From the CDL Metadata: How has the methodology used to create the CDL changed over the program's history?

The classification process used to create older CDLs (prior to 2006) was based on a maximum likelihood classifier approach using in-house software. The pre-2006 CDLs relied primarily on satellite imagery from the Landsat TM/ETM satellites, which had a 16-day revisit. The in-house software limited the use to only two scenes per classification area. The only source of ground truth was the NASS June Area Survey (JAS). The JAS data is collected by field enumerators so it is quite accurate, but it is limited in coverage due to the cost and time constraints of such a massive annual field survey. It was also very labor intensive to digitize and label all of the collected JAS field data for use in the classification process. Non-agricultural land cover was based on image analyst interpretations.

Starting in 2006, NASS began utilizing a new satellite sensor, new commercial off-the-shelf software, and more extensive training/validation data. The in-house software was phased out in favor of a commercial software suite, which includes Erdas Imagine, ESRI ArcGIS, and Rulequest See5. This improved processing efficiency and, more importantly, allowed for unlimited satellite imagery and ancillary dataset inputs. The new source of agricultural training and validation data became the USDA Farm Service Agency (FSA) Common Land Unit (CLU) Program data, which was much more extensive in coverage than the JAS and was in a GIS-ready format. NASS also began using the most current USGS National Land Cover Dataset (NLCD) to train over the non-agricultural domain. The new classification method uses a decision tree classifier.

NASS continues to strive for CDL processing improvements, including the handling of the FSA CLU pre-processing and the searching out and inclusion of additional agricultural training and validation data from other State, Federal, and private industry sources. New satellite sensors are incorporated as they become available. Currently, the CDL Program uses the Landsat 8 OLI/TIRS sensor, the Disaster Monitoring Constellation (DMC) DEIMOS-1 and UK2, the ISRO ResourceSat-2 LISS-3, and the ESA SENTINEL-2 A and B sensors. Imagery is downloaded daily throughout the growing season with the objective of obtaining at least one cloud-free usable image every two weeks throughout the growing season.

Please refer to (FAQ Section 4, Question 4) on this FAQs webpage to learn more about how the handling of grass and pasture related categories has evolved over the history of the CDL Program. Extensive metadata records are available by state and year at the following webpage: (/Research_and_Science/Cropland/metadata/meta.php).
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset contains processed satellite imagery of the Gulf of Papua - Torres Strait (GP-TS) region. It includes:
- 12 years (mid-2008 to mid-2019) of daily MODIS water type images (Wet season colour scale), and summaries (seasonal, annual, long-term, and difference composite maps)
- 1 year (2019) of weekly Sentinel-3 water type images (Forel-Ule colour scale)
** This dataset is currently under embargo until 31/01/2022.
These outputs have been produced through the remote sensing components of NESP Project 2.2.1 and NESP Project 5.14: Identifying the water quality and ecosystem health threats to the high diversity Torres Strait and Far Northern GBR from runoff from the Fly River (Waterhouse et al., 2018, in review and Petus et al., in prep.).
These studies used different sources and long-term databases of freely available satellite data to describe large-scale turbidity patterns around the GP-TS region, map the Fly River plume and to identify instances and areas with likely plume intrusion into the Torres Strait protected zone.
Multi-year datasets of medium-resolution satellite images (MODIS-Aqua and Sentinel-3) of the study area have been downloaded and processed. Medium-resolution satellite data have been processed into daily colour class and water type maps of the study area using two respective colour classification scales. Several spatial summaries have been produced (median, frequency, difference composite maps) at different time scales (seasonal, annual, long term).
These spatial summaries provide a large-scale baseline of the composition of coastal waters around the GP-TS region, as well as a description of seasonal trends. This baseline is particularly important as field water quality data are scarce and challenging to collect due to the remoteness of the study area. They provide a reference against which to compare future changes, as well as spatially explicit information on when and where the influence from Fly River discharge is likely to occur and which TS ecosystems are likely to be the most exposed.
In making this data publicly available for management, the authors from the TropWATER Catchment to Reef Research Group request being contacted and involved in decision making processes that incorporate this data, to ensure its methodology and limitations are fully understood.
Methods:
MODIS-Aqua water type maps
Twelve years of water type maps (mid-2008 to mid-2019) were produced using daily MODIS-Aqua (MA) true colour satellite imagery reclassified to 6 distinct ocean colour classes. The ocean colour is the result of interactions between sunlight and materials in the water. It is co-determined by the absorption and scattering of various optically active water quality components: suspended sediment (SS), coloured dissolved organic matter (CDOM) and chlorophyll-a (Chl-a). The ocean colour is a simple indicator available to study the composition of our ocean and to distinguish different surface water bodies and their associated water quality characteristics (e.g., Petus et al., 2019, in prep.). The six colour classes (CC) were defined by their colour properties across an Intensity-Hue-Saturation gradient (Alvarez-Romero et al., 2012) and were regrouped into three optical water types: Primary (CC1-4), Secondary (CC5) and Tertiary (CC6). They were produced using the WSC scale classification toolbox (Petus et al., 2019).
The WSC scale classification toolbox is a semi-automated toolbox using a suite of R and Python (ArcGIS) scripts that was originally developed for the Great Barrier Reef (GBR) through Marine Monitoring Program (MMP) funding (Alvarez-Romero et al., 2013). The toolbox spectrally enhances (Red-Green-Blue, RGB to Intensity-Hue-Saturation, IHS) MODIS true colour imagery and clusters the MODIS pixels into "cloud" (from the RGB image), "ambient water" and six Wet Season Colour classes (from the IHS image) through a supervised classification using typical apparent surface colour signatures of flood waters in the GBR (Alvarez-Romero et al., 2013, Figure 1, right and Figure 2). Discrimination of colour classes has been based on the GBR flood plume typology as defined originally in e.g., Devlin et al. (2011). It has been calibrated and validated with satellite and in-situ water quality data, respectively (Alvarez-Romero et al., 2013; Devlin et al., 2015; Petus et al., 2016). Technical details about the WSC scale classification have been published in e.g. Alvarez-Romero et al., 2013; Devlin et al., 2015; Petus et al., 2016, 2019; GBRMPA, 2020 and Waterhouse et al., in prep.
In the GBR WSC scale, the brownish to brownish-green turbid water masses (colour classes 1 to 4, or Primary water type) are typical of inshore regions of GBR river plumes or nearshore marine areas with high concentrations of resuspended sediments found during the wet season. These water bodies in flood waters typically contain high nutrient and phytoplankton concentrations, but are also enriched in sediment and dissolved organic matter, resulting in reduced light levels. The greenish-to-greenish-blue turbid water mass (colour class 5, or Secondary water type) is typical of coastal waters rich in algae and also containing dissolved matter and fine sediment. This water body is found in the GBR open coastal waters as well as in the mid-water plumes, where relatively high nutrient availability and increased light levels due to sedimentation favour coastal productivity. Finally, the greenish-blue water mass (colour class 6, or Tertiary water type) corresponds to waters with above-ambient water quality concentrations. This water body is typical of areas towards the open sea or offshore regions of river flood plumes (e.g. Petus et al., 2019).
Sentinel-3 OLCI water type maps
One year (2019) of water type maps was also produced using daily Sentinel-3 Ocean and Land Color Instrument (S3 OLCI) Level-2 (hereafter S3) satellite data reclassified to 21 distinct ocean colour classes. The 21 colour classes (CC) were defined by their colour properties across a Hue gradient and were produced using the Forel-Ule colour (FU) scale classification toolbox.
The FU classification toolbox is a semi-automated toolbox using a suite of Python, .bat and xml scripts that was originally developed for the GBR through MMP funding. It allows processing multi-year databases of satellite images using the FU classification algorithm recently developed through the European Citclops project and implemented in the Science Toolbox Exploitation Platform (SNAP) (http://www.citclops.eu/home, Van der Woerd and Wernand, 2015, 2018). Technical details about the FU scale classification have been published in e.g., Petus et al., 2019, in prep. and Appendix B of Gruber et al., 2019.
The FU satellite algorithm converts satellite normalised multi-band reflectance information into a discrete set of FU numbers using uniform colourimetric functions (Wernand et al., 2012). The derivation of the colour of natural waters is based on the calculation of Tristimulus values of the three primaries (X, Y, Z) that specify the colour stimulus of the human eye. The algorithm is validated by a set of hyperspectral measurements from inland, coastal and marine waters (Van der Woerd and Wernand 2018) and is applicable to global aquatic environments (lake, estuaries, coastal, offshore). Technical details about the FU satellite algorithm, including detailed mathematical descriptions, are presented in e.g., Van der Woerd and Wernand (2015, 2016), Van der Woerd and Wernand (2018) and Wernand et al. (2013). A first comparative study in the GBR suggested that FU4-5, FU6-9 and FU ≥ 10 are similar to the Primary, Secondary and Tertiary water types in the WS colour scale, respectively (Petus et al., 2019).
Both satellites and colour scales provide qualitative estimations of water composition and spatial datasets that can be used in conjunction with in-situ field measurements, satellite estimations and/or hydrodynamic modelling assessments of water quality concentrations (if available). By themselves, they are particularly interesting in remote areas where in-situ water quality and optical data are scarce to non-existent, as they rely only on the apparent colour of the ocean.
These datasets have been used in Petus et al., in prep. and Waterhouse et al., in review to: (i) map optical water masses in the study area, including the turbid Fly River plume, (ii) document long-term turbidity trends in the Gulf of Papua – Torres Strait region, (iii) determine seasonal changes in turbidity and seasonal plume patterns, and (iv) assess the presence of ecosystems likely exposed to the Fly River plume, as well as their frequency of exposure. These datasets do not allow assessing the trace metal contaminants of the Fly River discharge or assessing the ecological impact of Fly River discharges on TS ecosystems.
Outputs:
Daily datasets: MODIS-Aqua true colour images and six colour class maps:
Database name: Daily_MAWS.gdb,
Data format: a2008141 = year 2008, Julian day 141
Twelve years (mid-2008 to mid-2019) of daily MODIS true colour images of the GPTS region were downloaded from the NASA Rapid Response and EOSDIS worldview websites. The true colour images were spectrally enhanced (from red-green-blue to hue-saturation-intensity colour systems), and clustered into six colour class maps using methods described above (Álvarez-Romero et al., 2013) and post-processed in ArcGIS 10.3.
Weekly datasets: Sentinel-3 Forel Ule (21) colour class maps:
Database name: Weekly_S3FU.gdb,
Data format: T2019w01 = week 1 of 2019
One year (2019) of daily S3 OLCI imagery of the study area was downloaded on the EUMETSAT Copernicus Online Data Access website (https://coda.eumetsat.int/#/home). S3 data were atmospherically corrected and were processed
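As a small aid for working with the exported layers, the sketch below decodes the two naming conventions listed above (daily "a2008141" names and weekly "T2019w01" names); it is an illustrative helper, not part of the original processing chain.

```python
# Minimal sketch for decoding the raster naming conventions described above:
# daily MODIS layers such as "a2008141" (year + Julian day) and weekly
# Sentinel-3 layers such as "T2019w01" (year + week number).
from datetime import datetime, timedelta

def parse_daily(name):
    """'a2008141' -> calendar date for Julian day 141 of 2008."""
    year, doy = int(name[1:5]), int(name[5:])
    return datetime(year, 1, 1) + timedelta(days=doy - 1)

def parse_weekly(name):
    """'T2019w01' -> (year, week) tuple."""
    year, week = int(name[1:5]), int(name.split("w")[1])
    return year, week

print(parse_daily("a2008141"))   # 2008-05-20 00:00:00
print(parse_weekly("T2019w01"))  # (2019, 1)
```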
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model Mosaic provides a unique and continuous representation of the high resolution elevation data available across the country. The High Resolution Digital Elevation Model (HRDEM) product used is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The mosaic is available for both the Digital Terrain Model (DTM) and the Digital Surface Model (DSM) from web mapping services. It is part of the CanElevation Series created to support the National Elevation Data Strategy implemented by NRCan. This strategy aims to increase Canada's coverage of high-resolution elevation data and increase the accessibility of the products.

Unlike the HRDEM product in the same series, which is distributed by acquisition project without integration between projects, the mosaic is created to provide a single, continuous representation of the strategy's data. The most recent datasets for a given territory are used to generate the mosaic. This mosaic is disseminated through the Data Cube Platform, implemented by NRCan using geospatial big data management technologies. These technologies enable the rapid and efficient visualization of high-resolution geospatial data and allow for the rapid generation of dynamically derived products.

The mosaic is available from Web Map Services (WMS), Web Coverage Services (WCS) and SpatioTemporal Asset Catalog (STAC) collections. Accessible data includes the Digital Terrain Model (DTM), the Digital Surface Model (DSM) and derived products such as shaded relief and slope. The mosaic is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is the reference standard for orthometric heights across Canada.

Source data for the HRDEM datasets used to create the mosaic is acquired through multiple projects with different partners. Collaboration is a key factor in the success of the National Elevation Data Strategy. Refer to the "Supporting Document" section to access the list of the different partners, including links to their respective data.
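As an illustration of STAC-based access, the sketch below uses pystac-client to search a mosaic collection. The endpoint URL and collection id are placeholders only; the actual values are published with the CanElevation Series and Data Cube Platform documentation.

```python
# Minimal sketch of querying an HRDEM mosaic STAC collection with pystac-client.
# The endpoint URL and collection id below are placeholders -- substitute the
# values published by NRCan's Data Cube Platform.
from pystac_client import Client

catalog = Client.open("https://example.nrcan.gc.ca/stac/api")  # placeholder URL
search = catalog.search(
    collections=["hrdem-mosaic-dtm"],        # placeholder collection id
    bbox=[-75.8, 45.3, -75.5, 45.5],         # lon/lat area of interest
    max_items=10,
)
for item in search.items():
    print(item.id, list(item.assets))  # list the downloadable assets per item
```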
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Real space datasets are scarce and often come with limited metadata to supervise the training of learning algorithms. Using synthetic data allows us to produce a large dataset in a controlled environment which eases the production of annotations. We generate the data with the 3D engine Unity using models of two different satellites: a CubeSat and the Soil Moisture and Ocean Salinity (SMOS) satellite.
CubeSat is a small satellite based on a 3U CubeSat platform. It is a rectangular cuboid of 0.3 x 0.3 x 0.9 m. Its main structure is made of aluminum and black PCB panels on its sides. For this satellite model, we place the camera at 1 meter to render the datasets' images. The near and far bounds are fixed at 0.1 m and 2 m.
SMOS has a more complicated and elongated shape. The main platform has a cubic shape of 0.9 x 0.9 x 1.0 m with solar panels attached on two sides, each 6.5 m long. The payload is a 3-branch antenna, with each 3-meter branch placed at 60 degrees. The structure is covered in gold and silver foils, which are highly reflective materials. For this satellite model, we place the camera at 10 meters to render the images. The near and far bounds are fixed at 3 m and 17 m due to the solar panel length.
The scene is composed of one satellite, SMOS or CubeSat, with one directional light source fixed with respect to the targeted object. The images are rendered using viewpoints sampled on a full sphere, with a uniform black background, at a resolution of 1024 x 1024 pixels. For each image, the distance to the camera and the azimuth and elevation angles are saved as metadata, and a depth map is rendered for testing the predicted shape.
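For readers who want to relate the stored metadata back to camera placement, the sketch below converts a (distance, azimuth, elevation) triple into a Cartesian camera position on the viewing sphere. The axis convention is an assumption for illustration; the dataset's metadata files define the exact convention used in Unity.

```python
# Minimal sketch: spherical viewpoint metadata -> Cartesian camera position,
# with the target object at the origin. The axis convention (z up) is an
# illustrative assumption, not necessarily the Unity convention used here.
import numpy as np

def camera_position(distance, azimuth_deg, elevation_deg):
    """Return the camera position for a viewpoint sampled on the sphere."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    x = distance * np.cos(el) * np.cos(az)
    y = distance * np.cos(el) * np.sin(az)
    z = distance * np.sin(el)
    return np.array([x, y, z])

# Example: CubeSat renders are captured from 1 m away.
print(camera_position(1.0, azimuth_deg=45.0, elevation_deg=30.0))
```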
We generate training and validation sets containing 5, 10, 50 and 100 images respectively, to evaluate the model during training. We also generate a test set of 100 images from different viewing directions than the ones used in the training and validation sets. This common test set will be used to evaluate our models regardless of the number of training images used.
This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

Consumption Best Practices: As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high demand service environment. When ingesting this service in your applications, avoid using POST requests whenever possible. These requests can compromise performance and scalability during periods of high usage because they too are not cacheable.

Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375-meter
Update Frequency: Hourly (depending on source availability) using the aggregated live feed methodology
Area Covered: World

What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfire over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device. LANCE estimates availability at around 3 hours after detection, and the Esri live feeds check for updates every 20 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

VIIRS is a scanning radiometer device aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The source downloads are monitored automatically and retrieved from LANCE, NASA's near real time data and imagery site, every 20 minutes when updates are detected. The 375 m data complements the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; they both show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

Attribute information:
- Latitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.
- Satellite: Whether the detection was picked up by the Suomi NPP satellite (N), the NOAA-20 satellite (1) or the NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, that gives the complete satellite name.
- Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel. This value is based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels. Please note: Low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4 leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
- FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al., 2005).
- DayNight: D = Daytime fire, N = Nighttime fire
- Hours Old: Derived field that provides the age of the record in hours between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.
Additional information can be found on the NASA FIRMS site FAQ.

Note about near real time data: Near real time data is not checked thoroughly before it's posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

Revisions:
- September 10, 2025: Switched to alternate source site 'firms2' to get around data delivery delays on the primary 'firms' site.
- March 7, 2024: Updated to include source data from the NOAA-21 satellite.
- September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
- July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.

This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues or simply see when this service will update next, please visit our Live Feed Status Page!
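Following the consumption best practices above, a cache-friendly query can filter on the derived Hours_Old field rather than on absolute timestamps. The sketch below shows such a request against the layer's ArcGIS REST endpoint; the service URL is a placeholder and should be replaced with the URL from the Living Atlas item page.

```python
# Minimal sketch: querying the VIIRS Thermal Hotspots feature layer with a
# cache-friendly filter on the derived "Hours_Old" field, as recommended above.
# The service URL below is a placeholder -- use the layer's actual ArcGIS REST
# endpoint from the Living Atlas item page.
import requests

layer_url = (
    "https://services.arcgis.com/example/arcgis/rest/services/"
    "VIIRS_Thermal_Hotspots/FeatureServer/0"  # placeholder endpoint
)

params = {
    "where": "Hours_Old <= 3",                       # hotspots from the last ~3 hours
    "outFields": "Confidence,FRP,DayNight,Hours_Old",
    "returnGeometry": "true",
    "f": "json",
}
resp = requests.get(f"{layer_url}/query", params=params, timeout=30)
features = resp.json().get("features", [])
print(len(features), "hotspots returned")
```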
The WorldView-1 image of Heard Island (23 March 2008) that was purchased by the Australian Antarctic Division (AAD) and the University of Tasmania (UTAS) in June 2008 had to be geometrically corrected to match the QuickBird and IKONOS imagery in the Australian Antarctic Data Centre (AADC) satellite image catalogue. In addition, the WorldView-1 imagery contains two separate image strips that cover the whole island. These strips were acquired at slightly different times from different angles during the satellite overpass. The discrepancy in acquisition angle has resulted in a geometric offset between the two image strips. These two image strips were orthorectified with a 10 m RADARSAT DEM (2002). The orthorectified images were then merged into a single image mosaic for the whole island.
This work was completed as part of ASAC project 2939 (ASAC_2939).
Visualization Overview
This visualization represents a "true color" band combination (Red = 1, Green = 4, Blue = 3) of data collected by the MODIS instrument on the NASA Terra satellite. The imagery is most similar to how we see the Earth's surface with our own eyes. It is a natural looking image that is useful at a global and regional scale. At its highest resolution, this visualization represents the underlying data scaled to a resolution of 250m per pixel at the equator.
The MODIS Corrected Reflectance product provides natural-looking images by removing gross atmospheric effects such as Rayleigh scattering from the visible bands. By contrast, the MODIS Surface Reflectance product, which is also available in the Living Atlas, provides a more complete atmospheric correction algorithm that includes aerosol correction and is designed to derive land surface properties. In clear atmospheric conditions the Corrected Reflectance product is similar to the Surface Reflectance product, but they depart from each other in the presence of aerosols.

Multi-Spectral Bands
The following table lists the MODIS bands that are utilized to create this visualization. See here for a full description of all MODIS bands.
Band | Description | Wavelength (µm) | Resolution (m)
1 | Visible (Red) | 0.620 - 0.670 | 250
3 | Visible (Blue) | 0.459 - 0.479 | 500
4 | Visible (Green) | 0.545 - 0.565 | 500

Temporal Coverage
By default, this layer will display the imagery currently available for today's date. This imagery is a "daily composite" that is assembled from hundreds of individual data files. When viewing imagery for "today," you may notice that only a portion of the map has imagery. This is because the visualization is continually updated as the satellite collects more data. To view imagery over time, you can update the layer properties to enable time animation and configure time settings. Currently, this layer is available from present back to the start of the mission (February 24th, 2000).

NASA Global Imagery Browse Services (GIBS), NASA Worldview, & NASA LANCE
This visualization is provided through the NASA Global Imagery Browse Services (GIBS), which are a set of standard services to deliver global, full-resolution satellite imagery for hundreds of NASA Earth science datasets and science parameters. Through its services, and the NASA Worldview client, GIBS enables interactive exploration of NASA's Earth imagery for a broad range of users. The data and imagery are generated within 3 hours of acquisition through the NASA LANCE capability.

Esri and NASA Collaborative Services
This visualization is made available through an ArcGIS image service hosted on Esri servers and facilitates access to a NASA GIBS service endpoint. For each image service request, the Esri server issues multiple requests to the GIBS service, processes and assembles the responses, and returns a proper mosaic image to the user. Processing occurs on-the-fly for each and every request to ensure that any update to the GIBS imagery is immediately available to the user. As such, availability of this visualization is dependent on both the Esri and the NASA GIBS services.
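Because the underlying imagery comes from GIBS, the same True Color layer can also be fetched directly from the GIBS WMTS endpoint, bypassing the Esri image service. The sketch below is illustrative only: the URL pattern, layer identifier and "250m" TileMatrixSet are taken from the public GIBS WMTS documentation and should be verified before use.

```python
# Minimal sketch of requesting a single MODIS Terra Corrected Reflectance
# (True Color) tile from NASA GIBS via its RESTful WMTS interface. The URL
# pattern and the "250m" TileMatrixSet name are assumptions based on the
# public GIBS documentation -- verify them against that documentation.
import requests

date = "2024-01-01"
z, y, x = 3, 2, 5  # tile matrix level, row, column

url = (
    "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
    f"MODIS_Terra_CorrectedReflectance_TrueColor/default/{date}/250m/{z}/{y}/{x}.jpg"
)
resp = requests.get(url, timeout=30)
resp.raise_for_status()
with open("terra_truecolor_tile.jpg", "wb") as f:
    f.write(resp.content)
```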
This WorldView-2 image of the east coast of Heard Island was collected on 23 Dec. 2010 (Satellite Image Catalogue id=2262 and 2263). The 0.5 m resolution panchromatic band and 2 m resolution multispectral bands were separately orthorectified, and the two separate image tiles were mosaicked.
The images are the result of a rigorous orthorectification of the panchromatic band and eight multispectral bands of the two WorldView-2 images. The images were orthorectified with the TerraSAR-X DEM acquired in Oct 2009. The digital elevation model used in the orthorectification is described by the metadata record 'A Digital Elevation Model of Heard Island derived from TerraSAR satellite imagery' - Entry ID: heard_dem_terrasar
The orthorectification of the two WorldView-2 image tiles was carried out in ENVI 4.8. No GCPs were used for the orthorectification process given the very high absolute accuracy of the RPC positioning of WorldView-2. Previously, problems (significant geometric errors) were encountered when orthorectifying IKONOS 2004 imagery with DGPS GCPs collected by Dr Jenny Scott. The current orthorectification is considered more accurate given the high absolute spatial accuracy of WorldView-2 (CE90 = 3.5 m) and the more detailed TerraSAR-X DEM of Heard Island. The resulting image is considered a base image for subsequent geometric processing and co-registration of other images (e.g. the IKONOS image acquired in 2004 for change detection).
The two tiles were mosaicked along a manually digitised cutline in ENVI. For a more detailed description of this process, refer to the report available for download from the provided URL.
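The processing above was done in ENVI; as a rough open-source equivalent, RPC-based orthorectification against an external DEM can be reproduced with GDAL. The sketch below is illustrative only and is not the workflow the authors used: the input, output, and DEM file names are hypothetical placeholders, and the target CRS should be chosen to suit the project.

```python
# Illustrative sketch (not the original ENVI workflow): orthorectify a
# WorldView-2 tile using its vendor RPC metadata and an external DEM with GDAL.
# File names are hypothetical placeholders.
from osgeo import gdal

gdal.UseExceptions()

warp_opts = gdal.WarpOptions(
    rpc=True,                                                # use the RPC model shipped with the image
    transformerOptions=["RPC_DEM=heard_dem_terrasar.tif"],   # terrain correction from the DEM
    resampleAlg="cubic",
    dstSRS="EPSG:4326",                                      # pick the CRS appropriate to your project
)

gdal.Warp("wv2_pan_ortho.tif", "wv2_pan_raw.tif", options=warp_opts)
```

The subsequent cutline-based mosaicking step has a rough analogue in gdalwarp's cutline options, although the manual digitisation described above would still be done interactively.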
Personnel involved with this dataset: Dr Arko Lucieer (principal investigator), Iain Clarke (research assistant: geometric corrections), Desiree Treichler (research assistant: radiometric and atmospheric corrections).
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset provides pre-extracted features from multimodal environmental data and expert-verified species observations, ready to be integrated into your models. Whether you're here for research, experimentation, or competition, you're in the right place!
🔎 Check out the key resources below to get started:
| Resource | Description | Link |
| --- | --- | --- |
| 📄 Dataset Paper | NeurIPS 2024 paper detailing the dataset, benchmark setup, etc. | NeurIPS Paper (PDF) |
| 🧠 GitHub Repository | Codebase with data loaders, baseline models, and utilities | GeoPlant Repo |
| 🚀 Starter Notebooks | Baseline models, multimodal pipelines, and training scripts | GeoPlant Code on Kaggle |
| 📦 Full Dataset | All provided data, including the Presence-Only (PO) species observations | GeoPlant Seafile |
The species-related training data comprises:
1. Presence-Absence (PA) surveys: around 90 thousand surveys covering roughly 10,000 species of the European flora. The PA data is provided to compensate for the false-absence problem of the PO data and to calibrate models to avoid the associated biases.
2. Presence-Only (PO) occurrences: around five million observations combined from numerous datasets gathered through the Global Biodiversity Information Facility (GBIF, www.gbif.org). This data constitutes the larger part of the training data and covers all countries of the study area, but it was sampled opportunistically (without a standardized sampling protocol), leading to various sampling biases. The local absence of a species among the PO data does not mean it is truly absent: an observer might not have reported it because it was difficult to see at that time of year, hard to identify, not a monitoring target, or simply unattractive.
There are two CSVs with species occurrence data available for training on the Seafile; a minimal loading sketch follows the list below. A detailed description is provided again on SeaFile in separate README files in the relevant folders.
- The PO metadata are available in PresenceOnlyOccurences/GLC24_PO_metadata_train.csv.
- The PA metadata are available in PresenceAbsenceSurveys/GLC24_PA_metadata_train.csv.
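As a starting point, both metadata CSVs can be read directly with pandas. The column names used below (surveyId, speciesId) are assumptions for illustration only; the authoritative schema is documented in the README files on SeaFile.

```python
# Minimal sketch: load the PA and PO occurrence metadata.
# Paths follow the folder layout listed above; the column names "surveyId" and
# "speciesId" are hypothetical -- check the README files for the real schema.
import pandas as pd

pa = pd.read_csv("PresenceAbsenceSurveys/GLC24_PA_metadata_train.csv")
po = pd.read_csv("PresenceOnlyOccurences/GLC24_PO_metadata_train.csv")

print(f"PA rows: {len(pa)}, PO rows: {len(po)}")

# Example: turn the PA surveys into a survey x species presence table
# (multi-label targets), assuming the hypothetical column names above.
pa_matrix = (
    pa.assign(present=1)
      .pivot_table(index="surveyId", columns="speciesId",
                   values="present", fill_value=0)
)
print(pa_matrix.shape)
```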
![Figure 1. Data composition.](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F1518097%2Fcf0b0ee7f4ab8c1f7944fd7b3cd89d81%2FDataComposition.png?generation=1718369587083645&alt=media)
Besides species data, we provide spatialized geographic and environmental data as additional input variables (see Figure 1). More precisely, for each species observation location we provide:
1. Satellite image patches: 3-band (RGB) and 1-band (NIR) 128x128 images at 10 m resolution.
2. Satellite time series: up to 20 years of values for six satellite bands (R, G, B, NIR, SWIR1, and SWIR2).
3. Environmental rasters: various climatic, pedologic, land use, and human footprint variables at the European scale. We provide scalar values, time series, and the original rasters from which you may extract local 2D images.
There are three separate folders with the relevant data available for training on the Seafile; a loading sketch follows the list below. A detailed description is provided below and again on SeaFile in separate "Readme" files in the relevant folders.
- The Satellite image patches in ./SatellitePatches/.
- The Satellite time series in ./SatelliteTimeSeries/.
- The Environmental rasters in ./EnvironmentalRasters/.
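The sketch below shows one way these inputs might be pulled together for a single observation: reading an RGB patch with Pillow and sampling an environmental raster at the observation's coordinates with rasterio. The patch naming scheme and raster file name are assumptions for illustration; the per-folder README files describe the actual layout.

```python
# Illustrative sketch: assemble inputs for one observation.
# The patch path pattern and raster file name are hypothetical placeholders;
# see the SeaFile README files for the real directory layout.
import numpy as np
import rasterio
from PIL import Image

def load_rgb_patch(survey_id, root="SatellitePatches"):
    """Load a 128x128 RGB patch for a survey as a (128, 128, 3) uint8 array."""
    patch = Image.open(f"{root}/rgb/{survey_id}.jpeg")  # hypothetical naming scheme
    return np.asarray(patch)

def sample_env_raster(lon, lat, raster_path="EnvironmentalRasters/bio1.tif"):
    """Sample one environmental raster (hypothetical file) at a lon/lat point."""
    with rasterio.open(raster_path) as src:
        value = next(src.sample([(lon, lat)]))[0]
    return float(value)

# Example for the occurrence shown in the figure below (glcID=4859165).
rgb = load_rgb_patch(4859165)
bio1 = sample_env_raster(8.5744, 47.7704)
print(rgb.shape, bio1)
```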
Figure. Illustration of the environmental data for an occurrence (glcID=4859165) collected in northern Switzerland (lon=8.5744; lat=47.7704) in 2021. A. The 1280x1280 m satellite image patches sampled in 2021 around the observation. B. Quarterly time series of six satellite ...
This map features the World Imagery map, focused on the Caribbean region. World Imagery provides one-meter or better satellite and aerial imagery in many parts of the world and lower-resolution satellite imagery worldwide. The map includes 15 m TerraColor imagery at small and mid scales (~1:591M down to ~1:72k) and 2.5 m SPOT imagery (~1:288k to ~1:72k) for the world. DigitalGlobe sub-meter imagery is featured in many parts of the world, including Africa. Sub-meter Pléiades imagery is available in select urban areas. Additionally, imagery at different resolutions has been contributed by the GIS User Community. For more information on this map, view the World Imagery item description.
Metadata: This service is metadata-enabled. With the Identify tool in ArcMap or the World Imagery with Metadata web map, you can see the resolution, collection date, and source of the imagery at the location you click. Values of "99999" mean that metadata is not available for that field. The metadata applies only to the best available imagery at that location; you may need to zoom in to view the best available imagery.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to see fixed? You can use the Imagery Map Feedback web map to provide feedback on issues or errors that you see. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
This layer presents detectable thermal activity from MODIS satellites for the last 7 days. MODIS Global Fires is a product of NASA's Earth Observing System Data and Information System (EOSDIS), part of NASA's Earth Science Data. EOSDIS integrates remote sensing and GIS technologies to deliver global MODIS hotspot/fire locations to natural resource managers and other stakeholders around the world.
Consumption Best Practices: