Open Government Licence - Canada 2.0
https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
This image service contains high-resolution satellite imagery for selected regions throughout the Yukon. Imagery is 1 m pixel resolution or better. Imagery was supplied by the Government of Yukon and the Canadian Department of National Defence. All the imagery in this service is licensed. If you have any questions about Yukon government satellite imagery, please contact Geomatics.Help@gov.yk.ca. This service is managed by Geomatics Yukon.
https://data.linz.govt.nz/license/attribution-4-0-international/
This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.
The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2021 and April 2022.
Technical specifications:
This is a visual product only. The data has been downsampled from 12-bit to 8-bit, and the original values of the images have been modified for visualisation purposes.
This cached tile service of 2015 WorldView orthoimagery may be added to ArcMap and other GIS software and applications. The web service was created in ArcMap 10.3 using orthorectified imagery in mosaic datasets and published to a tile package. The package was published as a service hosted at MassGIS' ArcGIS Online organizational account.

When creating the service in ArcMap, the display settings (stretching, brightness and contrast) were modified individually for each mosaic dataset in order to achieve the best possible uniform appearance across the state; however, because of the different acquisition dates and satellites, seams between strips are visible at smaller scales. With many tiles overlapping from different flights, imagery was displayed so that the best imagery (highest resolution, most cloud-free) appeared "on top".

The visible scale range for this service is 1:3,000,000 to 1:2,257. See https://www.mass.gov/info-details/massgis-data-2015-satellite-imagery for full details.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains large (A0) printable maps of the Torres Strait broken into six overlapping regions, based on clear-sky, clear-water composite Sentinel 2 imagery, together with the imagery used to create these maps. These maps show satellite imagery of the region, overlaid with reef and island boundaries and names. Not all features are named, only the more prominent ones. The dataset also includes a vector map of Ashmore Reef and Boot Reef in the Coral Sea, as these were used in the same discussions for which these maps were developed. The map of Ashmore Reef includes the atoll platform, reef boundaries and depth polygons for 5 m and 10 m.
This dataset contains all working files used in the development of these maps. This includes a copy of all the source datasets and all derived satellite image tiles and QGIS files used to create the maps. It also includes cloud-free Sentinel 2 composite imagery of the Torres Strait region with alpha-blended edges, allowing the creation of a smooth high-resolution basemap of the region.
The base imagery is similar to the older base imagery dataset: Torres Strait clear sky, clear water Landsat 5 satellite composite (NERP TE 13.1 eAtlas, AIMS, source: NASA).
Most of the imagery in the composite is from 2017 – 2021.
Method: The Sentinel 2 basemap was produced by processing imagery from the World_AIMS_Marine-satellite-imagery dataset (not yet published) for the Torres Strait region. The TrueColour imagery for the scenes covering the mapped area was downloaded. Both the reference 1 imagery (R1) and reference 2 imagery (R2) were copied for processing. R1 imagery contains the lowest-noise, most cloud-free imagery, while R2 contains the next best set of imagery. Both R1 and R2 are typically composite images from multiple dates.
The R2 images were selectively blended using manually created masks with the R1 images. This was done to get the best combination of both images and typically resulted in a reduction in some of the cloud artefacts in the R1 images. The mask creation and previewing of the blending was performed in Photoshop. The created masks were saved in 01-data/R2-R1-masks. To help with the blending of neighbouring images a feathered alpha channel was added to the imagery. The processing of the merging (using the masks) and the creation of the feathered borders on the images was performed using a Python script (src/local/03-merge-R2-R1-images.py) using the Pillow library and GDAL. The neighbouring image blending mask was created by applying a blurring of the original hard image mask. This allowed neighbouring image tiles to merge together.
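The mask-based blending described above can be sketched with Pillow (a minimal stand-in for src/local/03-merge-R2-R1-images.py, which is not reproduced here; the image sizes, colours and feather radius below are illustrative assumptions):

```python
from PIL import Image, ImageFilter

def blend_r2_into_r1(r1, r2, hard_mask, feather_radius=10):
    """Blend R2 imagery into R1 using a hand-drawn mask.

    The hard (black/white) mask is blurred to produce a feathered
    alpha channel so neighbouring regions merge smoothly, mirroring
    the approach described above. All inputs are PIL images of the
    same size; the mask is single-band ("L"), white where R2 should
    show through.
    """
    soft_mask = hard_mask.filter(ImageFilter.GaussianBlur(feather_radius))
    # Image.composite takes pixels from the first image where the
    # mask is white (255) and from the second where it is black (0).
    return Image.composite(r2, r1, soft_mask)

# Tiny illustration with solid-colour images standing in for scenes.
r1 = Image.new("RGB", (64, 64), (10, 20, 30))
r2 = Image.new("RGB", (64, 64), (200, 180, 160))
mask = Image.new("L", (64, 64), 0)
mask.paste(255, (0, 0, 32, 64))  # left half comes from R2
merged = blend_r2_into_r1(r1, r2, mask)
```

Blurring the hard mask is what creates the feathered transition: near the mask boundary the composite is a weighted mix of R1 and R2 rather than a hard seam.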
The imagery and reference datasets (reef boundaries, EEZ) were loaded into QGIS for the creation of the printable maps.
To optimise the matching across the resulting map, slight brightness adjustments were applied to each scene tile to match its neighbours. This was done in the setup of each image in QGIS. The adjustment was imperfect: each tile was made from a different combination of days (to remove clouds), so each scene has a different tonal gradient across it than its neighbours. Additionally, Sentinel 2 imagery has slight stripes (at 13 degrees off vertical) because each sensor in the swath has a slightly different sensitivity. This effect was not corrected in this imagery.
Single merged composite GeoTIFF: The image tiles with alpha-blended edges work well in QGIS, but not in ArcGIS Pro. To allow this imagery to be used across tools that don't support alpha blending, we merged and flattened the tiles into a single large GeoTIFF with no alpha channel. This was done by rendering the map created in QGIS into a single large image, in multiple steps to keep the process manageable.
The rendered map was cut into twenty 1 x 1 degree georeferenced PNG images using the Atlas feature of QGIS. This process baked in the alpha blending across neighbouring Sentinel 2 scenes. The PNG images were then merged back into a large GeoTiff image using GDAL (via QGIS), removing the alpha channel. The brightness of the image was adjusted so that the darkest pixels in the image were 1, saving the value 0 for nodata masking and the boundary was clipped, using a polygon boundary, to trim off the outer feathering. The image was then optimised for performance by using internal tiling and adding overviews. A full breakdown of these steps is provided in the README.md in the 'Browse and download all data files' link.
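The brightness step that reserves the value 0 for nodata can be sketched in plain Python (a simplified stand-in for the GDAL-based processing; real bands are large rasters, and the tiny grid below is illustrative):

```python
def lift_darkest_to_one(band, valid):
    """Shift pixel values so the darkest valid pixel becomes 1,
    freeing the value 0 to act as the nodata marker, as described
    above. `band` is a list of rows of 8-bit values and `valid` a
    parallel mask of booleans (False = outside the clip polygon).
    """
    data = [v for row, vrow in zip(band, valid)
            for v, ok in zip(row, vrow) if ok]
    shift = max(0, 1 - min(data))  # only lift if something is darker than 1
    return [
        [min(255, v + shift) if ok else 0 for v, ok in zip(row, vrow)]
        for row, vrow in zip(band, valid)
    ]

band  = [[0, 4, 9], [2, 0, 7]]
valid = [[True, True, True], [True, False, True]]
out = lift_darkest_to_one(band, valid)
print(out)  # darkest valid pixel becomes 1; invalid pixel becomes 0
```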
The merged final image is available in export\TS_AIMS_Torres Strait-Sentinel-2_Composite.tif.
Change Log:
2023-03-02: Eric Lawrey. Created a merged version of the satellite imagery with no alpha blending so that it can be used in ArcGIS Pro; it is now a single large GeoTIFF image. The Google Earth Engine source code for World_AIMS_Marine-satellite-imagery was included to improve the reproducibility and provenance of the dataset, along with a calculation of the distribution of image dates that went into the final composite image. A WMS service for the imagery was also set up and linked to from the metadata. A cross-reference to the older Torres Strait clear sky, clear water Landsat composite imagery was also added to the record.
2023-11-22: Eric Lawrey. Added the data and maps for a close-up of Mer:
- 01-data/TS_DNRM_Mer-aerial-imagery/
- preview/Torres-Strait-Mer-Map-Landscape-A0.jpeg
- exports/Torres-Strait-Mer-Map-Landscape-A0.pdf
Updated 02-Torres-Strait-regional-maps.qgz to include the layout for the new map.
Source datasets: Complete Great Barrier Reef (GBR) Island and Reef Feature boundaries including Torres Strait Version 1b (NESP TWQ 3.13, AIMS, TSRA, GBRMPA), https://eatlas.org.au/data/uuid/d2396b2c-68d4-4f4b-aab0-52f7bc4a81f5
Geoscience Australia (2014b), Seas and Submerged Lands Act 1973 - Australian Maritime Boundaries 2014a - Geodatabase [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, https://dx.doi.org/10.4225/25/5539DFE87D895
Basemap/AU_GA_AMB_2014a/Exclusive_Economic_Zone_AMB2014a_Limit.shp The original data was obtained from GA (Geoscience Australia, 2014b). The Geodatabase was loaded in ArcMap. The Exclusive_Economic_Zone_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small, no clipping was applied to the data.
Geoscience Australia (2014a), Treaties - Australian Maritime Boundaries (AMB) 2014a [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, http://dx.doi.org/10.4225/25/5539E01878302 Basemap/AU_GA_Treaties-AMB_2014a/Papua_New_Guinea_TSPZ_AMB2014a_Limit.shp The original data was obtained from GA (Geoscience Australia, 2014a). The Geodatabase was loaded in ArcMap. The Papua_New_Guinea_TSPZ_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small, no clipping was applied to the data.
AIMS Coral Sea Features (2022) - DRAFT. This is a draft version of this dataset. The region for Ashmore and Boot Reef was checked. The attributes in these datasets haven't been cleaned up. Note these files should not be considered finalised and are only suitable for maps around Ashmore Reef. Please source an updated version of this dataset for any other purpose.
- CS_AIMS_Coral-Sea-Features/CS_Names/Names.shp
- CS_AIMS_Coral-Sea-Features/CS_Platform_adj/CS_Platform.shp
- CS_AIMS_Coral-Sea-Features/CS_Reef_Boundaries_adj/CS_Reef_Boundaries.shp
- CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth5m_Coral-Sea.shp
- CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth10m_Coral-Sea.shp
Murray Island 20 Sept 2011 15 cm SISP aerial imagery, Queensland Spatial Imagery Services Program, Department of Resources, Queensland. This is the high-resolution imagery used to create the map of Mer.
Marine satellite imagery (Sentinel 2 and Landsat 8) (AIMS), https://eatlas.org.au/data/uuid/5d67aa4d-a983-45d0-8cc1-187596fa9c0c - World_AIMS_Marine-satellite-imagery
Data Location: This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\TS_AIMS_Torres-Strait-Sentinel-2-regional-maps. On the eAtlas server it is stored at eAtlas GeoServer\data\2020-2029-AIMS.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.
This dataset contains two sets of raw satellite-derived bathymetry polygons for 5 m, 10 m and 20 m depths, based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean-up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the satellite-derived bathymetry algorithm in its handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.
The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).
This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
- TrueColour
- Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for identifying what shallow features are and for mapping the vegetation on cays.
- DeepFalse
- Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour imagery that shows deep marine features to 50 – 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, so the imagery appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
- Shallow
- Bands: B5 (red edge), B8 (near infrared), B11 (short-wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer-wavelength bands penetrate water progressively less. B5 penetrates approximately 3 – 5 m of water, B8 approximately 0.5 m and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue, and dry areas are white. This imagery is intended to help identify coral cay boundaries.
For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.
All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours.
- Depth5m
- This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
- Depth10m
- This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
- Depth20m
- This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).
For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery. This allows any mapped features to be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.
The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic, as keeping them separate allowed better individual image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO marine charts. The extent of the imagery was limited to that available through the Google Earth Engine.
The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available, the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images with no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images: the best were added to the R1 composite, and the next best images into the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed. Adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
2. Sun glint was removed from the imagery using estimates of the glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so this does not correct for finer spatial brightness variations.
5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far-red and infrared bands for assessing shallow features, such as cays and islands.
6. The various contrast-enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole region for each depth.
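The masked median compositing in step 3 can be sketched per pixel in plain Python (a simplified stand-in for the Google Earth Engine median reducer; the tiny grids below are illustrative, not real band data):

```python
from statistics import median

def median_composite(images, masks):
    """Per-pixel median of a stack of images, ignoring masked pixels.

    `images` is a list of 2-D grids of pixel values and `masks` a
    parallel list of grids where True marks cloud/shadow pixels to
    exclude, mirroring step 3 above. Pixels masked in every image
    come out as None (no data).
    """
    rows, cols = len(images[0]), len(images[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[r][c] for img, m in zip(images, masks) if not m[r][c]]
            if vals:
                out[r][c] = median(vals)
    return out

imgs = [[[10, 80]], [[12, 20]], [[11, 22]]]
msks = [[[False, True]], [[False, False]], [[False, False]]]
print(median_composite(imgs, msks))  # → [[11, 21.0]]
```

Taking the median (rather than the mean) is what makes the composite robust to residual clouds: a single bright cloudy observation at a pixel does not pull the composite value.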
Prior to combining the best images each image was processed to mask out clouds and their shadows.
The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking, and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve computational speed. The resolution was adjusted so that these operations were performed at approximately 4-pixel resolution. This made the cloud mask significantly coarser spatially than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask and the processing time of these operations.
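The shadow projection described above can be sketched on a small grid (a simplified, pure-Python stand-in for the Earth Engine implementation; pixel distances here stand in for the metre distances quoted above):

```python
import math

def project_shadow(cloud_mask, sun_azimuth_deg, distance_px):
    """Project a cloud mask away from the sun to estimate shadows.

    Shadows fall on the side of the cloud opposite the sun, so the
    mask is shifted along the anti-solar direction in steps up to
    `distance_px`, mirroring the projection described above.
    `cloud_mask` is a grid of booleans; azimuth is measured
    clockwise from north (the -row axis).
    """
    rows, cols = len(cloud_mask), len(cloud_mask[0])
    az = math.radians(sun_azimuth_deg + 180)  # anti-solar direction
    dr, dc = -math.cos(az), math.sin(az)
    shadow = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not cloud_mask[r][c]:
                continue
            for step in range(1, distance_px + 1):
                rr, cc = round(r + dr * step), round(c + dc * step)
                if 0 <= rr < rows and 0 <= cc < cols:
                    shadow[rr][cc] = True
    return shadow

cloud = [[False] * 5 for _ in range(5)]
cloud[1][1] = True
# Sun in the north (azimuth 0°): the shadow is cast southward (down rows).
shadow = project_shadow(cloud, 0, 2)
```

A buffer (dilation) of the resulting shadow mask, as described above, would then catch the soft edges this hard projection misses.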
Cloud-free Landsat satellite imagery mosaics of the eight main Hawaiian Islands (Hawaii, Maui, Kahoolawe, Lanai, Molokai, Oahu, Kauai and Niihau). Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) is a polar-orbiting 8-band multispectral satellite-borne sensor. The ETM+ instrument provides image data from eight spectral bands. The spatial resolution is 30 meters for the visible and near-infra...
EarthExplorer

Use the USGS EarthExplorer (EE) to search, download, and order satellite images, aerial photographs, and cartographic products. In addition to data from the Landsat missions and a variety of other data providers, EE provides access to MODIS land data products from the NASA Terra and Aqua missions, and ASTER Level-1B data products over the U.S. and Territories from the NASA ASTER mission. Registered users of EE have access to more features than guest users.

EarthExplorer Distribution Download

The EarthExplorer user interface is an online search, discovery, and ordering tool developed by the United States Geological Survey (USGS). EarthExplorer supports the searching of satellite, aircraft, and other remote sensing inventories through interactive and textual-based query capabilities. Through the interface, users can identify search areas and datasets, display metadata, and browse integrated visual services.

The distributable version of EarthExplorer provides the basic software for this functionality. Users are responsible for verifying system recommendations for hosting the application on their own servers. By default, this version of the code is not hooked up to a data source, so you will have to integrate the interface with your data. Integration options include service-based APIs, databases, and anything else that stores data. To integrate with a data source, simply replace the contents of the 'getDataset' and 'search' functions in the CWIC.php file.

Distribution is being provided due to user requests for the codebase. The EarthExplorer source code is provided "as is", without a warranty or support of any kind. The software is in the public domain; it is available to any government or private institution. The software code base is managed through the USGS Configuration Management Board. The software is managed through an automated configuration management tool that updates the code base when new major releases have been thoroughly reviewed and tested. Link: https://earthexplorer.usgs.gov/
CC0 1.0 Universal Public Domain Dedication
https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The State of Indiana Geographic Information Office (GIO) has published a state-wide Digital Aerial Imagery Catalog consisting of orthoimagery files from 2016 – 2019 and 2021 – 2022 in Cloud-Optimized GeoTIFF (COG) format on the AWS Registry of Open Data account. These COG-formatted files support the dynamic imagery services available from the GIO ESRI-based imagery solution. Open Data on AWS is a repository of publicly available datasets for access from AWS resources. These datasets are owned and maintained by the Indiana GIO. These images are licensed under Creative Commons Zero (CC0). A Cloud-Optimized GeoTIFF behaves as a GeoTIFF in all products; however, the optimization becomes apparent when incorporating the files into web services.
https://earth.esa.int/eogateway/documents/20142/1560778/ESA-Third-Party-Missions-Terms-and-Conditions.pdf
The PlanetScope Level 1B Basic Scene and Level 3B Ortho Scene full archive products are available as part of the Planet imagery offer. The unrectified asset, the PlanetScope Basic Analytic Radiance (TOAR) product, is a scaled top-of-atmosphere radiance (at sensor) and sensor-corrected product, without correction for any geometric distortions inherent in the imaging process, and is not mapped to a cartographic projection. The imagery data is accompanied by Rational Polynomial Coefficients (RPCs) to enable orthorectification by the user. This kind of product is designed for users with advanced image processing and geometric correction capabilities.

Basic Scene product components and format:
- Image File (GeoTIFF format)
- Metadata File (XML format)
- Rational Polynomial Coefficients (XML format)
- Thumbnail File (GeoTIFF format)
- Unusable Data Mask (UDM) File (GeoTIFF format)
- Usable Data Mask (UDM2) File (GeoTIFF format)

Bands: 4-band multispectral image (blue, green, red, near-infrared) or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, near-infrared).
Ground sampling distance (approximate, satellite altitude dependent): Dove-C: 3.0 m – 4.1 m; Dove-R: 3.0 m – 4.1 m; SuperDove: 3.7 m – 4.2 m.
Accuracy: <10 m RMSE.

The rectified assets: the PlanetScope Ortho Scene product is radiometrically, sensor and geometrically corrected and is projected to a UTM/WGS84 cartographic map projection. The geometric correction uses fine Digital Elevation Models (DEMs) with a post spacing of between 30 and 90 metres.
Ortho Scene product components and format:
- Image File (GeoTIFF format)
- Metadata File (XML format)
- Thumbnail File (GeoTIFF format)
- Unusable Data Mask (UDM) File (GeoTIFF format)
- Usable Data Mask (UDM2) File (GeoTIFF format)

Bands: 3-band natural colour (red, green, blue), 4-band multispectral image (blue, green, red, near-infrared) or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, near-infrared).
Ground sampling distance (approximate, satellite altitude dependent): Dove-C: 3.0 m – 4.1 m; Dove-R: 3.0 m – 4.1 m; SuperDove: 3.7 m – 4.2 m.
Projection: UTM WGS84.
Accuracy: <10 m RMSE.

The PlanetScope Ortho Scene product is available in the following variants:
- PlanetScope Visual Ortho Scene: orthorectified, colour-corrected (using a colour curve) 3-band RGB imagery. This correction attempts to optimise colours as seen by the human eye, providing images as they would look if viewed from the perspective of the satellite.
- PlanetScope Surface Reflectance: orthorectified 4-band BGRN or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, NIR) imagery with geometric and radiometric corrections, corrected for surface reflectance. This data is optimal for value-added image processing such as land cover classification.
- PlanetScope Analytic Ortho Scene: orthorectified 4-band BGRN or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, NIR) imagery with geometric corrections, radiometrically calibrated to top-of-atmosphere radiance.

As per ESA policy, very high-resolution imagery of conflict areas cannot be provided.
Attribution 4.0 (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains cloud free composite satellite images for the northern Australia region based on 10 m resolution Sentinel 2 imagery from 2015 – 2024. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in northern Australia. A new, improved version (version 2, published July 2024) has succeeded the draft version (published March 2024).
This collection contains composite imagery for 333 Sentinel 2 tiles around the northern coastline of Australia, including the Great Barrier Reef. This dataset uses a true colour contrast and colour enhancement style using the bands B2 (blue), B3 (green), and B4 (red). This is useful for interpreting what shallow features are, for mapping the vegetation on cays, and for identifying beach rock.
Changelog:
This dataset will be progressively improved and made available for download. These additions will be noted in this change log.
- 2024-07-22 - Version 2 composites, using an improved contrast enhancement and a noise prediction algorithm so that only low-noise images are included in the composite (Git tag: "composites_v2")
- 2024-03-07 - Initial release: draft composites using the 15th percentile (Git tag: "composites_v1")
Methods:
The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:

1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
- tile ID
- maximum cloud cover 20%
- date between '2015-06-27' and '2024-05-31'
- asset_size > 100000000 (remove small fragments of tiles)
Note: A maximum cloud cover of 20% was used to improve processing times. In most cases this filtering does not affect the final composite, as images with higher cloud coverage mostly result in higher noise levels and are not used in the final composite.
2. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
3. For each SENSING_ORBIT_NUMBER collection, filter out all noise-adding images:
3.1 Calculate the image noise level for each image in the collection (see "Image noise level calculation" for more information) and sort the collection by noise level.
3.2 Remove all images with a very high noise index (>15).
3.3 Calculate a baseline noise level using a minimum number of images (min_images_in_collection=30). This minimum number of images is needed to ensure a smooth composite where cloud "holes" in one image are covered by other images.
3.4 Iterate over the remaining images (those not used in the baseline noise level calculation) and check whether adding each image to the composite adds to or reduces the noise. If it reduces the noise, add it to the composite. If it increases the noise, stop iterating over images.
4. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
5. Remove sun glint (true colour only) and apply atmospheric correction to each image (see "Sun glint removal and atmospheric correction" for more information).
6. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
7. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
8. Combine the two composite images (the no-cloud-mask composite and the cloud-mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image; these holes are "plugged" with the underlying composite without cloud masking (Lawrey et al. 2022).
9. Export the final composite as a cloud-optimized 8-bit GeoTIFF.
Note: The following tiles were generated with no "maximum cloud cover" filter, as they did not have enough images to create a composite with the standard settings: 46LGM, 46LGN, 46LHM, 50KKD, 50KPG, 53LMH, 53LMJ, 53LNH, 53LPH, 53LPJ, 54LVP, 57JVH, 59JKJ.
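The percentile composites and hole plugging (steps 6 – 8) can be sketched per pixel (a simplified stand-in for the Earth Engine percentile reducer; the nearest-rank percentile and tiny grids below are illustrative assumptions):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of values."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(p / 100 * (len(s) - 1))))
    return s[idx]

def plugged_composite(images, cloud_masks, p=30):
    """Combine a cloud-masked percentile composite with an unmasked
    one, per step 8: where the cloud mask removed every observation
    (e.g. a cay misread as cloud), fall back to the unmasked value.
    """
    rows, cols = len(images[0]), len(images[0][0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            masked_vals = [img[r][c] for img, m in zip(images, cloud_masks)
                           if not m[r][c]]
            all_vals = [img[r][c] for img in images]
            # Fall back to the no-mask stack when masking left nothing.
            out[r][c] = percentile(masked_vals or all_vals, p)
    return out

imgs = [[[10, 50]], [[20, 60]], [[30, 70]]]
msks = [[[False, True]], [[False, True]], [[False, True]]]
print(plugged_composite(imgs, msks))  # → [[20, 60]]
```

The second pixel is masked in every image (a "cay misread as cloud" case), so its value comes from the unmasked stack rather than being left as a hole.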
Image noise level calculation:
The noise level for each image in this dataset is calculated to ensure high-quality composites by minimizing the inclusion of noisy images. This process begins by creating a water mask using the Normalized Difference Water Index (NDWI) derived from the NIR and Green bands. High reflectance areas in the NIR and SWIR bands, indicative of sun-glint, are identified and masked by the water mask to focus on water areas affected by sun-glint. The proportion of high sun-glint pixels within these water areas is calculated and amplified to compute a noise index. If no water pixels are detected, a high noise index value is assigned.
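The noise index calculation can be sketched as follows (a plain-Python illustration of the description above; the glint threshold and amplification factor are placeholder assumptions, not the project's tuned values):

```python
def noise_index(green, nir, swir, glint_threshold=0.2, scale=100):
    """Estimate an image noise index from sun-glint over water.

    Per the description above: an NDWI water mask is built from the
    Green and NIR bands, bright NIR/SWIR pixels within that mask are
    counted as sun-glint, and their proportion is amplified into a
    noise index. Band grids hold reflectance values in [0, 1].
    """
    water = glint = 0
    for g_row, n_row, s_row in zip(green, nir, swir):
        for g, n, s in zip(g_row, n_row, s_row):
            ndwi = (g - n) / (g + n) if (g + n) else 0.0
            if ndwi > 0:  # positive NDWI: treat as a water pixel
                water += 1
                if n > glint_threshold or s > glint_threshold:
                    glint += 1
    if water == 0:
        return float("inf")  # no water detected: treat as very noisy
    return scale * glint / water

green = [[0.30, 0.30]]
nir   = [[0.05, 0.25]]   # second pixel is bright in NIR (glint)
swir  = [[0.02, 0.02]]
print(noise_index(green, nir, swir))  # → 50.0 (1 of 2 water pixels glinty)
```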
Sun-glint removal and atmospheric correction:
Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the visible channels (B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.
Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions.
This algorithm is an adjustment of the algorithm already used in Lawrey et al. (2022).
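The B8 subtraction described above can be sketched per visible band. The scale factor here is a placeholder, not one of the published per-band coefficients the authors tuned.

```python
import numpy as np

def remove_glint(band, b8, scale):
    """Subtract the scaled B8 surface reflection (sun-glint estimate)
    from a visible band, clipping at zero to avoid negative reflectance."""
    return np.clip(band - scale * b8, 0.0, None)

b8 = np.array([0.05, 0.10, 0.00])   # NIR over water: surface glint only
b2 = np.array([0.20, 0.30, 0.15])   # blue channel with glint included
b2_deglinted = remove_glint(b2, b8, scale=1.0)  # scale=1.0 is an assumption
```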
Cloud Masking:
Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed with approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. With 4-pixel filter resolutions these operations were still consuming over 90% of the total processing time, resulting in each image taking approximately 10 minutes to compute on the Google Earth Engine. (Lawrey et al. 2022)
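The coarse-resolution buffering idea can be sketched with NumPy: downsample the cloud mask by block maximum, dilate at the coarse grid, then upsample. This is an illustrative stand-in for Earth Engine's focal operations; the block factor and single dilation step are assumptions.

```python
import numpy as np

def dilate(mask, it=1):
    """Simple 4-connected binary dilation (stand-in for a focal max)."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)
        m = m | p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
    return m

def coarse_buffer(mask, factor=4, it=1):
    """Buffer a cloud mask on a coarser grid for speed: block-max
    downsample, dilate, then upsample back to full resolution."""
    h, w = mask.shape
    small = mask.reshape(h // factor, factor, w // factor, factor).max(axis=(1, 3))
    return np.kron(dilate(small.astype(bool), it),
                   np.ones((factor, factor), bool)).astype(bool)

cloud = np.zeros((8, 8), bool)
cloud[0, 0] = True                  # one cloudy pixel
buffered = coarse_buffer(cloud)     # entire neighbouring 4x4 blocks flagged
```

The buffered mask is blocky (4-pixel granularity), mirroring why the real cloud mask is coarser than the 10 m imagery.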
Format:
GeoTIFF - LZW compressed, 8 bit channels, 0 as NoData, imagery as values 1 - 255. Internal tiling and overviews. Average size: 12500 x 11300 pixels and 300 MB per image.
The images in this dataset are all named using a naming convention. An example file name is AU_AIMS_MARB-S2-comp_p15_TrueColour_51KTV_v2_2015-2024.tif. The name is made up from:
- Dataset name (AU_AIMS_MARB-S2-comp)
- An algorithm descriptor (p15 for 15th percentile)
- Colour and contrast enhancement applied (TrueColour)
- Sentinel 2 tile (example: 54LZP)
- Version (v2)
- Date range (2015 to 2024 for version 2)
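The naming convention above can be parsed with a regular expression. This parser is a hypothetical helper, not part of the dataset's tooling; the group names are illustrative.

```python
import re

# Hypothetical parser for the file naming convention described above.
PATTERN = re.compile(
    r"(?P<dataset>.+?)_(?P<algorithm>p\d+)_(?P<style>[A-Za-z]+)_"
    r"(?P<tile>\w{5})_(?P<version>v\d+)_(?P<daterange>\d{4}-\d{4})\.tif$"
)

name = "AU_AIMS_MARB-S2-comp_p15_TrueColour_51KTV_v2_2015-2024.tif"
parts = PATTERN.match(name).groupdict()
```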
References:
Google (n.d.) Sentinel-2: Cloud Probability. Earth Engine Data Catalog. Accessed 10 April 2021 from https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_CLOUD_PROBABILITY
Zupanc, A., (2017) Improving Cloud Detection with Machine Learning. Medium. Accessed 10 April 2021 from https://medium.com/sentinel-hub/improving-cloud-detection-with-machine-learning-c09dc5d7cf13
Lawrey, E., & Hammerton, M. (2022). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Data set]. eAtlas. https://doi.org/10.26274/NH77-ZW79
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2023-2026-NESP-MaC-3\3.17_Northern-Aus-reef-mapping The source code is available on GitHub.
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The complete coverage of the Canadian territory is gradually being established. It includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps and for DSM datasets, derived data available are shaded relief, color relief and color shaded relief maps. The productive forest line is used to separate the northern and the southern parts of the country. This line is approximate and may change based on requirements. In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. The datasets at a 1 m resolution cover an area of 10 km x 10 km while datasets at a 2 m resolution cover an area of 20 km by 20 km. In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to WGS84 horizontal datum or UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km by 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data. In this case, these products will be generated with the same specifications as those generated from airborne LiDAR in the southern part of the country. The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada. 
Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is being acquired by project, there is no integration or edgematching done between projects. The tiles are aligned within each project. The product High Resolution Digital Elevation Model (HRDEM) is part of the CanElevation Series created in support to the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor to the success of the National Elevation Data Strategy. Refer to the “Supporting Document” section to access the list of the different partners including links to their respective data.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains cloud free, low tide composite satellite images for the tropical Australia region based on 10 m resolution Sentinel 2 imagery from 2018 – 2023. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in tropical Australia.
This collection contains composite imagery for 200 Sentinel 2 tiles around the tropical Australian coast. This dataset uses two styles:
1. a true colour contrast and colour enhancement style (TrueColour) using the bands B2 (blue), B3 (green), and B4 (red)
2. a near infrared false colour style (Shallow) using the bands B5 (red edge), B8 (near infrared), and B12 (short wave infrared).
These styles are useful for identifying shallow features along the coastline.
The Shallow false colour styling is optimised for viewing the first 3 m of the water column, providing an indication of water depth. This is because the different far red and near infrared bands used in this styling have limited penetration of the water column. In clear waters the maximum penetration of each of the bands is 3-5 m for B5, 0.5-1 m for B8 and <0.05 m for B12. As a result, the image changes in colour with the depth of the water, with the following colours indicating different depths:
- White, brown, bright green, red, light blue: dry land
- Grey brown: damp intertidal sediment
- Turquoise: 0.05-0.5 m of water
- Blue: 0.5-3 m of water
- Black: deeper than 3 m
In very turbid areas the visible limit will be slightly reduced.
Change log:
This dataset will be progressively improved and made available for download. These additions will be noted in this change log.
2024-07-24 - Added tiles for the Great Barrier Reef
2024-05-22 - Initial release of low-tide composites using the 30th percentile (Git tag: "low_tide_composites_v1")
Methods:
The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
   - tile ID
   - maximum cloud cover 0.1%
   - date between '2018-01-01' and '2023-12-31'
   - asset_size > 100000000 (remove small fragments of tiles)
2. Remove high sun-glint images (see "High sun-glint image detection" for more information).
3. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
4. Iterate over all images in the split collections to predict the tide elevation for each image from the image timestamp (see "Tide prediction" for more information).
5. Remove images where the tide elevation is above mean sea level, to make sure no high-tide images are included.
6. Select the 10 images with the lowest tide elevation.
7. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
8. Remove sun-glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
9. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
10. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
11. Combine the two composite images (no-cloud-mask composite and cloud-mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking. (Lawrey et al. 2022)
12. The final composite was exported as a cloud optimized 8-bit GeoTIFF.
Note: The following tiles were generated with different settings as they did not have enough images to create a composite with the standard settings:
- 51KWA: no high sun-glint filter
- 54LXP: maximum cloud cover set to 1%
- 54LYK: maximum cloud cover set to 2%
- 54LYM: maximum cloud cover set to 5%
- 54LYN: maximum cloud cover set to 1%
- 54LYQ: maximum cloud cover set to 5%
- 54LYP: maximum cloud cover set to 1%
- 54LZL: maximum cloud cover set to 1%
- 54LZM: maximum cloud cover set to 1%
- 54LZN: maximum cloud cover set to 1%
- 54LZQ: maximum cloud cover set to 5%
- 54LZP: maximum cloud cover set to 1%
- 55LBD: maximum cloud cover set to 2%
- 55LBE: maximum cloud cover set to 1%
- 55LCC: maximum cloud cover set to 5%
- 55LCD: maximum cloud cover set to 1%
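The low-tide selection (steps 4-6 of the method above) reduces to a simple filter and sort over predicted tide elevations. The records and elevations below are made-up values for illustration only.

```python
# Each record pairs an image ID with its predicted tide elevation in
# metres relative to mean sea level (hypothetical values).
images = [
    ("img_a", 0.40), ("img_b", -1.20), ("img_c", -0.30),
    ("img_d", -0.90), ("img_e", 0.10), ("img_f", -1.50),
]

# Step 5: drop images captured above mean sea level (high tide).
below_msl = [im for im in images if im[1] < 0]

# Step 6: keep the 10 images with the lowest tide elevation.
lowest = sorted(below_msl, key=lambda im: im[1])[:10]
```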
High sun-glint image detection:
Images with high sun-glint can lead to lower quality composite images. To detect high sun-glint images, a mask is created of all pixels above a high reflectance threshold in the near-infrared and short-wave infrared bands. The proportion of these pixels is then calculated and compared against a sun-glint threshold; if the image exceeds this threshold, it is filtered out of the image collection. As we are only interested in sun-glint on water pixels, a water mask is created using NDWI before creating the sun-glint mask.
Sun-glint removal and atmospheric correction:
Sun-glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun-glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun-glint detected by B8 correlates very highly with the sun-glint experienced by the visible channels (B2, B3 and B4) and so the sun-glint in these channels can be removed by subtracting B8 from these channels.
Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun-glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions.
This algorithm is an adjustment of the algorithm already used in Lawrey et al. (2022).
Tide prediction:
To determine the tide elevation in a specific satellite image, we used a tide prediction model to predict the tide elevation for the image timestamp. After investigating and comparing a number of models, we decided to use the empirical ocean tide model EOT20 (Hart-Davis et al., 2021). The model data can be freely accessed at https://doi.org/10.17882/79489 and works with the Python library pyTMD (https://github.com/tsutterley/pyTMD). In our comparison we found this model was able to accurately predict the tide elevation across multiple points along the study coastline when compared to historic Bureau of Meteorology and AusTide data. To determine the tide elevation of the satellite images, we manually created a point dataset with a central point on the water for each Sentinel tile in the study area. We used these points as centroids in the ocean models and calculated the tide elevation from the image timestamp.
Using "SENSING_ORBIT_NUMBER" for a more balanced composite:
Some of the Sentinel 2 tiles are made up of different sections depending on the "SENSING_ORBIT_NUMBER". For example, a tile could have a small triangle on the left side and a bigger section on the right side. If we filter an image collection and use a subset to create a composite, we could end up with a high number of images for one section (e.g. the left side triangle) and only a few images for the other section(s). This would result in a composite where one section is well balanced while the other section(s) are built from very few input images. To avoid this issue, the initial unfiltered image collection is divided into multiple image collections using the image property "SENSING_ORBIT_NUMBER". The filtering and limiting (maximum number of images in a collection) is then performed on each "SENSING_ORBIT_NUMBER" image collection and, finally, the collections are combined back into one image collection to generate the final composite.
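The split-filter-recombine pattern above can be sketched with a plain dictionary grouping. The metadata tuples and the per-orbit limit are illustrative assumptions, not the project's actual values.

```python
from collections import defaultdict

# Hypothetical image metadata: (image ID, SENSING_ORBIT_NUMBER, noise level).
images = [
    ("a", 2, 0.1), ("b", 2, 0.4), ("c", 45, 0.2),
    ("d", 45, 0.3), ("e", 2, 0.2),
]

# Split the collection by orbit so each tile section is handled independently.
by_orbit = defaultdict(list)
for img in images:
    by_orbit[img[1]].append(img)

# Filter/limit each orbit separately (here: keep the N least-noisy images),
# then recombine into one collection for the final composite.
max_per_orbit = 2
combined = []
for orbit, imgs in by_orbit.items():
    combined += sorted(imgs, key=lambda i: i[2])[:max_per_orbit]
```

Limiting per orbit rather than globally prevents one orbit's images from crowding out the other tile section entirely.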
Cloud Masking:
Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed with approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. With 4-pixel filter resolutions these operations were still consuming over 90% of the total processing time, resulting in each image taking approximately 10 minutes to compute on the Google Earth Engine. (Lawrey et al. 2022)
The Digital Geomorphic-GIS Map of Gulf Islands National Seashore (5-meter accuracy and 1-foot resolution 2006-2007 mapping), Mississippi and Florida is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) a 10.1 file geodatabase (guis_geomorphology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a 2.2 KMZ/KML file for use in Google Earth; however, this format version of the map is limited in data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with a 1.) ArcGIS Pro map file (.mapx) file (guis_geomorphology.mapx) and individual Pro layer (.lyrx) files (for each GIS data layer), as well as with a 2.) 10.1 ArcMap (.mxd) map document (guis_geomorphology.mxd) and individual 10.1 layer (.lyr) files (for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI 10.1 shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a GIS readme file (guis_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (guis_geomorphology.pdf), which contains geologic unit descriptions, as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (guis_geomorphology_metadata_faq.pdf). Please read the guis_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/.
Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (guis_geomorphology_metadata.txt or guis_geomorphology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:26,000 and United States National Map Accuracy Standards, features are within (horizontally) 13.2 meters or 43.3 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS, QGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.
Consumption Best Practices:
As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high demand service environment. When ingesting this service in your applications, avoid using POST requests whenever possible. These requests can compromise performance and scalability during periods of high usage because they too are not cacheable.
Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375-meter
Update Frequency: Hourly using the aggregated live feed methodology
Area Covered: World
What can I do with this layer?
This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfire over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device.
LANCE estimates availability at around 3 hours after detection, and Esri live feeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire. VIIRS is a scanning radiometer device aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes. The 375-m data complements the 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; both show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.
Attribute information:
Latitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.
Satellite: Whether the detection was picked up by the Suomi NPP satellite (N), the NOAA-20 satellite (1) or the NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel. This value is based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high.
Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.
Please note: Low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4 leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al. 2005).
DayNight: D = Daytime fire, N = Nighttime fire
Hours Old: Derived field that provides the age of the record in hours between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.
Additional information can be found in the NASA FIRMS site FAQ.
Note about near real time data: Near real time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points.
Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.
Revisions:
March 7, 2024: Updated to include source data from the NOAA-21 satellite.
September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
July 5, 2022: Terms of Use updated to the Esri Master License Agreement; a subscription is no longer required.
This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues or simply see when this service will update next, please visit our Live Feed Status Page!
A summary of landfast sea ice coverage and the changes in the distance between the penguin colony at Point Geologie and the nearest span of open water on the Adelie Land coast in East Antarctica. The data were derived from cloud-free NOAA Advanced Very High Resolution Radiometer (AVHRR) data acquired between 1-Jan-1992 and 31-Dec-1999.
The areal extent and variability of fast ice along the Adelie Land coast were mapped using time series of NOAA AVHRR visible and thermal infrared (TIR) satellite images collected at Casey Station (66.28 degrees S, 110.53 degrees E). The AVHRR sensor is a 5-channel scanning radiometer with a best ground resolution of 1.1 km at nadir (Cracknell 1997, Kidwell 1997). The period covered began in 1992 due to a lack of sufficient AVHRR scans of the region of interest prior to this date and ended in 1999 (work is underway to extend the analysis forward in time).
While cloud cover is a limiting factor for visible-TIR data, enough data passes were acquired to provide sufficient cloud-free images to resolve synoptic-scale formation and break-up events. Of 10,297 AVHRR images processed, 881 were selected for fast ice analysis, these being the best for each clear (cloud-free) day. The aim was to analyse as many cloud-free images as possible to resolve synoptic-scale variability in fast ice distribution. In addition, a smaller set of cloud-free images were obtained from the Arctic and Antarctic Research Center (AARC) at Scripps Institution of Oceanography, comprising 227 Defense Meteorological Satellite Program (DMSP) Operational Linescan Imager (OLS) images (2.7 km resolution) and 94 NOAA AVHRR images at 4 km resolution. The analysis also included 2 images (spatial resolution 140 m) from the US Argon surveillance satellite programme, originally acquired in 1963 and obtained from the USGS EROS Data Center (available at: edcsns17.cr.usgs.gov/EarthExplorer/).
Initial image processing was carried out using the Common AVHRR Processing System (CAPS) (Hill 2000). This initially produces 3 brightness temperature (TB) bands (AVHRR channels 3 to 5) to create an Ice Surface Temperature (IST) map (after Key 2002) and to enable cloud clearing (after Key 2002 and Williams et al. 2002). Fast ice area was then calculated from these data through a multi-step process involving user intervention. The first step involved correcting for anomalously warm pixels at the coast due to adiabatic warming by seaward-flowing katabatic winds. This was achieved by interpolating IST values to fast ice at a distance of 15 pixels to the North/South and East/West. The coastline for ice sheet (land) masking was obtained from Lorenzin (2000). Step 2 involved detecting open water and thin sea ice areas by their thermal signatures. Following this, old ice (as opposed to newly-formed ice) was identified using 2 rules: the difference between the IST and TB (band 4, 10.3 to 11.3 microns) for a given pixel is plus or minus 1 K, and the IST is less than 250 K. The final step, i.e. determination of the fast ice area, initially applied a Sobel edge-detection algorithm (Gonzalez and Woods 1992) to identify all pixels adjacent to the coast. A segmentation algorithm then assigned a unique value to each old ice area. Finally, all pixels adjacent to the coast were examined using both the segmented and edge-detected images. If a pixel had a value (i.e. it was segmented old ice), then this segment was assumed to be attached to the coast. This segment's value was noted and every pixel with the same value was classified as fast ice. The area was then the product of the number of fast ice pixels and the resolution of each pixel.
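The final step above (keeping only old-ice segments attached to the coast) can be sketched as a flood fill seeded from coast-adjacent old-ice pixels. This is an illustrative reconstruction with a hypothetical helper, not the CAPS implementation; the pixel area assumes the 1.1 km nadir resolution.

```python
import numpy as np
from collections import deque

def fast_ice_area(old_ice, coast, pixel_area_km2=1.21):
    """Classify as fast ice every old-ice segment connected to the coast,
    returning (area in km^2, fast-ice mask). 1.1 km AVHRR pixels give
    roughly 1.21 km^2 per pixel (an assumption for this sketch)."""
    h, w = old_ice.shape
    fast = np.zeros_like(old_ice, dtype=bool)
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))
    # Seed the flood fill with old-ice pixels directly adjacent to the coast.
    q = deque((r, c) for r in range(h) for c in range(w)
              if old_ice[r, c]
              and any(0 <= r + dr < h and 0 <= c + dc < w and coast[r + dr, c + dc]
                      for dr, dc in nbrs))
    for r, c in q:
        fast[r, c] = True
    # Grow each seeded segment through connected old-ice pixels.
    while q:
        r, c = q.popleft()
        for dr, dc in nbrs:
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and old_ice[nr, nc] and not fast[nr, nc]:
                fast[nr, nc] = True
                q.append((nr, nc))
    return fast.sum() * pixel_area_km2, fast

coast = np.zeros((4, 4), bool); coast[:, 0] = True   # land along the left edge
old_ice = np.zeros((4, 4), bool)
old_ice[0, 1:3] = True                               # segment attached to coast
old_ice[3, 3] = True                                 # detached floe (not fast ice)
area, fast = fast_ice_area(old_ice, coast)
```

Only the coast-attached segment counts toward the fast ice area; the detached floe is excluded, matching the segment-attachment rule in the text.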
A number of factors affect the accuracy of this technique. Poorly navigated images and large sensor scan angles detrimentally impact image segmentation, and every effort was taken to circumvent this. Moreover, sub-pixel scale clouds and leads remain unresolved and, together with water vapour from leads and polynyas, can contaminate the TB. In spite of these potential shortcomings, the algorithm gives reasonable and consistent results. The accuracy of the AVHRR-derived fast ice extent retrievals was tested by comparison with near-contemporary results from higher resolution satellite microwave data, i.e. from the Radarsat-1 ScanSAR (spatial resolution 100 m over a 500 km swath) obtained from the Alaska Satellite Facility. The latter were derived from a 'snapshot' study of East Antarctic fast ice by Giles et al. (2008) using 4 SAR images averaged over the period 2 to 18 November 1997. This gave an areal extent of approximately 24,700 km2. The comparative AVHRR-derived extent was approximately 22,240 km2 (average for 3 to 14 November 1997). This is approximately 10% less than the SAR estimate, although the estimates (images) were not exactly contemporary. Time series of ScanSAR images, in combination with bathymetric data derived from Porter-Smith (2003), were also used to determine the distribution of grounded icebergs. At the 5.3 GHz frequency (λ = 5.6 cm) of the ScanSAR, icebergs can be resolved as high backscatter (bright) targets that are, in general, readily distinguishable from sea ice under cold conditions (Willis et al. 1996).
In addition, an estimate was made from the AVHRR-derived fast ice extent product of the direct-path distance between the colony at Pointe Géologie and the nearest open water or thin ice. This represents the shortest distance the penguins would have to travel across consolidated fast ice to reach foraging grounds. A caveat is that small leads and breaks in the fast ice remain unresolved in this satellite analysis but may nevertheless be used by the penguins.
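One plausible way to derive such a direct-path distance from a gridded fast ice product is a Euclidean distance transform from the open-water/thin-ice mask, sampled at the colony pixel. This is a hedged sketch only; the function name, colony coordinates, and pixel size are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import ndimage

def distance_to_open_water(open_water_mask, colony_rc, pixel_km=1.1):
    """Illustrative sketch: straight-line distance (km) from the colony pixel
    to the nearest open-water or thin-ice pixel on a gridded product.

    open_water_mask : bool array, True where open water or thin ice was detected
    colony_rc       : (row, col) of the colony pixel (assumed known)
    pixel_km        : assumed nominal pixel size in km
    """
    # Euclidean distance, in pixels, from every pixel to the nearest True
    # pixel of the open-water mask
    dist_px = ndimage.distance_transform_edt(~open_water_mask)
    r, c = colony_rc
    return dist_px[r, c] * pixel_km
```

Because this is a straight-line measure on the grid, it shares the caveat noted above: unresolved leads could shorten the distance actually travelled.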
We examined possible relationships between variability in fast ice extent and the extent and characteristics of the surrounding pack ice (including the Mertz Glacier polynya to the immediate east) using both AVHRR data and daily sea ice concentration data from the DMSP Special Sensor Microwave/Imager (SSM/I) for the sector 135 to 145 degrees E. The latter were obtained from the US National Snow and Ice Data Center for the period 1992 to 1999 inclusive (Comiso 1995, 2002).
The effect of variable atmospheric forcing on fast ice variability was determined using meteorological data from the French coastal station Dumont d'Urville (66.66 degrees S, 140.02 degrees E, WMO #89642, elevation 43 m above mean sea level), obtained from the SCAR READER project (www.antarctica.ac.uk/met/READER/). Synoptic-scale circulation patterns were examined using analyses from the Australian Bureau of Meteorology Global Assimilation and Prediction System, or GASP (Seaman et al. 1995).
Carbon Dioxide (Difference from Global Mean, Best Available, OCO-2) from NASA GIBS
Temporal coverage: 2002 SEP - 2012 FEB
The Carbon Dioxide (L3, Free Troposphere, Monthly) layer displays monthly carbon dioxide in the free troposphere. It is created from the AIRX3C2M data product, the AIRS mid-tropospheric carbon dioxide (CO2) Level 3 monthly gridded retrieval from the AIRS and AMSU instruments on board the Aqua satellite. The data are gridded monthly at a 2.5 x 2 degree (lon x lat) grid cell size, in mole fraction units (data x 10^6 = ppm by volume). This quantity is not a total-column quantity, because the sensitivity function of the AIRS mid-tropospheric CO2 retrieval system peaks over the altitude range 6-10 km; the quantity is what results when the true atmospheric CO2 profile is weighted, level by level, by the AIRS sensitivity function.
The Atmospheric Infrared Sounder (AIRS), in conjunction with the Advanced Microwave Sounding Unit (AMSU), senses emitted infrared and microwave radiation from Earth to provide a three-dimensional look at Earth's weather and climate. Working in tandem, the two instruments make simultaneous observations down to Earth's surface. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, three-dimensional map of atmospheric temperature and humidity, cloud amounts and heights, greenhouse gas concentrations, and many other atmospheric phenomena. Launched into Earth orbit in 2002, the AIRS and AMSU instruments fly onboard NASA's Aqua spacecraft and are managed by NASA's Jet Propulsion Laboratory in Pasadena, California. More information about AIRS can be found at https://airs.jpl.nasa.gov.
References: AIRX3C2M doi:10.5067/Aqua/AIRS/DATA339
ABOUT NASA GIBS
The Global Imagery Browse Services (GIBS) system is a core EOSDIS component which provides a scalable, responsive, highly available, and community-standards-based set of imagery services.
These services are designed with the goal of advancing user interactions with EOSDIS’ inter-disciplinary data through enhanced visual representation and discovery.
MODIS (the Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (originally known as EOS AM-1) and Aqua (originally known as EOS PM-1) satellites. Terra's orbit around the Earth is timed so that it passes from north to south across the equator in the morning, while Aqua passes south to north over the equator in the afternoon. Terra MODIS and Aqua MODIS view the entire Earth's surface every 1 to 2 days, acquiring data in 36 spectral bands, or groups of wavelengths (see MODIS Technical Specifications). These data improve our understanding of global dynamics and processes occurring on the land, in the oceans, and in the lower atmosphere. MODIS plays a vital role in the development of validated, global, interactive Earth system models able to predict global change accurately enough to assist policy makers in making sound decisions concerning the protection of our environment.
GIBS Available Imagery Products
The GIBS imagery archive includes approximately 1000 imagery products representing visualized science data from the NASA Earth Observing System Data and Information System (EOSDIS). Each imagery product is generated at the native resolution of the source data to provide "full resolution" visualizations of a science parameter. GIBS works closely with the science teams to identify the appropriate data range and color mappings, where appropriate, to provide the best quality imagery to the Earth science community.
Many GIBS imagery products are generated by the EOSDIS LANCE near-real-time processing system, resulting in imagery available in GIBS within 3.5 hours of observation. These products and others may also extend from the present back to the beginning of the satellite mission. In addition, GIBS makes available supporting imagery layers such as data/no-data, water masks, orbit tracks, and graticules to improve imagery usage.
The GIBS team is actively engaging the NASA EOSDIS Distributed Active Archive Centers (DAACs) to add more imagery products and to extend their coverage throughout the life of the mission. The remainder of this page provides a structured view of the layers currently available within GIBS, grouped by science discipline and science observation. For information regarding how to access these products, see the GIBS API section of this wiki. For information regarding how to access these products through an existing client, refer to the Map Library and GIS Client sections of this wiki. If you are aware of a science parameter that you would like to see visualized, please contact us at support@earthdata.nasa.gov.
https://wiki.earthdata.nasa.gov/display/GIBS/GIBS+Available+Imagery+Products#expand-AerosolOpticalDepth29Products
NASA GIBS API for Developers: https://wiki.earthdata.nasa.gov/display/GIBS/GIBS+API+for+Developers
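The level-by-level weighting by the AIRS sensitivity function described for the AIRX3C2M product can be illustrated with a simple weighted average. All numbers below are invented for illustration; they are not real AIRS sensitivity values or retrievals.

```python
import numpy as np

# Hypothetical CO2 profile (mole fraction) at a few altitudes, and a
# hypothetical sensitivity function peaking in the 6-10 km range, where
# the AIRS mid-tropospheric retrieval is most sensitive.
levels_km = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
co2_mole_fraction = np.array([4.00e-4, 3.98e-4, 3.95e-4, 3.93e-4, 3.90e-4, 3.88e-4])
sensitivity = np.array([0.05, 0.15, 0.30, 0.30, 0.15, 0.05])

# Weight the true profile level by level by the sensitivity function
weighted = np.sum(co2_mole_fraction * sensitivity) / np.sum(sensitivity)

# Mole fraction x 10^6 gives ppm by volume
ppm = weighted * 1e6
print(round(ppm, 1))  # -> 394.0 for these made-up numbers
```

The result is biased toward the 6-10 km levels, which is why the layer is described as mid-tropospheric rather than a total-column quantity.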
In 2023, Google Maps was the most popular free navigation mobile app by downloads in Poland, with nearly two million downloads across the iPhone and iPad App Store and Google Play. Komoot followed with approximately 610 thousand downloads.
OpenStreetMap (openstreetmap.org) is a global collaborative mapping project, which offers maps and map data released with an open license, encouraging free re-use and re-distribution. The data is created by a large community of volunteers who use a variety of simple on-the-ground surveying techniques and wiki-style editing tools to collaborate as they create the maps, in a process which is open to everyone. The project originated in London, and an active community of mappers and developers is based here. Mapping work in London is ongoing (and you can help!), but the coverage is already good enough for many uses.
Browse the map of London on OpenStreetMap.org
The whole of England updated daily:
For more details of downloads available from OpenStreetMap, including downloading the whole planet, see 'planet.osm' on the wiki.
Download small areas of the map by bounding-box. For example this URL requests the data around Trafalgar Square:
http://api.openstreetmap.org/api/0.6/map?bbox=-0.13062,51.5065,-0.12557,51.50969
Data filtered by "tag". For example this URL returns all elements in London tagged shop=supermarket:
http://www.informationfreeway.org/api/0.6/*[shop=supermarket][bbox=-0.48,51.30,0.21,51.70]
The format of the data is a raw XML representation of all the elements making up the map. OpenStreetMap is composed of interconnected "nodes" and "ways" (and sometimes "relations"), each with a set of name=value pairs called "tags". These classify and describe properties of the elements, and ultimately influence how they get drawn on the map. To understand more about tags, and different ways of working with this data format, refer to the following pages on the OpenStreetMap wiki.
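The node/way/tag structure described above can be read with any XML library. A minimal sketch using Python's standard library on a hand-written fragment follows; the element contents are invented for illustration, but the shape (nodes, ways with nd references, and k/v tag pairs) matches the API's 0.6 XML format.

```python
import xml.etree.ElementTree as ET

# A tiny hand-written OSM XML fragment in the shape the 0.6 API returns
OSM_XML = """<?xml version="1.0" encoding="UTF-8"?>
<osm version="0.6">
  <node id="1" lat="51.5065" lon="-0.1306"/>
  <node id="2" lat="51.5097" lon="-0.1256">
    <tag k="shop" v="supermarket"/>
    <tag k="name" v="Example Store"/>
  </node>
  <way id="10">
    <nd ref="1"/>
    <nd ref="2"/>
    <tag k="highway" v="residential"/>
  </way>
</osm>"""

root = ET.fromstring(OSM_XML)

def tags(elem):
    # Collect an element's k/v tag pairs into an ordinary dictionary
    return {t.get("k"): t.get("v") for t in elem.findall("tag")}

# Filter nodes by tag, mirroring the shop=supermarket query above
supermarkets = [n.get("id") for n in root.findall("node")
                if tags(n).get("shop") == "supermarket"]
print(supermarkets)  # -> ['2']
```

The same pattern extends to ways: each `<nd ref="..."/>` child points back at a node id, which is how ways are "interconnected" with nodes.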
Rather than working with raw map data, you may prefer to embed maps from OpenStreetMap on your website with a simple bit of JavaScript. You can also present overlays of other data, in a manner very similar to working with Google Maps. In fact, you can even use the Google Maps API to do this. See OSM on your own website for details and links to various JavaScript map libraries.
The OpenStreetMap project aims to attract large numbers of contributors who all chip in a little bit to help build the map. Although the map editing tools take a little while to learn, they are designed to be as simple as possible, so that everyone can get involved. This project offers an exciting means of allowing local London communities to take ownership of their part of the map.
Read about how to Get Involved and see the London page for details of OpenStreetMap community events.
The Digital Geologic-GIS Map of parts of Great Sand Dunes National Park and Preserve (Sangre de Cristo Mountains and part of the Dunes), Colorado is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) an ESRI file geodatabase (gsam_geology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a KMZ/KML file (KML version 2.2) for use in Google Earth; note, however, that this format version of the map is limited in the data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with 1.) an ArcGIS Pro 3.X map file (.mapx) (gsam_geology.mapx) and 2.) individual Pro 3.X layer (.lyrx) files (one for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a readme file (grsa_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (grsa_geology.pdf), which contains geologic unit descriptions as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (gsam_geology_metadata_faq.pdf). Please read the grsa_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to use the Google Earth data only for basic visualization, and to use the GIS data for any type of data analysis or investigation.
The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product is listed in the Source Citation section(s) of this metadata record (gsam_geology_metadata.txt or gsam_geology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:24,000 and United States National Map Accuracy Standards, features are within (horizontally) 12.2 meters or 40 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS Pro, QGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).