27 datasets found
  1. Marine satellite image test collections (AIMS)

    • researchdata.edu.au
    • catalogue.eatlas.org.au
    Updated Jul 9, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric, Dr (2024). Marine satellite image test collections (AIMS) [Dataset]. http://doi.org/10.26274/ZQ26-A956
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset consists of collections of satellite image composites (Sentinel 2 and Landsat 8) that are created from manually curated image dates for a range of projects. These images are typically prepared for subsequent analysis or testing of analysis algorithms as part of other projects. This dataset acts as a repository of reproducible test sets of images processed from Google Earth Engine using a standardised workflow.

    Details of the algorithms used to produce the imagery are described in the GEE code and code repository available on GitHub (https://github.com/eatlas/World_AIMS_Marine-satellite-imagery).

    Project test image sets:

    As new projects are added to this dataset, their details will be described here:

    • NESP MaC 2.3 Benthic reflection estimation (projects/CS_NESP-MaC-2-3_AIMS_Benth-reflect): This collection consists of six Sentinel 2 image composites in the Coral Sea and GBR for the purpose of testing a method of determining benthic reflectance of deep lagoonal areas of coral atolls. These image composites are in GeoTiff format, using 16-bit encoding and LZW compression. These images do not have internal image pyramids to save on space. [Status: final and available for download]

    • NESP MaC 2.3 Oceanic Vegetation (projects/CS_NESP-MaC-2-3_AIMS_Oceanic-veg): This project is focused on mapping vegetation on the bottom of coral atolls in the Coral Sea. This collection consists of additional images of Ashmore Reef. The lagoonal area of Ashmore has low visibility due to coloured dissolved organic matter, making it very hard to distinguish areas that are covered in vegetation. These images were manually curated to best show the vegetation. While these are the best images in the Sentinel 2 series up to 2023, they are still not very good. Probably 80 - 90% of the lagoonal benthos is not visible. [Status: final and available for download]

    • NESP MaC 3.17 Australian reef mapping (projects/AU_NESP-MaC-3-17_AIMS_Reef-mapping): This collection of test images was prepared to determine if creating a composite from manually curated image dates (corresponding to images with the clearest water) would produce a better composite than a fully automated composite based on cloud filtering. The automated composites are described in https://doi.org/10.26274/HD2Z-KM55. This test set also includes composites from low tide imagery. The images in this collection are not yet available for download as the collection of images that will be used in the analysis has not been finalised.
      [Status: under development, code is available, but not rendered images]

    • Capricorn Regional Map (projects/CapBunk_AIMS_Regional-map): This collection was developed for making a set of maps for the region to facilitate participatory mapping and reef restoration field work planning. [Status: final and available for download]

    • Default (project/default): This collection of manually selected scenes was prepared for the Coral Sea and global areas to test the algorithms used in developing the original Google Earth Engine workflow. It can be a good starting point for new test sets. Note that the images described in the default project are not rendered and made available for download, to save on storage space. [Status: for reference, code is available, but not rendered images]

    Filename conventions:

    The images in this dataset are all named using a naming convention. An example file name is Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif. The name is made up of (a short parsing sketch follows this list):
    - Dataset name (Wld_AIMS_Marine-sat-img), short for World, Australian Institute of Marine Science, Marine Satellite Imagery.
    - Satellite source: L8 for Landsat 8 or S2 for Sentinel 2.
    - Additional information or purpose: NoSGC (no sun glint correction), R1 (best reference imagery set) or R2 (second reference imagery set).
    - Colour and contrast enhancement applied (DeepFalse, TrueColour, Shallow, Depth5m, Depth10m, Depth20m, Raw-B1-B4).
    - Image tile (example: Sentinel 2 54LZP, Landsat 8 091086).
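    As an illustration of this convention, the small Python sketch below splits a file name into the parts listed above. The helper name and dictionary keys are made up for this example and the layout is assumed to follow exactly the five-part description.

      from pathlib import Path

      def parse_image_name(path: str) -> dict:
          # Split a Wld_AIMS_Marine-sat-img file name into its naming-convention fields.
          # Illustrative only; assumes the layout described above:
          # dataset_satellite_purpose_style_tile.tif
          stem = Path(path).stem  # e.g. Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP
          parts = stem.split("_")
          return {
              "dataset": "_".join(parts[:3]),   # Wld_AIMS_Marine-sat-img
              "satellite": parts[3],            # S2 or L8
              "purpose": parts[4],              # NoSGC, R1, R2, ...
              "style": parts[5],                # DeepFalse, TrueColour, Raw-B1-B4, ...
              "tile": parts[6],                 # 54LZP (Sentinel 2) or 091086 (Landsat 8)
          }

      print(parse_image_name("Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif"))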

    Limitations:

    Only a simple atmospheric correction is applied to land areas and, as a result, the imagery only approximates bottom-of-atmosphere reflectance.

    For the Sentinel 2 imagery the sun glint correction algorithm transitions between different correction levels from deep water (B8) to shallow water (B11), with a fixed atmospheric correction for land (bright B8 areas). Slight errors in the tuning of these transitions can result in unnatural tonal steps between these areas, particularly in very shallow areas.

    For the Landsat 8 imagery, land areas appear black because the sun glint correction does not separately mask out the land. The code for the Landsat 8 imagery is less developed than for the Sentinel 2 imagery.

    The depth contours are estimated using satellite derived bathymetry that is subject to errors caused by cloud artefacts, substrate darkness, water clarity, calibration issues and uncorrected tides. They were tuned in the clear waters of the Coral Sea. The depth contours in this dataset are RAW and contain many false positives due to clouds. They should not be used without additional dataset cleanup.

    Change log:

    As changes are made to the dataset, or additional image collections are added, those changes will be recorded here.

    2nd Edition, 2024-06-22: Added CapBunk_AIMS_Regional-map.
    1st Edition, 2024-03-18: Initial publication of the dataset, with CS_NESP-MaC-2-3_AIMS_Benth-reflect, CS_NESP-MaC-2-3_AIMS_Oceanic-veg and code for AU_NESP-MaC-3-17_AIMS_Reef-mapping and Default projects.

    Data Format:

    GeoTiff images with LZW compression. Most images do not have internal image pyramids to save on storage space. This makes rendering these images very slow in a desktop GIS. Pyramids should be added to improve performance.
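    If pyramids are needed, a minimal sketch using the GDAL Python bindings is shown below; the file name is illustrative and any image from this dataset could be substituted. The gdaladdo command line tool performs the same task.

      from osgeo import gdal

      # Open the GeoTiff in update mode and add internal overviews (pyramids).
      # The file name is illustrative; any image from this dataset could be used.
      ds = gdal.Open("Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif", gdal.GA_Update)

      # Average resampling suits continuous imagery; levels 2-32 cover typical zooms.
      ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])

      ds = None  # Close the dataset to flush the overviews to disk.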

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Wld-AIMS-Marine-sat-img

  2. Landsat Satellite Imagery for the United State and Russia

    • cmr.earthdata.nasa.gov
    Updated Apr 21, 2017
    Cite
    (2017). Landsat Satellite Imagery for the United State and Russia [Dataset]. https://cmr.earthdata.nasa.gov/search/concepts/C1214608804-SCIOPS.html
    Dataset updated
    Apr 21, 2017
    Time period covered
    Jan 1, 1970 - Present
    Area covered
    Description

    With the launch of Landsat 7, data are no longer copyright protected and may be freely distributed. EOS-WEBSTER, in an effort to provide access to earth science data, has designed an interim system to make the Landsat data held in our database available to other users. In many cases, in-house researchers have acquired these data directly from the USGS EROS Data Center (EDC) for their research projects. They have provided copies of their data to EOS-WEBSTER for distribution to a wide audience. Landsat data covering boreal Russia are also housed.

    Therefore, our data holdings come from several different sources and can have a variety of different processing levels associated with them. We have attempted to document, to the best of our ability, the processing steps each Landsat scene has been through. Our data are currently served in two output formats: BSQ and ERDAS Imagine, and three different spectral types (when available): multispectral, panchromatic, and thermal. A header file is provided with each ordered image giving the specifics of the image.

    Please refer to the references to learn more about Landsat and the data this satellite acquires. We hope to add more data as it becomes available to EOS-WEBSTER. If you have any Landsat data, which you are willing to share, EOS-WEBSTER would like to provide access to it to a broad audience by adding it to our database. Landsat 7 data and Landsat 5 data older than 10 years can be distributed without copyright restrictions. Please contact our User Services Personnel if you would like to distribute your Landsat data, or other earth science products, via EOS-WEBSTER's FREE data distribution mechanism.

    See more detailed information regarding these data and data access privileges at "http://eos-earthdata.sr.unh.edu/" or contact the Data Center Contact above.

  3. Sentinel-2 Views

    • climate.esri.ca
    • pacificgeoportal.com
    • +22more
    Updated May 2, 2018
    Cite
    Esri (2018). Sentinel-2 Views [Dataset]. https://climate.esri.ca/datasets/fd61b9e0c69c4e14bebd50a9a968348c
    Dataset updated
    May 2, 2018
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    Sentinel-2 10, 20, and 60 m multispectral, multitemporal, 13-band imagery is rendered on-the-fly and available for visualization. This imagery layer pulls directly from the Sentinel-2 on AWS collection and is updated daily with new imagery. This imagery layer can be applied across a number of industries, scientific disciplines, and management practices. Some applications include, but are not limited to, land cover and environmental monitoring, climate change, deforestation, disaster and emergency management, national security, plant health and precision agriculture, forest monitoring, watershed analysis and runoff predictions, land-use planning, tracking urban expansion, highlighting burned areas and estimating fire severity.

    Geographic Coverage: Global. Continental land masses from 65.4° South to 72.1° North, with these special guidelines:
    - All coastal waters up to 20 km from the shore
    - All islands greater than 100 km2
    - All EU islands
    - All closed seas (e.g. Caspian Sea)
    - The Mediterranean Sea

    Temporal Coverage: The revisit time for each point on Earth is every 5 days. This layer is updated daily with new imagery and includes a rolling collection of imagery acquired within the past 14 months. The number of images available will vary depending on location.

    Product Level: This service provides Level-1C Top of Atmosphere imagery. Alternatively, Sentinel-2 Level-2A is also available.

    Image Selection/Filtering: The most recent and cloud free images are displayed by default. Any image available within the past 14 months can be displayed via custom filtering. Filtering can be done based on attributes such as Acquisition Date, Estimated Cloud Cover, and Tile ID. Tile_ID is computed as [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence] (see the parsing sketch after this description).

    Visual Rendering: Default rendering is Natural Color (bands 4,3,2) with Dynamic Range Adjustment (DRA). The DRA version of each layer enables visualization of the full dynamic range of the images. Rendering (or display) of band combinations and calculated indices is done on-the-fly from the source images via Raster Functions. Various pre-defined Raster Functions can be selected or custom functions created. Available renderings include: Agriculture with DRA, Bathymetric with DRA, Color-Infrared with DRA, Natural Color with DRA, Short-wave Infrared with DRA, Geology with DRA, NDMI Colorized, Normalized Difference Built-Up Index (NDBI), NDWI Raw, NDWI with VRE Raw, NDVI with VRE Raw (NDRE), NDVI VRE only Raw, NDVI Raw, Normalized Burn Ratio, NDVI Colormap.

    Multispectral Bands (band, description, wavelength in µm, resolution in m):
    - 1, Coastal aerosol, 0.433 - 0.453, 60
    - 2, Blue, 0.458 - 0.523, 10
    - 3, Green, 0.543 - 0.578, 10
    - 4, Red, 0.650 - 0.680, 10
    - 5, Vegetation Red Edge, 0.698 - 0.713, 20
    - 6, Vegetation Red Edge, 0.733 - 0.748, 20
    - 7, Vegetation Red Edge, 0.773 - 0.793, 20
    - 8, NIR, 0.785 - 0.900, 10
    - 8A, Narrow NIR, 0.855 - 0.875, 20
    - 9, Water vapour, 0.935 - 0.955, 60
    - 10, SWIR – Cirrus, 1.365 - 1.385, 60
    - 11, SWIR-1, 1.565 - 1.655, 20
    - 12, SWIR-2, 2.100 - 2.280, 20

    Additional Notes: Overviews exist with a spatial resolution of 150 m and are updated every quarter based on the best and latest imagery available at that time. To work with source images at all scales, the 'Lock Raster' functionality is available. NOTE: 'Lock Raster' should only be used on the layer for short periods of time, as the imagery and associated record Object IDs may change daily. This ArcGIS Server dynamic imagery layer can be used in Web Maps and ArcGIS Desktop as well as Web and Mobile applications using the REST based Image services API. Images can be exported up to a maximum of 4,000 columns x 4,000 rows per request.

    Data Source: Sentinel-2 imagery is the result of close collaboration between the European Space Agency (ESA), the European Commission and USGS. Data is hosted by Amazon Web Services as part of their Registry of Open Data. Users can access the imagery from Sentinel-2 on AWS, or alternatively access EarthExplorer or the Copernicus Data Space Ecosystem to download the scenes. For information on Sentinel-2 imagery, see Sentinel-2.
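    To make the Tile_ID layout above concrete, the short Python sketch below splits an example identifier into its parts. The example value and the dictionary keys are illustrative only and follow the layout stated above, not a verified Esri schema.

      from datetime import datetime

      def parse_tile_id(tile_id: str) -> dict:
          # Split a Tile_ID of the form
          # [year][month][day]T[hours][minutes][seconds]_[UTMcode][latitudeband][square]_[sequence]
          acquired, mgrs, sequence = tile_id.split("_")
          return {
              "acquired": datetime.strptime(acquired, "%Y%m%dT%H%M%S"),
              "utm_zone": mgrs[:2],        # UTM code, e.g. 55
              "latitude_band": mgrs[2],    # e.g. K
              "square": mgrs[3:],          # e.g. EV
              "sequence": sequence,
          }

      # Illustrative identifier following the documented layout (not a real scene).
      print(parse_tile_id("20240102T001109_55KEV_0"))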

  4. Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0 (AIMS)

    • researchdata.edu.au
    • catalogue.eatlas.org.au
    Updated Nov 30, 2021
    Cite
    Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr (2021). Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0 (AIMS) [Dataset]. https://researchdata.edu.au/coral-sea-sentinel-0-aims/2973700
    Dataset updated
    Nov 30, 2021
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr
    License

    Attribution 3.0 (CC BY 3.0), https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset contains composite satellite images for the Coral Sea region based on 10 m resolution Sentinel 2 imagery from 2015 – 2021. This image collection is intended to allow mapping of the reef and island features of the Coral Sea. This is a draft version of the dataset prepared from approximately 60% of the available Sentinel 2 imagery. An improved version of this dataset was released at https://doi.org/10.26274/NH77-ZW79.

    This collection contains composite imagery for 31 Sentinel 2 tiles in the Coral Sea. For each tile there are 5 different colour and contrast enhancement styles intended to highlight different features. These include:
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This technique doesn't work where the water is less clear, as the ultraviolet band is easily scattered.
    - DeepMarine - Bands: B2 (blue), B3 (green), B4 (red): This is a contrast enhanced version of the true colour imagery, focusing on making the deeper features easier to see. Shallow features are over exposed due to the increased contrast.
    - ReefTop - Bands: B4 (red): This imagery is contrast enhanced to create a mask (black and white) of reef tops, delineating areas that are shallower or deeper than approximately 4 - 5 m. This mask is intended to assist in creating a GIS layer equivalent to the 'GBR Dry Reefs' dataset. The depth mapping exploits the limited water penetration of the red channel. In clear water the red channel can only see features to approximately 6 m regardless of the substrate type.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands progressively penetrate the water less. B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue, dry areas are white.
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for interpreting shallow features, for mapping the vegetation on cays and for identifying beach rock.

    For most Sentinel tiles there are two versions of the DeepFalse and DeepMarine imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery so that mapped features could be checked against two images. Typically the R2 imagery will have more artefacts from clouds.

    The satellite imagery was processed in tiles (approximately 100 x 100 km) to keep each final image small enough to manage. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs.

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, the set of Sentinel images from 2015 – 2021 was reviewed manually. In some tiles the cloud cover threshold was raised to gather more images, particularly if there were fewer than 20 images available. The Google Earth Engine image IDs of the best images were recorded. These were the images with the clearest water, lowest waves, lowest cloud, and lowest sun glint.
    2. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later; a minimal sketch of this step is shown after this list).
    3. The contrast of the images was enhanced to create a series of products for different uses. The true colour image retained the full range of tones visible, so that bright sand cays still retained some detail. The marine enhanced version stretched the blue, green and red channels so that they focused on the deeper, darker marine features. This stretching was done to ensure that, when converted to 8-bit colour imagery, all the dark detail in the deeper areas was visible. This contrast enhancement resulted in bright areas of the imagery clipping, leading to loss of detail in shallow reef areas and land colours looking off. A reef top estimate was produced from the red channel (B4) where the contrast was stretched so that the imagery contains an almost binary mask. The threshold was chosen to approximate the 5 m depth contour for the clear waters of the Coral Sea. Lastly a false colour image was produced to allow mapping of shallow water features such as cays and islands. This image was produced from B5 (red edge), B8 (near infrared) and B11 (short wave infrared), where blue represents depths from approximately 0.5 – 5 m, green represents areas with 0 – 0.5 m depth, and brown and white correspond to dry land.
    4. The various contrast enhanced composite images were exported from Google Earth Engine (default of 32 bit GeoTiff) and reprocessed to smaller LZW compressed 8 bit GeoTiff images using GDAL.
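    As a rough illustration of steps 1 and 2, the sketch below uses the Google Earth Engine Python API to build a median composite from a hand-picked list of image IDs. The image IDs, tile extent and the placeholder cloud masking function are assumptions for illustration only, not the dataset's actual code.

      import ee
      ee.Initialize()

      # Hand-curated Google Earth Engine image IDs (placeholders, not real selections).
      SELECTED_IDS = [
          "COPERNICUS/S2/20180812T002711_20180812T002706_T55KEV",
          "COPERNICUS/S2/20190716T002711_20190716T002710_T55KEV",
      ]

      def mask_clouds(img):
          # Placeholder for the two-threshold cloud and shadow mask described in the
          # Cloud Masking section below.
          return img

      # Load the curated scenes, mask clouds, then take the per-pixel median.
      images = ee.ImageCollection([ee.Image(img_id) for img_id in SELECTED_IDS])
      composite = images.map(mask_clouds).median()

      # Export the composite for one tile (placeholder extent) for later GDAL processing.
      tile_region = ee.Geometry.Rectangle([146.0, -18.0, 147.0, -17.0])
      task = ee.batch.Export.image.toDrive(
          image=composite.select(["B1", "B2", "B3", "B4"]),
          description="S2_median_composite_55KEV",
          region=tile_region,
          scale=10,
          maxPixels=1e9,
      )
      task.start()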

    Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. Even with 4-pixel filter resolutions these operations used over 90% of the total processing time, resulting in each image taking approximately 10 minutes to compute on the Google Earth Engine.

    Sun glint removal and atmospheric correction.

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the ultraviolet and visible channels (B1, B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.

    This simple sun glint correction fails in very shallow and land areas. On land areas B8 is very bright and thus subtracting it from the other channels results in black land. In shallow areas (< 0.5 m) the B8 channel detects the substrate, resulting in too much sun glint correction. To resolve these issues the sun glint correction was adjusted by transitioning to B11 for shallow areas as it penetrates the water even less than B8. We don't use B11 everywhere because it is half the resolution of B8.

    Land areas need their tonal levels to be adjusted to match the water areas after sun glint correction. Ideally this would be achieved using an atmospheric correction that compensates for the contrast loss due to haze in the atmosphere. Complex models for atmospheric correction consider the elevation of the surface (higher areas have less atmosphere to pass through) and the weather conditions. Since this dataset is focused on coral reef areas, elevation compensation is unnecessary due to the very low and flat land features being imaged. Additionally the focus of the dataset is on marine features and so only a basic atmospheric correction is needed. Land areas (as determined by very bright B8 areas) were assigned a fixed smaller correction factor to approximate atmospheric correction. This fixed atmospheric correction was determined iteratively so that land areas matched the tonal value of shallow and water areas.
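    A simplified per-pixel version of this correction is sketched below with NumPy. The band arrays, thresholds and the fixed land offset are illustrative assumptions, and the real method transitions between corrections gradually rather than with hard masks.

      import numpy as np

      def remove_sun_glint(b_vis, b8, b11, land_thresh=0.3, shallow_thresh=0.05, land_offset=0.05):
          # Approximate the sun glint / atmospheric correction described above.
          # b_vis: one visible band (B1-B4); b8/b11: infrared bands; all reflectance arrays.
          # Thresholds and the fixed land offset are made-up illustrative values.
          land = b8 > land_thresh                        # very bright B8 marks land
          shallow = (b11 > shallow_thresh) & ~land       # B11 sees the substrate only when very shallow

          corrected = b_vis - b8                         # default: subtract glint estimated from B8
          corrected[shallow] = (b_vis - b11)[shallow]    # shallow water: B11 is a better glint estimate
          corrected[land] = (b_vis - land_offset)[land]  # land: fixed offset approximating haze removal
          return np.clip(corrected, 0.0, None)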

    Image selection

    Available Sentinel 2 images with a cloud cover of less than 0.5% were manually reviewed using a Google Earth Engine App (01-select-sentinel2-images.js). Where there were few images available (less than 30 images) the cloud cover threshold was raised to increase the set of images that were reviewed.

    Images were excluded from the composites primarily due to two main factors: sun glint and fine scattered clouds. The images were excluded if there was any significant uncorrected sun glint in the image, i.e. the brightness of the sun glint exceeded the sun glint correction. Fine

  5. LandsatLook Viewer

    • disasters.amerigeoss.org
    • data.amerigeoss.org
    • +4more
    Updated Nov 9, 2018
    Cite
    AmeriGEOSS (2018). LandsatLook Viewer [Dataset]. https://disasters.amerigeoss.org/datasets/landsatlook-viewer
    Dataset updated
    Nov 9, 2018
    Dataset authored and provided by
    AmeriGEOSS
    Description

    Welcome to the LandsatLook Viewer! The LandsatLook Viewer is a prototype tool that was developed to allow rapid online viewing and access to the USGS Landsat image archives. This viewer allows you to:
    - Interactively explore the Landsat archive at up to full resolution directly from a common web browser
    - Search for specific Landsat images based on area of interest, acquisition date, or cloud cover
    - Compare image features and view changes through time
    - Display configurable map information layers in combination with the Landsat imagery
    - Create a customized image display and export it as a simple graphic file
    - View metadata and download the full-band source imagery
    - Search by address or place, or zoom to a point, bounding box, Sentinel-2 Tile, or Landsat WRS-1 or WRS-2 Path/Row
    - Generate and download a video animation of the oldest to newest images displayed in the viewer
    We welcome feedback and input for future versions of this Viewer! Please provide your comments or suggestions.

    About the Imagery: This viewer provides visual and download access to the USGS LandsatLook "Natural Color" image product archive.

    Background: The Landsat satellites have been collecting multispectral images of Earth from space since 1972. Each image contains multiple bands of spectral information which may require significant user time, system resources, and technical expertise to obtain a visual result. As a result, use of and access to Landsat data has historically been limited to the scientific and technical user communities. The LandsatLook "Natural Color" image product option was created to provide Landsat imagery in a simple, user-friendly and viewer-ready format, based on specific bands that have been selected and arranged to simulate natural color. This type of product allows easy visualization of the archived Landsat image without any need for specialized software or technical expertise.

    LandsatLook Viewer: The LandsatLook Viewer displays the LandsatLook Natural Color image product for all Landsat 1-8 images in the USGS archive and was designed primarily for visualization purposes. The imagery within this Viewer will be of value to anyone who wants to quickly see the full Landsat record for an area, along with major image features or obvious changes to Earth's surface through time. An area of interest may be extracted and downloaded as a simple graphic file directly through the viewer, and the original full image tile is also available if needed. Any downloaded LandsatLook image product is a georeferenced file and will be compatible with most GIS and Web mapping applications. If the user needs to perform detailed technical analysis, the full bands of Landsat source data may also be accessed through direct links provided on the LandsatLook Viewer.

    Image Services: The imagery visible in the LandsatLook Viewer is based on Web-based ArcGIS image services. The underlying REST service endpoints for the LandsatLook imagery are available at https://landsatlook.usgs.gov/arcgis/rest/services/LandsatLook/ImageServer.

    Useful links:
    - Landsat: Landsat Mission (USGS), Landsat Science (NASA)
    - LandsatLook: Product Description, USGS Fact Sheet, LandsatLook image services (REST)
    - Landsat Products: Landsat 8 OLI/TIRS, Landsat 7 ETM+, Landsat 4-5 TM, Landsat 1-5 MSS, Landsat Band Designations

    LandsatLook images are full-resolution files derived from Landsat Level-1 data products. The images are compressed and stretched to create an image optimized for image selection and visual interpretation.
    It is recommended that these images not be used in image analysis. LandsatLook image files are included as options when downloading Landsat scenes from EarthExplorer, GloVis, or the LandsatLook Viewer (see Figure 1).

    Figure 1. LandsatLook and Level-1 product download options

    LandsatLook Natural Color Image: The LandsatLook Natural Color image is a .jpg composite of three bands to show a "natural" looking (false color) image. Reflectance values were calculated from the calibrated scaled digital number (DN) image data. The reflectance values were scaled to a 1-255 range using a gamma stretch with gamma = 2.0 (a small worked example of this stretch follows this description). This stretch was designed to emphasize vegetation without clipping the extreme values.
    - Landsat 8 OLI = Bands 6,5,4
    - Landsat 7 ETM+ and Landsat 4-5 TM = Bands 5,4,3
    - Landsat 4-5 MSS = Bands 2,4,1
    - Landsat 1-3 MSS = Bands 7,5,4

    LandsatLook Thermal Image: The LandsatLook Thermal image is a one-band gray scale .jpg image that displays thermal properties of a Landsat scene. Image brightness temperature values were calculated from the calibrated scaled digital number (DN) image data. An image specific 2 percent clip and a linear stretch to 1-255 were applied to the brightness temperature values.
    - Landsat 8 TIRS = Band 10
    - Landsat 7 ETM+ = Band 61-high gain
    - Landsat 4-5 TM = Band 6
    - Landsat 1-5 MSS = not available

    LandsatLook Quality Image: LandsatLook Quality images are 8-bit files generated from the Landsat Level-1 Quality band to provide a quick view of the quality of the pixels within the scene, to help determine if a particular scene would work best for the user's application. This file includes values representing bit-packed combinations of surface, atmosphere, and sensor conditions that can affect the overall usefulness of a given pixel. Color mapping assignments can be seen in Table 1 below. For each Landsat scene, LandsatLook Quality images can be downloaded individually in .jpg format, or as a GeoTIFF format file (_QB.TIF) within the LandsatLook Images with Geographic Reference file.

    Table 1. Landsat Collection 1 LandsatLook 8-bit Quality Image designations (bit: Landsat 8 OLI/TIRS; Landsat 7 ETM+ and Landsat 4-5 TM; Landsat 1-5 MSS)
    - Bit 0: Designated Fill; Designated Fill; Designated Fill
    - Bit 1: Terrain Occlusion; Dropped Pixel; Dropped Pixel
    - Bit 2: Radiometric Saturation; Radiometric Saturation; Radiometric Saturation
    - Bit 3: Cloud; Cloud; Cloud
    - Bit 4: Cloud Shadow; Cloud Shadow; Unused
    - Bit 5: Snow/Ice; Snow/Ice; Unused
    - Bit 6: Cirrus; Unused; Unused
    - Bit 7: Unused; Unused; Unused

    LandsatLook Images with Geographic Reference: The LandsatLook Image with Geographic Reference is a .zip file bundle that contains the Natural Color, Thermal, and 8-bit Quality images in georeferenced GeoTiff (.TIF) file format.

    Figure 2. LandsatLook Natural Color Image: Landsat 8 Path 45 Row 30, acquired April 23, 2013
    Figure 3. LandsatLook Thermal Image: Landsat 8 Path 45 Row 30, acquired April 23, 2013
    Figure 4. LandsatLook Quality Image: Landsat 8 Path 45 Row 30, acquired April 23, 2013, with background color set to dark grey

    Additional Information About LandsatLook Images: Many geographic information systems and image processing software packages easily support .jpg images. To create these files, Landsat data is mapped to a 1-255 range, with the fill area set to zero (if a no-data value is set to zero, the compression algorithm may introduce zero-value artifacts into the data area, causing very dark data values to be displayed as no-data).
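    As a rough worked example of the gamma stretch described above (reflectance scaled to 1-255 with gamma = 2.0), the NumPy sketch below shows one plausible formulation; the exact USGS scaling and clipping details are not given here, so treat this as an assumption-laden illustration.

      import numpy as np

      def landsatlook_stretch(reflectance, gamma=2.0):
          # Scale reflectance (0-1) to the 1-255 range with a gamma stretch.
          # Illustrative approximation only; the actual USGS implementation
          # (per-band handling, clipping) may differ.
          r = np.clip(reflectance, 0.0, 1.0)
          stretched = np.power(r, 1.0 / gamma)                      # gamma = 2.0 brightens mid-tones
          return (1 + np.round(stretched * 254)).astype(np.uint8)   # 1-255, keeping 0 for fill

      pixel = np.array([0.05, 0.25, 0.60])      # example reflectance values
      print(landsatlook_stretch(pixel))         # -> approximately [58, 128, 198]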

  6. Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS)

    • researchdata.edu.au
    Updated Feb 29, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr (2024). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Dataset]. http://doi.org/10.26274/NH77-ZW79
    Dataset updated
    Feb 29, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.

    This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the satellite derived bathymetry algorithm in its handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.

    The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).

    This collection contains composite imagery for Sentinel 2 tiles (59 in the Coral Sea, 8 in the GBR) and Landsat 8 tiles (12 in the Coral Sea, 4 in the GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This imagery has a high level of contrast enhancement applied and so appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands progressively penetrate the water less. B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue, dry areas are white. This imagery is intended to help identify coral cay boundaries.

    For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.

    All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours:
    - Depth5m - This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
    - Depth10m - This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
    - Depth20m - This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).

    For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery. This allows any mapped features to be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.

    The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic as keeping them separate allowed better individual image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might be potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through the Google Earth Engine.

    Methods:

    The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images where there were no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images. The best were added to the R1 composite, and the next best images into the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed. Adding images with clouds introduces small artefacts to the images, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
    2. Sun glint was removed from the imagery using estimates of the sun glint from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
    3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
    4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement (a minimal sketch of this adjustment follows this list). Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so does not correct for finer spatial brightness variations.
    5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth and the Shallow style provides access to far red and infrared bands for assessing shallow features, such as cays and islands.
    6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole region for each depth.
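    The per-tile brightness normalisation in step 4 could look roughly like the NumPy sketch below; the target deep-water brightness and the way deep-water pixels are identified are illustrative assumptions rather than the dataset's actual parameters.

      import numpy as np

      def normalise_tile_brightness(tile, deep_water_mask, target_deep_median=0.012):
          # Apply a single per-tile offset so deep water has a consistent median brightness.
          # tile: 2D reflectance array for one band; deep_water_mask: boolean array of
          # open-water pixels. The target value is a made-up placeholder.
          offset = target_deep_median - np.median(tile[deep_water_mask])
          return tile + offset   # one offset for the whole tile, per the method description

      # Example: shift a tile whose deep water is slightly too bright.
      tile = np.full((4, 4), 0.015)
      mask = np.ones_like(tile, dtype=bool)
      print(normalise_tile_brightness(tile, mask)[0, 0])  # -> 0.012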

    Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows.

    The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. A rough sketch of the low cloud and shadow masking is shown below.
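    The Earth Engine Python sketch below illustrates the low-cloud part of this masking (40% probability threshold, 400 m shadow projection, 150 m buffer). It assumes the matching COPERNICUS/S2_CLOUD_PROBABILITY image has already been joined to each scene as a 'cloud_prob' property, and it follows the pattern of the public s2cloudless tutorials rather than the dataset's exact code.

      import ee
      ee.Initialize()

      def low_cloud_shadow_mask(img, prob_threshold=40, proj_dist_m=400, buffer_m=150):
          # Mask small clouds and their nearby shadows, per the parameters described above.
          prob = ee.Image(img.get("cloud_prob")).select("probability")
          clouds = prob.gt(prob_threshold)

          # Shadows fall in the direction opposite the sun's azimuth.
          shadow_azimuth = ee.Number(90).subtract(
              ee.Number(img.get("MEAN_SOLAR_AZIMUTH_ANGLE")))

          # Project the cloud mask away from the sun; distance is in pixels at 10 m/px.
          shadows = (clouds.directionalDistanceTransform(shadow_azimuth, int(proj_dist_m / 10))
                     .reproject(crs=img.select(0).projection(), scale=100)
                     .select("distance").mask())

          # Buffer the combined mask to catch soft cloud edges missed by the threshold.
          mask = clouds.Or(shadows).focalMax(buffer_m, "circle", "meters")
          return img.updateMask(mask.Not())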

  7. Color Landform Atlas of the United States, Version 2

    • cmr.earthdata.nasa.gov
    Updated Apr 25, 2017
    Cite
    (2017). Color Landform Atlas of the United States, Version 2 [Dataset]. https://cmr.earthdata.nasa.gov/search/concepts/C1214593875-SCIOPS
    Available download formats: html
    Dataset updated
    Apr 25, 2017
    Time period covered
    Jan 1, 1970 - Present
    Area covered
    Description

    The Color Landform Atlas of the United States, Version 2 may be accessed on the World Wide Web at: 'http://fermi.jhuapl.edu/states/states.html'

    The following information was abstracted from: 'http://fermi.jhuapl.edu/states/about.html'. Please visit this page for additional information.

    Currently the following maps are available for each state (except Alaska and Hawaii, which are coming sometime):

    - A topographic map optimized to show the landforms. The same color shading is used across the country.
    - A map showing counties in a state. The background topography has been somewhat suppressed to allow the county boundaries to show well.
    - Satellite images of the state. These have been obtained here directly from the NOAA weather satellites and use the AVHRR image data.
    - An 1895 map of each state. These are from an old Rand McNally Atlas of the World. Not yet all available, still scanning.
    - A PostScript map of counties in the state. These are intended for download and printing on a PostScript printer.

    The first two maps all have the same maximum image length (900 pixels), so the actual scale varies from state to state. Long narrow states also have more detailed subsections available. More maps will be added later.

    The elevation key is intended for the topographic maps. The county maps use the same colors but with less contrast. It may be convenient to start another browser window and view the elevation key image at the same time as the map of interest.

    The same data and coloring is used for the state maps as for the previous JHU/APL Digital Relief Map of the U.S. which covers the U.S. at a uniform scale in 60 GIF images.

    Even though the same color scheme is used as for earlier maps, a new coloring algorithm is in use. The coloring for some maps is improved; for others it is not as good. The old coloring algorithm used a median cut technique which did not handle small areas of elevation extremes well. An example problem area is Mt. Washington in New Hampshire, which was miscolored on the previous maps. The new algorithm does a better overall job but has occasional problems along the coast.

  8. Torres Strait Sentinel 2 Satellite Regional Maps and Imagery 2015 – 2021 (AIMS)

    • researchdata.edu.au
    Updated Oct 1, 2022
    Cite
    Lawrey, Eric, Dr; Lawrey, Eric, Dr (2022). Torres Strait Sentinel 2 Satellite Regional Maps and Imagery 2015 – 2021 (AIMS) [Dataset]. http://doi.org/10.26274/3CGE-NV85
    Dataset updated
    Oct 1, 2022
    Dataset provided by
    Australian Institute of Marine Science (http://www.aims.gov.au/)
    Australian Ocean Data Network
    Authors
    Lawrey, Eric, Dr; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2015 - Mar 1, 2022
    Area covered
    Description

    This dataset contains large (A0) printable maps of the Torres Strait, broken into six overlapping regions and based on clear sky, clear water Sentinel 2 composite imagery, together with the imagery used to create these maps. These maps show satellite imagery of the region, overlaid with reef and island boundaries and names. Not all features are named, just the more prominent features. The dataset also includes a vector map of Ashmore Reef and Boot Reef in the Coral Sea, as these were used in the same discussions for which these maps were developed. The map of Ashmore Reef includes the atoll platform, reef boundaries and depth polygons for 5 m and 10 m.

    This dataset contains all working files used in the development of these maps. This includes a copy of all the source datasets and all derived satellite image tiles and QGIS files used to create the maps. This includes cloud free Sentinel 2 composite imagery of the Torres Strait region with alpha blended edges to allow the creation of a smooth high resolution basemap of the region.

    The base imagery is similar to the older base imagery dataset: Torres Strait clear sky, clear water Landsat 5 satellite composite (NERP TE 13.1 eAtlas, AIMS, source: NASA).

    Most of the imagery in the composite is from 2017 - 2021.

    Method: The Sentinel 2 basemap was produced by processing imagery from the World_AIMS_Marine-satellite-imagery dataset (not yet published) for the Torres Strait region. The TrueColour imagery for the scenes covering the mapped area were downloaded. Both the reference 1 imagery (R1) and reference 2 imagery (R2) was copied for processing. R1 imagery contains the lowest noise, most cloud free imagery, while R2 contains the next best set of imagery. Both R1 and R2 are typically composite images from multiple dates.

    The R2 images were selectively blended using manually created masks with the R1 images. This was done to get the best combination of both images and typically resulted in a reduction in some of the cloud artefacts in the R1 images. The mask creation and previewing of the blending was performed in Photoshop. The created masks were saved in 01-data/R2-R1-masks. To help with the blending of neighbouring images a feathered alpha channel was added to the imagery. The processing of the merging (using the masks) and the creation of the feathered borders on the images was performed using a Python script (src/local/03-merge-R2-R1-images.py) using the Pillow library and GDAL. The neighbouring image blending mask was created by applying a blurring of the original hard image mask. This allowed neighbouring image tiles to merge together.
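    A minimal sketch of this kind of mask-based blend with Pillow is shown below; the file names, feather radius and overall structure are illustrative and simplified compared with the dataset's 03-merge-R2-R1-images.py script.

      from PIL import Image, ImageFilter

      # Illustrative file names; the real script works on the dataset's R1/R2 image tiles.
      r1 = Image.open("R1_tile.png").convert("RGBA")
      r2 = Image.open("R2_tile.png").convert("RGBA")
      mask = Image.open("R2-R1-mask.png").convert("L")   # white = take R2, black = keep R1

      # Blend R2 into R1 where the manually drawn mask is white.
      merged = Image.composite(r2, r1, mask)

      # Feather the tile edge so neighbouring tiles blend smoothly: blur a hard
      # border mask and store it in the alpha channel (top edge only, for brevity).
      edge_mask = Image.new("L", merged.size, 255)
      edge_mask.paste(0, (0, 0, merged.size[0], 20))
      feathered = edge_mask.filter(ImageFilter.GaussianBlur(radius=10))
      merged.putalpha(feathered)

      merged.save("merged_tile.png")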

    The imagery and reference datasets (reef boundaries, EEZ) were loaded into QGIS for the creation of the printable maps.

    To optimise the matching of neighbouring scenes in the resulting map, slight brightness adjustments were applied to each scene tile. This was done in the setup of each image in QGIS. This adjustment was imperfect as each tile was made from a different combination of days (to remove clouds), resulting in each scene having a different tonal gradient across the scene than its neighbours. Additionally Sentinel 2 has slight stripes (at 13 degrees off the vertical) due to the swath of each sensor having a slight sensitivity difference. This effect is uncorrected in this imagery.

    Single merged composite GeoTiff: The image tiles with alpha blended edges work well in QGIS, but not in ArcGIS Pro. To allow this imagery to be used across tools that don't support the alpha blending we merged and flattened the tiles into a single large GeoTiff with no alpha channel. This was done by rendering the map created in QGIS into a single large image. This was done in multiple steps to make the process manageable.

    The rendered map was cut into twenty 1 x 1 degree georeferenced PNG images using the Atlas feature of QGIS. This process baked in the alpha blending across neighbouring Sentinel 2 scenes. The PNG images were then merged back into a large GeoTiff image using GDAL (via QGIS), removing the alpha channel. The brightness of the image was adjusted so that the darkest pixels in the image were 1, saving the value 0 for nodata masking and the boundary was clipped, using a polygon boundary, to trim off the outer feathering. The image was then optimised for performance by using internal tiling and adding overviews. A full breakdown of these steps is provided in the README.md in the 'Browse and download all data files' link.
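    A rough equivalent of this merge step using the GDAL Python bindings might look like the sketch below; the file names, nodata handling and creation options are assumptions, not a record of the exact QGIS/GDAL commands used.

      import glob
      from osgeo import gdal

      # Georeferenced 1 x 1 degree PNG tiles rendered from QGIS (illustrative paths).
      tiles = sorted(glob.glob("render/tile_*.png"))

      # Mosaic the tiles into a virtual raster, then write a single flattened GeoTiff
      # without the alpha channel, with 0 reserved as the nodata value.
      vrt = gdal.BuildVRT("mosaic.vrt", tiles)
      gdal.Translate(
          "TS_composite.tif",
          vrt,
          bandList=[1, 2, 3],                      # drop the alpha band
          noData=0,
          creationOptions=["COMPRESS=LZW", "TILED=YES"],
      )
      vrt = None

      # Add overviews (pyramids) so the large image renders quickly in a desktop GIS.
      ds = gdal.Open("TS_composite.tif", gdal.GA_Update)
      ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])
      ds = None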

    The merged final image is available in export\TS_AIMS_Torres Strait-Sentinel-2_Composite.tif.

    Change Log:

    2023-03-02 (Eric Lawrey): Created a merged version of the satellite imagery, with no alpha blending, so that it can be used in ArcGIS Pro. It is now a single large GeoTiff image. The Google Earth Engine source code for the World_AIMS_Marine-satellite-imagery was included to improve the reproducibility and provenance of the dataset, along with a calculation of the distribution of image dates that went into the final composite image. A WMS service for the imagery was also set up and linked to from the metadata. A cross reference to the older Torres Strait clear sky clear water Landsat composite imagery was also added to the record.

    22 Nov 2023 (Eric Lawrey): Added the data and maps for a close up of Mer:
    - 01-data/TS_DNRM_Mer-aerial-imagery/
    - preview/Torres-Strait-Mer-Map-Landscape-A0.jpeg
    - exports/Torres-Strait-Mer-Map-Landscape-A0.pdf
    Updated 02-Torres-Strait-regional-maps.qgz to include the layout for the new map.

    Source datasets: Complete Great Barrier Reef (GBR) Island and Reef Feature boundaries including Torres Strait Version 1b (NESP TWQ 3.13, AIMS, TSRA, GBRMPA), https://eatlas.org.au/data/uuid/d2396b2c-68d4-4f4b-aab0-52f7bc4a81f5

    Geoscience Australia (2014b), Seas and Submerged Lands Act 1973 - Australian Maritime Boundaries 2014a - Geodatabase [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, https://dx.doi.org/10.4225/25/5539DFE87D895

    Basemap/AU_GA_AMB_2014a/Exclusive_Economic_Zone_AMB2014a_Limit.shp
    The original data was obtained from GA (Geoscience Australia, 2014b). The Geodatabase was loaded in ArcMap. The Exclusive_Economic_Zone_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small no clipping was applied to the data.

    Geoscience Australia (2014a), Treaties - Australian Maritime Boundaries (AMB) 2014a [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, http://dx.doi.org/10.4225/25/5539E01878302
    Basemap/AU_GA_Treaties-AMB_2014a/Papua_New_Guinea_TSPZ_AMB2014a_Limit.shp
    The original data was obtained from GA (Geoscience Australia, 2014a). The Geodatabase was loaded in ArcMap. The Papua_New_Guinea_TSPZ_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small no clipping was applied to the data.

    AIMS Coral Sea Features (2022) - DRAFT. This is a draft version of this dataset. The region for Ashmore and Boot Reef was checked. The attributes in these datasets haven't been cleaned up. Note these files should not be considered finalised and are only suitable for maps around Ashmore Reef. Please source an updated version of this dataset for any other purpose.
    - CS_AIMS_Coral-Sea-Features/CS_Names/Names.shp
    - CS_AIMS_Coral-Sea-Features/CS_Platform_adj/CS_Platform.shp
    - CS_AIMS_Coral-Sea-Features/CS_Reef_Boundaries_adj/CS_Reef_Boundaries.shp
    - CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth5m_Coral-Sea.shp
    - CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth10m_Coral-Sea.shp

    Murray Island 20 Sept 2011 15 cm SISP aerial imagery, Queensland Spatial Imagery Services Program, Department of Resources, Queensland. This is the high resolution imagery used to create the map of Mer.

    Marine satellite imagery (Sentinel 2 and Landsat 8) (AIMS), https://eatlas.org.au/data/uuid/5d67aa4d-a983-45d0-8cc1-187596fa9c0c - World_AIMS_Marine-satellite-imagery

    Data Location: This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\TS_AIMS_Torres-Strait-Sentinel-2-regional-maps. On the eAtlas server it is stored at eAtlas GeoServer\data\2020-2029-AIMS.

  9. Satellite (VIIRS) Thermal Hotspots and Fire Activity

    • cacgeoportal.com
    • hub.arcgis.com
    Updated Mar 29, 2024
    Cite
    Central Asia and the Caucasus GeoPortal (2024). Satellite (VIIRS) Thermal Hotspots and Fire Activity [Dataset]. https://www.cacgeoportal.com/maps/9eaa252e879a44abbedb8ec02b55ea2e
    Dataset updated
    Mar 29, 2024
    Dataset authored and provided by
    Central Asia and the Caucasus GeoPortal
    Area covered
    Description

    This live Web Map is a subset of the Global Satellite (VIIRS) Thermal Hotspots and Fire Activity layer, which presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA’s Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

    Consumption Best Practices: As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high-demand service environment. When ingesting this service in your applications, avoid using POST requests whenever possible; these requests can compromise performance and scalability during periods of high usage because they too are not cacheable.

    Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
    Scale/Resolution: 375 m
    Update Frequency: Hourly, using the aggregated live feed methodology
    Area Covered: World

    What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfires over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many “false positives” (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device; LANCE estimates availability at around 3 hours after detection, and Esri livefeeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

    VIIRS is a scanning radiometer aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a livefeed derived from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes. The 375 m data complement the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; the two show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

    Attribute information:
    - Latitude and Longitude: The centre point of the (approximately) 375 m pixel flagged as containing one or more fires/hotspots.
    - Satellite: Whether the detection was picked up by the Suomi NPP satellite (N), the NOAA-20 satellite (1), or the NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
    - Confidence: A quality flag of the individual hotspot/active fire pixel, based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels. Please note: low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4, leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
    - FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al., 2005).
    - DayNight: D = daytime fire, N = nighttime fire.
    - Hours Old: Derived field giving the age of the record in hours between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.

    Additional information can be found on the NASA FIRMS site FAQ.

    Note about near real time data: Near real time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

    Revisions:
    - March 7, 2024: Updated to include source data from the NOAA-21 satellite.
    - September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
    - July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.

    This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues, or simply see when this service will next update, please visit our Live Feed Status Page!
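    The "Age"-field guidance above translates into a simple, cacheable GET query against the feature layer's REST endpoint. The sketch below is illustrative only: the service URL is a placeholder for the actual Living Atlas endpoint, and the field names (Hours_Old, frp, confidence, daynight) are assumptions taken from the attribute description above and may differ in the live service.

    ```python
    import requests

    # Placeholder URL: substitute the actual VIIRS Thermal Hotspots feature layer
    # endpoint referenced by the Living Atlas item above.
    SERVICE_URL = (
        "https://services.arcgis.com/<org-id>/arcgis/rest/services/"
        "Satellite_VIIRS_Thermal_Hotspots_and_Fire_Activity/FeatureServer/0/query"
    )

    params = {
        # Filter on the derived age field (cacheable) rather than a relative
        # Date/Time expression; field names are assumptions based on the text above.
        "where": "Hours_Old <= 24",
        "outFields": "latitude,longitude,frp,confidence,daynight,Hours_Old",
        "returnGeometry": "false",
        "resultRecordCount": 100,
        "f": "json",
    }

    # Use GET (not POST) so that intermediate caches can serve repeated queries.
    response = requests.get(SERVICE_URL, params=params, timeout=30)
    response.raise_for_status()
    for feature in response.json().get("features", []):
        print(feature["attributes"])
    ```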

  10. Satellite (VIIRS) Thermal Hotspots and Fire Activity - Dataset - CKAN

    • nationaldataplatform.org
    Updated Feb 28, 2024
    Cite
    (2024). Satellite (VIIRS) Thermal Hotspots and Fire Activity - Dataset - CKAN [Dataset]. https://nationaldataplatform.org/catalog/dataset/satellite-viirs-thermal-hotspots-and-fire-activity
    Explore at:
    Dataset updated
    Feb 28, 2024
    Description

    This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA’s Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

    Consumption Best Practices: As a service that is subject to viral loads (very high usage), avoid adding filters that use a Date/Time type field; these queries are not cacheable and will be subject to rate limiting by ArcGIS Online. To accommodate filtering events by Date/Time, we encourage using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be supplied to many users without adding load on the service. When ingesting this service in your applications, avoid using POST requests; these requests are not cacheable and will also be subject to rate limiting measures.

    Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
    Scale/Resolution: 375 m
    Update Frequency: Hourly, using the aggregated live feed methodology
    Area Covered: World

    What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfires over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many “false positives” (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device; LANCE estimates availability at around 3 hours after detection, and Esri livefeeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

    VIIRS is a scanning radiometer aboard the Suomi NPP and NOAA-20 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a livefeed derived from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes. The 375 m data complement the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; the two show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

    Attribute information:
    - Latitude and Longitude: The centre point of the (approximately) 375 m pixel flagged as containing one or more fires/hotspots.
    - Satellite: Whether the detection was picked up by the Suomi NPP satellite (N) or the NOAA-20 satellite (1). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
    - Confidence: A quality flag of the individual hotspot/active fire pixel, based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels. Please note: low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4, leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
    - FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al., 2005).
    - DayNight: D = daytime fire, N = nighttime fire.
    - Hours Old: Derived field giving the age of the record in hours between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.

    Additional information can be found on the NASA FIRMS site FAQ.

    Note about near real time data: Near real time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

    Revisions:
    - September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
    - July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.

    This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues, or simply see when this service will next update, please visit our Live Feed Status Page!

  11. VIIRS/JPSS2 Vegetation Indices 16-Day L3 Global 1km SIN Grid V002

    • cmr.earthdata.nasa.gov
    Updated Jan 5, 2024
    Cite
    (2024). VIIRS/JPSS2 Vegetation Indices 16-Day L3 Global 1km SIN Grid V002 [Dataset]. http://doi.org/10.5067/VIIRS/VJ213A2.002
    Explore at:
    Dataset updated
    Jan 5, 2024
    Time period covered
    Jan 1, 2018 - Present
    Area covered
    Earth
    Description

    The NOAA-21 Visible Infrared Imaging Radiometer Suite (VIIRS) (https://lpdaac.usgs.gov/dataset_discovery/viirs) Vegetation Indices (VJ213A2) Version 2 data product provides vegetation indices by a process of selecting the best available pixel over a 16-day acquisition period at 1 kilometer (km) resolution. The VJ213 data products are designed after the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Vegetation Indices product suite to promote the continuity of the Earth Observation System (EOS) mission.

    The VJ213 algorithm process produces three vegetation indices: The Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Enhanced Vegetation Index-2 (EVI2). NDVI is one of the longest continual remotely sensed time series observations, using both the red and near-infrared (NIR) bands. EVI is a slightly different vegetation index that is more sensitive to canopy cover, while NDVI is more sensitive to chlorophyll. EVI2 is a reformation of the standard 3-band EVI, using the red band and NIR band. This reformation addresses arising issues when comparing VIIRS EVI to other EVI models that do not include a blue band. EVI2 will eventually become the standard EVI.

    Along with the three Vegetation Indices layers, this product also includes layers for NIR reflectance; three shortwave infrared (SWIR) reflectance; red, blue, and green reflectance; composite day of year; pixel reliability; relative azimuth, view, and sun angles, and a quality layer. Two low resolution browse images are also available for each VJ213A2 product: EVI and NDVI.
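
    For reference, the three indices named above follow the standard MODIS/VIIRS formulations; the minimal sketch below is illustrative only and is not the VJ213 production algorithm.

    ```python
    def ndvi(nir: float, red: float) -> float:
        """Normalized Difference Vegetation Index."""
        return (nir - red) / (nir + red)

    def evi(nir: float, red: float, blue: float) -> float:
        """Enhanced Vegetation Index (standard 3-band form with MODIS/VIIRS coefficients)."""
        return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

    def evi2(nir: float, red: float) -> float:
        """Two-band EVI, used when no blue band is available."""
        return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

    # Example with typical vegetated-surface reflectances (unitless, 0-1):
    print(round(ndvi(0.45, 0.08), 3), round(evi(0.45, 0.08, 0.04), 3), round(evi2(0.45, 0.08), 3))
    ```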

  12. VIIRS/JPSS2 Vegetation Indices 16-Day L3 Global 500m SIN Grid V002

    • cmr.earthdata.nasa.gov
    Updated Dec 14, 2023
    Cite
    (2023). VIIRS/JPSS2 Vegetation Indices 16-Day L3 Global 500m SIN Grid V002 [Dataset]. http://doi.org/10.5067/VIIRS/VJ213A1.002
    Explore at:
    Dataset updated
    Dec 14, 2023
    Time period covered
    Jan 1, 2018 - Present
    Area covered
    Earth
    Description

    The NOAA-21 Visible Infrared Imaging Radiometer Suite (VIIRS) (https://lpdaac.usgs.gov/dataset_discovery/viirs) Vegetation Indices (VJ213A1) Version 2 data product provides vegetation indices by a process of selecting the best available pixel over a 16-day acquisition period at 500 meter (m) resolution. The VJ213 data products are designed after the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Vegetation Indices product suite to promote the continuity of the Earth Observation System (EOS) mission.

    The VJ213 algorithm process produces three vegetation indices: Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and Enhanced Vegetation Index-2 (EVI2). NDVI is one of the longest continual remotely sensed time series observations, using both the red and near-infrared (NIR) bands. EVI is a slightly different vegetation index that is more sensitive to canopy cover, while NDVI is more sensitive to chlorophyll. EVI2 is a reformation of the standard 3-band EVI, using the red band and NIR band. This reformation addresses arising issues when comparing VIIRS EVI to other EVI models that do not include a blue band. EVI2 will eventually become the standard EVI.

    Along with the three Vegetation Indices layers, this product also includes layers for NIR reflectance; three shortwave infrared (SWIR) reflectance; red, blue, and green reflectance; composite day of year; pixel reliability; relative azimuth, view, and sun angles; and a quality layer. Two low resolution browse images are also available for each VJ213A1 product: EVI and NDVI.

  13. Satellite (VIIRS) Thermal Hotspots and Fire Activity

    • atlas.eia.gov
    • portal30x30.com
    • +25more
    Updated Apr 1, 2020
    Cite
    Esri (2020). Satellite (VIIRS) Thermal Hotspots and Fire Activity [Dataset]. https://atlas.eia.gov/items/dece90af1a0242dcbf0ca36d30276aa3
    Explore at:
    Dataset updated
    Apr 1, 2020
    Dataset authored and provided by
    Esri
    Description

    This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA’s Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.

    Consumption Best Practices: As a service that is subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding the use of non-cacheable relative Date/Time field filters. To accommodate filtering events by Date/Time, we suggest using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be efficiently provided to users in a high-demand service environment. When ingesting this service in your applications, avoid using POST requests whenever possible; these requests can compromise performance and scalability during periods of high usage because they too are not cacheable.

    Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
    Scale/Resolution: 375 m
    Update Frequency: Hourly, using the aggregated live feed methodology
    Area Covered: World

    What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfires over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many “false positives” (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device; LANCE estimates availability at around 3 hours after detection, and Esri livefeeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.

    VIIRS is a scanning radiometer aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a livefeed derived from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes. The 375 m data complement the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; the two show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.

    Attribute information:
    - Latitude and Longitude: The centre point of the (approximately) 375 m pixel flagged as containing one or more fires/hotspots.
    - Satellite: Whether the detection was picked up by the Suomi NPP satellite (N), the NOAA-20 satellite (1), or the NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
    - Confidence: A quality flag of the individual hotspot/active fire pixel, based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels. Please note: low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4, leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
    - FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al., 2005).
    - DayNight: D = daytime fire, N = nighttime fire.
    - Hours Old: Derived field giving the age of the record in hours between the acquisition date/time and the latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.

    Additional information can be found on the NASA FIRMS site FAQ.

    Note about near real time data: Near real time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by a strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.

    Revisions:
    - March 7, 2024: Updated to include source data from the NOAA-21 satellite.
    - September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
    - July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.

    This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues, or simply see when this service will next update, please visit our Live Feed Status Page!

  14. Early IMERG Precipitation Rate (GPM 3IMERGHHE 06 PrecipitationCal) Web Map

    • climate.esri.ca
    • climat.esri.ca
    • +1more
    Updated Dec 2, 2021
    Cite
    NASA ArcGIS Online (2021). Early IMERG Precipitation Rate (GPM 3IMERGHHE 06 PrecipitationCal) Web Map [Dataset]. https://climate.esri.ca/maps/06f128b03bcc44d0b7376b213697946d
    Explore at:
    Dataset updated
    Dec 2, 2021
    Dataset provided by

    NASA (http://nasa.gov/)
    Authors
    NASA ArcGIS Online
    Area covered
    Description

    GPM_3IMERGHHE Early Precipitation Rate L3 V06 (GPM IMERG Early Precipitation L3 Half Hourly 0.1 degree x 0.1 degree V06 (GPM_3IMERGHHE 06)) is an image service derived from the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (GPM) Early dataset.

    The image service shows precipitation rate (mm/hr), approximately four hours after observation. The image service provides global coverage with a temporal span from 06/01/2000 0:00 UTC to present at 30-minute intervals. The service is updated every three hours to incorporate the new granules. To access the REST endpoint for the service, input the URL into a browser or select View just above the URL.

    IMERG is an algorithm that estimates precipitation rate from multiple passive microwave sensors in the GPM constellation, the GPM Dual-Frequency Radar, and infrared (IR) sensors mounted on geostationary satellites. Currently, the near-real-time Early estimates have no concluding calibration. Briefly describing the Early Run, the input precipitation estimates computed from the various satellite passive microwave sensors are intercalibrated to the Combined Radar-Radiometer Algorithm (CORRA) product (because it is presumed to be the best snapshot Tropical Rainfall Measuring Mission (TRMM)/GPM estimate after adjustment to the monthly Global Precipitation Climatology Project Satellite-Gauge (GPCP SG)), then "forward morphed" and combined with microwave precipitation-calibrated geo-IR fields to provide half-hourly precipitation estimates on a 0.1°x0.1° (roughly 10x10 km) grid over the globe. Precipitation phase is computed using analyses of surface temperature, humidity, and pressure.

    Dataset at a glance

    Shortname: GPM_3IMERGHHE

    DOI: 10.5067/GPM/IMERG/3B-HH-E/06

    Version: 06

    Coverage: -180.0,-90.0,180.0,90.0

    Temporal Coverage: 2000-06-01 to Present

    Data Resolution

    Spatial: 0.1 ° x 0.1 °

    Temporal: 30 minutes

    Symbology

    The default symbology in the Map Viewer may be changed to accommodate other color schemes using the settings in the Image Display panel from the layer settings menu. NoData values, and values less than 0.03 mm/hr (the current threshold value for the IMERG algorithm) have been removed. Ensure that pop-ups are enabled to view pixel values (select Modify Map first).

    Temporal Coverage

    The source dataset is in UTC time but the service is displayed in the Map Viewer in local time. The data is available in 30-minute intervals, and the map visualization may be modified by opening the Time Slider Settings menu from the icon on the time slider bar. The total temporal coverage may be limited to the desired range and the time interval may also be changed. The options in the time interval units are based on the total time range input, so a shorter time range will enable shorter time units to be selected from the time interval drop-down menu. If the time settings are set to more than 30-minute intervals, the first time slice in the time interval is visible.

    Portal Options

    Select Modify Map to customize the layer visualization. More information about the image service capabilities may be found in the REST endpoint. In the portal, the basemap may be changed by selecting the desired option from the Basemap menu. Further instructions on using the image service may be found at GES DISC How-To's: How to access the GES DISC IMERG ArcGIS Image Service using the ArcGIS Enterprise Map Viewer (nasa.gov).
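
    To pull a single 30-minute time slice from the service outside the Map Viewer, the standard ArcGIS ImageServer exportImage operation can be used. The sketch below is a hedged example: the ImageServer URL is a placeholder (the real endpoint is reachable via the View link mentioned above), and the extent and timestamp are arbitrary.

    ```python
    import datetime as dt
    import requests

    # Placeholder: substitute the GPM_3IMERGHHE ImageServer REST URL exposed by the
    # web map / image service referenced above.
    IMAGE_SERVICE = "https://<server>/arcgis/rest/services/GPM_3IMERGHHE_06/ImageServer"

    # ArcGIS time parameters are epoch milliseconds (UTC).
    timestamp = int(dt.datetime(2021, 12, 1, 0, 0, tzinfo=dt.timezone.utc).timestamp() * 1000)

    params = {
        "bbox": "110,-45,155,-10",   # lon/lat extent (here: Australia)
        "bboxSR": 4326,
        "size": "900,700",
        "time": timestamp,           # one 30-minute time slice
        "format": "png",
        "f": "image",
    }
    r = requests.get(f"{IMAGE_SERVICE}/exportImage", params=params, timeout=60)
    r.raise_for_status()
    with open("imerg_early_rate.png", "wb") as f:
        f.write(r.content)
    ```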

  15. Tropical Australia Sentinel 2 Satellite Composite Imagery - Low Tide - 30th percentile true colour and near infrared false colour (NESP MaC 3.17, AIMS)

    • researchdata.edu.au
    Updated Nov 30, 2021
    Cite
    Hammerton, Marc (2021). Tropical Australia Sentinel 2 Satellite Composite Imagery - Low Tide - 30th percentile true colour and near infrared false colour (NESP MaC 3.17, AIMS) [Dataset]. http://doi.org/10.26274/2BFV-E921
    Explore at:
    Dataset updated
    Nov 30, 2021
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2018 - Dec 31, 2023
    Area covered
    Description

    This dataset contains cloud free, low tide composite satellite images for the tropical Australia region based on 10 m resolution Sentinel 2 imagery from 2018 – 2023. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in tropical Australia.

    This collection contains composite imagery for 200 Sentinel 2 tiles around the tropical Australian coast. This dataset uses two styles:
    1. a true colour contrast and colour enhancement style (TrueColour) using the bands B2 (blue), B3 (green), and B4 (red), and
    2. a near infrared false colour style (Shallow) using the bands B5 (red edge), B8 (near infrared), and B12 (short wave infrared).
    These styles are useful for identifying shallow features along the coastline.

    The Shallow false colour styling is optimised for viewing the first 3 m of the water column, providing an indication of water depth. This is because the different far red and near infrared bands used in this styling have limited penetration of the water column. In clear waters the maximum penetration of each of the bands is 3 - 5 m for B5, 0.5 - 1 m for B8 and < 0.05 m for B12. As a result, the image changes in colour with the depth of the water, with the following colours indicating different depths:
    - White, brown, bright green, red, light blue: dry land
    - Grey brown: damp intertidal sediment
    - Turquoise: 0.05 - 0.5 m of water
    - Blue: 0.5 - 3 m of water
    - Black: deeper than 3 m
    In very turbid areas the visible limit will be slightly reduced.

    Change log:

    This dataset will be progressively improved and made available for download. These additions will be noted in this change log.
    - 2024-07-24 - Add tiles for the Great Barrier Reef
    - 2024-05-22 - Initial release for low-tide composites using 30th percentile (Git tag: "low_tide_composites_v1")

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was (a simplified Earth Engine sketch of these steps is shown after the note below):
    1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
       - tile ID
       - maximum cloud cover 0.1%
       - date between '2018-01-01' and '2023-12-31'
       - asset_size > 100000000 (remove small fragments of tiles)
    2. Remove high sun-glint images (see "High sun-glint image detection" for more information).
    3. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
    4. Iterate over all images in the split collections to predict the tide elevation for each image from the image timestamp (see "Tide prediction" for more information).
    5. Remove images where the tide elevation is above mean sea level, to make sure no high tide images are included.
    6. Select the 10 images with the lowest tide elevation.
    7. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
    8. Remove sun-glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
    9. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
    10. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
    11. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking (Lawrey et al. 2022).
    12. Export the final composite as a cloud optimised 8 bit GeoTIFF.

    Note: The following tiles were generated with different settings as they did not have enough images to create a composite with the standard settings:
    - 51KWA: no high sun-glint filter
    - 54LXP: maximum cloud cover set to 1%
    - 54LYK: maximum cloud cover set to 2%
    - 54LYM: maximum cloud cover set to 5%
    - 54LYN: maximum cloud cover set to 1%
    - 54LYQ: maximum cloud cover set to 5%
    - 54LYP: maximum cloud cover set to 1%
    - 54LZL: maximum cloud cover set to 1%
    - 54LZM: maximum cloud cover set to 1%
    - 54LZN: maximum cloud cover set to 1%
    - 54LZQ: maximum cloud cover set to 5%
    - 54LZP: maximum cloud cover set to 1%
    - 55LBD: maximum cloud cover set to 2%
    - 55LBE: maximum cloud cover set to 1%
    - 55LCC: maximum cloud cover set to 5%
    - 55LCD: maximum cloud cover set to 1%
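
    A minimal Google Earth Engine (Python API) sketch of the tile filtering and dual-composite steps above is shown below. It is not the project code; the tile ID is an example, and the QA60-based cloud mask is a simple stand-in for the S2_CLOUD_PROBABILITY masking described in the "Cloud Masking" section.

    ```python
    import ee
    ee.Initialize()

    TILE = "51KWB"  # example Sentinel 2 tile ID (placeholder)

    collection = (
        ee.ImageCollection("COPERNICUS/S2_HARMONIZED")
        .filter(ee.Filter.eq("MGRS_TILE", TILE))
        .filter(ee.Filter.lte("CLOUDY_PIXEL_PERCENTAGE", 0.1))
        .filterDate("2018-01-01", "2023-12-31")
        .filter(ee.Filter.gt("system:asset_size", 100_000_000))
    )

    def mask_clouds(img):
        # Placeholder cloud mask using the QA60 cloud bits; the dataset itself uses
        # COPERNICUS/S2_CLOUD_PROBABILITY plus a projected shadow mask.
        qa = img.select("QA60")
        clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
        return img.updateMask(clear)

    unmasked_composite = collection.reduce(ee.Reducer.percentile([30]))
    masked_composite = collection.map(mask_clouds).reduce(ee.Reducer.percentile([30]))

    # Plug holes left by the cloud mask (e.g. cays misread as cloud) with the unmasked composite.
    composite = masked_composite.unmask(unmasked_composite)
    ```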

    High sun-glint image detection:

    Images with high sun-glint can lead to lower quality composite images. To detect high sun-glint images, a mask is created of all pixels above a high reflectance threshold in the near-infrared and short-wave infrared bands. The proportion of these glint pixels is then calculated and compared against a sun-glint threshold; if the image exceeds this threshold, it is filtered out of the image collection. As we are only interested in the sun-glint on water pixels, a water mask is created using NDWI before creating the sun-glint mask.
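
    Continuing the Earth Engine sketch above, a hedged example of this sun-glint screen; the NDWI water mask, reflectance thresholds and glint-fraction threshold are illustrative values, not the tuned project settings.

    ```python
    # Thresholds below are illustrative; S2_HARMONIZED reflectances are scaled by 10000.
    def sunglint_fraction(img):
        water = img.normalizedDifference(["B3", "B8"]).gt(0)               # NDWI water mask
        glint = img.select("B8").gt(1000).Or(img.select("B11").gt(1000))   # bright NIR/SWIR
        stats = glint.updateMask(water).reduceRegion(
            reducer=ee.Reducer.mean(), geometry=img.geometry(), scale=60, maxPixels=1e9
        )
        return img.set("glint_fraction", stats.get("B8"))

    low_glint = (collection.map(sunglint_fraction)
                 .filter(ee.Filter.lt("glint_fraction", 0.01)))            # sun-glint threshold
    ```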

    Sun-glint removal and atmospheric correction:

    Sun-glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun-glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun-glint detected by B8 correlates very highly with the sun-glint experienced by the visible channels (B2, B3 and B4) and so the sun-glint in these channels can be removed by subtracting B8 from these channels.

    Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun-glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions.

    This algorithm is an adjustment of the algorithm already used in Lawrey et al. 2022
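
    A minimal sketch of this correction, continuing the Earth Engine example above; the per-band scale factors are hypothetical placeholders, as the tuned values live in the project source code.

    ```python
    # Hypothetical per-band glint scale factors; the tuned values are in the project code.
    GLINT_SCALE = {"B2": 0.85, "B3": 0.90, "B4": 0.95}

    def remove_sunglint(img):
        b8 = img.select("B8")  # estimate of surface (sun-glint) reflection
        corrected = [img.select(band).subtract(b8.multiply(scale)).rename(band)
                     for band, scale in GLINT_SCALE.items()]
        return img.addBands(ee.Image.cat(corrected), overwrite=True)

    deglinted = low_glint.map(remove_sunglint)
    ```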

    Tide prediction:

    To determine the tide elevation in a specific satellite image, we used a tide prediction model to predict the tide elevation for the image timestamp. After investigating and comparing a number of models, it was decided to use the empirical ocean tide model EOT20 (Hart-Davis et al., 2021). The model data can be freely accessed at https://doi.org/10.17882/79489 and works with the Python library pyTMD (https://github.com/tsutterley/pyTMD). In our comparison we found this model was able to accurately predict the tide elevation across multiple points along the study coastline when compared to historic Bureau of Meteorology and AusTide data. To determine the tide elevation of the satellite images we manually created a point dataset with a central point on the water for each Sentinel tile in the study area. We used these points as centroids in the ocean models and calculated the tide elevation from the image timestamp.
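
    A hedged sketch of the tide lookup with pyTMD and the EOT20 model files. The entry point name varies between pyTMD releases (older versions expose pyTMD.compute_tide_corrections, newer ones pyTMD.compute.tide_elevations), and the centroid coordinates, timestamp and model directory below are placeholders.

    ```python
    import numpy as np
    from datetime import datetime
    import pyTMD

    # Placeholder tile centroid (lon, lat) and a single image timestamp.
    lon, lat = np.array([142.5]), np.array([-10.5])
    times = np.array([datetime(2022, 6, 1, 0, 45)], dtype="datetime64[ns]")

    # Assumes the pyTMD 1.x/2.0 interface; newer releases expose equivalent arguments
    # via pyTMD.compute.tide_elevations().
    tide_m = pyTMD.compute_tide_corrections(
        lon, lat, times,
        DIRECTORY="/path/to/EOT20",   # local copy of the EOT20 model files
        MODEL="EOT20", EPSG=4326, TYPE="drift", TIME="datetime", METHOD="spline",
    )
    print(tide_m)  # tide elevation (m) relative to mean sea level
    ```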

    Using "SENSING_ORBIT_NUMBER" for a more balanced composite:

    Some of the Sentinel 2 tiles are made up of different sections depending on the "SENSING_ORBIT_NUMBER". For example, a tile could have a small triangle on the left side and a bigger section on the right side. If we filter an image collection and use a subset to create a composite, we could end up with a high number of images for one section (e.g. the left side triangle) and only a few images for the other section(s). This would result in a composite where one section is well covered while the other sections are built from very few images. To avoid this issue, the initial unfiltered image collection is divided into multiple image collections using the image property "SENSING_ORBIT_NUMBER". The filtering and limiting (maximum number of images in a collection) is then performed on each "SENSING_ORBIT_NUMBER" image collection, and finally they are combined back into one image collection to generate the final composite.
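
    A small sketch of this per-orbit balancing, continuing the Earth Engine example above; the "tide_elevation" property is assumed to have been set on each image in an earlier step.

    ```python
    orbit_numbers = collection.aggregate_array("SENSING_ORBIT_NUMBER").distinct().getInfo()

    subsets = []
    for orbit in orbit_numbers:
        per_orbit = collection.filter(ee.Filter.eq("SENSING_ORBIT_NUMBER", orbit))
        # Per-orbit filtering/limiting, e.g. keeping the 10 lowest-tide images
        # ("tide_elevation" is assumed to have been set from the tide model).
        subsets.append(per_orbit.limit(10, "tide_elevation"))

    # Merge the per-orbit subsets back into a single collection for compositing.
    balanced = subsets[0]
    for subset in subsets[1:]:
        balanced = balanced.merge(subset)
    ```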

    Cloud Masking:

    Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed with approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations.
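
    A hedged sketch of the cloud and shadow masking described above, loosely following the common s2cloudless pattern. The probability thresholds, projection distances and buffers come from the text; the working scale and the omitted erosion/dilation step are simplifications rather than the project implementation.

    ```python
    S2_CLOUD_PROB = ee.ImageCollection("COPERNICUS/S2_CLOUD_PROBABILITY")

    def cloud_shadow_mask(img):
        prob = ee.Image(
            S2_CLOUD_PROB.filter(ee.Filter.eq("system:index", img.get("system:index"))).first()
        ).select("probability")

        low_cloud = prob.gte(35)    # small, low clouds (35% probability threshold)
        high_cloud = prob.gte(80)   # large, tall clouds (80% probability threshold)

        # Project shadows away from the sun direction (distances in pixels at a 20 m
        # working scale: 400 m ~= 20 px for low clouds, 1500 m ~= 75 px for high clouds).
        azimuth = ee.Number(90).subtract(ee.Number(img.get("MEAN_SOLAR_AZIMUTH_ANGLE")))
        proj = img.select(0).projection()
        low_shadow = (low_cloud.directionalDistanceTransform(azimuth, 20)
                      .reproject(crs=proj, scale=20).select("distance").mask())
        high_shadow = (high_cloud.directionalDistanceTransform(azimuth, 75)
                       .reproject(crs=proj, scale=20).select("distance").mask())

        mask = (low_cloud.Or(low_shadow).focalMax(150, "circle", "meters")
                .Or(high_cloud.Or(high_shadow).focalMax(300, "circle", "meters")))
        return img.updateMask(mask.Not())
    ```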

  16. North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile true colour (NESP MaC 3.17, AIMS)

    • researchdata.edu.au
    Updated Nov 30, 2021
    Cite
    Hammerton, Marc (2021). North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile true colour (NESP MaC 3.17, AIMS) [Dataset]. http://doi.org/10.26274/HD2Z-KM55
    Explore at:
    Dataset updated
    Nov 30, 2021
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 27, 2015 - May 31, 2024
    Area covered
    Description

    This dataset contains cloud free composite satellite images for the northern Australia region based on 10 m resolution Sentinel 2 imagery from 2015 – 2024. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in northern Australia. A new, improved version (version 2, published July 2024) has succeeded the draft version (published March 2024).

    This collection contains composite imagery for 333 Sentinel 2 tiles around the northern coastline of Australia, including the Great Barrier Reef. This dataset uses a true colour contrast and colour enhancement style using the bands B2 (blue), B3 (green), and B4 (red). This is useful for interpreting what shallow features are, for mapping the vegetation on cays, and for identifying beach rock.

    Changelog:

    This dataset will be progressively improved and made available for download. These additions will be noted in this change log.
    - 2024-07-22 - Version 2 composites using an improved contrast enhancement and a noise prediction algorithm to only include low noise images in the composite (Git tag: "composites_v2")
    - 2024-03-07 - Initial release of draft composites using the 15th percentile (Git tag: "composites_v1")

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
       - tile ID
       - maximum cloud cover 20%
       - date between '2015-06-27' and '2024-05-31'
       - asset_size > 100000000 (remove small fragments of tiles)
       Note: A maximum cloud cover of 20% was used to improve the processing times. In most cases this filtering does not have an effect on the final composite, as images with higher cloud coverage mostly result in higher noise levels and are not used in the final composite.
    2. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
    3. For each SENSING_ORBIT_NUMBER collection, filter out all noise-adding images:
       3.1 Calculate the image noise level for each image in the collection (see "Image noise level calculation" for more information) and sort the collection by noise level.
       3.2 Remove all images with a very high noise index (>15).
       3.3 Calculate a baseline noise level using a minimum number of images (min_images_in_collection=30). This minimum number of images is needed to ensure a smooth composite where cloud "holes" in one image are covered by other images.
       3.4 Iterate over the remaining images (images not used in the base noise level calculation) and check whether adding the image to the composite adds to or reduces the noise. If it reduces the noise, add it to the composite. If it increases the noise, stop iterating over images.
    4. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
    5. Remove sun-glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
    6. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
    7. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite using the 30th percentile of the images in the collection.
    8. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking (Lawrey et al. 2022).
    9. Export the final composite as a cloud optimised 8 bit GeoTIFF.

    Note: The following tiles were generated with no "maximum cloud cover" filter as they did not have enough images to create a composite with the standard settings: 46LGM, 46LGN, 46LHM, 50KKD, 50KPG, 53LMH, 53LMJ, 53LNH, 53LPH, 53LPJ, 54LVP, 57JVH, 59JKJ.

    Image noise level calculation:

    The noise level for each image in this dataset is calculated to ensure high-quality composites by minimizing the inclusion of noisy images. This process begins by creating a water mask using the Normalized Difference Water Index (NDWI) derived from the NIR and Green bands. High reflectance areas in the NIR and SWIR bands, indicative of sun-glint, are identified and masked by the water mask to focus on water areas affected by sun-glint. The proportion of high sun-glint pixels within these water areas is calculated and amplified to compute a noise index. If no water pixels are detected, a high noise index value is assigned.
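
    A sketch of the noise-based selection loop described in the methods above. It assumes a compute_noise helper that sets a "noise" property on each image (for example the NDWI/sun-glint screen sketched for the low tide dataset earlier) and uses the mean per-image noise as a stand-in for the composite noise metric.

    ```python
    MIN_IMAGES = 30        # images used for the baseline noise level
    MAX_NOISE_INDEX = 15   # very noisy images are dropped outright

    images = collection.map(compute_noise).sort("noise")      # compute_noise is a placeholder
    images = images.filter(ee.Filter.lte("noise", MAX_NOISE_INDEX))
    image_list = images.toList(images.size())
    n_total = image_list.size().getInfo()

    def composite_noise(imgs):
        # Placeholder composite-noise metric: mean of the per-image noise properties.
        return imgs.aggregate_mean("noise").getInfo()

    selected = ee.ImageCollection(image_list.slice(0, MIN_IMAGES))
    baseline = composite_noise(selected)

    for i in range(MIN_IMAGES, n_total):
        candidate = ee.ImageCollection(image_list.slice(0, i + 1))
        noise = composite_noise(candidate)
        if noise <= baseline:
            selected, baseline = candidate, noise  # adding this image reduced the noise
        else:
            break                                  # noise started increasing; stop adding images
    ```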

    Sun glint removal and atmospheric correction:

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the visible channels (B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.

    Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions.

    This algorithm is an adjustment of the algorithm already used in Lawrey et al. 2022

    Cloud Masking:

    Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed with approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. With 4-pixel filter resolutions these operations still accounted for over 90% of the total processing, resulting in each image taking approximately 10 min to compute on the Google Earth Engine. (Lawrey et al. 2022)

    Format:

    GeoTiff - LZW compressed, 8 bit channels, 0 as NoData, Imagery as values 1 - 255. Internal tiling and overviews. Average size: 12500 x 11300 pixels and 300 MB per image.

    The images in this dataset are all named using a naming convention. An example file name is AU_AIMS_MARB-S2-comp_p15_TrueColour_51KTV_v2_2015-2024.tif. The name is made up from:
    - Dataset name (AU_AIMS_MARB-S2-comp)
    - An algorithm descriptor (p15 for 15th percentile)
    - Colour and contrast enhancement applied (TrueColour)
    - Sentinel 2 tile (example: 54LZP)
    - Version (v2)
    - Date range (2015 to 2024 for version 2)
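
    The naming convention above can be unpacked mechanically; a small example:

    ```python
    name = "AU_AIMS_MARB-S2-comp_p15_TrueColour_51KTV_v2_2015-2024.tif"
    dataset, algorithm, style, tile, version, years = name.removesuffix(".tif").rsplit("_", 5)
    print(dataset, algorithm, style, tile, version, years)
    # AU_AIMS_MARB-S2-comp p15 TrueColour 51KTV v2 2015-2024
    ```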

    References:

    Google (n.d.) Sentinel-2: Cloud Probability. Earth Engine Data Catalog. Accessed 10 April 2021 from https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_CLOUD_PROBABILITY

    Zupanc, A., (2017) Improving Cloud Detection with Machine Learning. Medium. Accessed 10 April 2021 from https://medium.com/sentinel-hub/improving-cloud-detection-with-machine-learning-c09dc5d7cf13

    Lawrey, E., & Hammerton, M. (2022). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Data set]. eAtlas. https://doi.org/10.26274/NH77-ZW79

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\custodian\2023-2026-NESP-MaC-3\3.17_Northern-Aus-reef-mapping. The source code is available on GitHub.

  17. VIIRS/JPSS1 Vegetation Indices 16-Day L3 Global 500m SIN Grid V002

    • cmr.earthdata.nasa.gov
    Updated May 5, 2025
    Cite
    (2025). VIIRS/JPSS1 Vegetation Indices 16-Day L3 Global 500m SIN Grid V002 [Dataset]. http://doi.org/10.5067/VIIRS/VJ113A1.002
    Explore at:
    Dataset updated
    May 5, 2025
    Time period covered
    Jan 1, 2018 - Present
    Area covered
    Earth
    Description

    The NOAA-20 Visible Infrared Imaging Radiometer Suite (VIIRS) Vegetation Indices (VJ113A1) Version 2 data product provides vegetation indices by a process of selecting the best available pixel over a 16-day acquisition period at 500 meter (m) resolution. The VJ113 data products are designed after the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Vegetation Indices product suite to promote the continuity of the Earth Observation System (EOS) mission.

    The VJ113 algorithm process produces three vegetation indices: Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and Enhanced Vegetation Index-2 (EVI2). NDVI is one of the longest continual remotely sensed time series observations, using both the red and near-infrared (NIR) bands. EVI is a slightly different vegetation index that is more sensitive to canopy cover, while NDVI is more sensitive to chlorophyll. EVI2 is a reformation of the standard 3-band EVI, using the red band and NIR band. This reformation addresses arising issues when comparing VIIRS EVI to other EVI models that do not include a blue band. EVI2 will eventually become the standard EVI.

    Along with the three Vegetation Indices layers, this product also includes layers for NIR reflectance; three shortwave infrared (SWIR) reflectance; red, blue, and green reflectance; composite day of year; pixel reliability; relative azimuth, view, and sun angles; and a quality layer. Two low resolution browse images are also available for each VJ113A1 product: EVI and NDVI.

    Known Issues
    * Due to missing critical inputs, this product lacks coverage for tiles h33v07 and h18v14, which are located over water.
    * For complete information about known issues please refer to the MODIS/VIIRS Land Quality Assessment website and the User Guide and ATBD.

    Improvements/Changes from Previous Version
    * Improved calibration algorithm and coefficients for entire NOAA-20 mission.
    * Improved geolocation accuracy and applied updates to fix outliers around maneuver periods.
    * Corrected the aerosol quantity flag (low, average, high) mainly over brighter surfaces in the mid- to high-latitudes such as desert and tropical vegetation areas. This has an impact on the retrieval of other downstream data products such as VNP13 Vegetation Indices and VNP43 Bidirectional Reflectance Distribution Function (BRDF)/Albedo.
    * Improved cloud mask input product for corrections along coastlines and artifacts from use of coarse resolution climatology data.
    * Replaced the land/water mask input product with the eight-class land/water mask from the VNP03 geolocation product that better aligns with MODIS.
    * Modified QA VI Usefulness bits to ignore BRDF flag.
    * Implemented VI specific land/water mask.
    * More details can be found in this VIIRS Land V2 Changes document.

  18. Vegetation - Suisun Marsh - 2021 [ds3187]

    • catalog.data.gov
    • data.cnra.ca.gov
    • +4more
    Updated Nov 27, 2024
    Cite
    California Department of Fish and Wildlife (2024). Vegetation - Suisun Marsh - 2021 [ds3187] [Dataset]. https://catalog.data.gov/dataset/vegetation-suisun-marsh-2021-ds3187-fdf99
    Explore at:
    Dataset updated
    Nov 27, 2024
    Dataset provided by
    California Department of Fish and Wildlife
    Area covered
    Suisun Marsh
    Description

    To create the 2021 Suisun Marsh vegetation map, vegetation was interpreted from a mosaic of the true color imagery that was flown in June 2021. Polygons were delineated using heads-up digitizing (i.e., a photo interpreter manually drew polygons around each stand of vegetation) in Esri’s ArcGIS Pro 3.1.2, and polygon attributes were recorded within a file geodatabase. All attributes were interpreted using the Suisun Marsh 2021 imagery as the base imagery. The photo interpreters obtained information primarily from the 2018 map and 2021 reconnaissance points, which were used during mapping to determine vegetative signatures and the appropriate mapping type for each polygon. Several other imagery sources were used as ancillary data, including 2021 NAIP, 2021 NAIP Color Infrared, all imagery available through Google Earth (including street view), and the 2018 NAIP imagery.

    Minimum mapping unit (MMU): Typically, the minimum mapping size is 0.25 acres. However, the photo interpreters use their best judgment to determine if a stand below 0.25 acre should be separately delineated. For example, a smaller polygon would be appropriate for any new visible occurrence of a non-native species of concern, such as Phragmites australis, Arundo donax, Carpobrotus edulis, Eucalyptus spp., and Lepidium latifolium.

    Minimum mapping width: There are many long and narrow polygons within the Suisun Marsh study area, most of which are roads, ditches, levees, and sloughs. The minimum mapping width is typically 10 feet; however, if small sections of a stand fell below the minimum width, the polygon was not split.

    More information can be found in the project report, which is bundled with the vegetation map published for BIOS here: https://filelib.wildlife.ca.gov/Public/BDB/GIS/BIOS/Public_Datasets/3100_3199/ds3187.zip.

  19. MERIS - Vegetation Index (NDVI) - Europe, Monthly

    • cmr.earthdata.nasa.gov
    • fedeo.ceos.org
    Updated Sep 27, 2024
    Cite
    (2024). MERIS - Vegetation Index (NDVI) - Europe, Monthly [Dataset]. https://cmr.earthdata.nasa.gov/search/concepts/C2207457986-FEDEO/9
    Explore at:
    Dataset updated
    Sep 27, 2024
    Time period covered
    Oct 15, 2003 - Feb 28, 2010
    Area covered
    Description

    The "AVHRR compatible Normalized Difference Vegetation Index derived from MERIS data (MERIS_AVHRR_NDVI)" was developed in a co-operative effort of DLR (German Remote Sensing Data Centre, DFD) and Brockmann Consult GmbH (BC) in the frame of the MAPP project (MERIS Application and Regional Products Projects). For the generation of regional specific value added MERIS level-3 products, MERIS full-resolution (FR) data are processed on a regular (daily) basis using ESA standard level-1b and level-2 data as input. The regular reception of MERIS-FR data is realized at DFD ground station in Neustrelitz.The Medium Resolution Imaging MERIS on Board ESA's ENVISAT provides spectral high resolution image data in the visible-near infrared spectral region (412-900 nm) at a spatial resolution of 300 m. For more details on ENVISAT and MERIS see http://envisat.esa.int The Advanced Very High Resolution Radiometer (AVHRR) compatible vegetation index (MERIS_AVHRR_NDVI) derived from data of the MEdium Resolution Imaging Spectrometer (MERIS) is regarded as a continuity index with 300 meter resolution for the well-known Normalized Difference Vegetation Index (NDVI) derived from AVHRR (given in 1km spatial resolution). The NDVI is an important factor describing the biological status of canopies. This product is thus used by scientists for deriving plant and canopy parameters. Consultants use time series of the NDVI for advising farmers with best practice.For more details the reader is referred tohttp://wdc.dlr.de/sensors/meris/ and http://wdc.dlr.de/sensors/meris/documents/Mapp_ATBD_final_i3r0dez2001.pdfThis product provides monthly maps.

  20. VIIRS/NPP Vegetation Indices Monthly L3 Global 1km SIN Grid V002

    • cmr.earthdata.nasa.gov
    Updated Apr 29, 2025
    + more versions
    Cite
    (2025). VIIRS/NPP Vegetation Indices Monthly L3 Global 1km SIN Grid V002 [Dataset]. http://doi.org/10.5067/VIIRS/VNP13A3.002
    Explore at:
    Dataset updated
    Apr 29, 2025
    Time period covered
    Jan 1, 2012 - Present
    Area covered
    Earth
    Description

    The NASA/NOAA Suomi National Polar-orbiting Partnership (Suomi NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) Vegetation Indices (VNP13A3) Version 2 data product provides vegetation indices by a process of selecting the best available pixel over a monthly acquisition period at 1 kilometer (km) resolution. The VNP13 data products are designed after the Moderate Resolution Imaging Spectroradiometer (MODIS) Terra and Aqua Vegetation Indices product suite to promote the continuity of the Earth Observation System (EOS) mission.

    The VNP13 algorithm produces three vegetation indices: the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), and the Enhanced Vegetation Index-2 (EVI2). NDVI is one of the longest continual remotely sensed time series observations and uses the red and near-infrared (NIR) bands. EVI is a related vegetation index that is more sensitive to canopy cover, while NDVI is more sensitive to chlorophyll. EVI2 is a reformulation of the standard 3-band EVI that uses only the red and NIR bands; it addresses issues that arise when comparing VIIRS EVI to EVI from sensors that do not include a blue band. EVI2 will eventually become the standard EVI.
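
    For reference, NDVI follows the same formula sketched earlier, and the standard EVI and EVI2 formulations with the usual MODIS-heritage coefficients are shown below. This is only an illustration of the formulas, not the operational VNP13 code; inputs are assumed to be surface reflectance arrays.

    # Standard EVI and two-band EVI2 formulations; illustrative only.
    import numpy as np

    def evi(nir: np.ndarray, red: np.ndarray, blue: np.ndarray) -> np.ndarray:
        # MODIS-heritage coefficients: G = 2.5, C1 = 6, C2 = 7.5, L = 1.
        return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

    def evi2(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        # Two-band EVI (no blue band), after Jiang et al. (2008).
        return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)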

    Along with the three vegetation index layers, this product also includes layers for NIR reflectance; three shortwave infrared (SWIR) reflectance bands; red, blue, and green reflectance; pixel reliability; relative azimuth, view, and sun angles; and a quality layer. Two low-resolution browse images, EVI and NDVI, are also available for each VNP13A3 product.

    Known Issues:
    * Due to missing critical inputs, this product lacks coverage for tiles h33v07 and h18v14, which are located over water.
    * For complete information about known issues, please refer to the MODIS/VIIRS Land Quality Assessment website and the User Guide and ATBD.

    Improvements/Changes from Previous Versions:
    * Improved calibration algorithm and coefficients for the entire Suomi NPP mission.
    * Improved geolocation accuracy and applied updates to fix outliers around maneuver periods.
    * Corrected the aerosol quantity flag (low, average, high), mainly over brighter surfaces in the mid- to high-latitudes such as desert and tropical vegetation areas. This has an impact on the retrieval of other downstream data products such as VNP13 Vegetation Indices and VNP43 Bidirectional Reflectance Distribution Function (BRDF)/Albedo.
    * Improved cloud mask input product for corrections along coastlines and artifacts from use of coarse resolution climatology data.
    * Replaced the land/water mask input product with the eight-class land/water mask from the VNP03 geolocation product that better aligns with MODIS.
    * Modified QA VI Usefulness bits to ignore the BRDF flag.
    * Implemented a VI-specific land/water mask.
    * More details can be found in the VIIRS Land V2 Changes document.

Cite
Hammerton, Marc; Lawrey, Eric, Dr (2024). Marine satellite image test collections (AIMS) [Dataset]. http://doi.org/10.26274/ZQ26-A956

Marine satellite image test collections (AIMS)

Explore at:
Dataset updated
Jul 9, 2024
Dataset provided by
Australian Ocean Data Network
Authors
Hammerton, Marc; Lawrey, Eric, Dr
License

Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Time period covered
Oct 1, 2016 - Sep 20, 2021
Area covered
Description

This dataset consists of collections of satellite image composites (Sentinel 2 and Landsat 8) that are created from manually curated image dates for a range of projects. These images are typically prepared for subsequent analysis or testing of analysis algorithms as part of other projects. This dataset acts as a repository of reproducible test sets of images processed from Google Earth Engine using a standardised workflow.

Details of the algorithms used to produce the imagery are described in the GEE code and code repository available on GitHub (https://github.com/eatlas/World_AIMS_Marine-satellite-imagery).

Project test image sets:

As new projects are added to this dataset, their details will be described here:

  • NESP MaC 2.3 Benthic reflection estimation (projects/CS_NESP-MaC-2-3_AIMS_Benth-reflect): This collection consists of six Sentinel 2 image composites in the Coral Sea and GBR for the purpose of testing a method of determining benthic reflectance of deep lagoonal areas of coral atolls. These image composites are in GeoTiff format, using 16-bit encoding and LZW compression. These images do not have internal image pyramids to save on space. [Status: final and available for download]

  • NESP MaC 2.3 Oceanic Vegetation (projects/CS_NESP-MaC-2-3_AIMS_Oceanic-veg): This project is focused on mapping vegetation on the bottom of coral atolls in the Coral Sea. This collection consists of additional images of Ashmore Reef. The lagoonal area of Ashmore has low visibility due to coloured dissolved organic matter, making it very hard to distinguish areas that are covered in vegetation. These images were manually curated to best show the vegetation. While these are the best images in the Sentinel 2 series up to 2023, they are still not very good. Probably 80 - 90% of the lagoonal benthos is not visible. [Status: final and available for download]

  • NESP MaC 3.17 Australian reef mapping (projects/AU_NESP-MaC-3-17_AIMS_Reef-mapping): This collection of test images was prepared to determine if creating a composite from manually curated image dates (corresponding to images with the clearest water) would produce a better composite than a fully automated composite based on cloud filtering. The automated composites are described in https://doi.org/10.26274/HD2Z-KM55. This test set also includes composites from low tide imagery. The images in this collection are not yet available for download as the collection of images that will be used in the analysis has not been finalised.
    [Status: under development, code is available, but not rendered images]

  • Capricorn Regional Map (projects/CapBunk_AIMS_Regional-map): This collection was developed for making a set of maps for the region to facilitate participatory mapping and reef restoration field work planning. [Status: final and available for download]

  • Default (project/default): This collection of manually selected scenes consists of those prepared for the Coral Sea and global areas to test the algorithms used in developing the original Google Earth Engine workflow. It can be a good starting point for new test sets. Note that the images described in the default project are not rendered and made available for download, to save on storage space. [Status: for reference, code is available, but not rendered images]

Filename conventions:

The images in this dataset are all named using a consistent naming convention. An example file name is Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif. The name is made up of:
- Dataset name (Wld_AIMS_Marine-sat-img), short for World, Australian Institute of Marine Science, Marine Satellite Imagery.
- Satellite source: L8 for Landsat 8 or S2 for Sentinel 2.
- Additional information or purpose: NoSGC for no sun glint correction, R1 for the best reference imagery set, or R2 for the second reference imagery set.
- Colour and contrast enhancement applied (DeepFalse, TrueColour, Shallow, Depth5m, Depth10m, Depth20m, Raw-B1-B4).
- Image tile (for example: Sentinel 2 54LZP, Landsat 8 091086).
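
For scripted workflows it can be convenient to split these components back out of a file name. The sketch below is illustrative only: it simply splits on underscores according to the convention above, and the dictionary keys are hypothetical.

# Illustrative parser for the file naming convention described above.
from pathlib import Path

def parse_image_name(filename: str) -> dict:
    parts = Path(filename).stem.split("_")
    return {
        "dataset": "_".join(parts[0:3]),  # Wld_AIMS_Marine-sat-img
        "satellite": parts[3],            # S2 or L8
        "purpose": parts[4],              # e.g. NoSGC, R1, R2
        "style": parts[5],                # e.g. DeepFalse, Raw-B1-B4
        "tile": parts[6],                 # e.g. 54LZP or 091086
    }

print(parse_image_name("Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif"))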

Limitations:

Only a simple atmospheric correction is applied to land areas and, as a result, the imagery only approximates bottom-of-atmosphere reflectance.

For the Sentinel 2 imagery, the sun glint correction algorithm transitions between different correction levels from deep water (B8) to shallow water (B11), with a fixed atmospheric correction for land (bright B8 areas). Slight errors in the tuning of these transitions can result in unnatural tonal steps between these areas, particularly in very shallow water.
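
For context, the widely used NIR-subtraction approach to sun glint removal over water works roughly as sketched below. This is a generic illustration only; the correction actually applied to this dataset is implemented in the Google Earth Engine code in the GitHub repository linked above.

# Generic NIR-subtraction sun glint correction (Hedley et al. 2005 style);
# NOT this dataset's Google Earth Engine implementation.
import numpy as np

def deglint(band: np.ndarray, nir: np.ndarray, slope: float, nir_ambient: float) -> np.ndarray:
    # 'slope' is obtained by regressing the visible band against NIR over an
    # optically deep, glint-affected area; 'nir_ambient' is the glint-free NIR
    # level. The estimated glint component is subtracted from the visible band.
    return band - slope * (nir - nir_ambient)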

For the Landsat 8 imagery, land areas appear black because the sun glint correction does not separately mask out the land. The code for the Landsat 8 imagery is less developed than that for the Sentinel 2 imagery.

The depth contours are estimated using satellite derived bathymetry that is subject to errors caused by cloud artefacts, substrate darkness, water clarity, calibration issues and uncorrected tides. They were tuned in the clear waters of the Coral Sea. The depth contours in this dataset are RAW and contain many false positives due to clouds. They should not be used without additional dataset cleanup.

Change log:

As changes are made to the dataset, or additional image collections are added, those changes will be recorded here.

2nd Edition, 2024-06-22: CapBunk_AIMS_Regional-map.
1st Edition, 2024-03-18: Initial publication of the dataset, with CS_NESP-MaC-2-3_AIMS_Benth-reflect, CS_NESP-MaC-2-3_AIMS_Oceanic-veg, and code for the AU_NESP-MaC-3-17_AIMS_Reef-mapping and Default projects.

Data Format:

GeoTiff images with LZW compression. Most images do not have internal image pyramids to save on storage space. This makes rendering these images very slow in a desktop GIS. Pyramids should be added to improve performance.
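
One way to add pyramids after downloading, assuming the rasterio Python package is available, is sketched below; GDAL's gdaladdo utility achieves the same result from the command line.

# Add internal overviews (pyramids) to a GeoTiff so it renders quickly in a
# desktop GIS. Assumes rasterio is installed; illustrative sketch only.
import rasterio
from rasterio.enums import Resampling

def add_pyramids(path: str) -> None:
    with rasterio.open(path, "r+") as dataset:
        dataset.build_overviews([2, 4, 8, 16, 32], Resampling.average)
        dataset.update_tags(ns="rio_overview", resampling="average")

add_pyramids("Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif")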

Data Location:

This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Wld-AIMS-Marine-sat-img
