https://data.linz.govt.nz/license/attribution-4-0-international/
This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.
The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2022 and April 2023.
Data comprises:
• 450 ortho-rectified RGB GeoTIFF images in NZTM projection, tiled into the LINZ Standard 1:50000 tile layout
• Satellite sensors: ESA Sentinel-2A and Sentinel-2B
• Acquisition dates: September 2022 to April 2023
• Spectral resolution: R, G, B
• Spatial resolution: 10 meters
• Radiometric resolution: 8 bits (downsampled from 12 bits)
This is a visual product only. The data has been downsampled from 12-bits to 8-bits, and the original values of the images have been modified for visualisation purposes.
Also available on:
• Basemaps
• NZ Imagery - Registry of Open Data on AWS
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains both large (A0) printable maps of the Torres Strait, broken into six overlapping regions based on clear-sky, clear-water Sentinel 2 composite imagery, and the imagery used to create these maps. These maps show satellite imagery of the region, overlaid with reef and island boundaries and names. Not all features are named, just the more prominent ones. The collection also includes a vector map of Ashmore Reef and Boot Reef in the Coral Sea, as these were used in the same discussions for which these maps were developed. The map of Ashmore Reef includes the atoll platform, reef boundaries and depth polygons for 5 m and 10 m.
This dataset contains all working files used in the development of these maps. This includes a copy of all the source datasets, all derived satellite image tiles, and the QGIS files used to create the maps. It also includes cloud-free Sentinel 2 composite imagery of the Torres Strait region with alpha-blended edges to allow the creation of a smooth high-resolution basemap of the region.
The base imagery is similar to the older base imagery dataset: Torres Strait clear sky, clear water Landsat 5 satellite composite (NERP TE 13.1 eAtlas, AIMS, source: NASA).
Most of the imagery in the composite is from 2017 - 2021.
Method:
The Sentinel 2 basemap was produced by processing imagery from the World_AIMS_Marine-satellite-imagery dataset (01-data/World_AIMS_Marine-satellite-imagery in the data download) for the Torres Strait region. The TrueColour imagery for the scenes covering the mapped area were downloaded. Both the reference 1 imagery (R1) and reference 2 imagery (R2) were copied for processing. R1 imagery contains the lowest-noise, most cloud-free imagery, while R2 contains the next best set of imagery. Both R1 and R2 are typically composite images from multiple dates.
The R2 images were selectively blended using manually created masks with the R1 images. This was done to get the best combination of both images and typically resulted in a reduction in some of the cloud artefacts in the R1 images. The mask creation and previewing of the blending was performed in Photoshop. The created masks were saved in 01-data/R2-R1-masks. To help with the blending of neighbouring images a feathered alpha channel was added to the imagery. The processing of the merging (using the masks) and the creation of the feathered borders on the images was performed using a Python script (src/local/03-merge-R2-R1-images.py) using the Pillow library and GDAL. The neighbouring image blending mask was created by applying a blurring of the original hard image mask. This allowed neighbouring image tiles to merge together.
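The core of this merging step can be sketched with the Pillow library. This is only an illustrative reconstruction, not the project's actual src/local/03-merge-R2-R1-images.py script: the function and parameter names are invented, and the real script also handles georeferencing via GDAL.

```python
from PIL import Image, ImageFilter

def blend_r2_into_r1(r1, r2, mask):
    """Composite R2 over R1 wherever the 8-bit mask is white (255)."""
    return Image.composite(r2, r1, mask.convert("L"))

def add_feathered_alpha(img, feather_px=8):
    """Add an alpha channel that fades out near the tile border.

    A hard rectangular mask (white interior, black border) is blurred
    so that neighbouring tiles can blend smoothly where they overlap.
    """
    w, h = img.size
    hard = Image.new("L", (w, h), 0)
    inner = Image.new("L", (w - 2 * feather_px, h - 2 * feather_px), 255)
    hard.paste(inner, (feather_px, feather_px))
    # Blurring the hard mask produces the feathered edge.
    alpha = hard.filter(ImageFilter.GaussianBlur(feather_px / 2))
    out = img.convert("RGBA")
    out.putalpha(alpha)
    return out
```

Blurring the hard mask is what allows neighbouring image tiles to merge together, as described above.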
The imagery and reference datasets (reef boundaries, EEZ) were loaded into QGIS for the creation of the printable maps.
To optimise the matching of the resulting map, slight brightness adjustments were applied to each scene tile to match its neighbours. This was done in the setup of each image in QGIS. The adjustment was imperfect because each tile was made from a different combination of days (to remove clouds), giving each scene a different tonal gradient than its neighbours. Additionally, Sentinel 2 imagery has slight stripes (at 13 degrees off vertical) because the swath of each sensor has a slight sensitivity difference. This effect was not corrected in this imagery.
Single merged composite GeoTiff:
The image tiles with alpha blended edges work well in QGIS, but not in ArcGIS Pro. To allow this imagery to be used across tools that don't support the alpha blending we merged and flattened the tiles into a single large GeoTiff with no alpha channel. This was done by rendering the map created in QGIS into a single large image. This was done in multiple steps to make the process manageable.
The rendered map was cut into twenty 1 x 1 degree georeferenced PNG images using the Atlas feature of QGIS. This process baked in the alpha blending across neighbouring Sentinel 2 scenes. The PNG images were then merged back into a large GeoTiff image using GDAL (via QGIS), removing the alpha channel. The brightness of the image was adjusted so that the darkest pixels in the image were 1, reserving the value 0 for nodata masking, and the image was clipped with a polygon boundary to trim off the outer feathering. The image was then optimised for performance using internal tiling and overviews. A full breakdown of these steps is provided in the README.md in the 'Browse and download all data files' link.
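The brightness adjustment that reserves 0 for nodata can be sketched in NumPy. This is an illustrative reconstruction under assumed array shapes, not the actual processing code; the real step was applied to the full GeoTiff via GDAL/QGIS.

```python
import numpy as np

def reserve_nodata_zero(rgb, valid):
    """Lift valid pixels so their minimum value is 1, keeping 0
    exclusively as the nodata value.

    rgb:   (H, W, 3) uint8 image.
    valid: (H, W) boolean mask (True inside the clip polygon).
    """
    out = rgb.copy()
    # Clamp the darkest valid pixels up to 1 ...
    out[valid] = np.clip(out[valid], 1, 255)
    # ... and force pixels outside the clip boundary to the nodata value.
    out[~valid] = 0
    return out
```

With the value 0 reserved in this way, downstream tools can treat 0 as a nodata mask without an alpha channel.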
The merged final image is available in export\TS_AIMS_Torres Strait-Sentinel-2_Composite.tif.
Source datasets:
Complete Great Barrier Reef (GBR) Island and Reef Feature boundaries including Torres Strait Version 1b (NESP TWQ 3.13, AIMS, TSRA, GBRMPA), https://eatlas.org.au/data/uuid/d2396b2c-68d4-4f4b-aab0-52f7bc4a81f5
Geoscience Australia (2014b), Seas and Submerged Lands Act 1973 - Australian Maritime Boundaries 2014a - Geodatabase [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, https://dx.doi.org/10.4225/25/5539DFE87D895
Basemap/AU_GA_AMB_2014a/Exclusive_Economic_Zone_AMB2014a_Limit.shp
The original data was obtained from GA (Geoscience Australia, 2014b). The Geodatabase was loaded in ArcMap. The Exclusive_Economic_Zone_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small no clipping was applied to the data.
Geoscience Australia (2014a), Treaties - Australian Maritime Boundaries (AMB) 2014a [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, http://dx.doi.org/10.4225/25/5539E01878302
Basemap/AU_GA_Treaties-AMB_2014a/Papua_New_Guinea_TSPZ_AMB2014a_Limit.shp
The original data was obtained from GA (Geoscience Australia, 2014a). The Geodatabase was loaded in ArcMap. The Papua_New_Guinea_TSPZ_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small no clipping was applied to the data.
AIMS Coral Sea Features (2022) - DRAFT
This is a draft version of this dataset. The region for Ashmore and Boot reef was checked. The attributes in these datasets haven't been cleaned up. Note these files should not be considered finalised and are only suitable for maps around Ashmore Reef. Please source an updated version of this dataset for any other purpose.
CS_AIMS_Coral-Sea-Features/CS_Names/Names.shp
CS_AIMS_Coral-Sea-Features/CS_Platform_adj/CS_Platform.shp
CS_AIMS_Coral-Sea-Features/CS_Reef_Boundaries_adj/CS_Reef_Boundaries.shp
CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth5m_Coral-Sea.shp
CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth10m_Coral-Sea.shp
Murray Island 20 Sept 2011 15cm SISP aerial imagery, Queensland Spatial Imagery Services Program, Department of Resources, Queensland
This is the high resolution imagery used to create the map of Mer.
World_AIMS_Marine-satellite-imagery
The base image composites used in this dataset were based on an early version of Lawrey, E., Hammerton, M. (2024). Marine satellite imagery test collections (AIMS) [Data set]. eAtlas. https://doi.org/10.26274/zq26-a956. A snapshot of the code at the time this dataset was developed is made available in the 01-data/World_AIMS_Marine-satellite-imagery folder of the download of this dataset.
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\TS_AIMS_Torres-Strait-Sentinel-2-regional-maps. On the eAtlas server it is stored at eAtlas GeoServer\data\2020-2029-AIMS.
Change Log:
2025-05-12: Eric Lawrey
Added Torres-Strait-Region-Map-Masig-Ugar-Erub-45k-A0 and Torres-Strait-Eastern-Region-Map-Landscape-A0. These maps have brightened satellite imagery to allow easier reading of writing on the maps. They also include markers for geo-referencing the maps for digitisation.
2025-02-04: Eric Lawrey
Fixed up the reference to the World_AIMS_Marine-satellite-imagery dataset, clarifying the source version that was used in this dataset. Added ORCID and ROR identifiers to the record.
2023-11-22: Eric Lawrey
Added the data and maps for close up of Mer.
- 01-data/TS_DNRM_Mer-aerial-imagery/
- preview/Torres-Strait-Mer-Map-Landscape-A0.jpeg
- exports/Torres-Strait-Mer-Map-Landscape-A0.pdf
Updated 02-Torres-Strait-regional-maps.qgz to include the layout for the new map.
2023-03-02: Eric Lawrey
Created a merged version of the satellite imagery, with no alpha blending so that it can be used in ArcGIS Pro. It is now a single large GeoTiff image. The Google Earth Engine source code for the World_AIMS_Marine-satellite-imagery was included to improve the reproducibility and provenance of the dataset, along with a calculation of the distribution of image dates that went into the final composite image. A WMS service for the imagery was also setup and linked to from the metadata. A cross reference to the older Torres Strait clear sky clear water Landsat composite imagery was also added to the record.
Multispectral imagery captured by Sentinel-2 satellites, featuring 13 spectral bands (visible, near-infrared, and short-wave infrared). Available globally since 2018 (Europe since 2017) with 10-60 m spatial resolution and revisit times of 2-3 days at mid-latitudes. Accessible through the EOSDA LandViewer platform for visualization, analysis, and download.
Cloud-free Landsat satellite imagery mosaics of the main 8 Hawaiian Islands (Hawaii, Maui, Kahoolawe, Lanai, Molokai, Oahu, Kauai and Niihau). Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) is a polar-orbiting multispectral satellite-borne sensor. The ETM+ instrument provides image data from eight spectral bands. The spatial resolution is 30 meters for the visible and near-infra...
High resolution orthorectified images combine the image characteristics of an aerial photograph with the geometric qualities of a map. An orthoimage is a uniform-scale image where corrections have been made for feature displacement such as building tilt and for scale variations caused by terrain relief, sensor geometry, and camera tilt. A mathematical equation based on ground control points, sensor calibration information, and a digital elevation model is applied to each pixel to rectify the image to obtain the geometric qualities of a map.
A digital orthoimage may be created from several photographs mosaicked to form the final image. The source imagery may be black-and-white, natural color, or color infrared with a pixel resolution of 1-meter or finer. With orthoimagery, the resolution refers to the distance on the ground represented by each pixel.
https://www.ontario.ca/page/open-government-licence-ontario
The Ontario Imagery Web Map Service (OIWMS) is an open data service available to everyone free of charge. It provides instant online access to the most recent, highest quality, province-wide imagery. GEOspatial Ontario (GEO) makes this data available as an Open Geospatial Consortium (OGC) compliant web map service or as an ArcGIS map service. Imagery was compiled from many different acquisitions, which are detailed in the Ontario Imagery Web Map Service Metadata Guide linked below. Instructions on how to use the service can also be found in the Imagery User Guide linked below.
Note: This map displays the Ontario Imagery Web Map Service Source, a companion ArcGIS web map service to the Ontario Imagery Web Map Service. It provides an overlay that can be used to identify acquisition-relevant information such as sensor source and acquisition date.
OIWMS contains several hierarchical layers of imagery: coarser, less detailed imagery that draws at broad scales (such as province-wide zooms) and finer, more detailed imagery that draws when zoomed in (such as city-wide zooms). The attributes associated with this data describe at what scales (based on a computer screen) the specific imagery datasets are visible.
Available Products:
• Ontario Imagery OGC Web Map Service – public link
• Ontario Imagery ArcGIS Map Service – public link
• Ontario Imagery Web Map Service Source – public link
• Ontario Imagery ArcGIS Map Service – OPS internal link
• Ontario Imagery Web Map Service Source – OPS internal link
Additional Documentation:
• Ontario Imagery Web Map Service Metadata Guide (PDF)
• Ontario Imagery Web Map Service Copyright Document (PDF)
• Imagery User Guide (Word)
Status: Completed. Production of the data has been completed.
Maintenance and Update Frequency: Annually. Data is updated every year.
Contact: Ontario Ministry of Natural Resources, Geospatial Ontario, imagery@ontario.ca
Multispectral imagery from Landsat-8, providing moderate spatial resolution optical data. The dataset includes 11 spectral bands, ranging from visible to thermal infrared wavelengths, with spatial resolutions of 15 m (panchromatic), 30 m (multispectral), and 100 m (thermal). It offers global coverage with a revisit time of 16 days, or 8 days when combined with Landsat-7. Landsat-8 data is accessible through the EOSDA LandViewer platform for visualization, analysis, and download.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset collection contains A0 maps of the Keppel Island region based on satellite imagery and fine-scale habitat mapping of the islands and marine environment. This collection provides the source satellite imagery used to produce these maps and the habitat mapping data.
The imagery used to produce these maps was developed by blending high-resolution (1 m) imagery from ArcGIS Online with a clear-sky composite derived from Sentinel 2 imagery (10 m). The Sentinel 2 imagery was used to achieve full coverage of the entire region, while the high-resolution imagery was used to provide detail around island areas.
The blended imagery is a derivative product of the Sentinel 2 imagery and ArcGIS Online imagery, using Photoshop to manually blend the best portions of each imagery into the final product. The imagery is provided for the sole purpose of reproducing the A0 maps.
Methods:
The high resolution satellite composite was developed by manual masking and blending of a Sentinel 2 composite image and high-resolution imagery from ArcGIS Online World Imagery (2019).
The Sentinel 2 composite was produced by statistically combining the clearest 10 images from 2016 - 2019. These images were manually chosen based on their very low cloud cover, lack of sun glint and clear water conditions. These images were then combined together to remove clouds and reduce the noise in the image.
The processing of the images was performed using a script in Google Earth Engine. The script combines the manually chosen imagery to estimate the clearest imagery. The dates of the images were chosen using the EOBrowser (https://www.sentinel-hub.com/explore/eobrowser) to preview all the Sentinel 2 imagery from 2015-2019. The images that were mostly free of clouds, with little or no sun glint, were recorded. Each of these dates was then viewed in Google Earth Engine with high contrast settings to identify images that had high water surface noise due to algal blooms, waves, or re-suspension. These were excluded from the list. All the images were then combined by applying a histogram analysis of each pixel, with the final image using the 40th percentile of the time series of the brightness of each pixel. This approach helps exclude effects from clouds.
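The per-pixel percentile reduction described above can be sketched in NumPy. The actual processing ran in Google Earth Engine; this function only illustrates the statistic used (40th percentile of the per-pixel brightness time series).

```python
import numpy as np

def percentile_composite(stack, pct=40.0):
    """Collapse a (time, height, width, bands) stack of manually
    selected scenes into one image by taking a per-pixel percentile.

    The 40th percentile sits below the median, which biases the result
    away from bright outliers such as residual clouds.
    """
    return np.percentile(stack, pct, axis=0)
```

Because clouds are bright, choosing a percentile below the median helps exclude cloud effects from the composite, as noted above.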
The contrast of the image was stretched to highlight the marine features, whilst retaining detail in the land features. This was done by choosing a black point for each channel that would provide a dark setting for deep clear water. Gamma correction was then used to lighten up the dark water features, whilst not over-exposing the brighter shallow areas.
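A minimal sketch of this black-point and gamma stretch, assuming 8-bit channels. The exact black points and gamma values used for the maps are not recorded here, so the numbers below are placeholders.

```python
import numpy as np

def stretch(channel, black_point, gamma):
    """Black-point subtract then gamma-correct an 8-bit channel.

    gamma < 1 lightens dark (deep water) tones without clipping
    the brighter shallow areas.
    """
    x = np.clip((channel.astype(float) - black_point) / (255.0 - black_point), 0, 1)
    return (255 * x ** gamma).astype(np.uint8)
```

Applying this per channel with a channel-specific black point reproduces the kind of adjustment described above.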
Both the high-resolution satellite imagery and the Sentinel 2 imagery were combined at 1 m pixel resolution. The Sentinel 2 tiles were upsampled to match the resolution of the high-resolution imagery. These two sets of imagery were then layered in Photoshop. The brightness of the high-resolution satellite imagery was then adjusted to match the Sentinel 2 imagery. A mask was then used to retain and blend the imagery that showed the best detail of each area. The blended tiles were then merged with the overall area imagery using a GDAL merge, resulting in an upscaling of the Sentinel 2 imagery to 1 m resolution.
Habitat Mapping:
A 5 m resolution habitat mapping was developed based on the satellite imagery, aerial imagery available, and monitoring site information. This habitat mapping was developed to help with monitoring site selection and for the mapping workshop with the Woppaburra TOs on North Keppel Island in Dec 2019.
The habitat maps should be considered as draft as they don't consider all available in water observations. They are primarily based on aerial and satellite images.
The habitat mapping includes: Asphalt, Buildings, Mangrove, Cabbage-tree palm, Sheoak, Other vegetation, Grass, Salt Flat, Rock, Beach Rock, Gravel, Coral, Sparse coral, Unknown not rock (macroalgae on rubble), Marine feature (rock).
The features were digitised as a stack of assumed layers, which sped up digitisation. For example, if coral was growing over a marine feature, only the boundary of the marine feature and the boundary of the coral needed to be digitised, not the boundary between them: since the coral sits on top of the marine feature, the coral is cut out from the marine feature automatically, saving time in digitising this boundary. Digitisation was performed on an iPad using Procreate software and an Apple Pencil to draw the features as layers in a drawing. Due to memory limitations of the iPad, the region was digitised using 6000x6000 pixel tiles. The raster images were converted back to polygons and the tiles merged together.
A Python script was then used to clip the layer sandwich so that there is no overlap between feature types.
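The clipping rule can be illustrated on a rasterised label stack. This is a hypothetical reconstruction: the actual script clipped polygon layers, and the layer names and priority order below are examples only.

```python
import numpy as np

def clip_layer_sandwich(layers, priority):
    """Flatten a stack of boolean feature masks into one label raster.

    layers:   dict of name -> (H, W) boolean mask.
    priority: list of names, highest priority first; higher layers
              cut out of the layers beneath them, so no two feature
              types overlap in the result.
    """
    h, w = next(iter(layers.values())).shape
    labels = np.full((h, w), "", dtype=object)
    # Paint from lowest to highest priority so higher layers win.
    for name in reversed(priority):
        labels[layers[name]] = name
    return labels
```

Painting in reverse priority order is what enforces the "coral cuts out of the marine feature beneath it" rule without digitising the shared boundary.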
Habitat Validation:
Only limited validation was performed on the habitat map. To assist in the development of the habitat mapping, nearly every YouTube video available, at the time of development (2019), on the Keppel Islands was reviewed and, where possible, georeferenced to provide a better understanding of the local habitats at the scale of the mapping, prior to the mapping being conducted. Several validation points were observed during the workshop. The map should be considered as largely unvalidated.
data/coastline/Keppels_AIMS_Coastline_2017.shp:
The coastline dataset was produced by starting with the Queensland coastline dataset by DNRME (Downloaded from http://qldspatial.information.qld.gov.au/catalogue/custom/detail.page?fid={369DF13C-1BF3-45EA-9B2B-0FA785397B34} on 31 Aug 2019). This was then edited to work at a scale of 1:5000, using the aerial imagery from Queensland Globe as a reference and a high-tide satellite image from 22 Feb 2015 from Google Earth Pro. The perimeter of each island was redrawn. This line feature was then converted to a polygon using the "Lines to Polygon" QGIS tool. The Keppel island features were then saved to a shapefile by exporting with a limited extent.
data/labels/Keppel-Is-Map-Labels.shp:
This contains 70 named places in the Keppel island region. These names were sourced from literature and existing maps. Unfortunately, no provenance of the names was recorded. These names are not official. This includes the following attributes:
- Name: Name of the location. Examples: Bald, Bluff
- NameSuffix: End of the name which is often a description of the feature type: Examples: Rock, Point
- TradName: Traditional name of the location
- Scale: Map scale where the label should be displayed.
data/lat/Keppel-Is-Sentinel2-2016-19_B4-LAT_Poly3m_V3.shp:
This corresponds to a rough estimate of the LAT contours around the Keppel Islands. LAT was estimated from tidal differences in Sentinel-2 imagery and light penetration in the red channel. Note this is only roughly calibrated and should be used as a rough guide. Only one rough in-situ validation was performed at low tide on Ko-no-mie at the edge of the reef near the education centre. This indicated that the LAT estimate was within a depth error range of about ±0.5 m.
data/habitat/Keppels_AIMS_Habitat-mapping_2019.shp:
This shapefile contains the mapped land and marine habitats. The classification type is recorded in the Type attribute.
Format:
GeoTiff (Internal JPEG format - 538 MB)
PDF (A0 regional maps - ~30MB each)
Shapefile (Habitat map, Coastline, Labels, LAT estimate)
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Keppels_AIMS_Regional-maps
Context
The Bhuvan Satellite Dataset is a valuable resource for land cover analysis and segmentation tasks. It includes a collection of satellite images and corresponding segmentation masks. The segmentation masks provide a pixel-level classification for five distinct land cover classes: vegetation, urban areas, forest, water bodies, and roads.
Content
The dataset consists of 2D satellite images of Varanasi, a city located in the northern part of India, in the state of Uttar Pradesh, with coordinates ranging from 25.3° to 25.5° N latitude and 83° to 83.2° E longitude. It comprises a collection of high-resolution images capturing the Earth's surface. These images were obtained from the Indian Remote Sensing Satellite (IRS) and were processed and made available through the Bhuvan Geo Platform, which is managed by the Indian Space Research Organization (ISRO).
The dataset includes various files that offer valuable insights into the land cover classification and segmentation tasks. Here are the different data files available:
Researchers and professionals can leverage this dataset to conduct in-depth analysis and segmentation tasks related to land cover classification. The dataset's rich content enables the exploration of urban development, vegetation patterns, forest cover, water resources, and road networks within the Varanasi region.
Acknowledgements
We would like to express our gratitude to Bhuvan - India Geo Platform of ISRO for providing the satellite images, which serve as a valuable resource for land cover analysis. We appreciate their efforts in collecting and curating satellite images, enabling researchers and professionals to explore and advance their work in remote sensing and geospatial analysis.
Inspiration
Artificial Intelligence, Computer Vision, Image Processing, Deep Learning, Machine Learning, Satellite Image, Remote Sensing
This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.
Consumption Best Practices: As a service that is subject to very high usage, avoid adding filters that use a Date/Time type field. These queries are not cacheable and WILL be subject to rate limiting by ArcGIS Online. To accommodate filtering events by date/time, we encourage using the included "Age" fields, which maintain the number of days or hours since a record was created or last modified, relative to the last service update. These queries fully support the ability to cache a response, allowing common query results to be supplied to many users without adding load on the service. When ingesting this service in your applications, avoid using POST requests; these requests are not cacheable and will also be subject to rate limiting measures.
Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375 meter
Update Frequency: Hourly, using the aggregated live feed methodology
Area Covered: World
What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfire over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal.
Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device. LANCE estimates availability at around 3 hours after detection, and Esri live feeds update this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.
VIIRS is a scanning radiometer device aboard the Suomi NPP and NOAA-20 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes.
The 375 m data complements the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; both show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and improved mapping of large fire perimeters.
Attribute information:
• Latitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.
• Satellite: Whether the detection was picked up by the Suomi NPP satellite (N) or NOAA-20 satellite (1). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
• Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel, based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels. Please note: low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4 leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.
• FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al. (2005)).
• DayNight: D = daytime fire, N = nighttime fire.
• Hours Old: Derived field that provides the age of the record in hours between acquisition date/time and latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.
Additional information can be found on the NASA FIRMS site FAQ.
Note about near real time data: Near real time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points (see the Confidence attribute above).
Revisions:
• September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
• July 5, 2022: Terms of Use updated to Esri Master License Agreement, no longer stating that a subscription is required.
This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues or simply see when this service will update next, please visit our Live Feed Status Page.
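Following the consumption best practices above, a client query can filter on an age field rather than a Date/Time field so that common results stay cacheable. A hedged sketch: the service URL below is a placeholder, and the exact field names on the live service may differ from these (taken from the attribute description above).

```python
from urllib.parse import urlencode

# Placeholder URL; substitute the real feature service endpoint.
SERVICE_URL = "https://services.example.com/arcgis/rest/services/VIIRS/FeatureServer/0/query"

def recent_hotspots_query(max_hours_old=6):
    """Build a cache-friendly GET query filtering on the Hours_Old
    age field instead of an acquisition Date/Time field."""
    params = {
        "where": f"Hours_Old <= {max_hours_old}",  # cacheable age filter
        "outFields": "Confidence,FRP,DayNight,Hours_Old",
        "f": "json",
    }
    return SERVICE_URL + "?" + urlencode(params)
```

Because every client asking for "hotspots from the last 6 hours" produces the same where clause, the response can be served from cache instead of re-querying the service.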
This cached tile service of 2015 WorldView orthoimagery may be added to ArcMap and other GIS software and applications. The web service was created in ArcMap 10.3 using orthorectified imagery in mosaic datasets and published to a tile package. The package was published as a service hosted at MassGIS' ArcGIS Online organizational account. When creating the service in ArcMap, the display settings (stretching, brightness and contrast) were modified individually for each mosaic dataset in order to achieve the best possible uniform appearance across the state; however, because of the different acquisition dates and satellites, seams between strips are visible at smaller scales. With many tiles overlapping from different flights, imagery was displayed so that the best imagery (highest resolution, most cloud-free) appeared "on top". The visible scale range for this service is 1:3,000,000 to 1:2,257. See https://www.mass.gov/info-details/massgis-data-2015-satellite-imagery for full details.
https://earth.esa.int/eogateway/documents/20142/1560778/ESA-Third-Party-Missions-Terms-and-Conditions.pdf
QuickBird high resolution optical products are available as part of the Vantor Standard Satellite Imagery products from the QuickBird, WorldView-1/-2/-3/-4, and GeoEye-1 satellites. All details about the data provision, data access conditions and quota assignment procedure are described in the Terms of Applicability available in the Resources section. In particular, QuickBird offers archive panchromatic products up to 0.60 m GSD resolution and 4-band multispectral products up to 2.4 m GSD resolution.

Band combination, data processing level and resolution:
- Panchromatic and 4-bands, Standard (2A) / View Ready Standard (OR2A): 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm
- Panchromatic and 4-bands, View Ready Stereo: 30 cm, 40 cm, 50/60 cm
- Panchromatic and 4-bands, Map-Ready (Ortho) 1:12,000 Orthorectified: 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm

4-bands being an option from:
- 4-Band Multispectral (BLUE, GREEN, RED, NIR1)
- 4-Band Pan-sharpened (BLUE, GREEN, RED, NIR1)
- 4-Band Bundle (PAN, BLUE, GREEN, RED, NIR1)
- 3-Band Natural Colour (pan-sharpened BLUE, GREEN, RED)
- 3-Band Coloured Infrared (pan-sharpened GREEN, RED, NIR1)

Native 30 cm and 50/60 cm resolution products are processed with Vantor HD Technology to generate the 15 cm HD and 30 cm HD products respectively: the initial spatial resolution (GSD) is unchanged, but the HD technique intelligently increases the number of pixels and improves the visual clarity, achieving aesthetically refined imagery with precise edges and well reconstructed details.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The complete coverage of the Canadian territory is gradually being established. It includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps; for DSM datasets, derived data available are shaded relief, color relief and color shaded relief maps.

The productive forest line is used to separate the northern and the southern parts of the country. This line is approximate and may change based on requirements.

In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. The datasets at a 1 m resolution cover an area of 10 km x 10 km while datasets at a 2 m resolution cover an area of 20 km by 20 km.

In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum or the UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km by 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data. In this case, these products are generated with the same specifications as those generated from airborne LiDAR in the southern part of the country.

The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada.
Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is acquired by project, there is no integration or edgematching done between projects. The tiles are aligned within each project. The High Resolution Digital Elevation Model (HRDEM) product is part of the CanElevation Series created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of the National Elevation Data Strategy. Refer to the “Supporting Document” section to access the list of the different partners, including links to their respective data.
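Derived layers such as slope can be computed from a DTM with a simple finite-difference kernel. A minimal pure-Python sketch; the real HRDEM derivatives are produced with production GIS tooling, so this is illustrative only:

```python
import math

def slope_deg(dtm, x, y, cell=1.0):
    """Slope in degrees at interior cell (x, y) via central differences."""
    dzdx = (dtm[y][x + 1] - dtm[y][x - 1]) / (2 * cell)
    dzdy = (dtm[y + 1][x] - dtm[y - 1][x]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

# A ramp rising 1 m per 1 m cell in x has a 45-degree slope everywhere.
ramp = [[float(x) for x in range(5)] for _ in range(5)]
slope_deg(ramp, 2, 2)  # 45.0
```

Aspect follows from the same two gradients (atan2 of -dzdy and dzdx), which is why slope and aspect are typically offered together as DTM-derived products.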
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.
This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the Satellite Derived Bathymetry algorithm in its handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.
The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).
This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
- TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
- DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour imagery that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, and so this imagery appears noisier (in particular, showing artefacts from clouds) than the TrueColour styling.
- Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands penetrate water progressively less: B5 penetrates approximately 3 - 5 m of water, B8 approximately 0.5 m and B11 less than 0.1 m. Features under a couple of metres of water appear dark blue, and dry areas are white. This imagery is intended to help identify coral cay boundaries.
For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.
All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours.
- Depth5m - This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
- Depth10m - This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
- Depth20m - This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).
For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery is composed from the best available images, while the R2 imagery uses the next best set of images. This splitting of the imagery allows two composites to be created from the pool of available imagery, so that any mapped features can be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.
The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic because keeping them separate allowed better contrast enhancement of individual images when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might be potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through the Google Earth Engine.
# Methods:
The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available, the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images with no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images. The best were added to the R1 composite, and the next best images into the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the composite, allowing deeper features to be observed. Adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
2. Sun glint was removed from the imagery using estimates of the glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so this does not correct for finer spatial brightness variations.
5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour style retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised for seeing features at depth, and the Shallow style provides access to the far red and infrared bands for assessing shallow features, such as cays and islands.
6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole region for each depth.
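Step 4's single-offset brightness normalisation can be sketched as follows. The deep-water mask and target brightness here are illustrative assumptions, not values from the actual processing code:

```python
def normalise_brightness(pixels, deep_water_mask, target=0.04):
    """Shift every pixel by one constant so the deep-water mean hits `target`.

    A single offset per tile, as described in step 4; it deliberately does
    not correct finer spatial brightness variation within the tile.
    """
    deep = [p for p, m in zip(pixels, deep_water_mask) if m]
    offset = target - sum(deep) / len(deep)
    return [p + offset for p in pixels]

tile = [0.05, 0.06, 0.30, 0.02]       # reflectance values, one bright reef pixel
mask = [True, True, False, True]      # deep-water pixels used for the mean
normalise_brightness(tile, mask)
```

Because the same offset is applied to every pixel, relative contrast within the tile is preserved while tiles become comparable to each other, which is what makes a consistent contrast stretch possible afterwards.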
## Cloud Masking
Prior to combining the best images each image was processed to mask out clouds and their shadows.
The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking, and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
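The shadow projection described above (casting the cloud mask away from the sun over a fixed distance) can be sketched in pixel space. The mask representation, azimuth convention and grid orientation are assumptions for illustration, not the actual Earth Engine implementation:

```python
import math

def project_shadow(cloud_cells, sun_azimuth_deg, distance_m, pixel_m=10.0):
    """Shift cloud-mask cells opposite the sun azimuth to approximate shadows.

    cloud_cells: set of (col, row) cells flagged as cloud.
    Azimuth is measured clockwise from north; shadows fall away from the sun.
    """
    az = math.radians(sun_azimuth_deg + 180.0)        # direction away from sun
    dx = round(math.sin(az) * distance_m / pixel_m)
    dy = round(-math.cos(az) * distance_m / pixel_m)  # row index grows southward
    return {(c + dx, r + dy) for c, r in cloud_cells}

# Sun due east (azimuth 90 deg): a 400 m projection pushes shadows 40 cells west.
project_shadow({(100, 100)}, 90.0, 400.0)  # {(60, 100)}
```

In the real pipeline the projected mask is unioned with the cloud mask and then buffered (150 m for low clouds, 300 m for high clouds) to catch the cloud and shadow edges the probability product misses.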
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel 2 imagery.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
We mapped cheatgrass at different scales in the Greater Yellowstone Ecosystem using 10-m Sentinel-2 imagery, 3-m PlanetScope imagery, and 10-cm Unoccupied Aerial Systems (UAS) imagery. We compared these maps to field-collected data to address 1) variation in seasonal phenological signals of native and cheatgrass patches, and 2) the influence of scale on detectability and map accuracy across our study area. Model accuracy in predicting cheatgrass presence increased with imagery resolution and reached 94% with the integration of PlanetScope and UAS imagery. While there was spatial agreement across models, UAS imagery could best detect the small cheatgrass patches required for early management intervention. Our novel use of different data sources in the classification of cheatgrass capitalizes on the senescence of cheatgrass during peak summer periods, when cloud free imagery is more prevalent. Our satellite and UAS-based models of varying scale could be used in a multistage effort to discover where cheatgrass ...
This dataset contains cloud free, low tide composite satellite images for the tropical Australia region based on 10 m resolution Sentinel 2 imagery from 2018 – 2023. This image collection was created as part of the NESP MaC 3.17 project and is intended to allow mapping of the reef features in tropical Australia. This collection contains composite imagery for 200 Sentinel 2 tiles around the tropical Australian coast.

This dataset uses two styles:
1. a true colour contrast and colour enhancement style (TrueColour) using the bands B2 (blue), B3 (green), and B4 (red)
2. a near infrared false colour style (Shallow) using the bands B5 (red edge), B8 (near infrared), and B12 (short wave infrared).

These styles are useful for identifying shallow features along the coastline. The Shallow false colour styling is optimised for viewing the first 3 m of the water column, providing an indication of water depth. This is because the different far red and near infrared bands used in this styling have limited penetration of the water column. In clear waters the maximum penetration of each of the bands is 3 - 5 m for B5, 0.5 - 1 m for B8 and < 0.05 m for B12. As a result, the image changes in colour with the depth of the water, with the following colours indicating different depths:
- White, brown, bright green, red, light blue: dry land
- Grey brown: damp intertidal sediment
- Turquoise: 0.05 - 0.5 m of water
- Blue: 0.5 - 3 m of water
- Black: deeper than 3 m

In very turbid areas the visible limit will be slightly reduced.

Change log: Changes to this dataset and metadata will be noted here:
2024-07-24 - Add tiles for the Great Barrier Reef
2024-05-22 - Initial release for low-tide composites using 30th percentile (Git tag: "low_tide_composites_v1")

Methods: The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was: 1.
For each Sentinel 2 tile, filter the "COPERNICUS/S2_HARMONIZED" image collection by:
- tile ID
- maximum cloud cover 0.1%
- date between '2018-01-01' and '2023-12-31'
- asset_size > 100000000 (to remove small fragments of tiles)
2. Remove high sun-glint images (see "High sun-glint image detection" for more information).
3. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
4. Iterate over all images in the split collections to predict the tide elevation for each image from the image timestamp (see "Tide prediction" for more information).
5. Remove images where the tide elevation is above mean sea level, to make sure no high tide images are included.
6. Select the 10 images with the lowest tide elevation.
7. Combine the SENSING_ORBIT_NUMBER collections into one image collection.
8. Remove sun-glint (true colour only) and apply atmospheric correction to each image (see "Sun-glint removal and atmospheric correction" for more information).
9. Duplicate the image collection to first create a composite image without cloud masking, using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
10. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite, again using the 30th percentile.
11. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds, which creates holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking. (Lawrey et al. 2022)
12.
The final composite was exported as a cloud optimized 8-bit GeoTIFF.

Note: The following tiles were generated with different settings as they did not have enough images to create a composite with the standard settings:
- 51KWA: no high sun-glint filter
- 54LXP: maximum cloud cover set to 1%
- 54LYK: maximum cloud cover set to 2%
- 54LYM: maximum cloud cover set to 5%
- 54LYN: maximum cloud cover set to 1%
- 54LYQ: maximum cloud cover set to 5%
- 54LYP: maximum cloud cover set to 1%
- 54LZL: maximum cloud cover set to 1%
- 54LZM: maximum cloud cover set to 1%
- 54LZN: maximum cloud cover set to 1%
- 54LZQ: maximum cloud cover set to 5%
- 54LZP: maximum cloud cover set to 1%
- 55LBD: maximum cloud cover set to 2%
- 55LBE: maximum cloud cover set to 1%
- 55LCC: maximum cloud cover set to 5%
- 55LCD: maximum cloud cover set to 1%

High sun-glint image detection: Images with high sun-glint can lead to lower quality composite images. To detect high sun-glint images, a mask is created for all pixels above a high reflectance threshold in the near-infrared and short-wave infrared bands. The proportion of masked pixels is then calculated and compared against a sun-glint threshold; if the image exceeds this threshold, it is filtered out of the image collection. As we are only interested in sun-glint on water pixels, a water mask is created using NDWI before creating the sun-glint mask.

Sun-glint removal and atmospheric correction: Sun-glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun-glint. B8 penetrates water less than 0.5 m, and so in water areas it only detects reflections off the surface of the water. The sun-glint detected by B8 correlates very highly with the sun-glint experienced by the visible channels (B2, B3 and B4), and so the sun-glint in these channels can be removed by subtracting B8 from these channels.
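The B8 subtraction described above can be sketched per pixel. The per-band scaling factors below are illustrative placeholders, not the tuned values from the actual processing:

```python
def remove_sunglint(b2, b3, b4, b8, scales=(0.85, 0.9, 0.95)):
    """Subtract scaled B8 (surface reflection) from the visible bands.

    Over water B8 only sees the surface, so it estimates the glint
    contaminating B2/B3/B4. The scale factors are placeholders; the real
    values were tuned per band against representative images.
    """
    s2, s3, s4 = scales
    clamp = lambda v: max(0.0, v)  # reflectance cannot go negative
    return clamp(b2 - s2 * b8), clamp(b3 - s3 * b8), clamp(b4 - s4 * b8)

remove_sunglint(0.10, 0.08, 0.06, 0.04)
```

This only holds over water: on land B8 carries real surface signal, which is one reason the correction is applied to the marine compositing workflow rather than universally.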
Eric Lawrey developed this algorithm by fine tuning the value of the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun-glint would be removed. This work was based on a representative set of images, trying to determine a set of values that represent a good compromise across different water surface conditions. This algorithm is an adjustment of the algorithm already used in Lawrey et al. 2022.

Tide prediction: To determine the tide elevation in a specific satellite image, we used a tide prediction model to predict the tide elevation for the image timestamp. After investigating and comparing a number of models, it was decided to use the empirical ocean tide model EOT20 (Hart-Davis et al., 2021). The model data can be freely accessed at https://doi.org/10.17882/79489 and works with the Python library pyTMD (https://github.com/tsutterley/pyTMD). In our comparison we found this model was able to accurately predict the tide elevation across multiple points along the study coastline when compared to historic Bureau of Meteorology and AusTide data. To determine the tide elevation of the satellite images, we manually created a point dataset where we placed a central point on the water for each Sentinel tile in the study area. We used these points as centroids in the ocean models and calculated the tide elevation from the image timestamp.

Using "SENSING_ORBIT_NUMBER" for a more balanced composite: Some of the Sentinel 2 tiles are made up of different sections depending on the "SENSING_ORBIT_NUMBER". For example, a tile could have a small triangle on the left side and a bigger section on the right side. If we filter an image collection and use a subset to create a composite, we could end up with a high number of images for one section (e.g. the left side triangle) and only a few images for the other section(s). This would result in a composite image with one well-covered section and other sections built from very few images.
To avoid this issue, the initial unfiltered image collection is divided into multiple image collections using the image property "SENSING_ORBIT_NUMBER". The filtering and limiting (maximum number of images in the collection) is then performed on each "SENSING_ORBIT_NUMBER" image collection and finally they are combined back into one image collection to generate the final composite.

Cloud Masking: Each image was processed to mask out clouds and their shadows before creating the composite image. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts. A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 35% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask. A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer. The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such, there are probably significant improvements that could be made to this algorithm. Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution.
This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. With 4-pixel filter resolutions these operations were still using over 90% of the total processing time.
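Steps 3-7 of the compositing algorithm above (split by SENSING_ORBIT_NUMBER, drop above-MSL tides, keep the lowest-tide images per orbit, recombine) can be sketched as follows; the dictionary layout is an illustrative stand-in for the Earth Engine collections:

```python
from collections import defaultdict

def select_low_tide(images, max_per_orbit=10):
    """images: list of dicts with 'id', 'orbit', and predicted 'tide' (m, MSL).

    Splits by SENSING_ORBIT_NUMBER, removes above-MSL tides, keeps the
    lowest-tide images per orbit, then recombines into one collection.
    """
    by_orbit = defaultdict(list)
    for img in images:
        if img["tide"] <= 0.0:  # drop images captured above mean sea level
            by_orbit[img["orbit"]].append(img)
    selected = []
    for orbit_imgs in by_orbit.values():
        orbit_imgs.sort(key=lambda i: i["tide"])
        selected.extend(orbit_imgs[:max_per_orbit])
    return selected

imgs = [
    {"id": "a", "orbit": 31, "tide": -1.2},
    {"id": "b", "orbit": 31, "tide": 0.4},   # above MSL: excluded
    {"id": "c", "orbit": 74, "tide": -0.3},
]
[i["id"] for i in select_low_tide(imgs)]  # ['a', 'c']
```

Limiting per orbit rather than per tile is the point of the split: every orbit section of the tile contributes up to the same number of images, so no section of the composite is built from a much thinner stack than the rest.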
Carbon Dioxide (Difference from Global Mean, Best Available, OCO-2) from NASA GIBS
Temporal coverage: 2002 SEP - 2012 FEB

The Carbon Dioxide (L3, Free Troposphere, Monthly) layer displays monthly carbon dioxide in the free troposphere. It is created from the AIRX3C2M data product, which is the AIRS mid-tropospheric Carbon Dioxide (CO2) Level 3 Monthly Gridded Retrieval from the AIRS and AMSU instruments on board the Aqua satellite. It is monthly gridded data at a 2.5x2 degree (lon)x(lat) grid cell size. The data is in mole fraction units (data x 10^6 = ppm in volume). This quantity is not a total column quantity because the sensitivity function of the AIRS mid-tropospheric CO2 retrieval system peaks over the altitude range 6-10 km. The quantity is what results when the true atmospheric CO2 profile is weighted, level-by-level, by the AIRS sensitivity function.

The Atmospheric Infrared Sounder (AIRS), in conjunction with the Advanced Microwave Sounding Unit (AMSU), senses emitted infrared and microwave radiation from Earth to provide a three-dimensional look at Earth's weather and climate. Working in tandem, the two instruments make simultaneous observations down to Earth's surface. With more than 2,000 channels sensing different regions of the atmosphere, the system creates a global, three-dimensional map of atmospheric temperature and humidity, cloud amounts and heights, greenhouse gas concentrations and many other atmospheric phenomena. Launched into Earth orbit in 2002, the AIRS and AMSU instruments fly onboard NASA's Aqua spacecraft and are managed by NASA's Jet Propulsion Laboratory in Pasadena, California. More information about AIRS can be found at https://airs.jpl.nasa.gov.

References: AIRX3C2M doi:10.5067/Aqua/AIRS/DATA339

ABOUT NASA GIBS
The Global Imagery Browse Services (GIBS) system is a core EOSDIS component which provides a scalable, responsive, highly available, and community standards based set of imagery services.
These services are designed with the goal of advancing user interactions with EOSDIS' inter-disciplinary data through enhanced visual representation and discovery.

MODIS (or Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra (originally known as EOS AM-1) and Aqua (originally known as EOS PM-1) satellites. Terra's orbit around the Earth is timed so that it passes from north to south across the equator in the morning, while Aqua passes south to north over the equator in the afternoon. Terra MODIS and Aqua MODIS view the entire Earth's surface every 1 to 2 days, acquiring data in 36 spectral bands, or groups of wavelengths (see MODIS Technical Specifications). These data improve our understanding of global dynamics and processes occurring on the land, in the oceans, and in the lower atmosphere. MODIS is playing a vital role in the development of validated, global, interactive Earth system models able to predict global change accurately enough to assist policy makers in making sound decisions concerning the protection of our environment.

GIBS Available Imagery Products
The GIBS imagery archive includes approximately 1000 imagery products representing visualized science data from the NASA Earth Observing System Data and Information System (EOSDIS). Each imagery product is generated at the native resolution of the source data to provide "full resolution" visualizations of a science parameter. GIBS works closely with the science teams to identify the appropriate data range and color mappings, where appropriate, to provide the best quality imagery to the Earth science community.
Many GIBS imagery products are generated by the EOSDIS LANCE near real-time processing system, resulting in imagery available in GIBS within 3.5 hours of observation. These products and others may also extend from the present back to the beginning of the satellite mission. In addition, GIBS makes available supporting imagery layers such as data/no-data, water masks, orbit tracks, and graticules to improve imagery usage.

The GIBS team is actively engaging the NASA EOSDIS Distributed Active Archive Centers (DAACs) to add more imagery products and to extend their coverage throughout the life of the mission. The remainder of this page provides a structured view of the layers currently available within GIBS, grouped by science discipline and science observation. For information regarding how to access these products, see the GIBS API section of this wiki. For information regarding how to access these products through an existing client, refer to the Map Library and GIS Client sections of this wiki. If you are aware of a science parameter that you would like to see visualized, please contact us at support@earthdata.nasa.gov.

https://wiki.earthdata.nasa.gov/display/GIBS/GIBS+Available+Imagery+Products#expand-AerosolOpticalDepth29Products
NASA GIBS API for Developers: https://wiki.earthdata.nasa.gov/display/GIBS/GIBS+API+for+Developers
http://inspire.ec.europa.eu/metadata-codelist/LimitationsOnPublicAccess/INSPIRE_Directive_Article13_1a
The PlanetScope Level 1B Basic Scene and Level 3B Ortho Scene full archive products are available as part of the Planet imagery offer.

The Unrectified Asset: The PlanetScope Basic Analytic Radiance (TOAR) product is a scaled top-of-atmosphere radiance (at sensor) and sensor-corrected product, without correction for any geometric distortions inherent in the imaging process, and is not mapped to a cartographic projection. The imagery data is accompanied by Rational Polynomial Coefficients (RPCs) to enable orthorectification by the user. This kind of product is designed for users with advanced image processing and geometric correction capabilities.

Basic Scene Product Components and Format:
- Image File (GeoTIFF format)
- Metadata File (XML format)
- Rational Polynomial Coefficients (XML format)
- Thumbnail File (GeoTIFF format)
- Unusable Data Mask UDM File (GeoTIFF format)
- Usable Data Mask UDM2 File (GeoTIFF format)

Bands: 4-band multispectral image (blue, green, red, near-infrared) or 8-band (coastal blue, blue, green I, green, yellow, red, RedEdge, near-infrared)
Ground Sampling Distance (approximate, satellite altitude dependent): Dove-C: 3.0 m-4.1 m; Dove-R: 3.0 m-4.1 m; SuperDove: 3.7 m-4.2 m
Accuracy: <10 m RMSE

The Rectified Assets: The PlanetScope Ortho Scene product is radiometrically-, sensor- and geometrically-corrected and is projected to a UTM/WGS84 cartographic map projection. The geometric correction uses fine Digital Elevation Models (DEMs) with a post spacing of between 30 and 90 metres.
Ortho Scene Product Components and Format Product Components Image File (GeoTIFF format) Metadata File (XML format) Thumbnail File (GeoTIFF format) Unusable Data Mask UDM File (GeoTIFF format) Usable Data Mask UDM2 File (GeoTIFF format) Bands 3-band natural colour (red, green, blue) or 4-band multispectral image (blue, green, red, near-infrared) or 8-band (coastal-blue, blue, green I, green, yellow, red, RedEdge, near-infrared) Ground Sampling Distance Approximate, satellite altitude dependent Dove-C: 3.0 m-4.1 m Dove-R: 3.0 m-4.1 m SuperDove: 3.7 m-4.2 m Projection UTM WGS84 Accuracy <10 m RMSE PlanetScope Ortho Scene product is available in the following: PlanetScope Visual Ortho Scene product is orthorectified and colour-corrected (using a colour curve) 3-band RGB Imagery. This correction attempts to optimise colours as seen by the human eye providing images as they would look if viewed from the perspective of the satellite. PlanetScope Surface Reflectance product is orthorectified, 4-band BGRN or 8-band Coastal Blue, Blue, Green I, Green, Yellow, Red, RedEdge, NIR Imagery with geometric, radiometric and corrected for surface reflection. This data is optimal for value-added image processing such as land cover classifications. PlanetScope Analytic Ortho Scene Surface Reflectance product is orthorectified, 4-band BGRN or 8-band Coastal Blue, Blue, Green I, Green, Yellow, Red, RedEdge, NIR Imagery with geometric, radiometric and calibrated to top of atmosphere radiance. As per ESA policy, very high-resolution imagery of conflict areas cannot be provided.
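The RPCs delivered with the Basic Scene encode a rational-polynomial mapping from ground coordinates to image coordinates. In practice users would hand this to existing tooling (e.g. GDAL's RPC-aware warping) rather than evaluate it by hand, but the structure of the model can be sketched as follows. This is illustrative only: real RPCs (RPC00B convention) use 20-term cubic polynomials, while this sketch truncates to first-order terms, and all coefficient values below are made up:

```python
# Illustrative sketch of the RPC ground-to-image model. Real RPC sets use
# 20-term cubic polynomials; this truncates to first-order terms purely to
# show the structure. All coefficient values below are invented.

def rpc_ground_to_image(lat, lon, height, rpc):
    """Map a ground point to (row, col) using a truncated rational model."""
    # Normalise ground coordinates with the offsets/scales carried in the RPCs.
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (height - rpc["h_off"]) / rpc["h_scale"]

    def poly(c):  # first-order truncation of the full 20-term cubic
        return c[0] + c[1] * L + c[2] * P + c[3] * H

    # Each image coordinate is a ratio of two polynomials in (L, P, H).
    row_n = poly(rpc["line_num"]) / poly(rpc["line_den"])
    col_n = poly(rpc["samp_num"]) / poly(rpc["samp_den"])

    # De-normalise back to pixel units.
    row = row_n * rpc["line_scale"] + rpc["line_off"]
    col = col_n * rpc["samp_scale"] + rpc["samp_off"]
    return row, col

# Made-up coefficients giving an identity-like mapping, for demonstration only.
rpc = {
    "lat_off": 0.0, "lat_scale": 1.0, "lon_off": 0.0, "lon_scale": 1.0,
    "h_off": 0.0, "h_scale": 500.0,
    "line_off": 1000.0, "line_scale": 1000.0,
    "samp_off": 1000.0, "samp_scale": 1000.0,
    "line_num": [0.0, 0.0, 1.0, 0.0], "line_den": [1.0, 0.0, 0.0, 0.0],
    "samp_num": [0.0, 1.0, 0.0, 0.0], "samp_den": [1.0, 0.0, 0.0, 0.0],
}
row, col = rpc_ground_to_image(0.5, 0.25, 100.0, rpc)
print(row, col)  # 1500.0 1250.0
```

Because the mapping depends on height, orthorectifying with RPCs requires a DEM, which is why Planet's documentation points users with such tooling at this asset.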
This live web map is a subset of Global Satellite (VIIRS) Thermal Hotspots and Fire Activity. This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA's Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.
Consumption best practices: as a service subject to very high usage, ensure peak performance and accessibility of your maps and apps by avoiding non-cacheable relative date/time field filters. To filter events by date/time, use the included "Age" fields, which maintain the number of days or hours since a record was created or last modified, relative to the last service update. Such queries fully support response caching, allowing common query results to be served efficiently in a high-demand environment. When ingesting this service in your applications, also avoid POST requests whenever possible: they are likewise not cacheable and can compromise performance and scalability during periods of high usage.
Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375 meter
Update frequency: hourly, using the aggregated live feed methodology
Area covered: World
What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, from a few hours up to seven days behind real time. The layer is also time-enabled, so the progress of fires over the past 7 days can be reproduced as an animation. The VIIRS thermal-activity layer can be used to visualize and assess wildfires worldwide. However, note that this dataset contains many "false positives" (e.g., oil/natural gas wells or volcanoes), since the satellite will detect any large thermal signal.
Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS instrument: LANCE estimates availability at around 3 hours after detection, and Esri Live Feeds updates this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is at least 375 m high and wide; a point feature means that somewhere in this pixel at least one "hot" spot was detected, which may be a fire.
VIIRS is a scanning radiometer aboard the Suomi NPP, NOAA-20, and NOAA-21 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product. The data are automatically downloaded from LANCE, NASA's near real-time data and imagery site, every 15 minutes. The 375 m data complement the 1 km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer: the two show good agreement in hotspot detection, but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small area and improved mapping of large fire perimeters.
Attribute information:
• Latitude and Longitude: the center point of the (approximately) 375 m pixel flagged as containing one or more fires/hotspots.
• Satellite: whether the detection was made by the Suomi NPP satellite (N), the NOAA-20 satellite (1), or the NOAA-21 satellite (2). For best results, use the virtual field WhichSatellite, defined by an Arcade expression, which gives the complete satellite name.
• Confidence: a quality flag for the individual hotspot/active fire pixel, based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are low, nominal, and high. Low-confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15 K) in the mid-infrared channel I4. Nominal-confidence pixels are those free of potential sun-glint contamination during the day and marked by a strong (>15 K) temperature anomaly in either day or nighttime data. High-confidence fire pixels are associated with day or nighttime saturated pixels. Please note: low-confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly, which can cause spurious brightness temperatures in the mid-infrared channel I4 leading to potential false positive alarms; these have been removed from the NRT data distributed by FIRMS.
• FRP: Fire Radiative Power, the pixel-integrated fire radiative power in MW (megawatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et al., 2005).
• DayNight: D = daytime fire, N = nighttime fire.
• Hours Old: derived field giving the age of the record in hours between acquisition date/time and latest update date/time: 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.
Additional information can be found in the NASA FIRMS site FAQ.
Note about near real-time data: near real-time data is not checked thoroughly before it is posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. The data is, however, screened by the confidence algorithm described above, which helps users gauge the quality of individual hotspot/fire points.
Revisions:
• March 7, 2024: updated to include source data from the NOAA-21 satellite.
• September 15, 2022: updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
• July 5, 2022: Terms of Use updated to the Esri Master License Agreement, no longer stating that a subscription is required.
This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues, or simply to see when this service will next update, please visit our Live Feed Status Page.
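The consumption best practice above (cacheable GET queries filtered on the Age fields rather than on relative date/time expressions) can be sketched as follows. The service URL below is a placeholder assumption, not the real endpoint; `Hours_Old` is the derived field this layer documents:

```python
# Sketch: build a cacheable GET query against an ArcGIS feature service,
# filtering on the derived Hours_Old field instead of a relative Date/Time
# expression. Because every client asking for "fires detected in the last
# 3 hours" sends the byte-identical URL, the response can be cached and
# reused. The service URL is a placeholder, not the real endpoint.
from urllib.parse import urlencode

SERVICE = "https://example.com/arcgis/rest/services/VIIRS/FeatureServer/0"

def recent_fires_url(max_hours_old):
    """Return a query URL for hotspots at most `max_hours_old` hours old."""
    params = {
        "where": f"Hours_Old <= {int(max_hours_old)}",  # cache-friendly filter
        "outFields": "Latitude,Longitude,Confidence,FRP,DayNight,Hours_Old",
        "f": "json",
    }
    return f"{SERVICE}/query?{urlencode(params)}"

print(recent_fires_url(3))
```

A relative filter such as `AcquisitionDate > CURRENT_TIMESTAMP - 0.125` would produce a different effective result every second and defeat caching, which is exactly what the best-practices note advises against.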