World Imagery provides one meter or better satellite and aerial imagery for most of the world’s landmass and lower resolution satellite imagery worldwide. The map currently comprises the following sources: worldwide 15-m resolution TerraColor imagery at small and medium map scales; Maxar imagery basemap products around the world, with Vivid Premium at 15-cm HD resolution for select metropolitan areas, Vivid Advanced 30-cm HD for more than 1,000 metropolitan areas, and Vivid Standard from 1.2-m to 0.6-m resolution for most of the world, with 30-cm HD across the United States and parts of Western Europe (more information on the Maxar products is included below); and high-resolution aerial photography contributed by the GIS User Community, ranging from 30-cm to 3-cm resolution. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program.
Maxar Basemap Products
Vivid Premium: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product provides 15-cm HD resolution imagery.
Vivid Advanced: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product includes a mix of native 30-cm and 30-cm HD resolution imagery.
Vivid Standard: Provides a visually consistent and continuous image layer over large areas through advanced image mosaicking techniques, including tonal balancing and seamline blending across thousands of image strips. Available from 1.2-m down to 30-cm HD. More on Maxar HD.
Imagery Updates: You can use the Updates Mode in the World Imagery Wayback app to learn more about recent and pending updates. Accessing this information requires a login with an ArcGIS organizational account.
Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.
Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower resolution satellite imagery worldwide. The map includes 15m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5m SPOT Imagery (~1:288k to ~1:72k) for the world. The map features 0.5m resolution imagery in the continental United States and parts of Western Europe from DigitalGlobe. Additional DigitalGlobe sub-meter imagery is featured in many parts of the world. In the United States, 1 meter or better resolution NAIP imagery is available in some areas. In other parts of the world, imagery at different resolutions has been contributed by the GIS User Community. In select communities, very high resolution imagery (down to 0.03m) is available down to ~1:280 scale. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program. View the list of Contributors for the World Imagery Map.
Coverage: View the links below to learn more about recent updates and map coverage: What's new in World Imagery; World coverage map.
Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.
Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map. A similar raster web map, Imagery with Labels, is also available.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
https://data.linz.govt.nz/license/attribution-4-0-international/
This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.
The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2021 and April 2022.
Technical specifications:
This is a visual product only. The data has been downsampled from 12-bit to 8-bit, and the original values of the images have been modified for visualisation purposes.
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset collection contains A0 maps of the Keppel Island region based on satellite imagery and fine-scale habitat mapping of the islands and marine environment. This collection provides the source satellite imagery used to produce these maps and the habitat mapping data.
The imagery used to produce these maps was developed by blending high-resolution imagery (1 m) from ArcGIS Online with a clear-sky composite derived from Sentinel 2 imagery (10 m). The Sentinel 2 imagery was used to achieve full coverage of the entire region, while the high-resolution imagery was used to provide detail around island areas.
The blended imagery is a derivative product of the Sentinel 2 imagery and ArcGIS Online imagery, created in Photoshop by manually blending the best portions of each image into the final product. The imagery is provided for the sole purpose of reproducing the A0 maps.
Methods:
The high-resolution satellite composite was developed by manually masking and blending a Sentinel 2 composite image and high-resolution imagery from ArcGIS Online World Imagery (2019).
The Sentinel 2 composite was produced by statistically combining the 10 clearest images from 2016 - 2019. These images were manually chosen based on their very low cloud cover, lack of sun glint and clear water conditions, and were then combined to remove clouds and reduce the noise in the image.
The processing of the images was performed using a script in Google Earth Engine, which combines the manually chosen images into a single clear composite. The dates of the images were chosen using the EO Browser (https://www.sentinel-hub.com/explore/eobrowser) to preview all the Sentinel 2 imagery from 2015-2019. The images that were mostly free of clouds, with little or no sun glint, were recorded. Each of these dates was then viewed in Google Earth Engine with high contrast settings to identify images that had high water surface noise due to algal blooms, waves, or re-suspension; these were excluded from the list. All the remaining images were then combined by applying a histogram analysis of each pixel, with the final image using the 40th percentile of the time series of the brightness of each pixel. This approach helps exclude effects from clouds.
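As a minimal sketch of this percentile-composite step, assuming the Python earthengine-api (the dataset's own script is on Google Earth Engine; the image IDs below are placeholders, not the actual curated dates):

```python
import ee

ee.Initialize()

# Manually curated scene IDs (hypothetical examples).
image_ids = [
    "COPERNICUS/S2/20160715T002712_20160715T002711_T56KLV",
    "COPERNICUS/S2/20170822T002711_20170822T002712_T56KLV",
]
images = ee.ImageCollection.fromImages([ee.Image(i) for i in image_ids])

# Combine the curated scenes per pixel using the 40th percentile of
# brightness, which suppresses bright outliers such as clouds.
composite = images.reduce(ee.Reducer.percentile([40]))
```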
The contrast of the image was stretched to highlight the marine features, whilst retaining detail in the land features. This was done by choosing a black point for each channel that would provide a dark setting for deep clear water. Gamma correction was then used to lighten the dark water features, whilst not over-exposing the brighter shallow areas.
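An illustrative version of this stretch, assuming numpy (the black point and gamma values here are hypothetical, not the ones used for the Keppel maps):

```python
import numpy as np

def stretch_channel(band, black_point, gamma, white_point=255.0):
    """band: float array of 0-255 pixel values for one channel."""
    # Subtract the black point chosen for deep clear water, then rescale.
    scaled = np.clip((band - black_point) / (white_point - black_point), 0, 1)
    # Gamma > 1 lightens dark water without over-exposing shallow areas.
    return (scaled ** (1.0 / gamma)) * 255.0
```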
Both the high-resolution satellite imagery and the Sentinel 2 imagery were combined at 1 m pixel resolution. The Sentinel 2 tiles were upsampled to match the resolution of the high-resolution imagery. These two sets of imagery were then layered in Photoshop. The brightness of the high-resolution satellite imagery was then adjusted to match the Sentinel 2 imagery. A mask was then used to retain and blend the imagery that showed the best detail of each area. The blended tiles were then merged with the overall area imagery by performing a GDAL merge, resulting in an upscaling of the Sentinel 2 imagery to 1 m resolution.
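One way to reproduce the upsample-and-merge step, assuming the GDAL Python bindings and hypothetical file names:

```python
from osgeo import gdal

# Resample the 10 m Sentinel 2 mosaic onto a 1 m grid.
gdal.Warp("sentinel2_1m.tif", "sentinel2_10m.tif",
          xRes=1.0, yRes=1.0, resampleAlg="bilinear")

# gdal_merge-style compositing: later inputs paint over earlier ones,
# so the blended high-resolution tiles sit on top of the Sentinel 2 base.
gdal.Warp("keppels_blended_1m.tif",
          ["sentinel2_1m.tif", "blended_tile_01.tif", "blended_tile_02.tif"])
```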
Habitat Mapping:
A 5 m resolution habitat map was developed based on the satellite imagery, available aerial imagery, and monitoring site information. This habitat mapping was developed to help with monitoring site selection and for the mapping workshop with the Woppaburra Traditional Owners on North Keppel Island in December 2019.
The habitat maps should be considered draft as they do not incorporate all available in-water observations. They are primarily based on aerial and satellite images.
The habitat mapping includes: Asphalt, Buildings, Mangrove, Cabbage-tree palm, Sheoak, Other vegetation, Grass, Salt Flat, Rock, Beach Rock, Gravel, Coral, Sparse coral, Unknown not rock (macroalgae on rubble), Marine feature (rock).
The habitat features were digitised as a stack of layers, with each feature type assumed to sit on top of the types below it. This layering assumption sped up digitisation: for example, if coral was growing over a marine feature, the boundary of the marine feature and the boundary of the coral each needed to be digitised, but not the boundary between the two, because the coral would later be cut out of the marine feature beneath it. Digitisation was performed on an iPad using Procreate software and an Apple Pencil to draw the features as layers in a drawing. Due to memory limitations of the iPad, the region was digitised using 6000x6000 pixel tiles. The raster images were converted back to polygons and the tiles merged together.
A Python script was then used to clip the layer sandwich so that there is no overlap between feature types.
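The original script is not reproduced here; a minimal sketch of such a clip, assuming geopandas and hypothetical layer files ordered from the top of the sandwich to the bottom:

```python
import geopandas as gpd
import pandas as pd

# Ordered from top of the sandwich (highest priority) to bottom.
layer_files = ["coral.shp", "marine_feature.shp"]
layers = [gpd.read_file(f) for f in layer_files]

result = [layers[0]]
for i, lower in enumerate(layers[1:], start=1):
    # Cut every higher layer out of the current one.
    for upper in layers[:i]:
        lower = gpd.overlay(lower, upper, how="difference")
    result.append(lower)

habitat = pd.concat(result)  # non-overlapping habitat polygons
```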
Habitat Validation:
Only limited validation was performed on the habitat map. To assist its development, nearly every YouTube video on the Keppel Islands available at the time of development (2019) was reviewed and, where possible, georeferenced to provide a better understanding of the local habitats at the scale of the mapping, prior to the mapping being conducted. Several validation points were observed during the workshop. The map should be considered largely unvalidated.
data/coastline/Keppels_AIMS_Coastline_2017.shp:
The coastline dataset was produced by starting with the Queensland coastline dataset by DNRME (Downloaded from http://qldspatial.information.qld.gov.au/catalogue/custom/detail.page?fid={369DF13C-1BF3-45EA-9B2B-0FA785397B34} on 31 Aug 2019). This was then edited to work at a scale of 1:5000, using the aerial imagery from Queensland Globe as a reference and a high-tide satellite image from 22 Feb 2015 from Google Earth Pro. The perimeter of each island was redrawn. This line feature was then converted to a polygon using the "Lines to Polygon" QGIS tool. The Keppel island features were then saved to a shapefile by exporting with a limited extent.
data/labels/Keppel-Is-Map-Labels.shp:
This contains 70 named places in the Keppel Island region. These names were sourced from literature and existing maps. Unfortunately, no provenance of the names was recorded. These names are not official. This includes the following attributes:
- Name: Name of the location. Examples: Bald, Bluff
- NameSuffix: End of the name, which is often a description of the feature type. Examples: Rock, Point
- TradName: Traditional name of the location
- Scale: Map scale where the label should be displayed.
data/lat/Keppel-Is-Sentinel2-2016-19_B4-LAT_Poly3m_V3.shp:
This corresponds to a rough estimate of the LAT (Lowest Astronomical Tide) contours around the Keppel Islands. LAT was estimated from tidal differences in Sentinel-2 imagery and light penetration in the red channel. Note this is only roughly calibrated and should be used as a rough guide. Only one rough in-situ validation was performed at low tide on Ko-no-mie at the edge of the reef near the education centre; this indicated that the LAT estimate was within a depth error range of about ±0.5 m.
data/habitat/Keppels_AIMS_Habitat-mapping_2019.shp:
This shapefile contains the mapped land and marine habitats. The classification type is recorded in the Type attribute.
Format:
GeoTiff (Internal JPEG format - 538 MB)
PDF (A0 regional maps - ~30MB each)
Shapefile (Habitat map, Coastline, Labels, LAT estimate)
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Keppels_AIMS_Regional-maps
Suggested use: Use the tiled map service for large scale mapping when high resolution color imagery is needed. A web app to view tile and block metadata such as year, sensor, and cloud cover can be found here.
Coverage: State of Alaska
Product Type: Tile Cache
Image Bands: RGB
Spatial Resolution: 50cm
Accuracy: 5m CE90 or better
Cloud Cover: <10% overall
Off Nadir Angle: <30 degrees
Sun Elevation: >30 degrees
WMS version of this data: https://geoportal.alaska.gov/arcgis/services/ahri_2020_rgb_cache/MapServer/WMSServer?request=GetCapabilities&service=WMS
WMTS version of this data: https://geoportal.alaska.gov/arcgis/rest/services/ahri_2020_rgb_cache/MapServer/WMTS/1.0.0/WMTSCapabilities.xml
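A quick way to inspect the WMS endpoint above, assuming the owslib package (layer names must be discovered from the capabilities response; the getmap call is sketched only, since the bounding box depends on the area of interest):

```python
from owslib.wms import WebMapService

url = ("https://geoportal.alaska.gov/arcgis/services/"
       "ahri_2020_rgb_cache/MapServer/WMSServer")
wms = WebMapService(url, version="1.3.0")
print(list(wms.contents))  # available layer names

# img = wms.getmap(layers=[...], srs="EPSG:3857",
#                  bbox=(xmin, ymin, xmax, ymax),
#                  size=(1024, 1024), format="image/png")
```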
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of collections of satellite image composites (Sentinel 2 and Landsat 8) that are created from manually curated image dates for a range of projects. These images are typically prepared for subsequent analysis or testing of analysis algorithms as part of other projects. This dataset acts as a repository of reproducible test sets of images processed from Google Earth Engine using a standardised workflow.
Details of the algorithms used to produce the imagery are described in the GEE code and code repository available on GitHub (https://github.com/eatlas/World_AIMS_Marine-satellite-imagery).
Project test image sets:
As new projects are added to this dataset, their details will be described here:
- NESP MaC 2.3 Benthic reflection estimation (projects/CS_NESP-MaC-2-3_AIMS_Benth-reflect):
This collection consists of six Sentinel 2 image composites in the Coral Sea and GBR for the purpose of testing a method of determining benthic reflectance of deep lagoonal areas of coral atolls. These image composites are in GeoTiff format, using 16-bit encoding and LZW compression. These images do not have internal image pyramids to save on space.
[Status: final and available for download]
- NESP MaC 2.3 Oceanic Vegetation (projects/CS_NESP-MaC-2-3_AIMS_Oceanic-veg):
This project is focused on mapping vegetation on the bottom of coral atolls in the Coral Sea. This collection consists of additional images of Ashmore Reef. The lagoonal area of Ashmore has low visibility due to coloured dissolved organic matter, making it very hard to distinguish areas that are covered in vegetation. These images were manually curated to best show the vegetation. While these are the best images in the Sentinel 2 series up to 2023, they are still not very good. Probably 80 - 90% of the lagoonal benthos is not visible.
[Status: final and available for download]
- NESP MaC 3.17 Australian reef mapping (projects/AU_NESP-MaC-3-17_AIMS_Reef-mapping):
This collection of test images was prepared to determine if creating a composite from manually curated image dates (corresponding to images with the clearest water) would produce a better composite than a fully automated composite based on cloud filtering. The automated composites are described in https://doi.org/10.26274/HD2Z-KM55. This test set also includes composites from low tide imagery. The images in this collection are not yet available for download as the collection of images that will be used in the analysis has not been finalised.
[Status: under development, code is available, but not rendered images]
- Capricorn Regional Map (projects/CapBunk_AIMS_Regional-map): This collection was developed for making a set of maps for the region to facilitate participatory mapping and reef restoration field work planning.
[Status: final and available for download]
- Default (project/default): This collection of manually selected scenes was prepared for the Coral Sea and global areas to test the algorithms used in developing the original Google Earth Engine workflow. It can be a good starting point for new test sets. Note that the images described in the default project are not rendered and made available for download, to save on storage space.
[Status: for reference, code is available, but not rendered images]
Filename conventions:
The images in this dataset are all named using a naming convention. An example file name is Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif. The name is made up of:
- Dataset name (Wld_AIMS_Marine-sat-img), short for World, Australian Institute of Marine Science, Marine Satellite Imagery.
- Satellite source: L8 for Landsat 8 or S2 for Sentinel 2.
- Additional information or purpose: NoSGC - no sun glint correction, R1 - best reference imagery set, or R2 - second reference imagery set.
- Colour and contrast enhancement applied (DeepFalse, TrueColour, Shallow, Depth5m, Depth10m, Depth20m, Raw-B1-B4).
- Image tile (example: Sentinel 2 54LZP, Landsat 8 091086).
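A small parser for this convention, as a sketch assuming the underscore-separated fields always appear in the documented order:

```python
def parse_image_name(filename: str) -> dict:
    stem = filename.rsplit(".", 1)[0]
    parts = stem.split("_")
    return {
        "dataset": "_".join(parts[0:3]),   # Wld_AIMS_Marine-sat-img
        "satellite": parts[3],             # L8 or S2
        "purpose": parts[4],               # NoSGC, R1 or R2
        "style": parts[5],                 # e.g. DeepFalse, Raw-B1-B4
        "tile": parts[6],                  # e.g. 54LZP or 091086
    }

print(parse_image_name("Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif"))
```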
Limitations:
Only simple atmospheric correction is applied to land areas and as a result the imagery only approximates the bottom of atmosphere reflectance.
For the Sentinel 2 imagery, the sun glint correction algorithm transitions between different correction levels from deep water (B8) to shallow water (B11), and a fixed atmospheric correction for land (bright B8 areas). Slight errors in the tuning of these transitions can result in unnatural tonal steps between these areas, particularly in very shallow areas.
For the Landsat 8 image processing, land areas appear black because the sun glint correction does not separately mask out the land. The code for the Landsat 8 imagery is less developed than for the Sentinel 2 imagery.
The depth contours are estimated using satellite derived bathymetry that is subject to errors caused by cloud artefacts, substrate darkness, water clarity, calibration issues and uncorrected tides. They were tuned in the clear waters of the Coral Sea. The depth contours in this dataset are RAW and contain many false positives due to clouds. They should not be used without additional dataset cleanup.
Change log:
As changes are made to the dataset, or additional image collections are added, those changes will be recorded here.
2nd Edition, 2024-06-22: Added CapBunk_AIMS_Regional-map.
1st Edition, 2024-03-18: Initial publication of the dataset, with CS_NESP-MaC-2-3_AIMS_Benth-reflect, CS_NESP-MaC-2-3_AIMS_Oceanic-veg and code for AU_NESP-MaC-3-17_AIMS_Reef-mapping and Default projects.
Data Format:
GeoTiff images with LZW compression. Most images do not have internal image pyramids, to save on storage space. This makes rendering these images very slow in a desktop GIS; pyramids should be added to improve performance.
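Pyramids can be added with the GDAL Python bindings (equivalent to running gdaladdo on the file); a minimal sketch with a hypothetical file name:

```python
from osgeo import gdal

ds = gdal.Open("image.tif", gdal.GA_Update)
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])  # internal overview levels
ds = None  # close to flush overviews to disk
```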
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Wld-AIMS-Marine-sat-img
https://www.ontario.ca/page/open-government-licence-ontario
The Ontario Imagery Web Map Service (OIWMS) is an open data service available to everyone free of charge. It provides instant online access to the most recent, highest quality, province-wide imagery. GEOspatial Ontario (GEO) makes this data available as an Open Geospatial Consortium (OGC) compliant web map service or as an ArcGIS map service. Imagery was compiled from many different acquisitions, which are detailed in the Ontario Imagery Web Map Service Metadata Guide linked below. Instructions on how to use the service can also be found in the Imagery User Guide linked below. Note: This map displays the Ontario Imagery Web Map Service Source, a companion ArcGIS web map service to the Ontario Imagery Web Map Service. It provides an overlay that can be used to identify acquisition-relevant information such as sensor source and acquisition date. OIWMS contains several hierarchical layers of imagery, with coarser, less detailed imagery that draws at broad scales, such as province-wide zooms, and finer, more detailed imagery that draws when zoomed in, such as city-wide zooms. The attributes associated with this data describe at what scales (based on a computer screen) the specific imagery datasets are visible.
Available Products:
- Ontario Imagery OGC Web Map Service – public link
- Ontario Imagery ArcGIS Map Service – public link
- Ontario Imagery Web Map Service Source – public link
- Ontario Imagery ArcGIS Map Service – OPS internal link
- Ontario Imagery Web Map Service Source – OPS internal link
Additional Documentation:
- Ontario Imagery Web Map Service Metadata Guide (PDF)
- Ontario Imagery Web Map Service Copyright Document (PDF)
- Imagery User Guide (Word)
Status: Completed: Production of the data has been completed
Maintenance and Update Frequency: Annually: Data is updated every year
Contact: Ontario Ministry of Natural Resources, Geospatial Ontario, imagery@ontario.ca
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the code and resources for the project titled "Detection of Areas with Human Vulnerability Using Public Satellite Images and Deep Learning". The goal of this project is to identify regions where individuals are living under precarious conditions and facing neglected basic needs, a situation often seen in Brazil. This concept is referred to as "human vulnerability" and is exemplified by families living in inadequate shelters or on the streets in both urban and rural areas.
Focusing on the Federal District of Brazil as the research area, this project aims to develop two novel public datasets consisting of satellite images. The datasets contain imagery captured at 50m and 100m scales, covering regions of human vulnerability, traditional areas, and improperly disposed waste sites.
The project also leverages these datasets for training deep learning models, including YOLOv7 and other state-of-the-art models, to perform image segmentation. A comparative analysis is conducted between the models using two training strategies: training from scratch with random weight initialization and fine-tuning using pre-trained weights through transfer learning.
This repository provides the code, models, and data pipelines used for training, evaluation, and performance comparison of these deep learning models.
@TECHREPORT {TechReport-Julia-Laura-HumanVulnerability-2024,
author = "Julia Passos Pontes, Laura Maciel Neves Franco, Flavio De Barros Vidal",
title = "Detecção de Áreas com Atividades de Vulnerabilidade Humana utilizando Imagens Públicas de Satélites e Aprendizagem Profunda",
institution = "University of Brasilia",
year = "2024",
type = "Undergraduate Thesis",
address = "Computer Science Department - University of Brasilia - Asa Norte - Brasilia - DF, Brazil",
month = "aug",
note = "People living in precarious conditions and with their basic needs neglected is an unfortunate reality in Brazil. This scenario will be approached in this work according to the concept of \"human vulnerability\" and can be exemplified through families who live in inadequate shelters, without basic structures and on the streets of urban or rural centers. Therefore, assuming the Federal District as the research scope, this project proposes to develop two new databases to be made available publicly, considering the map scales of 50m and 100m, and composed by satellite images of human vulnerability areas,
regions treated as traditional and waste disposed inadequately. Furthermore, using these image bases, trainings were done with the YOLOv7 model and other deep learning models for image segmentation. By adopting an exploratory approach, this work compares the results of different image segmentation models and training strategies, using random weight initialization
(from scratch) and pre-trained weights (transfer learning). Thus, the present work was able to reach maximum F1
score values of 0.55 for YOLOv7 and 0.64 for other segmentation models."
}
This project is licensed under the MIT License - see the LICENSE file for details.
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:
1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.
MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m and the figure included on this landing page provides a flow chart illustrating the four different neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images that were used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation.
The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
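The published source code is MATLAB; purely as a language-neutral illustration of the first two ensembling strategies, a numpy sketch (with a hypothetical nndr function standing in for a trained depth-retrieval network) might look like:

```python
import numpy as np

def ensemble_depths(image_stack, nndr):
    """image_stack: (time, rows, cols, bands) reflectance array.
    nndr: trained depth-retrieval function, image -> (rows, cols) depths."""
    # 1. Mean-spec: average the images over time, then retrieve depth once.
    mean_spec_depth = nndr(image_stack.mean(axis=0))
    # 2. Mean-depth: retrieve depth per image, then average the estimates.
    per_image = np.stack([nndr(img) for img in image_stack])
    mean_depth = per_image.mean(axis=0)
    return mean_spec_depth, mean_depth
```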
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Detecting Landscape Objects on Satellite Images with Artificial Intelligence
In recent years, there has been a significant increase in the use of artificial intelligence (AI) for image recognition and object detection. This technology has proven useful in a wide range of applications, from self-driving cars to facial recognition systems. In this project, the focus lies on using AI to detect landscape objects in satellite images (aerial photography angle), with the goal of creating an annotated map of The Netherlands with the coordinates of the given landscape objects.
Background Information
Problem Statement
One of the things that Naturalis does is conduct research into the distribution of wild bees (Naturalis, n.d.). For this research they use a model that predicts whether or not a certain species can occur at a given location. There is currently no way to generate an inventory of landscape features, such as the presence of trees, ponds and hedges, with their precise locations on a digital map. The current models rely on species observation data and climate variables, but it is expected that adding detailed physical landscape information could increase prediction accuracy. Common maps do not contain this level of detail, but high-resolution satellite images do.
Possible Opportunities
Based on the problem statement, Naturalis does not currently have a map with the level of detail needed to detect landscape elements. The idea emerged that it should be possible to use satellite images to find the locations of small landscape elements and produce an annotated map. By refining the accuracy of the current prediction model, researchers can gain a deeper understanding of wild bees in the Netherlands, with the goal of taking effective measures to protect wild bees and their living environment.
Goal of Project
The goal of the project is to develop an artificial intelligence model for landscape detection on satellite images, to create an annotated map of The Netherlands and thereby increase the prediction accuracy of the current model used at Naturalis. The project aims to address the lack of detailed landscape maps, which could revolutionize the way Naturalis conducts its research on wild bees. The ultimate long-term aim of the project is to use this comprehensive knowledge to protect both the wild bee population and its natural habitats in the Netherlands.
Data Collection: Google Earth
One of the main challenges of this project was the difficulty of obtaining a qualified dataset (with or without data annotation). Obtaining high-quality satellite images presents challenges in terms of cost and time: purchasing high-quality satellite images of the Netherlands would cost $1,038,575 in total. On top of that, the acquisition process for such images involves various steps, from the initial request to the actual delivery of the images, and numerous protocols and processes need to be followed.
After conducting further research, the best available solution was to use Google Earth as the primary source of data. While Google Earth imagery may not be used for commercial or promotional purposes, this project is for research purposes only, supporting Naturalis' research on wild bees, so this restriction does not apply in this case.
This map features the World Imagery map, focused on the continent of Africa. World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower resolution satellite imagery worldwide. The map includes 15m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5m SPOT Imagery (~1:288k to ~1:72k) for the world. DigitalGlobe sub-meter imagery is featured in many parts of the world, including Africa. Sub-meter Pléiades imagery is available in select urban areas. Additionally, imagery at different resolutions has been contributed by the GIS User Community. For more information on this map, view the World Imagery item description.
Metadata: This service is metadata-enabled. With the Identify tool in ArcMap or the World Imagery with Metadata web map, you can see the resolution, collection date, and source of the imagery at the location you click. Values of "99999" mean that metadata is not available for that field. The metadata applies only to the best available imagery at that location. You may need to zoom in to view the best available imagery.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to see fixed? You can use the Imagery Map Feedback web map to provide feedback on issues or errors that you see. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
High resolution orthorectified images combine the image characteristics of an aerial photograph with the geometric qualities of a map. An orthoimage is a uniform-scale image where corrections have been made for feature displacement such as building tilt and for scale variations caused by terrain relief, sensor geometry, and camera tilt. A mathematical equation based on ground control points, sensor calibration information, and a digital elevation model is applied to each pixel to rectify the image to obtain the geometric qualities of a map.
A digital orthoimage may be created from several photographs mosaicked to form the final image. The source imagery may be black-and-white, natural color, or color infrared with a pixel resolution of 1-meter or finer. With orthoimagery, the resolution refers to the distance on the ground represented by each pixel.
Cloud-free Landsat satellite imagery mosaics of the islands of the main 8 Hawaiian Islands (Hawaii, Maui, Kahoolawe, Lanai, Molokai, Oahu, Kauai and Niihau). Landsat 7 ETM (enhanced thematic mapper) is a polar orbiting 8 band multispectral satellite-borne sensor. The ETM+ instrument provides image data from eight spectral bands. The spatial resolution is 30 meters for the visible and near-infra...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.
This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the satellite derived bathymetry algorithm in its handling of dark substrates, such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.
The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).
This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
- TrueColour - Bands: B2 (blue), B3 (green), B4 (red). True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
- DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green). False colour imagery that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, so the imagery appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
- Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared). This false colour imagery focuses on identifying very shallow and dry regions. It exploits the property that the longer wavelength bands progressively penetrate the water less: B5 penetrates approximately 3 - 5 m, B8 approximately 0.5 m and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue, and dry areas are white. This imagery is intended to help identify coral cay boundaries.
For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.
All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours.
- Depth5m - This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
- Depth10m - This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
- Depth20m - This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).
For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery. This allows any mapped features to be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.
The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic as this allowed better individual image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through the Google Earth Engine.
# Methods:
The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images where there were no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images. The best were added to the R1 composite, and the next best images into the R2 composite. Consideration was made as to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed. Adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
2. Sun glint was removed from the imagery using estimates of the glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so this does not correct for finer spatial brightness variations.
5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far red and infrared bands for assessing shallow features, such as cays and islands.
6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole area for each depth.
## Cloud Masking
Prior to combining the best images each image was processed to mask out clouds and their shadows.
The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
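A rough sketch of the low-cloud shadow step in the Python earthengine-api, loosely following the common s2cloudless pattern: the 40% threshold, 400 m projection (40 pixels at 10 m scale) and 150 m buffer follow the text, but the function structure here is illustrative, not the dataset's actual code.

```python
import ee

def low_cloud_shadow_mask(s2_img, cloud_prob):
    is_cloud = cloud_prob.gt(40)  # small, low clouds
    # Direction opposite the sun, from the scene metadata.
    azimuth = ee.Number(90).subtract(
        ee.Number(s2_img.get("MEAN_SOLAR_AZIMUTH_ANGLE")))
    # Project the cloud mask up to 400 m (40 px at 10 m) away from the sun.
    shadow = (is_cloud.directionalDistanceTransform(azimuth, 40)
              .reproject(crs=s2_img.select(0).projection(), scale=10)
              .select("distance").mask())
    # Buffer clouds plus shadows by 150 m to catch missed cloud edges.
    return is_cloud.Or(shadow).focalMax(150, "circle", "meters")
```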
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask
This dataset contains Landsat 5 imagery for selected areas of Queensland, currently Torres Strait and around Lizard Island and Cape Tribulation. This collection was made as a result of the development of the Torres Strait Features dataset. It includes a number (typically 4 - 8) of selected Landsat images for each scene from the entire Landsat 5 archive. These images were selected for having low cloud cover and clear water. The aim of this collection was to allow investigation of the marine features. The complete catalogue of Landsat 5 for scenes 96_70, 96_71, 97_67, 97_68, 98_66, 98_67, 98_68, 99_66, 99_67 and 99_68 was downloaded from the Google Earth Engine site (https://console.developers.google.com/storage/earthengine-public/landsat/). The images were then processed into low resolution true colour using GDAL. They were then reviewed for picture clarity and the best ones were selected and processed at full resolution to be part of this collection. The true colour conversion was achieved by applying a level adjustment to each channel so that the tonal scaling of the channels gave a good overall colour balance. This effectively set the black point and the gain of each channel. The adjustment was applied consistently to all images:
- Red: Channel B3, Black level 8, White level 58
- Green: Channel B2, Black level 10, White level 55
- Blue: Channel B1, Black level 32, White level 121
Note: A constant level adjustment was made to the images regardless of the time of year that the images were taken. As a result, images in the summer tend to be brighter than those in the winter. After level adjustment the three channels were merged into a single colour image using gdal_merge. The black surround on the image was then made transparent using the GDAL nearblack command. This collection consists of 59 images saved as 4 channel (Red, Green, Blue, Alpha) GeoTiff images with LZW compression (lossless) and internal overviews, with a WGS 84 UTM 54N projection. Each of the individual images can be downloaded from the eAtlas map client (Overlay layers: eAtlas/Imagery Base Maps Earth Cover/Landsat 5) or as a collection of all images for each scene.
Data Location: This dataset is filed in the eAtlas enduring data repository at: data\NERP-TE\13.1_eAtlas\QLD_NERP-TE-13-1_eAtlas_Landsat-5_1988-2011
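The per-channel level adjustment above can be reproduced with the GDAL Python bindings: gdal.Translate's scaleParams maps a [black, white] source range to [0, 255], equivalent to the quoted levels. The input file names here are hypothetical single-band extracts.

```python
from osgeo import gdal

levels = {"B3": (8, 58), "B2": (10, 55), "B1": (32, 121)}  # R, G, B channels
for band, (black, white) in levels.items():
    gdal.Translate(f"{band}_scaled.tif", f"scene_{band}.tif",
                   outputType=gdal.GDT_Byte,
                   scaleParams=[[black, white, 0, 255]])
# The scaled channels would then be merged (gdal_merge) and the black
# collar made transparent (nearblack), as described above.
```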
This layer presents detectable thermal activity from VIIRS satellites for the last 7 days. VIIRS Thermal Hotspots and Fire Activity is a product of NASA’s Land, Atmosphere Near real-time Capability for EOS (LANCE) Earth Observation Data, part of NASA's Earth Science Data.
Consumption Best Practices: As a service that is subject to viral loads (very high usage), avoid adding filters that use a Date/Time type field. These queries are not cacheable and WILL be subject to rate limiting by ArcGIS Online. To accommodate filtering events by Date/Time, we encourage using the included "Age" fields that maintain the number of days or hours since a record was created or last modified, compared to the last service update. These queries fully support the ability to cache a response, allowing common query results to be supplied to many users without adding load on the service. When ingesting this service in your applications, avoid using POST requests; these requests are not cacheable and will also be subject to rate limiting measures.
Source: NASA LANCE - VNP14IMG_NRT active fire detection - World
Scale/Resolution: 375-meter
Update Frequency: Hourly using the aggregated live feed methodology
Area Covered: World
What can I do with this layer? This layer represents the most frequently updated and most detailed global remotely sensed wildfire information. Detection attributes include time, location, and intensity. It can be used to track the location of fires from the recent past, a few hours up to seven days behind real time. This layer also shows the location of wildfire over the past 7 days as a time-enabled service so that the progress of fires over that timeframe can be reproduced as an animation. The VIIRS thermal activity layer can be used to visualize and assess wildfires worldwide. However, it should be noted that this dataset contains many “false positives” (e.g., oil/natural gas wells or volcanoes) since the satellite will detect any large thermal signal. Fire points in this service are generally available within 3 1/4 hours after detection by a VIIRS device. LANCE estimates availability at around 3 hours after detection, and Esri Live Feeds updates this feature layer every 15 minutes from LANCE. Even though these data display as point features, each point in fact represents a pixel that is >= 375 m high and wide. A point feature means somewhere in this pixel at least one "hot" spot was detected which may be a fire. VIIRS is a scanning radiometer device aboard the Suomi NPP and NOAA-20 satellites that collects imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans in several visible and infrared bands. The VIIRS Thermal Hotspots and Fire Activity layer is a live feed from a subset of the overall VIIRS imagery, in particular from NASA's VNP14IMG_NRT active fire detection product.
The downloads are automatically downloaded from LANCE, NASA's near real time data and imagery site, every 15 minutes.The 375-m data complements the 1-km Moderate Resolution Imaging Spectroradiometer (MODIS) Thermal Hotspots and Fire Activity layer; they both show good agreement in hotspot detection but the improved spatial resolution of the 375 m data provides a greater response over fires of relatively small areas and provides improved mapping of large fire perimeters.Attribute informationLatitude and Longitude: The center point location of the 375 m (approximately) pixel flagged as containing one or more fires/hotspots.Satellite: Whether the detection was picked up by the Suomi NPP satellite (N) or NOAA-20 satellite (1). For best results, use the virtual field WhichSatellite, redefined by an arcade expression, that gives the complete satellite name.Confidence: The detection confidence is a quality flag of the individual hotspot/active fire pixel. This value is based on a collection of intermediate algorithm quantities used in the detection process. It is intended to help users gauge the quality of individual hotspot/fire pixels. Confidence values are set to low, nominal and high. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Nominal confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. High confidence fire pixels are associated with day or nighttime saturated pixels.Please note: Low confidence nighttime pixels occur only over the geographic area extending from 11 deg E to 110 deg W and 7 deg N to 55 deg S. This area describes the region of influence of the South Atlantic Magnetic Anomaly which can cause spurious brightness temperatures in the mid-infrared channel I4 leading to potential false positive alarms. These have been removed from the NRT data distributed by FIRMS.FRP: Fire Radiative Power. Depicts the pixel-integrated fire radiative power in MW (MegaWatts). FRP provides information on the measured radiant heat output of detected fires. The amount of radiant heat energy liberated per unit time (the Fire Radiative Power) is thought to be related to the rate at which fuel is being consumed (Wooster et. al. (2005)).DayNight: D = Daytime fire, N = Nighttime fireHours Old: Derived field that provides age of record in hours between Acquisition date/time and latest update date/time. 0 = less than 1 hour ago, 1 = less than 2 hours ago, 2 = less than 3 hours ago, and so on.Additional information can be found on the NASA FIRMS site FAQ.Note about near real time data:Near real time data is not checked thoroughly before it's posted on LANCE or downloaded and posted to the Living Atlas. NASA's goal is to get vital fire information to its customers within three hours of observation time. However, the data is screened by a confidence algorithm which seeks to help users gauge the quality of individual hotspot/fire points. Low confidence daytime fire pixels are typically associated with areas of sun glint and lower relative temperature anomaly (<15K) in the mid-infrared channel I4. Medium confidence pixels are those free of potential sun glint contamination during the day and marked by strong (>15K) temperature anomaly in either day or nighttime data. 
High confidence fire pixels are associated with day or nighttime saturated pixels.

Revisions
September 15, 2022: Updated to include the 'Hours_Old' field. Time series has been disabled by default, but is still available.
July 5, 2022: Terms of Use updated to the Esri Master License Agreement; a subscription is no longer required.

This layer is provided for informational purposes and is not monitored 24/7 for accuracy and currency. If you would like to be alerted to potential issues, or simply to see when this service will next update, please visit our Live Feed Status page.
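As an illustration of the attribute fields described above, the sketch below decodes the Satellite and Confidence codes client-side, mirroring what the WhichSatellite Arcade virtual field does on the service. The field names and code values are taken from this description; treat them as assumptions and confirm against the live schema.

    # Sketch: decode VIIRS hotspot attributes as documented above.
    # Field names and code values are assumptions drawn from the description.
    SATELLITES = {"N": "Suomi NPP", "1": "NOAA-20"}  # mirrors WhichSatellite

    def describe_hotspot(attrs: dict) -> str:
        sat = SATELLITES.get(attrs.get("SATELLITE"), "unknown satellite")
        conf = attrs.get("CONFIDENCE", "unknown")   # low / nominal / high
        frp = attrs.get("FRP", 0.0)                 # pixel-integrated MW
        when = "day" if attrs.get("DAYNIGHT") == "D" else "night"
        return f"{sat}: {conf} confidence {when}time detection, FRP {frp} MW"

    print(describe_hotspot({"SATELLITE": "N", "CONFIDENCE": "nominal",
                            "FRP": 12.5, "DAYNIGHT": "D"}))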
The Digital Geologic-GIS Map of parts of Great Sand Dunes National Park and Preserve (Sangre de Cristo Mountains and part of the Dunes), Colorado is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) an ESRI file geodatabase (gsam_geology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a KMZ/KML file for use in Google Earth; note that this format version of the map is limited in the data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with an ArcGIS Pro 3.X map file (gsam_geology.mapx) and individual Pro 3.X layer (.lyrx) files (one for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a readme file (grsa_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (grsa_geology.pdf), which contains geologic unit descriptions as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (gsam_geology_metadata_faq.pdf). Please read grsa_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to use the Google Earth data only for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products, visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program, visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (gsam_geology_metadata.txt or gsam_geology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:24,000 and United States National Map Accuracy Standards, features are within (horizontally) 12.2 meters or 40 feet of their actual location as presented by this dataset. Users should therefore not assume that features are located exactly where they are portrayed in Google Earth, ArcGIS Pro, QGIS, or other software used to display this dataset.
All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
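For users heading straight to analysis, the sketch below shows one way to load the OGC geopackage distribution into Python with GeoPandas, and reproduces the National Map Accuracy Standards figure quoted above. The geopackage file name and layer name here are assumptions inferred from the file names in this description; list the actual layers first.

    # Sketch: load a layer from the GRI geopackage with GeoPandas.
    # File and layer names are assumptions inferred from the description;
    # use fiona.listlayers() to see what the geopackage actually contains.
    import fiona
    import geopandas as gpd

    GPKG = "gsam_geology.gpkg"            # assumed name of the OGC geopackage
    print(fiona.listlayers(GPKG))         # inspect available layers first

    geology = gpd.read_file(GPKG, layer="gsamglg")  # hypothetical layer name

    # Reproduce the horizontal accuracy quoted above: for maps published at
    # 1:20,000 or smaller, NMAS allows 1/50 inch of error at map scale.
    scale = 24_000
    error_m = (1 / 50) * scale * 0.0254   # -> ~12.2 m (40 ft)
    print(f"NMAS horizontal tolerance at 1:{scale:,}: {error_m:.1f} m")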
This map features the World Imagery map, focused on the Caribbean region. World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower resolution satellite imagery worldwide. The map includes 15m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5m SPOT Imagery (~1:288k to ~1:72k) for the world. DigitalGlobe sub-meter imagery is featured in many parts of the world, including Africa. Sub-meter Pléiades imagery is available in select urban areas. Additionally, imagery at different resolutions has been contributed by the GIS User Community.
For more information on this map, view the World Imagery item description.
Metadata: This service is metadata-enabled. With the Identify tool in ArcMap or the World Imagery with Metadata web map, you can see the resolution, collection date, and source of the imagery at the location you click. Values of "99999" mean that metadata is not available for that field. The metadata applies only to the best available imagery at that location. You may need to zoom in to view the best available imagery.
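The metadata lookup described above can also be scripted with the ArcGIS REST API's identify operation. In the sketch below, the service URL is a placeholder assumption (take the real metadata-enabled endpoint from the World Imagery with Metadata web map), and the attribute names are likewise assumptions to be checked against the actual response.

    # Sketch: programmatic equivalent of the Identify tool for imagery metadata.
    # SERVICE_URL is a placeholder assumption; use the real metadata-enabled
    # service referenced by the World Imagery with Metadata web map.
    import requests

    SERVICE_URL = "https://example.arcgis.com/arcgis/rest/services/WorldImageryMetadata/MapServer"

    params = {
        "geometry": "-66.06,18.41",        # lon,lat of the clicked point
        "geometryType": "esriGeometryPoint",
        "sr": 4326,
        "layers": "all",
        "tolerance": 1,
        "mapExtent": "-66.2,18.3,-65.9,18.5",
        "imageDisplay": "800,600,96",
        "returnGeometry": "false",
        "f": "json",
    }

    result = requests.get(f"{SERVICE_URL}/identify", params=params, timeout=30).json()
    for hit in result.get("results", []):
        attrs = hit.get("attributes", {})
        # Attribute names below are assumptions; a value of "99999" means
        # metadata is not available for that field (see above).
        print(attrs.get("SRC_DATE"), attrs.get("SRC_RES"), attrs.get("SRC_DESC"))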
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to see fixed? You can use the Imagery Map Feedback web map to provide feedback on issues or errors that you see. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
World Imagery provides one meter or better satellite and aerial imagery for most of the world's landmass and lower resolution satellite imagery worldwide. The map is currently composed of the following sources:
Worldwide 15-m resolution TerraColor imagery at small and medium map scales.
Maxar imagery basemap products around the world: Vivid Premium at 15-cm HD resolution for select metropolitan areas, Vivid Advanced 30-cm HD for more than 1,000 metropolitan areas, and Vivid Standard from 1.2-m to 0.6-m resolution for most of the world, with 30-cm HD across the United States and parts of Western Europe. More information on the Maxar products is included below.
High-resolution aerial photography contributed by the GIS User Community. This imagery ranges from 30-cm to 3-cm resolution. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program.

Maxar Basemap Products
Vivid Premium: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product provides 15-cm HD resolution imagery.
Vivid Advanced: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product includes a mix of native 30-cm and 30-cm HD resolution imagery.
Vivid Standard: Provides a visually consistent and continuous image layer over large areas through advanced image mosaicking techniques, including tonal balancing and seamline blending across thousands of image strips. Available from 1.2-m down to 30-cm HD. More on Maxar HD.

Imagery Updates: You can use the Updates Mode in the World Imagery Wayback app to learn more about recent and pending updates. Accessing this information requires a user login with an ArcGIS organizational account.

Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.

Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map.

Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.