Declassified satellite images provide an important worldwide record of land-surface change. Following the success of the first release of declassified satellite photography in 1995, images from U.S. military intelligence satellites KH-7 and KH-9 were declassified in accordance with Executive Order 12951 in 2002. The data were originally used for cartographic information and reconnaissance for U.S. intelligence agencies. Since the images could be of historical value for global change research and were no longer critical to national security, the collection was made available to the public. Keyhole (KH) satellite systems KH-7 and KH-9 acquired photographs of the Earth's surface with a telescopic camera system and returned the exposed film in recovery capsules. The capsules, or "buckets," were de-orbited and caught in mid-air by aircraft as they parachuted to Earth. The exposed film was developed, and the images were analyzed for a range of military applications. The KH-7 surveillance system was a high-resolution imaging system that was operational from July 1963 to June 1967. Approximately 18,000 black-and-white images and 230 color images are available from the 38 missions flown during this program. Key features of this program were a larger area of coverage and improved ground resolution. The cameras acquired imagery in continuous lengthwise sweeps of the terrain. KH-7 images are 9 inches wide, vary in length from 4 inches to 500 feet, and have a resolution of 2 to 4 feet. The KH-9 mapping program was operational from March 1973 to October 1980 and was designed to support mapping requirements and exact positioning of geographical points for the military. This was accomplished by using image overlap for stereo coverage and by using a camera system with a reseau grid to correct image distortion. The KH-9 framing cameras produced 9 x 18 inch imagery at a resolution of 20-30 feet. Approximately 29,000 mapping images were acquired from 12 missions. The original film sources are maintained by the National Archives and Records Administration (NARA). Duplicate film sources held in the USGS EROS Center archive are used to produce digital copies of the imagery.
World Imagery provides one meter or better satellite and aerial imagery for most of the world's landmass and lower-resolution satellite imagery worldwide. The map is currently composed of the following sources:

- Worldwide 15-m resolution TerraColor imagery at small and medium map scales.
- Maxar imagery basemap products around the world: Vivid Premium at 15-cm HD resolution for select metropolitan areas, Vivid Advanced 30-cm HD for more than 1,000 metropolitan areas, and Vivid Standard from 1.2-m to 0.6-m resolution for most of the world, with 30-cm HD across the United States and parts of Western Europe. More information on the Maxar products is included below.
- High-resolution aerial photography contributed by the GIS User Community. This imagery ranges from 30-cm to 3-cm resolution. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program.

Maxar Basemap Products
Vivid Premium: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product provides 15-cm HD resolution imagery.
Vivid Advanced: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product includes a mix of native 30-cm and 30-cm HD resolution imagery.
Vivid Standard: Provides a visually consistent and continuous image layer over large areas through advanced image mosaicking techniques, including tonal balancing and seamline blending across thousands of image strips. Available from 1.2-m down to 30-cm HD. More on Maxar HD.

Imagery Updates: You can use the Updates Mode in the World Imagery Wayback app to learn more about recent and pending updates. Accessing this information requires a user login with an ArcGIS organizational account.

Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.

Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map.

Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Metadata: NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 1b Radiances. More information about this imagery can be found here.

This satellite imagery combines data from the NOAA GOES East and West satellites and the JMA Himawari satellite, providing full coverage of weather events for most of the world, from the west coast of Africa west to the east coast of India. The tile service updates to the most recent image every 10 minutes at 1.5 km per pixel resolution.

The infrared (IR) band detects radiation that is emitted by the Earth's surface, atmosphere, and clouds in the "infrared window" portion of the spectrum. The radiation has a wavelength near 10.3 micrometers, and the term "window" means that it passes through the atmosphere with relatively little absorption by gases such as water vapor. It is useful for estimating the emitting temperature of the Earth's surface and cloud tops. A major advantage of the IR band is that it can sense energy at night, so this imagery is available 24 hours a day.

The Advanced Baseline Imager (ABI) instrument samples the radiance of the Earth in sixteen spectral bands using several arrays of detectors in the instrument's focal plane. Single reflective band ABI Level 1b Radiance Products (channels 1-6, with approximate center wavelengths of 0.47, 0.64, 0.865, 1.378, 1.61, and 2.25 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for visible and near-infrared bands. Single emissive band ABI L1b Radiance Products (channels 7-16, with approximate center wavelengths of 3.9, 6.185, 6.95, 7.34, 8.5, 9.61, 10.35, 11.2, 12.3, and 13.3 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for IR bands. Detector samples are compressed, packetized, and downlinked to the ground station as Level 0 data for conversion to calibrated, geolocated pixels (Level 1b Radiance data). The detector samples are decompressed, radiometrically corrected, navigated, and resampled onto an invariant output grid, referred to as the ABI fixed grid.

McIDAS merge technique and color mapping provided by the Cooperative Institute for Meteorological Satellite Studies (Space Science and Engineering Center, University of Wisconsin - Madison) using satellite data from SSEC Satellite Data Services and the McIDAS visualization software.
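For readers who want to work with the underlying Level 1b granules directly rather than the tile service, here is a minimal sketch that opens one ABI radiance file from NOAA's public Amazon S3 archive. The bucket layout and the example date/hour are assumptions for illustration (requires s3fs, xarray, and h5netcdf):

```python
# Sketch: open a GOES-16 ABI L1b full-disk radiance file from NOAA's public
# AWS archive with anonymous access. Adjust product, year, day-of-year, hour.
import s3fs
import xarray as xr

fs = s3fs.S3FileSystem(anon=True)

# List band-13 (10.3 um "clean IR window") files for one example hour.
paths = fs.glob("noaa-goes16/ABI-L1b-RadF/2023/001/12/OR_ABI-L1b-RadF-M6C13*.nc")

with fs.open(paths[0], "rb") as f:
    ds = xr.open_dataset(f, engine="h5netcdf")

rad = ds["Rad"]  # outgoing radiance on the ABI fixed grid
print(rad.shape, ds.attrs.get("title"))
```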
For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image 'tiles' and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

1. INPUT 200 SATELLITE IMAGES
The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively 'screenshots' (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1 m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limi...

# Satellite images and road-reference data for AI-based road mapping in Equatorial Asia
https://doi.org/10.5061/dryad.bvq83bkg7
1. INTRODUCTION

For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image 'tiles' and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

2. FURTHER INFORMATION

The following is a summary of our data. Fuller details on these data and their underlying methodology are given in the corresponding article, cited below:

Sloan, S., Talkhani, R.R., Huang, T., Engert, J., Laurance, W.F. (2023) Mapping remote roads using artificial intelligence and satellite imagery. Remote Sensing. 16(5): 839. https://doi.org/10.3390/rs16050839
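As an illustration of how image tiles can be produced from fixed-size screenshots such as these, here is a minimal sketch; the 256-pixel tile size, file names, and naming scheme are assumptions for illustration, and the dataset's actual tiling procedure is described in the cited article:

```python
# Sketch: cut a 1920x886 screenshot into fixed-size tiles for model training.
from pathlib import Path
from PIL import Image

def tile_image(path, out_dir, size=256):
    img = Image.open(path)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    w, h = img.size
    # Step through the image in non-overlapping size x size blocks.
    for top in range(0, h - size + 1, size):
        for left in range(0, w - size + 1, size):
            tile = img.crop((left, top, left + size, top + size))
            tile.save(out / f"{Path(path).stem}_{top}_{left}.png")

tile_image("input_image_001.png", "tiles/")  # placeholder file name
```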
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
This dataset collection contains A0 maps of the Keppel Island region based on satellite imagery and fine-scale habitat mapping of the islands and marine environment. This collection provides the source satellite imagery used to produce these maps and the habitat mapping data.
The imagery used to produce these maps was developed by blending high-resolution imagery (1 m) from ArcGIS Online with a clear-sky composite derived from Sentinel 2 imagery (10 m). The Sentinel 2 imagery was used to achieve full coverage of the entire region, while the high-resolution imagery was used to provide detail around island areas.
The blended imagery is a derivative product of the Sentinel 2 imagery and ArcGIS Online imagery, produced using Photoshop to manually blend the best portions of each image into the final product. The imagery is provided for the sole purpose of reproducing the A0 maps.
Methods:
The high-resolution satellite composite was developed by manual masking and blending of a Sentinel 2 composite image and high-resolution imagery from ArcGIS Online World Imagery (2019).
The Sentinel 2 composite was produced by statistically combining the clearest 10 images from 2016-2019. These images were manually chosen based on their very low cloud cover, lack of sun glint, and clear water conditions, and were then combined to remove clouds and reduce noise in the image.
The processing of the images was performed using a script in Google Earth Engine. The script combines the manually chosen imagery to estimate the clearest imagery. The dates of the images were chosen using the EOBrowser (https://www.sentinel-hub.com/explore/eobrowser) to preview all the Sentinel 2 imagery from 2015-2019. The images that were mostly free of clouds, with little or no sun glint, were recorded. Each of these dates was then viewed in Google Earth Engine with high contrast settings to identify images that had high water surface noise due to algal blooms, waves, or re-suspension. These were excluded from the list. All the images were then combined by applying a histogram analysis of each pixel, with the final image using the 40th percentile of the time series of the brightness of each pixel. This approach helps exclude effects from clouds.
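The Google Earth Engine script itself is not reproduced in this record, but a minimal sketch of the percentile-composite step in the Earth Engine Python API might look like the following; the scene ID is a hypothetical placeholder, whereas the real script used the manually selected 2016-2019 scenes described above:

```python
# Sketch of the per-pixel percentile composite in the GEE Python API.
import ee
ee.Initialize()

scene_ids = [
    "COPERNICUS/S2/20180612T002101_20180612T002056_T56KKC",  # hypothetical ID
    # ... remaining manually selected clear-sky scenes
]
collection = ee.ImageCollection([ee.Image(i) for i in scene_ids])

# 40th percentile of brightness over the time series for each pixel; darker
# than the median, which helps suppress residual cloud and glint artefacts.
composite = collection.reduce(ee.Reducer.percentile([40]))
```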
The contrast of the image was stretched to highlight the marine features, whilst retaining detail in the land features. This was done by choosing a black point for each channel that would provide a dark setting for deep clear water. Gamma correction was then used to lighten the dark water features, whilst not over-exposing the brighter shallow areas.
Both the high-resolution satellite imagery and the Sentinel 2 imagery were combined at 1 m pixel resolution. The resolution of the Sentinel 2 tiles was upsampled to match the resolution of the high-resolution imagery. These two sets of imagery were then layered in Photoshop, and the brightness of the high-resolution satellite imagery was adjusted to match the Sentinel 2 imagery. A mask was then used to retain and blend the imagery that showed the best detail of each area. The blended tiles were then merged with the overall area imagery by performing a GDAL merge, resulting in an upscaling of the Sentinel 2 imagery to 1 m resolution.
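A minimal sketch of that final merge step from Python is shown below, with placeholder file names; the original workflow used a GDAL merge, and gdal.Warp is shown here as one way to produce the same upscaled 1 m mosaic:

```python
# Sketch: mosaic the Sentinel 2 backdrop with the blended 1 m tiles,
# resampling the 10 m Sentinel 2 imagery onto the 1 m output grid.
from osgeo import gdal

gdal.UseExceptions()
gdal.Warp(
    "keppel_blended_1m.tif",
    ["sentinel2_composite.tif", "blended_tile_01.tif", "blended_tile_02.tif"],
    xRes=1.0, yRes=1.0,        # output at 1 m pixel resolution
    resampleAlg="bilinear",    # upsample the coarser Sentinel 2 imagery
)
```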
Habitat Mapping:
A 5 m resolution habitat mapping was developed based on the satellite imagery, available aerial imagery, and monitoring site information. This habitat mapping was developed to help with monitoring site selection and for the mapping workshop with the Woppaburra TOs on North Keppel Island in Dec 2019.
The habitat maps should be considered draft, as they do not incorporate all available in-water observations. They are primarily based on aerial and satellite images.
The habitat mapping includes: Asphalt, Buildings, Mangrove, Cabbage-tree palm, Sheoak, Other vegetation, Grass, Salt Flat, Rock, Beach Rock, Gravel, Coral, Sparse coral, Unknown not rock (macroalgae on rubble), Marine feature (rock).
An assumed layer ordering allowed the digitisation of these features to be sped up. For example, if coral was growing over a marine feature, the boundary of the marine feature would be digitised, then the coral feature, but not the boundary between the marine feature and the coral: because the coral layer sits on top of the marine feature layer, we knew the coral would be cut out from the marine feature, saving time in digitising this boundary. Digitisation was performed on an iPad using Procreate software and an Apple Pencil to draw the features as layers in a drawing. Due to memory limitations of the iPad, the region was digitised using 6000x6000 pixel tiles. The raster images were converted back to polygons and the tiles merged together.
A Python script was then used to clip the layer sandwich so that there is no overlap between feature types.
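The script is not included in this record; a minimal sketch of the clipping logic, assuming geopandas and an illustrative layer priority, could look like this:

```python
# Sketch: clip a "layer sandwich" so no feature overlaps a feature above it.
# Layer names and the priority order are illustrative assumptions.
import geopandas as gpd
import pandas as pd

priority = ["coral", "sparse_coral", "marine_feature"]  # topmost layer first

clipped, mask = [], None
for name in priority:
    layer = gpd.read_file(f"{name}.shp")
    if mask is not None:
        # Remove any part already claimed by a higher-priority layer.
        layer["geometry"] = layer.geometry.difference(mask)
    clipped.append(layer.assign(Type=name))
    union = layer.geometry.union_all()  # geopandas >= 0.14
    mask = union if mask is None else mask.union(union)

result = pd.concat(clipped, ignore_index=True)
```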
Habitat Validation:
Only limited validation was performed on the habitat map. To assist in its development, nearly every YouTube video about the Keppel Islands available at the time of development (2019) was reviewed and, where possible, georeferenced to provide a better understanding of the local habitats at the scale of the mapping, prior to the mapping being conducted. Several validation points were observed during the workshop. The map should be considered as largely unvalidated.
data/coastline/Keppels_AIMS_Coastline_2017.shp:
The coastline dataset was produced by starting with the Queensland coastline dataset by DNRME (Downloaded from http://qldspatial.information.qld.gov.au/catalogue/custom/detail.page?fid={369DF13C-1BF3-45EA-9B2B-0FA785397B34} on 31 Aug 2019). This was then edited to work at a scale of 1:5000, using the aerial imagery from Queensland Globe as a reference and a high-tide satellite image from 22 Feb 2015 from Google Earth Pro. The perimeter of each island was redrawn. This line feature was then converted to a polygon using the "Lines to Polygon" QGIS tool. The Keppel island features were then saved to a shapefile by exporting with a limited extent.
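The lines-to-polygons conversion was done with the QGIS "Lines to Polygon" tool; an equivalent sketch using shapely's polygonize (file names are placeholders, and the linework is assumed to be properly noded and closed) is:

```python
# Sketch: convert the edited coastline linework to island polygons.
import geopandas as gpd
from shapely.ops import polygonize

lines = gpd.read_file("Keppels_coastline_lines.shp")
polygons = list(polygonize(lines.geometry))
gpd.GeoDataFrame(geometry=polygons, crs=lines.crs).to_file(
    "Keppels_AIMS_Coastline_2017.shp")
```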
data/labels/Keppel-Is-Map-Labels.shp:
This contains 70 named places in the Keppel island region. These names were sourced from literature and existing maps. Unfortunately, no provenance of the names was recorded. These names are not official. This includes the following attributes:
- Name: Name of the location. Examples: Bald, Bluff
- NameSuffix: End of the name, which is often a description of the feature type. Examples: Rock, Point
- TradName: Traditional name of the location
- Scale: Map scale where the label should be displayed.
data/lat/Keppel-Is-Sentinel2-2016-19_B4-LAT_Poly3m_V3.shp:
This corresponds to a rough estimate of the LAT contours around the Keppel Islands. LAT was estimated from tidal differences in Sentinel-2 imagery and light penetration in the red channel. Note that this estimate is only roughly calibrated and should be used as a guide only. Only one rough in-situ validation was performed at low tide on Ko-no-mie at the edge of the reef near the education centre. This indicated that the LAT estimate was within a depth error range of about ±0.5 m.
data/habitat/Keppels_AIMS_Habitat-mapping_2019.shp:
This shapefile contains the mapped land and marine habitats. The classification type is recorded in the Type attribute.
Format:
GeoTiff (Internal JPEG format - 538 MB)
PDF (A0 regional maps - ~30MB each)
Shapefile (Habitat map, Coastline, Labels, LAT estimate)
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Keppels_AIMS_Regional-maps
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This application is intended for informational purposes only and is not an operational product. The tool provides the capability to access, view, and interact with satellite imagery, and shows the latest view of Earth as it appears from space.

For additional imagery from NOAA's GOES East and GOES West satellites, please visit our Imagery and Data page or our cooperative institute partners at CIRA and CIMSS. This website should not be used to support operational observation, forecasting, emergency, or disaster mitigation operations, either public or private. In addition, we do not provide weather forecasts on this site — that is the mission of the National Weather Service. Please contact them for any forecast questions or issues.

Using the Maps

What does the Layering Options icon mean?
The Layering Options widget provides a list of operational layers and their symbols, and allows you to turn individual layers on and off. The order in which layers appear in this widget corresponds to the layer order in the map. The top layer 'checked' will indicate what you are viewing in the map, and you may be unable to view the layers below. Layers with expansion arrows indicate that they contain sublayers or subtypes.

Do these maps work on mobile devices and different browsers?
Yes!

Why are there black stripes / missing data on the map?
NOAA Satellite Maps is for informational purposes only and is not an operational product; there are times when data is not available.

Why are the North and South Poles dark?
The raw satellite data used in these web map apps goes through several processing steps after it has been acquired from space. These steps translate the raw data into geospatial data and imagery projected onto a map. NOAA Satellite Maps uses the Mercator projection to portray the Earth's 3D surface in two dimensions. This Mercator projection does not include data at 80 degrees north and south latitude due to distortion, which is why the poles appear black in these maps. NOAA's polar satellites are a critical resource in acquiring operational data at the poles of the Earth, and some of this imagery is available on our website.

Why does the imagery load slowly?
This map viewer does not load pre-generated web-ready graphics and animations like many satellite imagery apps you may be used to seeing. Instead, it downloads geospatial data from our data servers through a Map Service, and the app in your browser renders the imagery in real time. Each pixel needs to be rendered and geolocated on the web map for it to load.

How can I get the raw data and download the GIS World File for the images I choose?
NOAA Satellite Maps offers an interoperable map service to the public.
Use the camera tool to select the area of the map you would like to capture and click 'download GIS WorldFile.' The geospatial data Map Service for the NOAA Satellite Maps GOES satellite imagery is located on our Satellite Maps ArcGIS REST Web Service; a minimal request example is sketched below. We support open information sharing and integration through this RESTful service, which can be used by a multitude of GIS software packages and web map applications (both open and licensed). Data is for display purposes only and should not be used operationally.

Are there any restrictions on using this imagery?
NOAA supports an open data policy and we encourage publication of imagery from NOAA Satellite Maps; when doing so, please cite it as "NOAA" and also consider including a permalink to allow others to explore the imagery. For acknowledgment in scientific journals, please use: We acknowledge the use of imagery from the NOAA Satellite Maps application: LINK. This imagery is not copyrighted. You may use this material for educational or informational purposes, including photo collections, textbooks, public exhibits, computer graphical simulations and internet web pages. This general permission extends to personal web pages.

About this satellite imagery

What am I looking at in these maps?

What am I seeing in the NOAA Satellite Maps 3D Scene?
There are four options to choose from, each depicting a different view of the Earth using the latest satellite imagery available. The first three views show the Western Hemisphere and the Pacific Ocean, as captured by the NOAA GOES East (GOES-16) and GOES West (GOES-17) satellites. These images are updated approximately every 15 minutes as we receive data from the satellites in space. The three views show GeoColor, infrared and water vapor. See our other FAQs to learn more about what the imagery layering options depict. The fourth option is a global view, captured by NOAA's polar-orbiting satellites (NOAA/NASA Suomi NPP and NOAA-20). The polar satellites circle the globe 14 times a day, taking in one complete view of the Earth in daylight every 24 hours. This composite view is what is projected onto the 3D map scene each morning, so you are seeing how the Earth looked from space one day ago.

What am I seeing in the Latest 24 Hrs. GOES Constellation Map?
In this map you are seeing the past 24 hours (updated approximately every 15 minutes) of the Western Hemisphere and Pacific Ocean, as seen by the NOAA GOES East (GOES-16) and GOES West (GOES-17) satellites. In this map you can also view three different 'layers': GeoColor, infrared, and water vapor. (Please note: GOES West imagery is currently only available in GeoColor. The infrared and water vapor imagery will be available in Spring 2019.) This map shows the coverage area of the GOES East and GOES West satellites. GOES East, which orbits the Earth from 75.2 degrees west longitude, provides a continuous view of the Western Hemisphere, from the West Coast of Africa to North and South America. GOES West, which orbits the Earth at 137.2 degrees west longitude, sees western North and South America and the central and eastern Pacific Ocean all the way to New Zealand.

What am I seeing in the Global Archive Map?
In this map, you will see the whole Earth as captured each day by our polar satellites, based on our multi-year archive of data. This data is provided by NOAA's polar orbiting satellites (NOAA/NASA Suomi NPP from January 2014 to April 19, 2018 and NOAA-20 from April 20, 2018 to today).
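As referenced above, a rendered snapshot can be requested from an ArcGIS REST MapServer "export" endpoint. The sketch below illustrates the pattern; the service URL is a placeholder, not the actual NOAA endpoint, so use the link provided in the application:

```python
# Sketch: request a rendered image from an ArcGIS REST MapServer endpoint.
import requests

BASE = "https://example.noaa.gov/arcgis/rest/services/GOES/MapServer/export"  # placeholder URL

params = {
    "bbox": "-100,20,-60,50",  # lon/lat extent of interest
    "bboxSR": 4326,
    "size": "1024,768",
    "format": "png",
    "f": "image",              # return the rendered image directly
}
resp = requests.get(BASE, params=params, timeout=60)
resp.raise_for_status()
with open("goes_snapshot.png", "wb") as f:
    f.write(resp.content)
```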
The polar satellites circle the globe 14 times a day, taking in one complete view of the Earth every 24 hours. This complete view is what is projected onto the flat map scene each morning.

What does the GOES GeoColor imagery show?
The 'Merged GeoColor' map shows the coverage area of the GOES East and GOES West satellites and includes the entire Western Hemisphere and most of the Pacific Ocean. This imagery uses a combination of visible and infrared channels and is updated approximately every 15 minutes in real time. GeoColor imagery approximates how the human eye would see Earth from space during daylight hours, and is created by combining several of the spectral channels from the Advanced Baseline Imager (ABI) – the primary instrument on the GOES satellites. The wavelengths of reflected sunlight from the red and blue portions of the spectrum are merged with a simulated green wavelength component, creating RGB (red-green-blue) imagery. At night, infrared imagery shows high clouds as white and low clouds and fog as light blue. The static city lights background basemap is derived from a single composite image from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day Night Band; as a result, temporary power outages will not be visible.

What does the GOES infrared map show?
The 'GOES infrared' map displays heat radiating off of clouds and the surface of the Earth and is updated every 15 minutes in near real time. This imagery is derived from band #13, one of the sixteen channels on the Advanced Baseline Imager, the primary instrument on both the GOES East and GOES West satellites. Infrared satellite imagery can be "colorized" or "color-enhanced" to bring out details in cloud patterns. These color enhancements are useful to meteorologists because they signify "brightness temperatures," which are approximately the temperature of the radiating body, whether it be a cloud or the Earth's surface. In this imagery, yellow and orange areas signify taller/colder clouds, which often correlate with more active weather systems. Blue areas are usually "clear sky," while pale white areas typically indicate low-level clouds. During a hurricane, cloud tops will be higher (and colder), and therefore appear dark red.

How does infrared satellite imagery work?
The infrared (IR) band detects radiation that is emitted by the Earth's surface, atmosphere and clouds, in the "infrared window" portion of the spectrum. The radiation has a wavelength near 10.3 micrometers, and the term "window" means that it passes through the atmosphere with relatively little absorption by gases such as water vapor. It is useful for estimating the emitting temperature of the Earth's surface and cloud tops. A major advantage of the IR band is that it can sense energy at night, so this imagery is available 24 hours a day.

What do the colors on the infrared map represent?
In this imagery, yellow and orange areas signify taller/colder clouds, which often correlate with more active weather systems. Blue areas are clear sky, while pale white areas indicate low-level clouds, or potentially frozen surfaces.
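For reference, the brightness temperatures mentioned above are obtained from emissive-band radiance with the inverse Planck function. A sketch follows; the band-13 calibration constants shown are approximate and for illustration only, as the authoritative values are carried in each L1b file's metadata:

```python
# Sketch: ABI emissive-band radiance -> brightness temperature (kelvin).
import numpy as np

fk1, fk2 = 10803.3, 1392.74   # planck_fk1, planck_fk2 (band 13, approximate)
bc1, bc2 = 0.07550, 0.99975   # planck_bc1, planck_bc2 (approximate)

def brightness_temperature(rad):
    """Radiance in mW m-2 sr-1 (cm-1)-1 -> brightness temperature in K."""
    return (fk2 / np.log(fk1 / rad + 1.0) - bc1) / bc2
```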
What does the GOES water vapor map layer show?
The GOES 'water vapor' map displays the concentration and location of clouds and water vapor in the atmosphere and shows data from both the GOES East and GOES West satellites. Imagery is updated approximately every 15 minutes in real time.
This data set includes: (1) fine-scale snow and land cover maps from two mountainous study sites in the Western U.S., produced using machine-learning models trained to extract land cover data from WorldView-2 and WorldView-3 stereo panchromatic and multispectral images; (2) binary snow maps derived from the land cover maps; and (3) 30 m and 465 m fractional snow-covered area (fSCA) maps, produced via downsampling of the binary snow maps. The land cover classification maps feature between three and six classes common to mountainous regions and integral for accurate stereo snow depth mapping: illuminated snow, shaded snow, vegetation, exposed surfaces, surface water, and clouds. Also included are Landsat and MODSCAG fSCA map products. The source imagery for these data are the Maxar WorldView-2 and Maxar WorldView-3 Level-1B 8-band multispectral images, orthorectified and converted to top-of-atmosphere reflectance. These Level-1B images are available under the NGA NextView/EnhancedView license.
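As a sketch of how a binary snow map can be aggregated to fractional snow-covered area (fSCA) by block averaging, consider the following; the input resolution and block size are illustrative assumptions, not the exact production parameters of this data set:

```python
# Sketch: fine-scale binary snow map -> fSCA via block averaging.
import numpy as np

def binary_to_fsca(snow, block):
    """snow: 2-D array of 0/1 values; block: pixels per output cell side."""
    h, w = snow.shape
    h, w = h - h % block, w - w % block            # trim partial edge blocks
    blocks = snow[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))                # fraction of snow pixels

snow = np.random.randint(0, 2, (600, 600))         # stand-in binary snow map
fsca_30m = binary_to_fsca(snow, 15)                # e.g. 2 m pixels -> 30 m cells
```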
This map was created to be used in the CBF website map gallery as updated satellite imagery content for the Chesapeake Bay watershed. This map includes the Chesapeake Bay watershed boundary, state boundaries that intersect the watershed boundary, and NLCD 2019 Land Cover data, as well as an imagery background. This will be shared as a web application on the CBF website within the map gallery subpage.
Suggested use: Use the tiled map service for large-scale mapping when high-resolution color imagery is needed. A web app to view tile and block metadata such as year, sensor, and cloud cover can be found here.

- Coverage: State of Alaska
- Product Type: Tile Cache
- Image Bands: RGB
- Spatial Resolution: 50 cm
- Accuracy: 5 m CE90 or better
- Cloud Cover: <10% overall
- Off-Nadir Angle: <30 degrees
- Sun Elevation: >30 degrees

WMS version of this data: https://geoportal.alaska.gov/arcgis/services/ahri_2020_rgb_cache/MapServer/WMSServer?request=GetCapabilities&service=WMS
WMTS version of this data: https://geoportal.alaska.gov/arcgis/rest/services/ahri_2020_rgb_cache/MapServer/WMTS/1.0.0/WMTSCapabilities.xml
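A minimal sketch of querying the WMS endpoint above with OWSLib is shown below; the layer name is discovered from the capabilities document, and the bounding box is an arbitrary example:

```python
# Sketch: fetch an image from the published WMS endpoint with OWSLib.
from owslib.wms import WebMapService

url = ("https://geoportal.alaska.gov/arcgis/services/"
       "ahri_2020_rgb_cache/MapServer/WMSServer")
wms = WebMapService(url, version="1.1.1")
print(list(wms.contents))  # discover the available layer names

img = wms.getmap(
    layers=[list(wms.contents)[0]],
    srs="EPSG:4326",
    bbox=(-150.0, 61.0, -149.5, 61.3),  # example extent near Anchorage
    size=(1024, 768),
    format="image/png",
)
with open("ahri_tile.png", "wb") as f:
    f.write(img.read())
```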
Satellite image map of Amanda Bay, Antarctica. This map was produced for the Australian Antarctic Division by AUSLIG (now Geoscience Australia) Commercial, in Australia, in 1991. The map is at a scale of 1:100 000, and was produced from Landsat 4 TM imagery (124-108, 124-109). It is projected on a Transverse Mercator projection, and shows traverses/routes/foot track charts, glaciers/ice shelves, penguin colonies, stations/bases, runways/helipads, and gives some historical text information. The map has both geographical and UTM co-ordinates.
The objective of the CSEAS metadatabase is to provide an Internet catalogue that documents the spatial data holdings and other media collections related to Southeast Asian Studies. The collection is stored in a web database using the Dublin Core standard and Unicode characters (UTF-8). This allows us to easily employ Internet resources for discovery and electronic document interchange in multiple languages. At the beginning of 2002, we already had 429 satellite images, 715 maps, 2591 photos and GPS photos, 6810 anthropology files, and some SEAS articles. The collection continues to grow over time.
This imagery service contains natural color orthophotos covering counties in north Florida that had imagery captured from October 2012 until spring 2013. An orthophoto is remotely sensed image data in which displacement of features in the image caused by terrain relief and sensor orientation has been mathematically removed. Orthophotography combines the image characteristics of a photograph with the geometric qualities of a map. Counties covered in this dataset are: Bay, Bradford, Calhoun, Columbia, Dixie, Duval, Escambia, Franklin, Gadsden, Gilchrist, Gulf, Hamilton, Holmes, Jackson, Jefferson, Lafayette, Levy, Madison, Okaloosa, Palm Beach (partial), Santa Rosa, Suwannee, Taylor, Union, Wakulla, Walton, and Washington. Please contact GIS.Librarian@FloridaDEP.gov for more information.
https://cubig.ai/store/terms-of-service
1) Data Introduction • The Satellite Image Classification Dataset is a benchmark image classification dataset constructed using satellite remote sensing imagery. It includes a total of four land surface classes—cloudy, desert, green_area, and water—collected from various sensor-based images and Google Maps snapshots. The dataset is designed for training and evaluating image-based scene recognition models.
2) Data Utilization (1) Characteristics of the Satellite Image Classification Dataset: • The dataset was collected with the aim of automatic interpretation of satellite imagery and consists of a combination of sensor-based images and map snapshots, offering a realistic representation of real-world conditions. • All images are of fixed resolution and include diverse landform features, making the dataset suitable for classification experiments across different environments and for evaluating model generalization performance.
(2) Applications of the Satellite Image Classification Dataset: • Land surface classification model training: Can be used in experiments to classify various types of terrain such as buildings, farmland, and roads. • Research and application in geospatial information analysis: Useful for developing models that support spatial decision-making through tasks such as land use monitoring, urban structure analysis, and land surface inference.
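As an illustration of the first application above, a minimal training sketch with torchvision is shown below; the directory layout (one folder per class) and the hyperparameters are assumptions for illustration, not part of the dataset documentation:

```python
# Sketch: fine-tune a four-class scene classifier (cloudy, desert,
# green_area, water) on a folder-per-class copy of the dataset.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=tfm)  # assumed layout
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 4)   # four land surface classes
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:                   # one epoch shown
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```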
World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower-resolution satellite imagery worldwide. The map includes 15-m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5-m SPOT imagery (~1:288k to ~1:72k) for the world. The map features 0.5-m resolution imagery in the continental United States and parts of Western Europe from DigitalGlobe. Additional DigitalGlobe sub-meter imagery is featured in many parts of the world. In the United States, 1-meter or better resolution NAIP imagery is available in some areas. In other parts of the world, imagery at different resolutions has been contributed by the GIS User Community. In select communities, very high resolution imagery (down to 0.03 m) is available down to ~1:280 scale. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program. View the list of Contributors for the World Imagery Map.

Coverage: View the links below to learn more about recent updates and map coverage: What's new in World Imagery; World coverage map.

Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.

Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map. A similar raster web map, Imagery with Labels, is also available.

Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The satellite image of Canada is a composite of several individual satellite images from the Advanced Very High Resolution Radiometer (AVHRR) sensor on board various NOAA satellites. The colours reflect differences in the density of vegetation cover: bright green for dense vegetation in humid southern regions; yellow for semi-arid and for mountainous regions; brown for the north where vegetation cover is very sparse; and white for snow and ice. An inset map shows a satellite image mosaic of North America with 35 land cover classes, based on data from the SPOT satellite VGT (vegetation) sensor.
This data set contains a time series of snow depth maps and related intermediary snow-on and snow-off DEMs for Grand Mesa and the Banded Peak Ranch areas of Colorado derived from very-high-resolution (VHR) satellite stereo images and lidar point cloud data. Two of the snow depth maps coincide temporally with the 2017 NASA SnowEx Grand Mesa field campaign, providing a comparison between the satellite derived snow depth and in-situ snow depth measurements. The VHR stereo images were acquired each year between 2016 and 2022 during the approximate timing of peak snow depth by the Maxar WorldView-2, WorldView-3, and CNES/Airbus Pléiades-HR 1A and 1B satellites, while lidar data was sourced from the USGS 3D Elevation Program.
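The core differencing step behind such snow depth maps can be sketched as follows; file names are placeholders, and the snow-on and snow-off DEMs are assumed to be floating-point rasters already co-registered on a common grid:

```python
# Sketch: snow depth = snow-on DEM minus snow-off (bare-ground) DEM.
import numpy as np
import rasterio

with rasterio.open("snow_on_dem.tif") as on, \
     rasterio.open("snow_off_dem.tif") as off:
    depth = on.read(1) - off.read(1)
    depth[depth < 0] = np.nan        # mask physically impossible depths
    profile = on.profile

with rasterio.open("snow_depth.tif", "w", **profile) as dst:
    dst.write(depth.astype(profile["dtype"]), 1)
```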
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Detecting Landscape Objects on Satellite Images with Artificial Intelligence. In recent years, there has been a significant increase in the use of artificial intelligence (AI) for image recognition and object detection. This technology has proven to be useful in a wide range of applications, from self-driving cars to facial recognition systems. This project focuses on using AI to detect landscape objects in satellite images (viewed from an aerial photography angle), with the goal of creating an annotated map of the Netherlands containing the coordinates of the detected landscape objects.
Background Information
Problem Statement: One of the things that Naturalis does is conduct research into the distribution of wild bees (Naturalis, n.d.). For this research they use a model that predicts whether or not a certain species can occur at a given location. There is at present no way to generate a digital inventory of landscape features, such as the presence of trees, ponds, and hedges, with their precise locations on a map. The current models rely on species observation data and climate variables, but it is expected that adding detailed physical landscape information could increase the prediction accuracy. Common maps do not contain this level of detail, but high-resolution satellite images do.
Possible opportunities: Following from the problem statement, Naturalis does not currently have a map with the level of detail at which small landscape elements can be detected. The idea emerged that it should be possible to use satellite images to find the locations of small landscape elements and produce an annotated map. By refining the accuracy of the current prediction model, researchers can gain a deeper understanding of wild bees in the Netherlands, with the goal of taking effective measures to protect wild bees and their living environment.
Goal of project: The goal of the project is to develop an artificial intelligence model for landscape detection on satellite images and to create an annotated map of the Netherlands, thereby increasing the prediction accuracy of the current model used at Naturalis. The project addresses the lack of detailed landscape maps, which could change the way Naturalis conducts its research on wild bees. The ultimate long-term aim of the project is to use this comprehensive knowledge to protect both the wild bee population and their natural habitats in the Netherlands.
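Turning a model's pixel detections into map coordinates is the step that makes such an annotated map possible. A small illustrative sketch of that georeferencing step follows; the file name and pixel indices are placeholders, not part of the project's actual pipeline:

```python
# Sketch: convert a detected object's pixel location to map coordinates
# using a georeferenced raster's affine transform.
import rasterio
from rasterio.transform import xy

with rasterio.open("satellite_tile.tif") as src:   # placeholder file
    row, col = 120, 340                            # detection centre (pixels)
    x, y = xy(src.transform, row, col)             # coordinates in src.crs
    print(f"Detected object at ({x:.6f}, {y:.6f}) in {src.crs}")
```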
Data Collection - Google Earth: One of the main challenges of this project was the difficulty of obtaining a suitable dataset (with or without data annotation). Obtaining high-quality satellite images for the project presents challenges in terms of cost and time: the cost of obtaining high-quality satellite images of the Netherlands is $1,038,575 in total. On top of that, the acquisition process for such images involves various steps; from the initial request to the actual delivery of the images, numerous protocols and processes need to be followed.
After conducting further research, the best available solution was to use Google Earth as the primary source of data. While Google Earth imagery may not be used for commercial or promotional purposes, this project serves only Naturalis's research on wild bees, hence the restriction does not apply in this case.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Contained within the 5th Edition (1978 to 1995) of the National Atlas of Canada is a map that shows Canada as seen from space in August 1990, using uninterrupted 1.1-kilometer-resolution imagery; the final colors are adjusted to approximate those of the land cover portrayed.
https://www.ontario.ca/page/open-government-licence-ontario
The Ontario Imagery Web Map Service (OIWMS) is an open data service available to everyone free of charge. It provides instant online access to the most recent, highest quality, province-wide imagery. GEOspatial Ontario (GEO) makes this data available as an Open Geospatial Consortium (OGC) compliant web map service or as an ArcGIS map service. Imagery was compiled from many different acquisitions, which are detailed in the Ontario Imagery Web Map Service Metadata Guide linked below. Instructions on how to use the service can also be found in the Imagery User Guide linked below.

Note: This map displays the Ontario Imagery Web Map Service Source, a companion ArcGIS web map service to the Ontario Imagery Web Map Service. It provides an overlay that can be used to identify acquisition-relevant information such as sensor source and acquisition date. OIWMS contains several hierarchical layers of imagery, with coarser, less detailed imagery that draws at broad scales, such as province-wide zooms, and finer, more detailed imagery that draws when zoomed in, such as city-wide zooms. The attributes associated with this data describe at what scales (based on a computer screen) the specific imagery datasets are visible.

Available Products
- Ontario Imagery OGC Web Map Service – public link
- Ontario Imagery ArcGIS Map Service – public link
- Ontario Imagery Web Map Service Source – public link
- Ontario Imagery ArcGIS Map Service – OPS internal link
- Ontario Imagery Web Map Service Source – OPS internal link

Additional Documentation
- Ontario Imagery Web Map Service Metadata Guide (PDF)
- Ontario Imagery Web Map Service Copyright Document (PDF)
- Imagery User Guide (Word)

Status: Completed. Production of the data has been completed.
Maintenance and Update Frequency: Annually. Data is updated every year.
Contact: Ontario Ministry of Natural Resources, Geospatial Ontario, imagery@ontario.ca
Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high-density time series of information by estimating depth via one of four NNDR methods described in the manuscript:

1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

MATLAB code (Version 24.1, including the Deep Learning Toolbox) for performing this analysis is provided in the function NN_depth_ensembling.m, available on the main landing page of the data release of which this is a child item, along with a flow chart illustrating the four different neural network-based depth retrieval methods.

To develop and test this new NNDR approach, the method was applied to satellite images from the American River near Fair Oaks, CA, acquired in October 2020. Field measurements of water depth available through another data release (Legleiter, C.J., and Harrison, L.R., 2022, Field measurements of water depth from the American River near Fair Oaks, CA, October 19-21, 2020: U.S. Geological Survey data release, https://doi.org/10.5066/P92PNWE5) were used for training and validation.
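The released MATLAB function NN_depth_ensembling.m is the authoritative implementation; purely for illustration, the first two ensembling strategies can be sketched in Python as follows, with the NNDR itself replaced by a stand-in function:

```python
# Illustrative numpy sketch of the "mean-spec" and "mean-depth" strategies.
import numpy as np

def nndr(image):
    """Stand-in for a trained neural-network depth retrieval (NNDR)."""
    return image.mean(axis=-1)            # placeholder mapping, not a real model

stack = np.random.rand(12, 200, 300, 4)   # e.g. 12 SuperDove images, 4 bands

# Method 1, "mean-spec": average the images over time, then retrieve depth.
depth_mean_spec = nndr(stack.mean(axis=0))

# Method 2, "mean-depth": retrieve depth per image, then average the depths.
depth_mean_depth = np.mean([nndr(img) for img in stack], axis=0)
```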
The depth maps produced via each of the four methods described above are provided as GeoTIFF files, with file name suffixes that indicate the method employed: American_mean-spec.tif, American_mean-depth.tif, American_NN-depth.tif, and American-single-image.tif. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.