100+ datasets found
  1. World Imagery

    • onemap-esri.hub.arcgis.com
    • inspiracie.arcgeo.sk
    • +11 more
    Updated Dec 12, 2009
    Cite
    Esri (2009). World Imagery [Dataset]. https://onemap-esri.hub.arcgis.com/maps/10df2279f9684e4a9f6a7f08febac2a9
    Explore at:
    Dataset updated
    Dec 12, 2009
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    World Imagery provides one-meter or better satellite and aerial imagery for most of the world's landmass, and lower-resolution satellite imagery worldwide. The map is currently composed of the following sources:

    • Worldwide 15-m resolution TerraColor imagery at small and medium map scales.
    • Maxar imagery basemap products around the world: Vivid Premium at 15-cm HD resolution for select metropolitan areas, Vivid Advanced 30-cm HD for more than 1,000 metropolitan areas, and Vivid Standard from 1.2-m to 0.6-m resolution for most of the world, with 30-cm HD across the United States and parts of Western Europe. More information on the Maxar products is included below.
    • High-resolution aerial photography contributed by the GIS User Community. This imagery ranges from 30-cm to 3-cm resolution. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program.

    Maxar Basemap Products

    • Vivid Premium: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product provides 15-cm HD resolution imagery.
    • Vivid Advanced: Provides committed image currency in a high-resolution, high-quality image layer over defined metropolitan and high-interest areas across the globe. The product includes a mix of native 30-cm and 30-cm HD resolution imagery.
    • Vivid Standard: Provides a visually consistent and continuous image layer over large areas through advanced image mosaicking techniques, including tonal balancing and seamline blending across thousands of image strips. Available from 1.2-m down to 30-cm HD. More on Maxar HD.

    Updates and Coverage

    You can use the World Imagery Updates app to learn more about recent updates and map coverage.

    Citations

    This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the imagery available at that location and scale; you may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.

    Use

    You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map.

    Feedback

    Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues. The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.

  2. Global commercial satellite imagery data 2022, by spatial resolution

    • statista.com
    Updated Mar 4, 2022
    Cite
    Statista (2022). Global commercial satellite imagery data 2022, by spatial resolution [Dataset]. https://www.statista.com/statistics/1293723/commercial-satellite-imagery-resolution-worldwide/
    Explore at:
    Dataset updated
    Mar 4, 2022
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    2022
    Area covered
    World
    Description

    Satellite images are essentially the eyes in the sky. Some recent satellites, such as WorldView-3, provide images with a spatial resolution of 0.3 meters. With a revisit time of under 24 hours, this satellite can capture a new image of the same location on every revisit.

    Spatial resolution explained: Spatial resolution is the physical size on the ground represented by one pixel of the image. In other words, it is a measure of the smallest object that the sensor can resolve, measured in meters. Generally, spatial resolution can be divided into three categories:

    – Low resolution: over 60 m/pixel (useful for regional perspectives such as monitoring larger forest areas)

    – Medium resolution: 10‒30 m/pixel (useful for monitoring crop fields or smaller forest patches)

    – High to very high resolution: 0.30‒5 m/pixel (useful for monitoring smaller objects like buildings, narrow streets, or vehicles)

    The choice of resolution depends on the application of the imagery in the final product, since labor intensity, from person-hours to computing power, increases with the resolution of the imagery.
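The three categories above can be expressed as a small classifier; a minimal sketch (the function name and the handling of the gaps between the published bands are assumptions):

```python
def resolution_category(metres_per_pixel):
    """Classify spatial resolution (m/pixel) into the three broad bands
    described above. Hypothetical helper; values falling in the gaps
    between the published bands are assigned to the coarser category."""
    if metres_per_pixel > 60:
        return "low"      # regional perspectives, e.g. large forest areas
    if metres_per_pixel >= 10:
        return "medium"   # crop fields, smaller forest patches
    return "high"         # buildings, narrow streets, vehicles
```

For example, WorldView-3's 0.3 m/pixel imagery falls in the high to very high resolution band.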

  3. High Resolution Satellite Imagery

    • open.canada.ca
    • catalogue.arctic-sdi.org
    • +1 more
    esri rest, html
    Updated Jan 9, 2025
    Cite
    Government of Yukon (2025). High Resolution Satellite Imagery [Dataset]. https://open.canada.ca/data/en/dataset/0a14b357-8a89-6e98-720e-3a800022cb99
    Explore at:
    Available download formats: html, esri rest
    Dataset updated
    Jan 9, 2025
    Dataset provided by
    Government of Yukon
    License

    Open Government Licence - Canada 2.0 (https://open.canada.ca/en/open-government-licence-canada)
    License information was derived automatically

    Description

    This image service contains high resolution satellite imagery for selected regions throughout the Yukon. Imagery is 1 m pixel resolution or better. Imagery was supplied by the Government of Yukon and the Canadian Department of National Defence. All the imagery in this service is licensed. If you have any questions about Yukon government satellite imagery, please contact Geomatics.Help@gov.yk.ca. This service is managed by Geomatics Yukon.

  4. NZ 10m Satellite Imagery (2021-2022)

    • data.linz.govt.nz
    • geodata.nz
    dwg with geojpeg +8
    Updated Jul 1, 2022
    Cite
    Land Information New Zealand (2022). NZ 10m Satellite Imagery (2021-2022) [Dataset]. https://data.linz.govt.nz/layer/109401-nz-10m-satellite-imagery-2021-2022/
    Explore at:
    Available download formats: kml, pdf, geojpeg, jpeg2000, geotiff, jpeg2000 lossless, erdas imagine, kea, dwg with geojpeg
    Dataset updated
    Jul 1, 2022
    Dataset authored and provided by
    Land Information New Zealand (https://www.linz.govt.nz/)
    License

    https://data.linz.govt.nz/license/attribution-4-0-international/

    Area covered
    Description

    This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.

    The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2021 and April 2022.

    Technical specifications:

    • 450 x ortho-rectified RGB GeoTIFF images in NZTM projection, tiled into the LINZ Standard 1:50,000 tile layout
    • Satellite sensors: ESA Sentinel-2A and Sentinel-2B
    • Acquisition dates: September 2021 - April 2022
    • Spectral resolution: R, G, B
    • Spatial resolution: 10 meters
    • Radiometric resolution: 8-bits (downsampled from 12-bits)

    This is a visual product only. The data has been downsampled from 12-bits to 8-bits, and the original values of the images have been modified for visualisation purposes.
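The 12-bit to 8-bit downsampling described above amounts to a linear rescale of the value range; a minimal sketch, not the actual LINZ processing chain (which also modifies values for visualisation):

```python
def downsample_12bit_to_8bit(value):
    """Rescale a 12-bit sample (0-4095) to 8 bits (0-255) linearly.
    Sketch only; real pipelines may also apply contrast stretching."""
    return round(value * 255 / 4095)
```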

  5. Marine satellite image test collections (AIMS)

    • researchdata.edu.au
    • eatlas.org.au
    Updated Jul 9, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric, Dr (2024). Marine satellite image test collections (AIMS) [Dataset]. http://doi.org/10.26274/ZQ26-A956
    Explore at:
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset consists of collections of satellite image composites (Sentinel 2 and Landsat 8) that are created from manually curated image dates for a range of projects. These images are typically prepared for subsequent analysis or testing of analysis algorithms as part of other projects. This dataset acts as a repository of reproducible test sets of images processed from Google Earth Engine using a standardised workflow.

    Details of the algorithms used to produce the imagery are described in the GEE code and code repository available on GitHub (https://github.com/eatlas/World_AIMS_Marine-satellite-imagery).

    Project test image sets:

    As new projects are added to this dataset, their details will be described here:

    • NESP MaC 2.3 Benthic reflection estimation (projects/CS_NESP-MaC-2-3_AIMS_Benth-reflect): This collection consists of six Sentinel 2 image composites in the Coral Sea and GBR for the purpose of testing a method of determining benthic reflectance of deep lagoonal areas of coral atolls. These image composites are in GeoTiff format, using 16-bit encoding and LZW compression. These images do not have internal image pyramids to save on space. [Status: final and available for download]

    • NESP MaC 2.3 Oceanic Vegetation (projects/CS_NESP-MaC-2-3_AIMS_Oceanic-veg): This project is focused on mapping vegetation on the bottom of coral atolls in the Coral Sea. This collection consists of additional images of Ashmore Reef. The lagoonal area of Ashmore has low visibility due to coloured dissolved organic matter, making it very hard to distinguish areas that are covered in vegetation. These images were manually curated to best show the vegetation. While these are the best images in the Sentinel 2 series up to 2023, they are still not very good. Probably 80 - 90% of the lagoonal benthos is not visible. [Status: final and available for download]

    • NESP MaC 3.17 Australian reef mapping (projects/AU_NESP-MaC-3-17_AIMS_Reef-mapping): This collection of test images was prepared to determine if creating a composite from manually curated image dates (corresponding to images with the clearest water) would produce a better composite than a fully automated composite based on cloud filtering. The automated composites are described in https://doi.org/10.26274/HD2Z-KM55. This test set also includes composites from low tide imagery. The images in this collection are not yet available for download as the collection of images that will be used in the analysis has not been finalised.
      [Status: under development, code is available, but not rendered images]

    • Capricorn Regional Map (projects/CapBunk_AIMS_Regional-map): This collection was developed for making a set of maps for the region to facilitate participatory mapping and reef restoration field work planning. [Status: final and available for download]

    • Default (project/default): This collection of manually selected scenes was prepared for the Coral Sea and global areas to test the algorithms used in developing the original Google Earth Engine workflow. It can be a good starting point for new test sets. Note that the images described in the default project are not rendered and made available for download, to save on storage space. [Status: for reference, code is available, but not rendered images]

    Filename conventions:

    The images in this dataset are all named using a naming convention. An example file name is Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif. The name is made up of:

    • Dataset name (Wld_AIMS_Marine-sat-img), short for World, Australian Institute of Marine Science, Marine Satellite Imagery.
    • Satellite source: L8 for Landsat 8 or S2 for Sentinel 2.
    • Additional information or purpose: NoSGC - no sun glint correction; R1 - best reference imagery set; R2 - second reference imagery set.
    • Colour and contrast enhancement applied: DeepFalse, TrueColour, Shallow, Depth5m, Depth10m, Depth20m, Raw-B1-B4.
    • Image tile (example: Sentinel 2 54LZP, Landsat 8 091086).
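Assuming the convention is strictly underscore-delimited after the three-token dataset name, a filename can be split back into its components; a hypothetical helper, not part of the dataset's own tooling:

```python
def parse_image_filename(filename):
    """Split a Wld_AIMS_Marine-sat-img style filename into its parts."""
    stem = filename.rsplit(".", 1)[0]       # drop the .tif extension
    parts = stem.split("_")
    return {
        "dataset": "_".join(parts[:3]),     # Wld_AIMS_Marine-sat-img
        "satellite": parts[3],              # L8 or S2
        "purpose": parts[4],                # e.g. NoSGC, R1, R2
        "style": parts[5],                  # e.g. DeepFalse, Raw-B1-B4
        "tile": parts[6],                   # e.g. 54LZP or 091086
    }
```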

    Limitations:

    Only simple atmospheric correction is applied to land areas and as a result the imagery only approximates the bottom of atmosphere reflectance.

    For the Sentinel 2 imagery, the sun glint correction algorithm transitions between different correction levels from deep water (B8) to shallow water (B11), and a fixed atmospheric correction for land (bright B8 areas). Slight errors in the tuning of these transitions can result in unnatural tonal steps between these areas, particularly in very shallow water.
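NIR-based sun glint corrections of this kind (in the spirit of Hedley et al.) typically subtract a scaled near-infrared signal from each visible band; a simplified sketch, not the exact algorithm used for this dataset:

```python
def deglint(band_value, nir_value, slope, nir_ambient):
    """Remove sun glint from a visible-band sample using the NIR band.
    slope: regression slope between the visible band and NIR over glinted
    deep water; nir_ambient: NIR level over glint-free water. Sketch only;
    the dataset's algorithm blends corrections across B8/B11 and land."""
    return band_value - slope * (nir_value - nir_ambient)
```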

    For the Landsat 8 imagery, land areas appear black because the sun glint correction does not separately mask out the land. The code for the Landsat 8 imagery is less developed than for the Sentinel 2 imagery.

    The depth contours are estimated using satellite derived bathymetry that is subject to errors caused by cloud artefacts, substrate darkness, water clarity, calibration issues and uncorrected tides. They were tuned in the clear waters of the Coral Sea. The depth contours in this dataset are RAW and contain many false positives due to clouds. They should not be used without additional dataset cleanup.

    Change log:

    As changes are made to the dataset, or additional image collections are added to the dataset then those changes will be recorded here.

    • 2nd Edition, 2024-06-22: Added CapBunk_AIMS_Regional-map.
    • 1st Edition, 2024-03-18: Initial publication of the dataset, with CS_NESP-MaC-2-3_AIMS_Benth-reflect, CS_NESP-MaC-2-3_AIMS_Oceanic-veg and code for the AU_NESP-MaC-3-17_AIMS_Reef-mapping and Default projects.

    Data Format:

    GeoTiff images with LZW compression. Most images do not have internal image pyramids to save on storage space. This makes rendering these images very slow in a desktop GIS. Pyramids should be added to improve performance.
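One way to choose which pyramid (overview) levels to build is to halve the image repeatedly until it fits a small tile; a sketch (the 256-pixel stopping size is an assumption), whose output could then be passed to a tool such as GDAL's gdaladdo:

```python
def overview_levels(width, height, min_size=256):
    """Return the power-of-two overview factors to build for an image,
    stopping once the largest dimension would drop below min_size."""
    levels = []
    factor = 2
    while max(width, height) // factor >= min_size:
        levels.append(factor)
        factor *= 2
    return levels
```

For a 10000 x 8000 pixel image this yields [2, 4, 8, 16, 32], usable as e.g. `gdaladdo -r average image.tif 2 4 8 16 32`.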

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Wld-AIMS-Marine-sat-img

  6. Landsat 5 Satellite Imagery for selected areas of Great Barrier Reef and Torres Strait (NERP TE 13.1, eAtlas AIMS, source: NASA)

    • catalogue.eatlas.org.au
    • researchdata.edu.au
    Updated Aug 20, 2014
    Cite
    Australian Institute of Marine Science (AIMS) (2014). Landsat 5 Satellite Imagery for selected areas of Great Barrier Reef and Torres Strait (NERP TE 13.1, eAtlas AIMS, source: NASA) [Dataset]. https://catalogue.eatlas.org.au/geonetwork/srv/api/records/bc667743-3f77-4533-82a7-5b45c317dd89
    Explore at:
    www:link-1.0-http--link, www:link-1.0-http--downloaddata, www:link-1.0-http--relatedAvailable download formats
    Dataset updated
    Aug 20, 2014
    Dataset provided by
    Australian Institute of Marine Science (http://www.aims.gov.au/)
    Time period covered
    Sep 1, 1988 - Jul 1, 2010
    Area covered
    Description

    This dataset contains Landsat 5 imagery for selected areas of Queensland, currently Torres Strait and around Lizard Island and Cape Tribulation.

    This collection was made as a result of the development of the Torres Strait Features dataset. It includes a number (typically 4 - 8) of selected Landsat images for each scene from the entire Landsat 5 archive. These images were selected for having low cloud cover and clear water. The aim of this collection was to allow investigation of the marine features.

    The complete catalogue of Landsat 5 for scenes 96_70, 96_71, 97_67, 97_68, 98_66, 98_67, 98_68, 99_66, 99_67 and 99_68 was downloaded from the Google Earth Engine site ( https://console.developers.google.com/storage/earthengine-public/landsat/ ). The images were then processed into low resolution true colour using GDAL. They were then reviewed for picture clarity, and the best ones were selected and processed at full resolution to be part of this collection.

    The true colour conversion was achieved by applying level adjustment to each channel to ensure that the tonal scaling of each channel was adjusted to give a good overall colour balance. This effectively set the black point of the channel and the gain. This adjustment was applied consistently to all images.

    • Red: Channel B3, Black level 8, White level 58
    • Green: Channel B2, Black level 10, White level 55
    • Blue: Channel B1, Black level 32, White level 121

    Note: A constant level adjustment was made to the images regardless of the time of the year that the images were taken. As a result images in the summer tend to be brighter than those in the winter.
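The black/white level adjustment described above is a linear stretch per channel; a minimal sketch using the published values (not the exact GDAL invocation used):

```python
def apply_levels(value, black, white):
    """Linearly stretch a sample so `black` maps to 0 and `white` to 255,
    clamping values outside the range. Sketch of a per-channel levels
    adjustment, e.g. black=8, white=58 for the red channel (B3)."""
    scaled = (value - black) / (white - black) * 255
    return int(min(255, max(0, scaled)))
```

Applying the same black and white points to every image, regardless of season, is what makes summer scenes come out brighter than winter ones.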

    After level adjustment the three channels were merged into a single colour image using gdal_merge. The black surround on the image was then made transparent using the GDAL nearblack command.

    This collection consists of 59 images saved as 4 channel (Red, Green, Blue, Alpha) GeoTiff images with LZW compression (lossless) and internal overviews with a WGS 84 UTM 54N projection.

    Each of the individual images can be downloaded from the eAtlas map client (Overlay layers: eAtlas/Imagery Base Maps Earth Cover/Landsat 5) or as a collection of all images for each scene.

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\NERP-TE\13.1_eAtlas\QLD_NERP-TE-13-1_eAtlas_Landsat-5_1988-2011

  7. Commercial Satellite Imaging market size will be USD 10.21 Billion by 2030!

    • cognitivemarketresearch.com
    pdf, excel, csv, ppt
    Updated Jun 14, 2024
    Cite
    Cognitive Market Research (2024). Commercial Satellite Imaging market size will be USD 10.21 Billion by 2030! [Dataset]. https://www.cognitivemarketresearch.com/commercial-satellite-imaging-market-report
    Explore at:
    Available download formats: pdf, excel, csv, ppt
    Dataset updated
    Jun 14, 2024
    Dataset provided by
    Decipher Market Research
    Authors
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global commercial satellite imaging market size will be USD 10.21 billion in 2024 and will expand at a compound annual growth rate (CAGR) of 10.95% from 2024 to 2031.

    • The global commercial satellite imaging market will expand significantly at a 10.95% CAGR between 2024 and 2031.
    • North America held more than XX% of the global revenue with a market size of USD XX million in 2023 and will grow at a CAGR of XX% from 2024 to 2031.
    • Europe accounted for a share of over XX% of the global market size of USD XX million.
    • Asia Pacific held around XX% of the global revenue with a market size of USD XX million in 2023 and will grow at a CAGR of XX% from 2024 to 2031.
    • Latin America will have more than XX% of the global revenue with a market size of USD XX million in 2023 and will grow at a CAGR of XX% from 2024 to 2031.
    • Middle East and Africa held around XX% of the global revenue with a market size of USD XX million in 2023 and will grow at a CAGR of XX% from 2024 to 2031.
    • The Geospatial Data Acquisition segment is set to rise due to the need for evaluating a range of economic factors, such as farming methods, infrastructure, urbanization, and environmental effects. Governments and private-sector businesses are also investing heavily in satellite imaging to obtain information on urban planning and natural resources.

    • The commercial satellite imaging market is driven by the increasing use of satellite images for real-time data access in defense applications, government support, rising demand for high-resolution imaging across end-use applications, and technological advancements enabling high-resolution satellite imaging.
    • North America held the highest commercial satellite imaging market revenue share in 2023.

    Current Scenario of the Commercial Satellite Imaging Market

    Key Drivers of the Commercial Satellite Imaging Market

    Increasing the Use of Satellite Images for Real-Time Data Access in Defence Applications to Accelerate Market Growth
    

    A comprehensive understanding of areas of interest (AOI) through satellite imagery has become both an advantage and a necessity in today's asymmetric warfare. Digital Elevation Models (DEMs) and 3D models of rural and urban regions may be produced quickly and reliably with the help of Airbus' Pleiades Neo military satellite. High-resolution photos are particularly helpful for determining whether a target is mobile or fixed. Furthermore, assets or targets can be recognized, identified, and detected down to the finest detail thanks to Very High Resolution (VHR) imagery. Additionally, reliable topography data from satellite imagery helps the armed forces plan ahead and gain a comprehensive understanding of the situation. For instance, according to a report published by the Government of India, Digital Video Broadcasting-Satellite Version 2 (DVB-S2) technology has been added to the satellite-based communication network to improve efficiency and make the best use of available spectrum. More than 785 updated VSATs have been deployed for DCPW, State/UT Police, and the CAPFs.

    (Source-https://www.mha.gov.in/sites/default/files/AnnualreportEnglish_04102023.pdf )

    According to a news report by Airbus, Poland, and Airbus Defence and Space have signed a deal for the development, production, launch, and onboard supply of two high-performance optical Earth observation satellites as part of a geospatial intelligence system. Moreover, the contract includes the provision of Very High Resolution (VHR) imagery.

    (Source- https://www.airbus.com/sites/g/files/jlcbta136/files/2023-01/EN-Airbus-SpS-Press-Release-Airbus-to-provide-Poland-with-a-very-high-resolution-optical-satellite-system_0.pdf )

    Thus, the increasing use of satellite images for real-time data access in defense applications accelerates market growth.

    Government Support will drive the Commercial Satellite Imaging market
    

    Governments throughout the world are realizing increasingly how important sate...

  8. Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS)

    • researchdata.edu.au
    Updated Feb 29, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr (2024). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Dataset]. http://doi.org/10.26274/NH77-ZW79
    Explore at:
    Dataset updated
    Feb 29, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr; mailto:b.robson@aims.gov.au; eAtlas Data Manager; e-Atlas; Wolfe, Kennedy (Dr); Lawrey, Eric, Dr.; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Description

    This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.

    This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the satellite derived bathymetry algorithm's handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.

    The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).

    This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are three different colour and contrast enhancement styles intended to highlight different features:

    • TrueColour - Bands: B2 (blue), B3 (green), B4 (red). True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
    • DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green). False colour imagery that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, so the imagery appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
    • Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared). This false colour imagery focuses on identifying very shallow and dry regions. It exploits the property that the longer wavelength bands penetrate the water progressively less: B5 penetrates approximately 3 - 5 m, B8 approximately 0.5 m, and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue, and dry areas are white. This imagery is intended to help identify coral cay boundaries.

    For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.

    All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours:

    • Depth5m - an estimate of the area above 5 m depth (Mean Sea Level).
    • Depth10m - an estimate of the area above 10 m depth (Mean Sea Level).
    • Depth20m - an estimate of the area above 20 m depth (Mean Sea Level).

    For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery. This allows any mapped features to be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.

    The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic, as keeping them separate allowed better per-image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through the Google Earth Engine.

    Methods:

    The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:

    1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images with no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets to be combined into composite images: the best were added to the R1 composite, and the next best into the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed; adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
    2. Sun glint was removed from the imagery using estimates of the sun glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
    3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
    4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so does not correct for finer spatial brightness variations.
    5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of visible tones, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far red and infrared bands for assessing shallow features such as cays and islands.
    6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole area for each depth.
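The per-pixel median compositing in step 3 can be sketched in plain Python (the real processing runs inside Google Earth Engine; here None marks a cloud-masked pixel):

```python
from statistics import median

def median_composite(stack):
    """Per-pixel median of a stack of equally sized 2D grids.
    Cloud-masked pixels are None and are excluded from the median;
    a pixel masked in every image stays None."""
    rows, cols = len(stack[0]), len(stack[0][0])
    out = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = [img[r][c] for img in stack if img[r][c] is not None]
            if vals:
                out[r][c] = median(vals)
    return out
```

Taking the median rather than the mean means a few residual clouds or shadows in individual images have little effect on the composite.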

    Cloud Masking

    Prior to combining the best images, each image was processed to mask out clouds and their shadows.

    The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These clouds were detected using a 40% cloud probability threshold, projected over 400 m, and then expanded with a 150 m buffer to form the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.
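    The projection-and-buffer operations described above can be sketched in NumPy. This is a toy illustration of the idea, not the Earth Engine implementation; the helper names are invented, the shadow is approximated by shifting the mask opposite the sun direction, and np.roll's edge wrap-around is ignored.

```python
import numpy as np

def project_shadow(cloud_mask, dx_px, dy_px):
    """Sweep the cloud mask away from the sun in unit steps up to the
    projection distance, taking the union, to approximate the shadow.
    (np.roll wraps at the edges; acceptable for a sketch only.)"""
    shadow = cloud_mask.copy()
    steps = max(abs(dx_px), abs(dy_px))
    for i in range(1, steps + 1):
        dy = round(dy_px * i / steps)
        dx = round(dx_px * i / steps)
        shadow |= np.roll(cloud_mask, (dy, dx), axis=(0, 1))
    return shadow

def buffer_mask(mask, radius_px):
    """Dilate the mask with a square structuring element, expanding it
    by radius_px in every direction to catch missed cloud edges."""
    out = mask.copy()
    for dy in range(-radius_px, radius_px + 1):
        for dx in range(-radius_px, radius_px + 1):
            out |= np.roll(mask, (dy, dx), axis=(0, 1))
    return out

# Low cloud mask: project 400 m (40 px at 10 m/px), then a 150 m (15 px) buffer.
low_cloud = np.zeros((100, 100), dtype=bool)
low_cloud[20, 20] = True
low_mask = buffer_mask(project_shadow(low_cloud, dx_px=0, dy_px=40), radius_px=15)
```

    The high cloud mask would use the same two helpers with the 80% threshold, the 1.5 km projection and the 300 m buffer, after the small-cloud erosion/dilation step.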

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking, and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could still be made to the method.

    Erosion, dilation and buffer operations were performed at a lower resolution than the native satellite imagery to improve computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations.
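    The coarse-resolution trick can be sketched as: downsample, dilate, upsample. This is a minimal NumPy illustration under stated assumptions (nearest-neighbour resampling, a square structuring element, a factor of 4 matching the roughly 4-pixel operation resolution described above); the helper name is invented.

```python
import numpy as np

def coarse_dilate(mask, radius_px, factor=4):
    """Dilate a boolean mask at 1/factor of the native resolution.
    The neighbourhood operation touches ~factor**2 fewer pixels, at the
    cost of a spatially coarser (blockier) result."""
    small = mask[::factor, ::factor]            # nearest-neighbour downsample
    r = max(1, radius_px // factor)             # radius in coarse pixels
    out = small.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(small, (dy, dx), axis=(0, 1))
    # Nearest-neighbour upsample back to the native grid.
    return np.repeat(np.repeat(out, factor, axis=0), factor, axis=1)

mask = np.zeros((16, 16), dtype=bool)
mask[8, 8] = True
dilated = coarse_dilate(mask, radius_px=4)  # blocky 3x3-coarse-pixel square
```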

  9. Landsat 7 ETM/1G satellite imagery - Hawaiian Islands cloud-free mosaics

    • fisheries.noaa.gov
    • datadiscoverystudio.org
    • +2more
    tiff
    Updated Jan 31, 2002
    + more versions
    Cite
    Tim Battista (2002). Landsat 7 ETM/1G satellite imagery - Hawaiian Islands cloud-free mosaics [Dataset]. https://www.fisheries.noaa.gov/inport/item/38723
    Explore at:
    tiff
    Available download formats
    Dataset updated
    Jan 31, 2002
    Dataset provided by
    National Centers for Coastal Ocean Science
    Authors
    Tim Battista
    Time period covered
    Jul 12, 1999 - Aug 21, 2000
    Area covered
    Description

    Cloud-free Landsat satellite imagery mosaics of the main 8 Hawaiian Islands (Hawaii, Maui, Kahoolawe, Lanai, Molokai, Oahu, Kauai and Niihau). Landsat 7 ETM+ (Enhanced Thematic Mapper Plus) is a polar orbiting 8 band multispectral satellite-borne sensor. The ETM+ instrument provides image data from eight spectral bands. The spatial resolution is 30 meters for the visible and near-infra...

  10. Tree Canopy 2022

    • catalog.data.gov
    • s.cnmilf.com
    Updated Mar 25, 2025
    + more versions
    Cite
    data.austintexas.gov (2025). Tree Canopy 2022 [Dataset]. https://catalog.data.gov/dataset/tree-canopy-2022
    Explore at:
    Dataset updated
    Mar 25, 2025
    Dataset provided by
    data.austintexas.gov
    Description

    City of Austin Open Data Terms of Use https://data.austintexas.gov/stories/s/ranj-cccq

    This dataset was created to depict approximate tree canopy cover for all land within the City of Austin's "full watershed regulation area." It is intended for planning purposes and for measuring citywide percent canopy.

    Definition: Tree canopy is defined as the layer of leaves, branches, and stems of trees that cover the ground when viewed from above.

    Methods: The 2022 tree canopy layer was derived from satellite imagery (Maxar) and aerial imagery (NAIP). Images were used to extract tree canopy into GIS vector features. First, a "visual recognition engine" generated the vector features. The engine used machine learning algorithms to detect and label image pixels as tree canopy. Then, using prior knowledge of feature geometries, further modeling algorithms were used to predict and transform probability maps of labeled pixels into finished vector polygons depicting tree canopy. The resulting features were reviewed and edited through manual interpretation by GIS professionals. When appropriate, NAIP 2022 aerial imagery supplemented satellite images that had cloud cover, and a manual editing process ensured the tree canopy represented 2022 conditions. Finally, an independent accuracy assessment was performed by the City of Austin and the Texas A&M Forest Service for quality assurance. GIS professionals assessed agreement between the tree canopy data and its source satellite imagery. An overall accuracy of 98% was found: only 23 errors were found out of a total of 1,000 locations reviewed, mostly omission errors (e.g. not including canopy in this dataset when canopy is shown in the satellite or aerial image). Best efforts were made to ensure ground-truth locations contained a tree on the ground; location data from City of Austin and Texas A&M Forest Service databases were used to this end.

    Analysis: The City of Austin measures tree canopy using the calculation: acres of tree canopy divided by acres of land. The land acres are evaluated at the City of Austin's jurisdiction, including Full Purpose, Limited Purpose, and Extraterritorial jurisdictions as of May 2023. New data show that, in 2022, tree canopy covered 41% of the total land area within Austin's city limits (using city limit boundaries of May 2023, included in the download as the layer "city_of_austin_2023"): 160,046.50 canopy acres (2022) / 395,037.53 land acres = 40.51% ~ 41%. This compares to 36% last measured in 2018, and a historical average that has also hovered around 36%. The period between 2018 and 2022 saw a 5 percentage point change, an estimated gain of over 19K acres of canopy.

    Data Disclaimer: It is possible that changes in percent canopy over the years are due to annexation and improved data methods (e.g. higher resolution imagery, AI, software used, etc.) in addition to actual changes in tree canopy cover on the ground. For planning purposes only. The dataset does not account for individual trees, tree species, nor any metric of tree canopy height. Tree canopy data is provided in vector GIS format housed in a geodatabase; download and unzip the folder to get started. Please note, errors may exist in this dataset due to the variation in species composition and land use found across the study area. This product is for informational purposes and may not have been prepared for, or be suitable for, legal, engineering, or surveying purposes. It does not represent an on-the-ground survey and represents only the approximate relative location of property boundaries. This product has been produced by the City of Austin for the sole purpose of geographic reference. No warranty is made by the City of Austin regarding specific accuracy or completeness.

    Data Provider: Ecopia AI Tech Corporation and PlanIT Geo, Inc. Data derived from Maxar Technologies, Inc. and USDA NAIP imagery.
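    As a quick arithmetic check of the figures quoted above:

```python
# Percent canopy = acres of tree canopy / acres of land, per the description.
canopy_acres = 160_046.50   # 2022 tree canopy acres
land_acres = 395_037.53     # land acres, May 2023 jurisdiction boundaries

percent_canopy = 100 * canopy_acres / land_acres   # ~40.51, reported as ~41%
gain_acres = (0.41 - 0.36) * land_acres            # 5 pp change -> ~19.8K acres
```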

  11. USGS High Resolution Orthoimagery

    • cmr.earthdata.nasa.gov
    • data.nasa.gov
    • +2more
    Updated Jan 29, 2016
    + more versions
    Cite
    (2016). USGS High Resolution Orthoimagery [Dataset]. https://cmr.earthdata.nasa.gov/search/concepts/C1220567548-USGS_LTA.html
    Explore at:
    Dataset updated
    Jan 29, 2016
    Time period covered
    Jan 1, 1970 - Present
    Area covered
    Earth
    Description

    High resolution orthorectified images combine the image characteristics of an aerial photograph with the geometric qualities of a map. An orthoimage is a uniform-scale image where corrections have been made for feature displacement such as building tilt and for scale variations caused by terrain relief, sensor geometry, and camera tilt. A mathematical equation based on ground control points, sensor calibration information, and a digital elevation model is applied to each pixel to rectify the image to obtain the geometric qualities of a map.

    A digital orthoimage may be created from several photographs mosaicked to form the final image. The source imagery may be black-and-white, natural color, or color infrared with a pixel resolution of 1-meter or finer. With orthoimagery, the resolution refers to the distance on the ground represented by each pixel.

  12. PlanetScope Full Archive

    • earth.esa.int
    • eocat.esa.int
    • +2more
    Cite
    PlanetScope Full Archive [Dataset]. https://earth.esa.int/eogateway/catalog/planetscope-full-archive
    Explore at:
    Dataset authored and provided by
    European Space Agencyhttp://www.esa.int/
    License

    https://earth.esa.int/eogateway/documents/20142/1560778/ESA-Third-Party-Missions-Terms-and-Conditions.pdfhttps://earth.esa.int/eogateway/documents/20142/1560778/ESA-Third-Party-Missions-Terms-and-Conditions.pdf

    Description

    The PlanetScope Level 1B Basic Scene and Level 3B Ortho Scene full archive products are available as part of the Planet imagery offer.

    The Unrectified Asset: the PlanetScope Basic Analytic Radiance (TOAR) product is a scaled top-of-atmosphere radiance (at sensor) and sensor-corrected product, without correction for any geometric distortions inherent in the imaging process, and is not mapped to a cartographic projection. The imagery data is accompanied by Rational Polynomial Coefficients (RPCs) to enable orthorectification by the user. This product is designed for users with advanced image processing and geometric correction capabilities.

    Basic Scene Product Components and Format
    - Product Components: Image File (GeoTIFF format), Metadata File (XML format), Rational Polynomial Coefficients (XML format), Thumbnail File (GeoTIFF format), Unusable Data Mask UDM File (GeoTIFF format), Usable Data Mask UDM2 File (GeoTIFF format)
    - Bands: 4-band multispectral image (blue, green, red, near-infrared) or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, near-infrared)
    - Ground Sampling Distance (approximate, satellite altitude dependent): Dove-C: 3.0 m-4.1 m; Dove-R: 3.0 m-4.1 m; SuperDove: 3.7 m-4.2 m
    - Accuracy: <10 m RMSE

    The Rectified Assets: the PlanetScope Ortho Scene product is radiometrically, sensor and geometrically corrected and is projected to a UTM/WGS84 cartographic map projection. The geometric correction uses fine Digital Elevation Models (DEMs) with a post spacing of between 30 and 90 metres.

    Ortho Scene Product Components and Format
    - Product Components: Image File (GeoTIFF format), Metadata File (XML format), Thumbnail File (GeoTIFF format), Unusable Data Mask UDM File (GeoTIFF format), Usable Data Mask UDM2 File (GeoTIFF format)
    - Bands: 3-band natural colour (red, green, blue), 4-band multispectral (blue, green, red, near-infrared) or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, near-infrared)
    - Ground Sampling Distance (approximate, satellite altitude dependent): Dove-C: 3.0 m-4.1 m; Dove-R: 3.0 m-4.1 m; SuperDove: 3.7 m-4.2 m
    - Projection: UTM WGS84
    - Accuracy: <10 m RMSE

    The PlanetScope Ortho Scene product is available in the following variants:
    - PlanetScope Visual Ortho Scene: orthorectified, colour-corrected (using a colour curve) 3-band RGB imagery. The correction attempts to optimise colours as seen by the human eye, providing images as they would look if viewed from the perspective of the satellite.
    - PlanetScope Surface Reflectance: orthorectified 4-band BGRN or 8-band (coastal blue, blue, green I, green, yellow, red, red edge, NIR) imagery with geometric and radiometric corrections, corrected to surface reflectance. This data is optimal for value-added image processing such as land cover classification.
    - PlanetScope Analytic Ortho Scene: orthorectified 4-band BGRN or 8-band imagery with geometric and radiometric corrections, calibrated to top-of-atmosphere radiance.

    As per ESA policy, very high-resolution imagery of conflict areas cannot be provided.
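    The "scaled radiance" of the TOAR product can be illustrated with a one-line conversion. Note this is a hedged sketch: the per-band radiometric scale factor must be read from the product's XML metadata file, and both the DN and scale factor values below are placeholders, not product constants.

```python
def dn_to_toa_radiance(dn, scale_factor):
    """Convert a scaled digital number (DN) from a Basic Analytic (TOAR)
    product to top-of-atmosphere radiance (W m^-2 sr^-1 um^-1).
    scale_factor is the per-band radiometric scale factor taken from the
    product's XML metadata; the value used below is only a placeholder."""
    return dn * scale_factor

radiance = dn_to_toa_radiance(12543, 0.01)  # hypothetical DN and scale factor
```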

  13. A National Space Policy: Views from the Earth Observation Community

    • data.wu.ac.at
    • datadiscoverystudio.org
    • +1more
    pdf
    Updated Jun 26, 2018
    Cite
    (2018). A National Space Policy: Views from the Earth Observation Community [Dataset]. https://data.wu.ac.at/schema/data_gov_au/MjA4ZmI0YjgtODU1Yi00MjYyLWFlNzAtMmY3MjJmMDE5YjIw
    Explore at:
    pdf
    Available download formats
    Dataset updated
    Jun 26, 2018
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Earth
    Description

    Australia has been receiving Earth Observations from Space (EOS) for over 50 years. Meteorological imagery dates from 1960 and Earth observation imagery from 1979. Australia has developed world-class scientific, environmental and emergency management EOS applications.

    However, in the top fifty economies of the world, Australia is one of only three nations which does not have a space program. The satellites on which Australia depends are supplied by other countries which is a potential problem due to Australia having limited control over data continuity and data access.

    The National Remote Sensing Technical Reference Group (NRSTRG) was established by Geoscience Australia as an advisory panel in 2004. It represents a cross-section of the remote sensing community and is made up of representatives from government, universities and private companies. Through the NRSTRG these parties provide Geoscience Australia with advice on technical and policy matters related to remote sensing.

    In February 2009 the NRSTRG met for a day specifically to discuss Australia's reliance on EOS, with a view to informing the development of space policy. This report is the outcome of that meeting. Australia has some 92 programs dependent on EOS data. These programs are concerned with environmental issues, natural resource management, water, agriculture, meteorology, forestry, emergency management, border security, mapping and planning. Approximately half these programs have a high dependency on EOS data. While these programs are quite diverse there is considerable overlap in the technology and data.

    Of Australia's EOS-dependent programs, 71 (77%) are valued between $100,000 and $10 million, and 82 (89%) have a medium or high dependency on EOS data, demonstrating Australia's dependency on space-based imaging.

    Earth observation dependencies within currently active Federal and state government programs are calculated to be worth just over $949 million, calculated by weighting the level of dependency on EOS for each program. This includes two programs greater than $100 million in scale and one program greater than a billion dollars in scale.

    This document is intended as a summary of Australia's current space and Earth observation dependencies, compiled by the NRSTRG, to be presented to the Federal Government's Space Policy Unit, a section of the Department of Innovation, Industry, Science and Research, as an aid to space policy formation.

  14. NOAA Colorized Satellite Imagery

    • uneca.africageoportal.com
    • disasterpartners.org
    • +15more
    Updated Jun 26, 2019
    + more versions
    Cite
    NOAA GeoPlatform (2019). NOAA Colorized Satellite Imagery [Dataset]. https://uneca.africageoportal.com/maps/8e93e0f942ae4d54a8d089e3cd5d2774
    Explore at:
    Dataset updated
    Jun 26, 2019
    Dataset provided by
    National Oceanic and Atmospheric Administrationhttp://www.noaa.gov/
    Authors
    NOAA GeoPlatform
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Description

    Metadata: NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 1b Radiances. More information about this imagery can be found here.

    This satellite imagery combines data from the NOAA GOES East and West satellites and the JMA Himawari satellite, providing full coverage of weather events for most of the world, from the west coast of Africa west to the east coast of India. The tile service updates to the most recent image every 10 minutes at 1.5 km per pixel resolution.

    The infrared (IR) band detects radiation that is emitted by the Earth’s surface, atmosphere and clouds, in the “infrared window” portion of the spectrum. The radiation has a wavelength near 10.3 micrometers, and the term “window” means that it passes through the atmosphere with relatively little absorption by gases such as water vapor. It is useful for estimating the emitting temperature of the Earth’s surface and cloud tops. A major advantage of the IR band is that it can sense energy at night, so this imagery is available 24 hours a day.

    The Advanced Baseline Imager (ABI) instrument samples the radiance of the Earth in sixteen spectral bands using several arrays of detectors in the instrument’s focal plane. Single reflective band ABI Level 1b Radiance Products (channels 1 - 6, with approximate center wavelengths 0.47, 0.64, 0.865, 1.378, 1.61 and 2.25 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for visible and near-infrared bands. Single emissive band ABI L1b Radiance Products (channels 7 - 16, with approximate center wavelengths 3.9, 6.185, 6.95, 7.34, 8.5, 9.61, 10.35, 11.2, 12.3 and 13.3 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for IR bands. Detector samples are compressed, packetized and down-linked to the ground station as Level 0 data for conversion to calibrated, geo-located pixels (Level 1b Radiance data). The detector samples are decompressed, radiometrically corrected, navigated and resampled onto an invariant output grid, referred to as the ABI fixed grid.

    McIDAS merge technique and color mapping provided by the Cooperative Institute for Meteorological Satellite Studies (Space Science and Engineering Center, University of Wisconsin - Madison) using satellite data from SSEC Satellite Data Services and the McIDAS visualization software.
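    For reference, the channel list above can be captured as a small lookup table. The wavelength values are taken directly from the description; the helper function and variable names are illustrative.

```python
# ABI channel -> approximate band-centre wavelength (microns),
# channels 1-6 reflective and 7-16 emissive, as listed above.
ABI_CENTER_UM = {
    1: 0.47, 2: 0.64, 3: 0.865, 4: 1.378, 5: 1.61, 6: 2.25,
    7: 3.9, 8: 6.185, 9: 6.95, 10: 7.34, 11: 8.5, 12: 9.61,
    13: 10.35, 14: 11.2, 15: 12.3, 16: 13.3,
}

def is_emissive(channel):
    """Emissive (IR) channels sense radiation emitted by the Earth and
    so work day and night; reflective channels need sunlight."""
    return channel >= 7

# The channel nearest the 10.3 um "infrared window" described above:
window_channel = min(ABI_CENTER_UM, key=lambda c: abs(ABI_CENTER_UM[c] - 10.3))
```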

  15. MSG: Cloud top temperature product imagery over the tropics

    • catalogue.ceda.ac.uk
    Updated Jun 19, 2023
    + more versions
    Cite
    NERC EDS Centre for Environmental Data Analysis (2023). MSG: Cloud top temperature product imagery over the tropics [Dataset]. https://catalogue.ceda.ac.uk/uuid/dd7a1031e2ad46339c8ae982db8d5d5e
    Explore at:
    Dataset updated
    Jun 19, 2023
    Dataset provided by
    Centre for Environmental Data Analysishttp://www.ceda.ac.uk/
    License

    https://artefacts.ceda.ac.uk/licences/specific_licences/msg.pdfhttps://artefacts.ceda.ac.uk/licences/specific_licences/msg.pdf

    Area covered
    Variables measured
    Cloud Top Temperature, Brightness Temperature, http://vocab.ndg.nerc.ac.uk/term/P141/4/GVAR0104, http://vocab.ndg.nerc.ac.uk/term/P141/4/GVAR0150
    Description

    The Meteosat Second Generation (MSG) satellites, operated by EUMETSAT (The European Organisation for the Exploitation of Meteorological Satellites), provide almost continuous imagery to meteorologists and researchers in Europe and around the world. These include visible, infra-red, water vapour, High Resolution Visible (HRV) images and derived cloud top height, cloud top temperature, fog, snow detection and volcanic ash products. These images are available for a range of geographical areas.

    This dataset contains cloud top temperature product images from MSG satellites over the tropics. Imagery is available from March 2005 onwards at a frequency of 15 minutes (some images are hourly) and is at least 24 hours old.

    The geographic extent for images within this dataset is available via the linked documentation 'MSG satellite imagery product geographic area details'. Each MSG imagery product area can be referenced from the third and fourth characters of the image product name given in the filename. E.g. for EEAO11 the corresponding geographic details can be found under the entry for area code 'AO' (i.e. West Africa).
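    The filename convention can be sketched directly, using the example from the text:

```python
def msg_area_code(product_name):
    """Return the two-letter geographic area code embedded in an MSG
    image product name: its third and fourth characters."""
    return product_name[2:4]

code = msg_area_code("EEAO11")  # 'AO', i.e. West Africa
```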

  16. Indiana Current Imagery

    • indianamap.org
    • hub.arcgis.com
    • +1more
    Updated Jun 26, 2023
    Cite
    IndianaMap (2023). Indiana Current Imagery [Dataset]. https://www.indianamap.org/datasets/INMap::indiana-current-imagery/about
    Explore at:
    Dataset updated
    Jun 26, 2023
    Dataset authored and provided by
    IndianaMap
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Indiana,
    Description

    The State of Indiana Geographic Information Office (GIO) has published a state-wide Digital Aerial Imagery Catalog consisting of orthoimagery files from 2016 - 2019 and 2021 - 2022 in Cloud-Optimized GeoTIFF (COG) format on the AWS Registry of Open Data account. These COG formatted files support the dynamic imagery services available from the GIO ESRI-based imagery solution. Open Data on AWS is a repository of publicly available datasets for access from AWS resources. These datasets are owned and maintained by the Indiana GIO, and the images are licensed under Creative Commons 0 (CC0). A Cloud-Optimized GeoTIFF behaves as a GeoTIFF in all products; the optimization becomes apparent when incorporating the files into web services.

  17. NOAA Infrared Satellite Imagery

    • pacificgeoportal.com
    • cacgeoportal.com
    • +11more
    Updated Jun 26, 2019
    + more versions
    Cite
    NOAA GeoPlatform (2019). NOAA Infrared Satellite Imagery [Dataset]. https://www.pacificgeoportal.com/maps/4e681ff69e0e4b90866bb6a2e03db24a
    Explore at:
    Dataset updated
    Jun 26, 2019
    Dataset provided by
    National Oceanic and Atmospheric Administrationhttp://www.noaa.gov/
    Authors
    NOAA GeoPlatform
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Description

    Metadata: NOAA GOES-R Series Advanced Baseline Imager (ABI) Level 1b Radiances. More information about this imagery can be found here.

    This satellite imagery combines data from the NOAA GOES East and West satellites and the JMA Himawari satellite, providing full coverage of weather events for most of the world, from the west coast of Africa west to the east coast of India. The tile service updates to the most recent image every 10 minutes at 1.5 km per pixel resolution.

    The infrared (IR) band detects radiation that is emitted by the Earth’s surface, atmosphere and clouds, in the “infrared window” portion of the spectrum. The radiation has a wavelength near 10.3 micrometers, and the term “window” means that it passes through the atmosphere with relatively little absorption by gases such as water vapor. It is useful for estimating the emitting temperature of the Earth’s surface and cloud tops. A major advantage of the IR band is that it can sense energy at night, so this imagery is available 24 hours a day.

    The Advanced Baseline Imager (ABI) instrument samples the radiance of the Earth in sixteen spectral bands using several arrays of detectors in the instrument’s focal plane. Single reflective band ABI Level 1b Radiance Products (channels 1 - 6, with approximate center wavelengths 0.47, 0.64, 0.865, 1.378, 1.61 and 2.25 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for visible and near-infrared bands. Single emissive band ABI L1b Radiance Products (channels 7 - 16, with approximate center wavelengths 3.9, 6.185, 6.95, 7.34, 8.5, 9.61, 10.35, 11.2, 12.3 and 13.3 microns, respectively) are digital maps of outgoing radiance values at the top of the atmosphere for IR bands. Detector samples are compressed, packetized and down-linked to the ground station as Level 0 data for conversion to calibrated, geo-located pixels (Level 1b Radiance data). The detector samples are decompressed, radiometrically corrected, navigated and resampled onto an invariant output grid, referred to as the ABI fixed grid.

    Data source and merge technique provided by the Cooperative Institute for Meteorological Satellite Studies at the University of Wisconsin - Madison.

  18. 1939 Lake County Aerial - NW Quarter

    • data.amerigeoss.org
    • gimi9.com
    • +4more
    Updated Mar 23, 2022
    + more versions
    Cite
    United States (2022). 1939 Lake County Aerial - NW Quarter [Dataset]. https://data.amerigeoss.org/dataset/1939-lake-county-aerial-nw-quarter-0aa56
    Explore at:
    html, arcgis geoservices rest api
    Available download formats
    Dataset updated
    Mar 23, 2022
    Dataset provided by
    United States
    License

    https://www.arcgis.com/sharing/rest/content/items/89679671cfa64832ac2399a0ef52e414/datahttps://www.arcgis.com/sharing/rest/content/items/89679671cfa64832ac2399a0ef52e414/data

    Description

    This two foot pixel resolution black and white aerial photography was flown on various dates in July and August 1939. They were scanned in 2001, and georeferenced in 2002. This data should NOT be used at a scale larger than 1 inch = 400 feet. Due to the lack of sufficient camera calibration information, errors will increase towards the margin of each underlying photo, although this effect has been minimized by cropping individual photos to make this mosaic. Since these photos were scanned from paper prints, local distortions (from the media stretching and/or shrinking) may be present as well as pen marks and fading. Caution should be used in interpreting features in this photography with reference to current conditions. In particular, many roads and road intersections have been realigned in the more than 60 years since this photography was taken. This historic aerial photography was captured in digital form as the result of a cooperative project between the Illinois State Geological Survey and the Geographic Information Systems (GIS) and Mapping Division of the Lake County Department of Information Technology. It is part of a statewide program to preserve the oldest known extensive aerial photography for future generations. The original photography was performed by the U.S. Department of Agriculture as part of a nation-wide program for use in agricultural assessment. Since the original negatives became unstable and were destroyed by the National Archives in the 1980's, only paper prints remain. A set of paper prints representing the best available quality was assembled from the collections of several agencies.

  19. Maps of water depth derived from satellite images of selected reaches of the...

    • catalog.data.gov
    • data.usgs.gov
    Updated Sep 12, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Maps of water depth derived from satellite images of selected reaches of the American, Colorado, and Potomac Rivers acquired in 2020 and 2021 (ver. 2.0, September 2024) [Dataset]. https://catalog.data.gov/dataset/maps-of-water-depth-derived-from-satellite-images-of-selected-reaches-of-the-american-colo
    Explore at:
    Dataset updated
    Sep 12, 2024
    Dataset provided by
    United States Geological Surveyhttp://www.usgs.gov/
    Area covered
    Colorado, United States
    Description

    Information on water depth in river channels is important for a number of applications in water resource management but can be difficult to obtain via conventional field methods, particularly over large spatial extents and with the kind of frequency and regularity required to support monitoring programs. Remote sensing methods could provide a viable alternative means of mapping river bathymetry (i.e., water depth). The purpose of this study was to develop and test new, spectrally based techniques for estimating water depth from satellite image data. More specifically, a neural network-based temporal ensembling approach was evaluated in comparison to several other neural network depth retrieval (NNDR) algorithms. These methods are described in a manuscript titled "Neural Network-Based Temporal Ensembling of Water Depth Estimates Derived from SuperDove Images" and the purpose of this data release is to make available the depth maps produced using these techniques. The images used as input were acquired by the SuperDove cubesats comprising the PlanetScope constellation, but the original images cannot be redistributed due to licensing restrictions; the end products derived from these images are provided instead. The large number of cubesats in the PlanetScope constellation allows for frequent temporal coverage, and the neural network-based approach takes advantage of this high density time series by estimating depth via one of four NNDR methods described in the manuscript:

    1. Mean-spec: the images are averaged over time and the resulting mean image is used as input to the NNDR.
    2. Mean-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is averaged to obtain the final depth map.
    3. NN-depth: a separate NNDR is applied independently to each image in the time series and the resulting time series of depth estimates is then used as input to a second, ensembling neural network that essentially weights the depth estimates from the individual images so as to optimize the agreement between the image-derived depth estimates and the field measurements of water depth used for training; the output from the ensembling neural network serves as the final depth map.
    4. Optimal single image: a separate NNDR is applied independently to each image in the time series and only the image that yields the strongest agreement between the image-derived depth estimates and the field measurements of water depth used for training is used as the final depth map.

    MATLAB (Version 24.1, including the Deep Learning Toolbox) source code for performing this analysis is provided in the function NN_depth_ensembling.m, and the figure included on this landing page provides a flow chart illustrating the four neural network-based depth retrieval methods. As examples of the resulting models, MATLAB *.mat data files containing the best-performing neural network model for each site are provided below, along with a file that lists the PlanetScope image identifiers for the images used for each site. To develop and test this new NNDR approach, the method was applied to satellite images from three rivers across the U.S.: the American, Colorado, and Potomac. For each site, field measurements of water depth available through other data releases were used for training and validation. The depth maps produced via each of the four methods are provided as GeoTIFF files, with file name suffixes that indicate the method employed: X_mean-spec.tif, X_mean-depth.tif, X_NN-depth.tif, and X-single-image.tif, where X denotes the site name. The spatial resolution of the depth maps is 3 meters and the pixel values within each map are water depth estimates in units of meters.
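    The difference between the first two ensembling strategies (average-then-retrieve versus retrieve-then-average) can be sketched with a toy retrieval function. `retrieve_depth` below merely stands in for the trained neural network and is purely illustrative; because retrieval is nonlinear, the two orderings generally give different answers.

```python
import numpy as np

def retrieve_depth(image):
    """Stand-in for the per-image NNDR: a simple log-linear spectral
    depth retrieval. The real method uses a trained neural network."""
    return -np.log(np.clip(image, 1e-6, None))

def mean_spec(images):
    """Method 1: average the image time series, then retrieve depth."""
    return retrieve_depth(images.mean(axis=0))

def mean_depth(images):
    """Method 2: retrieve depth per image, then average the depths."""
    return np.mean([retrieve_depth(im) for im in images], axis=0)

# Two 1x1 single-band "images" of the same pixel at different times.
images = np.array([[[0.10]], [[0.40]]])
d_spec = mean_spec(images)    # -log(0.25)  ~ 1.39
d_depth = mean_depth(images)  # mean of -log(0.10) and -log(0.40) ~ 1.61
```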

  20. Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0...

    • catalogue.eatlas.org.au
    • researchdata.edu.au
    Updated Nov 21, 2021
    Australian Institute of Marine Science (AIMS) (2021). Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0 (AIMS) [Dataset]. https://catalogue.eatlas.org.au/geonetwork/srv/api/records/2932dc63-9c9b-465f-80bf-09073aacaf1c
    Explore at:
    Dataset updated
    Nov 21, 2021
    Dataset provided by
    Australian Institute Of Marine Sciencehttp://www.aims.gov.au/
    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Area covered
    Coral Sea
    Description

    This dataset contains composite satellite images for the Coral Sea region based on 10 m resolution Sentinel 2 imagery from 2015 – 2021. This image collection is intended to allow mapping of the reef and island features of the Coral Sea. This is a draft version of the dataset prepared from approximately 60% of the available Sentinel 2 images. An improved version of this dataset has since been released at https://doi.org/10.26274/NH77-ZW79.

    This collection contains composite imagery for 31 Sentinel 2 tiles in the Coral Sea. For each tile there are 5 different colour and contrast enhancement styles intended to highlight different features:
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This technique doesn't work where the water is less clear, as ultraviolet light is scattered easily.
    - DeepMarine - Bands: B2 (blue), B3 (green), B4 (red): A contrast enhanced version of the true colour imagery, focusing on making the deeper features easier to see. Shallow features are over exposed due to the increased contrast.
    - ReefTop - Bands: B4 (red): This imagery is contrast enhanced to create a near-binary (black and white) mask of reef tops, delineating areas that are shallower or deeper than approximately 4 - 5 m. This mask is intended to assist in the creation of a GIS layer equivalent to the 'GBR Dry Reefs' dataset. The depth mapping exploits the limited water penetration of the red channel: in clear water the red channel can only see features to approximately 6 m regardless of the substrate type.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands penetrate the water progressively less: B5 penetrates approximately 3 - 5 m, B8 approximately 0.5 m, and B11 less than 0.1 m. Features less than a couple of metres deep appear dark blue; dry areas are white.
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for interpreting what shallow features are, mapping the vegetation on cays, and identifying beach rock.
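    Each style is essentially a band selection plus a contrast stretch. As an illustration only (not the actual processing code), a percentile-based stretch to 8-bit can be written in numpy; the percentile parameters here are hypothetical:

    ```python
    import numpy as np

    def stretch_to_8bit(band, low_pct=2, high_pct=50):
        """Clip a reflectance band between two percentiles and rescale to 0-255.
        Stretching toward the dark end of the histogram emphasises deep marine
        detail; bright shallow features clip to white, as described for the
        DeepMarine style."""
        lo, hi = np.percentile(band, [low_pct, high_pct])
        scaled = np.clip((band - lo) / (hi - lo), 0, 1)
        return (scaled * 255).astype(np.uint8)
    ```

    Pulling the upper percentile well below 100% is what sacrifices the bright shallow areas in exchange for visible detail in the dark, deep regions.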

    For most Sentinel tiles there are two versions of the DeepFalse and DeepMarine imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery so that mapped features could be checked against two images. Typically the R2 imagery will have more artefacts from clouds.

    The satellite imagery was processed in tiles (approximately 100 x 100 km) to keep each final image small enough to manage. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs.

    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:

    1. For each Sentinel 2 tile, the set of Sentinel images from 2015 – 2021 was reviewed manually. In some tiles the cloud cover threshold was raised to gather more images, particularly if there were fewer than 20 images available. The Google Earth Engine image IDs of the best images were recorded. These were the images with the clearest water, lowest waves, lowest cloud, and lowest sun glint.
    2. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
    3. The contrast of the images was enhanced to create a series of products for different uses. The true colour image retained the full range of tones visible, so that bright sand cays still retained some detail. The marine enhanced version stretched the blue, green and red channels so that they focused on the deeper, darker marine features. This stretching was done to ensure that, when converted to 8-bit colour imagery, all the dark detail in the deeper areas remained visible. This contrast enhancement caused bright areas of the imagery to clip, leading to loss of detail in shallow reef areas and off-looking colours in land areas. A reef top estimate was produced from the red channel (B4), where the contrast was stretched so that the imagery contains an almost binary mask. The threshold was chosen to approximate the 5 m depth contour for the clear waters of the Coral Sea. Lastly a false colour image was produced to allow mapping of shallow water features such as cays and islands. This image was produced from B5 (red edge), B8 (near infrared) and B11 (short wave infrared), where blue represents depths from approximately 0.5 – 5 m, green represents areas with 0 – 0.5 m depth, and brown and white correspond to dry land.
    4. The various contrast enhanced composite images were exported from Google Earth Engine (default 32-bit GeoTIFF) and reprocessed into smaller LZW-compressed 8-bit GeoTIFF images using GDAL.
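    The core compositing step (a per-pixel median over the cloud-masked image stack) can be sketched with numpy standing in for the Earth Engine implementation; this is an illustration under that assumption, not the original code:

    ```python
    import numpy as np

    def median_composite(images, cloud_masks):
        """Median-composite a stack of single-band images, ignoring masked pixels.

        images: (time, height, width) array of one band
        cloud_masks: boolean array of the same shape, True where cloudy
        """
        stack = np.where(cloud_masks, np.nan, images)  # drop cloud-masked pixels
        return np.nanmedian(stack, axis=0)             # per-pixel median over time
    ```

    Because the median ignores outliers, any residual unmasked cloud or glint in a minority of images tends to drop out of the composite.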

    Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm.

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4 pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. This resolution was chosen as a trade-off between the coarseness of the mask versus the processing time for these operations. Even at 4-pixel filter resolution these operations still accounted for over 90% of the total processing time, with each image taking approximately 10 min to compute on the Google Earth Engine.
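    The directional shadow projection described above can be sketched as shifting the boolean cloud mask by an offset derived from the solar azimuth and the projection distance. This is a simplification of the Earth Engine directional projection, with hypothetical parameter values:

    ```python
    import math
    import numpy as np

    def project_shadow(cloud_mask, sun_azimuth_deg, distance_m, pixel_size_m=10):
        """Shift a boolean cloud mask away from the sun to cover cloud shadows."""
        # Offset in pixels, pointing opposite the sun direction.
        # Azimuth is clockwise from north; image rows increase southward.
        angle = math.radians(sun_azimuth_deg)
        dx = int(round(-math.sin(angle) * distance_m / pixel_size_m))
        dy = int(round(math.cos(angle) * distance_m / pixel_size_m))
        shadow = np.zeros_like(cloud_mask)
        h, w = cloud_mask.shape
        # Source/destination slices for the integer shift, clipped at the edges.
        src_y = slice(max(0, -dy), min(h, h - dy))
        dst_y = slice(max(0, dy), min(h, h + dy))
        src_x = slice(max(0, -dx), min(w, w - dx))
        dst_x = slice(max(0, dx), min(w, w + dx))
        shadow[dst_y, dst_x] = cloud_mask[src_y, src_x]
        return cloud_mask | shadow  # final mask covers cloud plus its shadow
    ```

    In the actual processing this projection is applied twice: over 400 m for the low (40% probability) cloud mask and over 1.5 km for the high (80% probability) cloud mask, each followed by a buffer.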

    Sun glint removal and atmospheric correction

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m, and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the ultraviolet and visible channels (B1, B2, B3 and B4), and so the sun glint in these channels can be removed by subtracting B8 from them.

    This simple sun glint correction fails in very shallow and land areas. On land areas B8 is very bright and thus subtracting it from the other channels results in black land. In shallow areas (< 0.5 m) the B8 channel detects the substrate, resulting in too much sun glint correction. To resolve these issues the sun glint correction was adjusted by transitioning to B11 for shallow areas as it penetrates the water even less than B8. We don't use B11 everywhere because it is half the resolution of B8.
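    The correction described above amounts to a per-pixel subtraction that switches from B8 to B11 where the water is shallow enough for B8 to see the bottom. A simplified numpy sketch, with a hypothetical brightness threshold for the shallow/land transition:

    ```python
    import numpy as np

    def deglint(visible, b8, b11, shallow_threshold=0.1):
        """Subtract a surface-reflection estimate from a visible band.

        In open water B8 sees only the water surface, so it estimates the glint.
        Where B8 is bright because it sees the substrate (shallow water or land),
        fall back to B11, which penetrates the water even less.
        """
        glint = np.where(b8 > shallow_threshold, b11, b8)
        return np.clip(visible - glint, 0, None)
    ```

    The trade-off noted in the text applies here too: B11 is only used where needed because it has half the spatial resolution of B8.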

    Land areas need their tonal levels to be adjusted to match the water areas after sun glint correction. Ideally this would be achieved using an atmospheric correction that compensates for the contrast loss due to haze in the atmosphere. Complex models for atmospheric correction involve considering the elevation of the surface (higher areas have less atmosphere to pass through) and the weather conditions. Since this dataset is focused on coral reef areas, elevation compensation is unnecessary due to the very low and flat land features being imaged. Additionally, the focus of the dataset is on marine features, so only a basic atmospheric correction is needed. Land areas (as determined by very bright B8 areas) were assigned a fixed smaller correction factor to approximate atmospheric correction. This fixed atmospheric correction was determined iteratively so that land areas matched the tonal value of shallow and water areas.

    Image selection

    Available Sentinel 2 images with a cloud cover of less than 0.5% were manually reviewed using a Google Earth Engine app, 01-select-sentinel2-images.js. Where there were few images available (fewer than 30 images), the cloud cover threshold was raised to increase the set of images that were reviewed.

    Images were excluded from the composites due to two main factors: sun glint and fine scattered clouds. The images were excluded if there was any significant uncorrected sun glint in the image, i.e. the brightness of the sun glint exceeded the sun glint correction. Fine

