67 datasets found
  1. Satellite images and road-reference data for AI-based road mapping in...

    • dataone.org
    • data.niaid.nih.gov
    • +1 more
    Updated Jul 27, 2025
    Cite
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance (2025). Satellite images and road-reference data for AI-based road mapping in Equatorial Asia [Dataset]. http://doi.org/10.5061/dryad.bvq83bkg7
    Dataset updated
    Jul 27, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Sean Sloan; Raiyan Talkhani; Tao Huang; Jayden Engert; William Laurance
    Time period covered
    Jan 1, 2023
    Area covered
    Asia
    Description

    1. INTRODUCTION

    For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image ‘tiles’ and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

    2. FURTHER INFORMATION

    The following is a summary of our data. Fuller details on these data and their underlying methodology are given in the corresponding article, under consideration by the journal Remote Sensing as of September 2023: Sloan, S., Talkhani, R.R., Huang, T., Engert, J., Laurance, W.F. (2023) Mapping remote roads using artificial intelligence and satellite imagery. Under consideration by Remote Sensing. Correspondence regarding these data can be directed to: Sean Sloan, Department of Geography, Vancouver Island University, Nanaimo, B.C., Canada, sean.sloan@viu.ca; ...

    3. INPUT 200 SATELLITE IMAGES

    The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively ‘screenshots’ (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1 m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limi...

  2. Data from: Google Earth Engine (GEE)

    • data.amerigeoss.org
    • sdgs.amerigeoss.org
    • +6 more
    esri rest, html
    Updated Nov 28, 2018
    + more versions
    Cite
    AmeriGEO ArcGIS (2018). Google Earth Engine (GEE) [Dataset]. https://data.amerigeoss.org/de/dataset/google-earth-engine-gee1
    Available download formats: esri rest, html
    Dataset updated
    Nov 28, 2018
    Dataset provided by
    AmeriGEO ArcGIS
    Description

    Meet Earth Engine

    Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

    Satellite imagery + your algorithms + causes you care about = real-world applications.
    GLOBAL-SCALE INSIGHT

    Explore our interactive timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.

    READY-TO-USE DATASETS

    The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.

    SIMPLE, YET POWERFUL API

    The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google’s cloud for your own geospatial analysis.

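    As a minimal illustration of this API (a sketch only: it assumes the earthengine-api Python package is installed and authenticated, and the region and dates are invented for the example):

    ```python
    # Build a cloud-filtered Sentinel-2 median composite over a small invented
    # region and print the composite's band names.
    import ee

    ee.Initialize()  # requires prior `earthengine authenticate`

    region = ee.Geometry.Rectangle([146.0, -18.0, 146.5, -17.5])  # hypothetical AOI
    composite = (
        ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(region)
        .filterDate('2021-01-01', '2021-12-31')
        .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 5))
        .median()
    )
    print(composite.bandNames().getInfo())
    ```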
    "Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet, Google Earth Engine is a great blessing!" (Dr. Andrew Steer, President and CEO of the World Resources Institute)
    CONVENIENT TOOLS

    Use our web-based code editor for fast, interactive algorithm development with instant access to petabytes of data.

    SCIENTIFIC AND HUMANITARIAN IMPACT

    Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.


  3. NZ 10m Satellite Imagery (2021-2022)

    • data.linz.govt.nz
    • geodata.nz
    dwg with geojpeg +8
    Updated Jul 1, 2022
    + more versions
    Cite
    Land Information New Zealand (2022). NZ 10m Satellite Imagery (2021-2022) [Dataset]. https://data.linz.govt.nz/layer/109401-nz-10m-satellite-imagery-2021-2022/
    Available download formats: kml, pdf, geojpeg, jpeg2000, geotiff, jpeg2000 lossless, erdas imagine, kea, dwg with geojpeg
    Dataset updated
    Jul 1, 2022
    Dataset authored and provided by
    Land Information New Zealand (https://www.linz.govt.nz/)
    License

    https://data.linz.govt.nz/license/attribution-4-0-international/

    Description

    This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.

    The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2021 and April 2022.

    Technical specifications:

    • 450 x ortho-rectified RGB GeoTIFF images in NZTM projection, tiled into the LINZ Standard 1:50,000 tile layout
    • Satellite sensors: ESA Sentinel-2A and Sentinel-2B
    • Acquisition dates: September 2021 - April 2022
    • Spectral resolution: R, G, B
    • Spatial resolution: 10 meters
    • Radiometric resolution: 8-bits (downsampled from 12-bits)

    This is a visual product only. The data has been downsampled from 12-bits to 8-bits, and the original values of the images have been modified for visualisation purposes.

  4. A global snapshot of the spatial and temporal distribution of very high...

    • b2find.eudat.eu
    Updated Oct 20, 2023
    + more versions
    Cite
    (2023). A global snapshot of the spatial and temporal distribution of very high resolution satellite imagery in Google Earth and Bing Maps as of 11th of January, 2017 [Dataset]. https://b2find.eudat.eu/dataset/017c0447-4356-5fec-8c32-da26aa4d9385
    Dataset updated
    Oct 20, 2023
    Description

    Very high resolution (VHR) satellite imagery from Google Earth and Microsoft Bing Maps is increasingly being used in a variety of applications from computer sciences to arts and humanities. In the field of remote sensing, one use of this imagery is to create reference data sets through visual interpretation, e.g., to complement existing training data or to aid in the validation of land-cover products. Through new applications such as Collect Earth, this imagery is also being used for monitoring purposes in the form of statistical surveys obtained through visual interpretation. However, little is known about where VHR satellite imagery exists globally or the dates of the imagery. Here we present a global overview of the spatial and temporal distribution of VHR satellite imagery in Google Earth and Microsoft Bing Maps. The results show an uneven availability globally, with biases in certain areas such as the USA, Europe and India, and with clear discontinuities at political borders. We also show that the availability of VHR imagery is currently not adequate for monitoring protected areas and deforestation, but is better suited for monitoring changes in cropland or urban areas using visual interpretation.

    Notes: (1) Information on growing and non-growing seasons has been derived from the remote sensing product https://lpdaac.usgs.gov/dataset_discovery/measures/measures_products_table/vipphen_ndvi_v004. (2) Google provides full global coverage by images, in contrast to Bing. However, in many areas these are Landsat-based images (from 1984 up to now). For a more objective comparison with Bing imagery, we have excluded those areas from the analysis.

    Supplement to: Lesiv, Myroslava; See, Linda; Laso-Bayas, Juan-Carlos; Sturn, Tobias; Schepaschenko, Dmitry; Karner, Mathias; Moorthy, Inian; McCallum, Ian; Fritz, Steffen (2018): Characterizing the Spatial and Temporal Availability of Very High Resolution Satellite Imagery in Google Earth and Microsoft Bing Maps as a Source of Reference Data. Land, 7(4), 118.

  5. Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and...

    • researchdata.edu.au
    Updated Mar 7, 2025
    Cite
    Hammerton, Marc; Lawrey, Eric (2025). Coral Sea features satellite imagery and raw depth contours (Sentinel 2 and Landsat 8) 2015 – 2021 (AIMS) [Dataset]. http://doi.org/10.26274/NH77-ZW79
    Dataset updated
    Mar 7, 2025
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Description

    This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.

    This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths, based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the Satellite Derived Bathymetry algorithm in the handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.

    The Sentinel 2 and Landsat 8 imagery was prepared using the Google Earth Engine, followed by post processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).

    This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
    - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for interpreting what shallow features are and for mapping the vegetation on cays.
    - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This imagery has a high level of contrast enhancement applied, and so it appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
    - Shallow - Bands: B5 (red edge), B8 (near infrared), B11 (short wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands progressively penetrate the water less. B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 < 0.1 m. Features less than a couple of metres deep appear dark blue, dry areas are white. This imagery is intended to help identify coral cay boundaries.

    For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.

    All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours.
    - Depth5m - This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
    - Depth10m - This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
    - Depth20m - This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).

    For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery. This allows any mapped features to be checked against two images. Typically the R2 imagery will have more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.

    The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic, as keeping them separate allowed better individual image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through the Google Earth Engine.

    # Methods:

    The Sentinel 2 imagery was created using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available, the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images where there were no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets that would be combined into composite images. The best were added to the R1 composite, and the next best images into the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the image, allowing deeper features to be observed. Adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
    2. Sun glint was removed from the imagery using estimates of the glint derived from two of the infrared bands (described in detail in the section on sun glint removal and atmospheric correction).
    3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
    4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so this does not correct for finer spatial brightness variations.
    5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far red and infrared bands for assessing shallow features, such as cays and islands.
    6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL (see the sketch below). This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole region for each depth.
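    A hedged sketch of the GDAL optimisation in step 6, using the GDAL Python bindings; the file names are illustrative, and the project's actual script lives in the linked GitHub repository:

    ```python
    # Step 6 sketch: rewrite the Earth Engine export as an internally tiled,
    # LZW-compressed GeoTiff, then add overview pyramids. Names are illustrative.
    from osgeo import gdal

    gdal.Translate('tile_optimised.tif', 'tile_gee_export.tif',
                   creationOptions=['TILED=YES', 'COMPRESS=LZW'])
    ds = gdal.Open('tile_optimised.tif', gdal.GA_Update)
    ds.BuildOverviews('AVERAGE', [2, 4, 8, 16])
    ds = None  # close the dataset to flush overviews to disk
    ```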

    ## Cloud Masking

    Prior to combining the best images each image was processed to mask out clouds and their shadows.

    The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.

    A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask.

    A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.

    The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.

    The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking, and slightly better than the raw COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it also masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
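    The following is an illustrative reconstruction of this two-part mask using the Earth Engine Python API, following the widely used s2cloudless shadow-projection pattern. The thresholds and distances are those stated above, but this is not the dataset's published code (see the linked GitHub repository for that):

    ```python
    # Reconstruction (not the published code) of the low/high cloud and shadow
    # mask described above, using the Earth Engine Python API.
    import ee

    ee.Initialize()

    def cloud_and_shadow_mask(img, prob):
        """img: a Sentinel-2 ee.Image; prob: the matching S2_CLOUD_PROBABILITY
        'probability' band. Returns a mask image where 1 = clear pixel."""
        # Shadows fall away from the sun, so project along the anti-solar azimuth.
        shadow_az = ee.Number(90).subtract(
            ee.Number(img.get('MEAN_SOLAR_AZIMUTH_ANGLE')))

        def shadow_of(cloud, dist_m):
            # Distance is in pixels at the 100 m scale used for this reprojection.
            return (cloud.directionalDistanceTransform(shadow_az, dist_m // 100)
                    .reproject(crs=img.select('B2').projection(), scale=100)
                    .select('distance').mask())

        # Low clouds: 40% probability, 400 m shadow projection, 150 m buffer.
        low = prob.gt(40)
        low_mask = low.Or(shadow_of(low, 400)).focalMax(150, 'circle', 'meters')

        # High clouds: 80% probability, 300 m erosion then dilation to drop
        # small clouds, 1.5 km shadow projection, 300 m buffer.
        high = (prob.gt(80)
                .focalMin(300, 'circle', 'meters')
                .focalMax(300, 'circle', 'meters'))
        high_mask = high.Or(shadow_of(high, 1500)).focalMax(300, 'circle', 'meters')

        return low_mask.Or(high_mask).Not()
    ```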

    Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed at approximately a 4-pixel resolution. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery.

  6. Marine satellite image test collections (AIMS)

    • researchdata.edu.au
    • catalogue.eatlas.org.au
    Updated Sep 11, 2024
    Cite
    Hammerton, Marc; Lawrey, Eric, Dr (2024). Marine satellite image test collections (AIMS) [Dataset]. http://doi.org/10.26274/ZQ26-A956
    Dataset updated
    Sep 11, 2024
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Hammerton, Marc; Lawrey, Eric, Dr
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 1, 2016 - Sep 20, 2021
    Description

    This dataset consists of collections of satellite image composites (Sentinel 2 and Landsat 8) that are created from manually curated image dates for a range of projects. These images are typically prepared for subsequent analysis or testing of analysis algorithms as part of other projects. This dataset acts as a repository of reproducible test sets of images processed from Google Earth Engine using a standardised workflow.

    Details of the algorithms used to produce the imagery are described in the GEE code and code repository available on GitHub (https://github.com/eatlas/World_AIMS_Marine-satellite-imagery).


    Project test image sets:

    As new projects are added to this dataset, their details will be described here:

    - NESP MaC 2.3 Benthic reflection estimation (projects/CS_NESP-MaC-2-3_AIMS_Benth-reflect):
    This collection consists of six Sentinel 2 image composites in the Coral Sea and GBR for the purpose of testing a method of determining benthic reflectance of deep lagoonal areas of coral atolls. These image composites are in GeoTiff format, using 16-bit encoding and LZW compression. These images do not have internal image pyramids to save on space.
    [Status: final and available for download]

    - NESP MaC 2.3 Oceanic Vegetation (projects/CS_NESP-MaC-2-3_AIMS_Oceanic-veg):
    This project is focused on mapping vegetation on the bottom of coral atolls in the Coral Sea. This collection consists of additional images of Ashmore Reef. The lagoonal area of Ashmore has low visibility due to coloured dissolved organic matter, making it very hard to distinguish areas that are covered in vegetation. These images were manually curated to best show the vegetation. While these are the best images in the Sentinel 2 series up to 2023, they are still not very good. Probably 80 - 90% of the lagoonal benthos is not visible.
    [Status: final and available for download]

    - NESP MaC 3.17 Australian reef mapping (projects/AU_NESP-MaC-3-17_AIMS_Reef-mapping):
    This collection of test images was prepared to determine if creating a composite from manually curated image dates (corresponding to images with the clearest water) would produce a better composite than a fully automated composite based on cloud filtering. The automated composites are described in https://doi.org/10.26274/HD2Z-KM55. This test set also includes composites from low tide imagery. The images in this collection are not yet available for download as the collection of images that will be used in the analysis has not been finalised.
    [Status: under development, code is available, but not rendered images]

    - Capricorn Regional Map (projects/CapBunk_AIMS_Regional-map): This collection was developed for making a set of maps for the region to facilitate participatory mapping and reef restoration field work planning.
    [Status: final and available for download]

    - Default (project/default): This collection of manually selected scenes comprises those prepared for the Coral Sea and global areas to test the algorithms used in developing the original Google Earth Engine workflow. This can be a good starting point for new test sets. Note that the images described in the default project are not rendered and made available for download, to save on storage space.
    [Status: for reference, code is available, but not rendered images]


    Filename conventions:

    The images in this dataset are all named using a naming convention. An example file name is Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif. The name is made up of the following parts (see the parsing sketch after this list):
    - Dataset name (Wld_AIMS_Marine-sat-img), short for World, Australian Institute of Marine Science, Marine Satellite Imagery.
    - Satellite source: L8 for Landsat 8 or S2 for Sentinel 2.
    - Additional information or purpose: NoSGC - no sun glint correction; R1 - best reference imagery set; R2 - second-best reference imagery set.
    - Colour and contrast enhancement applied (DeepFalse, TrueColour, Shallow, Depth5m, Depth10m, Depth20m, Raw-B1-B4).
    - Image tile (examples: Sentinel 2 54LZP, Landsat 8 091086).
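    A hypothetical helper (not part of the dataset) that splits one of these file names into its documented parts; the field names are invented for illustration:

    ```python
    # Hypothetical parser for the naming convention described above. The split
    # assumes the full seven-token form shown in the example file name; files
    # that omit the optional purpose token would need a more defensive parser.
    def parse_image_name(filename: str) -> dict:
        stem = filename.rsplit('.', 1)[0]      # drop the .tif extension
        parts = stem.split('_')                # underscore-separated tokens
        return {
            'dataset': '_'.join(parts[:3]),    # Wld_AIMS_Marine-sat-img
            'satellite': parts[3],             # L8 or S2
            'purpose': parts[4],               # e.g. NoSGC, R1, R2
            'style': parts[5],                 # e.g. DeepFalse, Raw-B1-B4
            'tile': parts[6],                  # e.g. 54LZP or 091086
        }

    print(parse_image_name('Wld_AIMS_Marine-sat-img_S2_NoSGC_Raw-B1-B4_54LZP.tif'))
    ```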


    Limitations:

    Only simple atmospheric correction is applied to land areas and as a result the imagery only approximates the bottom of atmosphere reflectance.

    For the Sentinel 2 imagery, the sun glint correction algorithm transitions between different correction levels from deep water (B8) to shallow water (B11), with a fixed atmospheric correction for land (bright B8 areas). Slight errors in the tuning of these transitions can result in unnatural tonal steps between these areas, particularly in very shallow areas.

    For the Landsat 8 imagery, land areas appear black because the sun glint correction doesn't separately mask out the land. The code for the Landsat 8 imagery is less developed than for the Sentinel 2 imagery.

    The depth contours are estimated using satellite derived bathymetry that is subject to errors caused by cloud artefacts, substrate darkness, water clarity, calibration issues and uncorrected tides. They were tuned in the clear waters of the Coral Sea. The depth contours in this dataset are RAW and contain many false positives due to clouds. They should not be used without additional dataset cleanup.



    Change log:

    As changes are made to the dataset, or additional image collections are added to the dataset then those changes will be recorded here.

    2nd Edition, 2024-06-22: CapBunk_AIMS_Regional-map
    1st Edition, 2024-03-18: Initial publication of the dataset, with CS_NESP-MaC-2-3_AIMS_Benth-reflect, CS_NESP-MaC-2-3_AIMS_Oceanic-veg and code for AU_NESP-MaC-3-17_AIMS_Reef-mapping and Default projects.


    Data Format:

    GeoTiff images with LZW compression. Most images do not have internal image pyramids, to save on storage space. This makes rendering these images very slow in a desktop GIS. Pyramids should be added to improve performance (see the sketch below).
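    A minimal sketch for adding pyramids with the GDAL Python bindings (assumed installed); the file name is illustrative, following the convention above:

    ```python
    # Add overview pyramids to a downloaded GeoTiff so it renders quickly in a
    # desktop GIS. Opening in update mode writes the overviews into the file.
    from osgeo import gdal

    ds = gdal.Open('Wld_AIMS_Marine-sat-img_S2_R1_DeepFalse_54LZP.tif',
                   gdal.GA_Update)
    ds.BuildOverviews('AVERAGE', [2, 4, 8, 16, 32])
    ds = None  # close to flush
    ```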

    Data Location:

    This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\Wld-AIMS-Marine-sat-img

  7. Mobile Mapping Market Report

    • promarketreports.com
    doc, pdf, ppt
    Updated Jan 21, 2025
    Cite
    Pro Market Reports (2025). Mobile Mapping Market Report [Dataset]. https://www.promarketreports.com/reports/mobile-mapping-market-8779
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jan 21, 2025
    Dataset authored and provided by
    Pro Market Reports
    License

    https://www.promarketreports.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    Components:
    - Hardware: mobile mapping systems, sensors, and other equipment
    - Software: software for data collection, processing, and visualization
    - Services: data collection, processing, and analysis services

    Solutions:
    - Location-based: provides location-based information and services
    - Indoor mapping: creates maps of indoor spaces
    - Asset management: helps manage assets and track their location
    - 3D mapping: creates 3D models of buildings and infrastructure

    Applications:
    - Land surveys: surveying land and creating maps
    - Aerial surveys: surveying areas from the air
    - Real estate & construction: planning and designing buildings and infrastructure
    - IT & telecom: network planning and management

    Recent developments include:
    - NavVis, one of the pioneers in wearable mobile mapping technology, revealed the NavVis VLX 3, its newest generation of wearable technology. As the name suggests, this is the third version of its wearable VLX system; the NavVis VLX 2 was released in July 2021, over two years ago. In its news release, NavVis emphasises the NavVis VLX 3's improved point-cloud accuracy, highlighting the two brand-new 32-layer lidars that have been "meticulously designed and crafted" to minimise noise and drift in point clouds while delivering "high detail at range."
    - According to the North American Mach9 Software Platform, mobile lidar will produce 2D and 3D maps 30 times faster than current systems by 2023. Even though this is Mach9's first product launch, the business has already begun laying the groundwork for future expansion by updating its website, adding key engineering and sales professionals, relocating to new headquarters in Pittsburgh's Bloomfield area, and forging ties in Silicon Valley.
    - Google has unveiled a range of new search capabilities for 2022 spanning Google Search, Google Lens, Shopping, and Maps, in order to make search more accessible to more users in more useful ways. These enhancements apply to Google Maps, Google Shopping, Google Lens, and Multisearch.
    - A multi-year partnership to supply Velodyne Lidar, Inc.'s lidar sensors to GreenValley International for handheld, mobile, and unmanned aerial vehicle (UAV) 3D mapping solutions, especially in GPS-denied situations, was announced in 2022. GreenValley is already receiving sensors from Velodyne.
    - The acquisition of UK-based GeoSLAM, a leading provider of mobile scanning solutions with exclusive high-productivity simultaneous localization and mapping (SLAM) programmes to create 3D models for use in Digital Twin applications, was expected to close in 2022, completed by FARO® Technologies, Inc., a global leader in 4D digital reality solutions.
    - November 2022: Topcon donated to TU Dublin as part of its investment in the future of construction. Students' learning experiences will be improved by instruction in the most cutting-edge digital building techniques at Ireland's first technological university.
    - October 2022: Javad GNSS Inc. released numerous cutting-edge GNSS solutions for geospatial applications. The TRIUMPH-1M Plus and T3-NR smart antennas, which employ upgraded Wi-Fi, Bluetooth, UHF, and power management modules and integrate the most recent satellite tracking technology into the geospatial portfolio, are two examples of important items.

    Key drivers for this market are: improvements in GPS, LiDAR, and camera technologies have significantly enhanced the accuracy and efficiency of mobile mapping systems.
    Potential restraints include: the initial investment required for mobile mapping equipment, including sensors and software, can be a barrier for small and medium-sized businesses.
    Notable trends are: mobile mapping systems are increasingly integrated with cloud platforms and AI technologies to process and analyze large datasets, enabling more intelligent mapping and predictive analytics.

  8. Digital Map Market Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Jun 19, 2025
    + more versions
    Cite
    Market Report Analytics (2025). Digital Map Market Report [Dataset]. https://www.marketreportanalytics.com/reports/digital-map-market-88590
    Available download formats: doc, ppt, pdf
    Dataset updated
    Jun 19, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The digital map market, currently valued at $25.55 billion in 2025, is experiencing robust growth, projected to expand at a compound annual growth rate (CAGR) of 13.39% from 2025 to 2033. This expansion is fueled by several key factors. The increasing adoption of location-based services (LBS) across various sectors, including transportation, logistics, and e-commerce, is a primary driver. Furthermore, the proliferation of smartphones and connected devices, coupled with advancements in GPS technology and mapping software, continues to fuel market growth. The rising demand for high-resolution, real-time mapping data for autonomous vehicles and smart city initiatives also contributes significantly to market expansion. Competition among established players like Google, TomTom, and ESRI, alongside emerging innovative companies, is fostering continuous improvement in map accuracy, functionality, and data accessibility. This competitive landscape drives innovation and lowers costs, making digital maps increasingly accessible to a broader range of users and applications.

    However, market growth is not without its challenges. Data security and privacy concerns surrounding the collection and use of location data represent a significant restraint. Ensuring data accuracy and maintaining up-to-date map information in rapidly changing environments also pose operational hurdles. Regulatory compliance with differing data privacy laws across various jurisdictions adds another layer of complexity. Despite these challenges, the long-term outlook for the digital map market remains positive, driven by the relentless integration of location intelligence into nearly every facet of modern life, from personal navigation to complex enterprise logistics solutions. The market's segmentation (although not explicitly provided) likely includes various map types (e.g., road maps, satellite imagery, 3D maps), pricing models (subscriptions, one-time purchases), and industry verticals served. This diversified market structure further underscores its resilience and potential for sustained growth.

    Recent developments include:
    - December 2022: The Linux Foundation partnered with some of the biggest technology companies in the world to build interoperable and open map data, in what is an apparent move t... The Overture Maps Foundation, as the new effort is called, is officially hosted by the Linux Foundation. The ultimate aim of the Overture Maps Foundation is to power new map products through openly available datasets that can be used and reused across applications and businesses, with each member contributing their data and resources to the mix.
    - July 27, 2022: Google announced the launch of its Street View experience in India in collaboration with Genesys International, an advanced mapping solutions company, and Tech Mahindra, a provider of digital transformation, consulting, and business re-engineering solutions and services. Google, Tech Mahindra, and Genesys International also plan to extend this to more than 50 cities by the end of 2022.

    Key drivers for this market are: growth in applications for advanced navigation systems in the automotive industry; surge in demand for geographic information systems (GIS); increased adoption of connected devices and the Internet.
    Potential restraints include: growth in applications for advanced navigation systems in the automotive industry; surge in demand for geographic information systems (GIS); increased adoption of connected devices and the Internet.
    Notable trends are: surge in demand for GIS and GNSS to influence the adoption of digital map technology.

  9. QuickBird full archive

    • earth.esa.int
    • eocat.esa.int
    • +2 more
    Updated Jun 21, 2016
    Cite
    European Space Agency (2016). QuickBird full archive [Dataset]. https://earth.esa.int/eogateway/catalog/quickbird-full-archive
    Dataset updated
    Jun 21, 2016
    Dataset authored and provided by
    European Space Agency (http://www.esa.int/)
    License

    https://earth.esa.int/eogateway/documents/20142/1560778/ESA-Third-Party-Missions-Terms-and-Conditions.pdf

    Time period covered
    Nov 1, 2001 - Mar 31, 2015
    Description

    QuickBird high resolution optical products are available as part of the Maxar Standard Satellite Imagery products from the QuickBird, WorldView-1/-2/-3/-4, and GeoEye-1 satellites. All details about the data provision, data access conditions and quota assignment procedure are described in the Terms of Applicability available in the Resources section. In particular, QuickBird offers archive panchromatic products up to 0.60 m GSD resolution and 4-band multispectral products up to 2.4 m GSD resolution.

    Band combination: Panchromatic and 4-bands. Data processing levels and resolutions:
    - Standard (2A) / View Ready Standard (OR2A): 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm
    - View Ready Stereo: 30 cm, 40 cm, 50/60 cm
    - Map-Ready (Ortho) 1:12,000 Orthorectified: 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm

    4-Bands being an option from:
    - 4-Band Multispectral (BLUE, GREEN, RED, NIR1)
    - 4-Band Pan-sharpened (BLUE, GREEN, RED, NIR1)
    - 4-Band Bundle (PAN, BLUE, GREEN, RED, NIR1)
    - 3-Band Natural Colour (pan-sharpened BLUE, GREEN, RED)
    - 3-Band Coloured Infrared (pan-sharpened GREEN, RED, NIR1)
    - Natural Colour / Coloured Infrared (3-Band pan-sharpened)

    Native 30 cm and 50/60 cm resolution products are processed with Maxar HD Technology to generate the 15 cm HD and 30 cm HD products respectively: the initial spatial resolution (GSD) is unchanged, but the HD technique intelligently increases the number of pixels and improves the visual clarity, achieving aesthetically refined imagery with precise edges and well reconstructed details.

  10. GEOG 3520 Project Map 1

    • sdgs.amerigeoss.org
    Updated Dec 4, 2020
    Cite
    mlinderm@uiowa.edu_uiowa (2020). GEOG 3520 Project Map 1 [Dataset]. https://sdgs.amerigeoss.org/maps/e2b11b83cded45e9b9ccf17895bcf5c0
    Dataset updated
    Dec 4, 2020
    Dataset authored and provided by
    mlinderm@uiowa.edu_uiowa
    Description

    Meet Earth Engine: Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

  11. Inventory of rock avalanches in the central Chugach Mountains, northern...

    • catalog.data.gov
    • data.usgs.gov
    Updated Feb 21, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Inventory of rock avalanches in the central Chugach Mountains, northern Prince William Sound, Alaska, 1984-2024 [Dataset]. https://catalog.data.gov/dataset/inventory-of-rock-avalanches-in-the-central-chugach-mountains-northern-prince-william-1984
    Dataset updated
    Feb 21, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Prince William Sound, Chugach Census Area, Alaska, Chugach Mountains
    Description

    In the Prince William Sound region of Alaska, recent glacier retreat started in the mid-1800s and began to accelerate in the mid-2000s in response to warming air temperatures (Maraldo and others, 2020). Prince William Sound is surrounded by the central Chugach Mountains and consists of numerous ocean-terminating glaciers, with rapid deglaciation increasingly exposing oversteepened bedrock walls of fiords. Deglaciation may accelerate the occurrence of rapidly moving rock avalanches (RAs), which have the potential to generate tsunamis and adversely impact maritime vessels, marine activities, and coastal infrastructure and populations in the Prince William Sound region. RAs have been documented in the Chugach Mountains in the past (Post, 1967; McSaveney, 1978; Uhlmann and others, 2013), but a time series of RAs in the Chugach Mountains is not currently available. A systematic inventory of RAs in the Chugach is needed as a baseline to evaluate any future changes in RA frequency, magnitude, and mobility.

    This data release presents a comprehensive historical inventory of RAs in a 4600 km2 area of the Prince William Sound. The inventory was generated from: (1) visual inspection of 30-m resolution Landsat satellite images collected between July 1984 and August 2024; and (2) the use of an automated image classification script (Google Earth Engine supRaglAciaL Debris INput dEtector (GERALDINE); Smith and others, 2020) designed to detect new rock-on-snow events from repeat Landsat images from the same time period. RAs were visually identified and mapped in a Geographic Information System (GIS) from the near-infrared (NIR) band of Landsat satellite images. This band provides significant contrast between rock and snow to detect newly deposited rock debris. A total of 252 Landsat images were visually examined, with more images available in recent years compared to earlier years (Figure 1: diagram showing the number of usable Landsat images per year). Calendar year 1984 was the first year when 30-m resolution Landsat data were available, and thus provided a historical starting point from which RAs could be detected with consistent certainty. By 2017, higher resolution (<5-m) daily Planet satellite images became consistently available and were used to better constrain RA timing and extent.

    This inventory reveals 118 RAs ranging in size from 0.1 km2 to 2.3 km2. All of these RAs occurred during the months of May through September (Figure 2: diagram showing seasonal timing of mapped rock avalanches). The data release includes three GIS feature classes (polygons, points, and polylines), each with its own attribute information. The polygon feature class contains the entire extent of individual RAs and does not differentiate the source and deposit areas. The point feature class contains headscarp and toe locations, and the polyline feature class contains curvilinear RA travel distance lines that connect the headscarp and toe points.

    Additional attribute information includes the following: location of headscarp and toe points; date of earliest identified occurrence; if and when the RA was sequestered into the glacier; presence and delineation confidence levels (see Table 1 for the definition of the A, B, and C confidence levels); identification method (visual inspection versus automated detection); image platform, satellite, and estimated cloud cover; whether the RA is lobate; image ID, image year, and image band; affected area in km2; length, height, length/height, and height/length; notes; minimum and maximum elevation; aspect at the headscarp point; slope at the headscarp point; and geology at the headscarp point.

    Topographic information was derived from 5-m interferometric synthetic aperture radar (IfSAR) Digital Elevation Models (DEMs) that were downloaded from the USGS National Elevation Dataset website (U.S. Geological Survey, 2015) and were mosaicked together in ArcGIS Pro. The aspect and slope layers were generated from the downloaded 5-m DEM with the "Aspect" and "Slope" tools in ArcGIS Pro (an equivalent open-source sketch follows the references below). Aspect and slope at the headscarp mid-point were then recorded in the attribute table. A shapefile of Alaska state geology was downloaded from Wilson and others (2015) and was used to determine the geology at the headscarp location.

    The 118 identified RAs have the following confidence level breakdown for presence: 66 are A-level, 51 are B-level, and 1 is C-level. For delineation: 39 are A-level and 79 are B-level. Please see the provided attribute table spreadsheet for more detailed information.

    Table 1. Rock avalanche presence and delineation confidence levels

    Presence:
    - A: Feature is clearly visible in one or more satellite images.
    - B: Feature is clearly visible in one or more satellite images but has low contrast with the surroundings and may be surficial debris from rock fall, rather than from a rock avalanche.
    - C: Feature presence is possible but uncertain due to poor quality of imagery (e.g., heavy cloud cover or shadows) or lack of multiple views.

    Delineation:
    - A: Exact outline of the feature from headscarp to toe is clear.
    - B: General shape of the feature is clear but the exact headscarp or toe location is unclear (e.g., due to clouds or shadows).

    Disclaimer: Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

    References:
    - Maraldo, D.R., 2020, Accelerated retreat of coastal glaciers in the Western Prince William Sound, Alaska: Arctic, Antarctic, and Alpine Research, v. 52, p. 617-634, https://doi.org/10.1080/15230430.2020.1837715
    - McSaveney, M.J., 1978, Sherman glacier rock avalanche, Alaska, U.S.A., in Voight, B., ed., Rockslides and Avalanches, Developments in Geotechnical Engineering, Amsterdam, Elsevier, v. 14, p. 197-258.
    - Post, A., 1967, Effects of the March 1964 Alaska earthquake on glaciers: U.S. Geological Survey Professional Paper 544-D, Reston, Virginia, p. 42, https://pubs.usgs.gov/pp/0544d/
    - Smith, W.D., Dunning, S.A., Brough, S., Ross, N., and Telling, J., 2020, GERALDINE (Google Earth Engine supRaglAciaL Debris INput dEtector): A new tool for identifying and monitoring supraglacial landslide inputs: Earth Surface Dynamics, v. 8, p. 1053-1065, https://doi.org/10.5194/esurf-8-1053-2020
    - Uhlmann, M., Korup, O., Huggel, C., Fischer, L., and Kargel, J.S., 2013, Supra-glacial deposition and flux of catastrophic rock-slope failure debris, south-central Alaska: Earth Surface Processes and Landforms, v. 38, p. 675-682, https://doi.org/10.1002/esp.3311
    - U.S. Geological Survey, 2015, USGS NED Digital Surface Model AK IFSAR-Cell37 2010 TIFF 2015: U.S. Geological Survey, https://elevation.alaska.gov/#60.67183:-147.68372:8
    - Wilson, F.H., Hults, C.P., Mull, C.G., and Karl, S.M., compilers, 2015, Geologic map of Alaska: U.S. Geological Survey Scientific Investigations Map 3340, pamphlet p. 196, 2 sheets, scale 1:1,584,000, https://pubs.usgs.gov/publication/sim3340
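    As a hedged illustration of the slope and aspect derivation described above (the authors used the ArcGIS Pro "Slope" and "Aspect" tools), an equivalent computation with the open-source GDAL Python bindings; the file names are illustrative:

    ```python
    # Derive slope and aspect rasters from the 5 m IfSAR DEM mosaic, mirroring
    # the ArcGIS Pro tools named in the description. File names are illustrative.
    from osgeo import gdal

    dem = gdal.Open('ifsar_5m_mosaic.tif')
    gdal.DEMProcessing('slope.tif', dem, 'slope')    # slope in degrees by default
    gdal.DEMProcessing('aspect.tif', dem, 'aspect')  # 0-360 degrees, clockwise from north
    ```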

  12. Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0...

    • data.gov.au
    • researchdata.edu.au
    • +1 more
    au, html, png
    Updated Jun 23, 2025
    Cite
    Australian Ocean Data Network (2025). Coral Sea Sentinel 2 Marine Satellite Composite Draft Imagery version 0 (AIMS) [Dataset]. https://data.gov.au/data/dataset/coral-sea-sentinel-2-marine-satellite-composite-draft-imagery-version-0-aims
    Available download formats: html, au, png
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Australian Ocean Data Network
    Description

    This dataset contains composite satellite images for the Coral Sea region based on 10 m resolution Sentinel 2 imagery from 2015 – 2021. This image collection is intended to allow mapping of the reef and island features of the Coral Sea. This is a draft version of the dataset prepared from approximately 60% of the available Sentinel 2 image. An improved version of this dataset was released https://doi.org/10.26274/NH77-ZW79. This collection contains composite imagery for 31 Sentinel 2 tiles in the Coral Sea. For each tile there are 5 different colour and contrast enhancement styles intended to highlight different features. These include: - DeepFalse - Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. This technique doesn't work where the water is not as clear as the ultraviolet get scattered easily. - DeepMarine - Bands: B2 (blue), B3 (green), B4 (red): This is a contrast enhanced version of the true colour imagery, focusing on being able to better see the deeper features. Shallow features are over exposed due to the increased contrast. - ReefTop - Bands: B3 (red): This imagery is contrast enhanced to create an mask (black and white) of reef tops, delineating areas that are shallower or deeper than approximately 4 - 5 m. This mask is intended to assist in the creating of a GIS layer equivalent to the 'GBR Dry Reefs' dataset. The depth mapping exploits the limited water penetration of the red channel. In clear water the red channel can only see features to approximately 6 m regardless of the substrate type. - Shallow - Bands: B5 (red edge), B8 (Near Infrared) , B11 (Short Wave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands progressively penetrate the water less. B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 < 0.1 m. Feature less than a couple of metres appear dark blue, dry areas are white. - TrueColour - Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful to interpreting what shallow features are and in mapping the vegetation on cays and identifying beach rock. For most Sentinel tiles there are two versions of the DeepFalse and DeepMarine imagery based on different collections (dates). The R1 imagery are composites made up from the best available imagery while the R2 imagery uses the next best set of imagery. This splitting of the imagery is to allow two composites to be created from the pool of available imagery so that mapped features could be checked against two images. Typically the R2 imagery will have more artefacts from clouds. The satellite imagery was processed in tiles (approximately 100 x 100 km) to keep each final image small enough to manage. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs. Methods: The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was: 1. For each Sentinel 2 tile, the set of Sentinel images from 2015 – 2021 were reviewed manually. In some tiles the cloud cover threshold was raised to gather more images, particularly if there were less than 20 images available. 
The Google Earth Engine image IDs of the best images were recorded. These were the images with the clearest water, lowest waves, lowest cloud, and lowest sun glint. 2. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later). 3. The contrast of the images was enhanced to create a series of products for different uses. The true colour image retained the full range of tones visible, so that bright sand cays still retained some detail. The marine enhanced version stretched the blue, green and red channels so that they focused on the deeper, darker marine features. This stretching was done to ensure that when converted to 8-bit colour imagery that all the dark detail in the deeper areas were visible. This contrast enhancement resulted in bright areas of the imagery clipping, leading to loss of detail in shallow reef areas and colours of land areas looking off. A reef top estimate was produced from the red channel (B4) where the contrast was stretched so that the imagery contains almost a binary mask. The threshold was chosen to approximate the 5 m depth contour for the clear waters of the Coral Sea. Lastly a false colour image was produced to allow mapping of shallow water features such as cays and islands. This image was produced from B5 (far red), B8 (nir), B11 (nir), where blue represents depths from approximately 0.5 – 5 m, green areas with 0 – 0.5 m depth, and brown and white corresponding to dry land. 4. The various contrast enhanced composite images were exported from Google Earth Engine (default of 32 bit GeoTiff) and reprocessed to smaller LZW compresed 8 bit GeoTiff images GDAL. Cloud Masking Prior to combining the best images each image was processed to mask out clouds and their shadows. The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts. A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These were detected using a 40% cloud probability threshold. These were projected over 400 m, followed by a 150 m buffer to expand the final mask. A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer. The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. As such there are probably significant potential improvements that could be made to this algorithm. Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations were adjusted so that they were performed with approximately a 4 pixel resolution during these operations. This made the cloud mask significantly more spatially coarse than the 10 m Sentinel imagery. 
Sun glint removal and atmospheric correction: Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m, and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the ultraviolet and visible channels (B1, B2, B3 and B4), and so the sun glint in these channels can be removed by subtracting B8 from them. This simple sun glint correction fails in very shallow and land areas. On land, B8 is very bright, and subtracting it from the other channels results in black land. In shallow areas (< 0.5 m) the B8 channel detects the substrate, resulting in too much sun glint correction. To resolve these issues, the sun glint correction transitions to B11 in shallow areas, as it penetrates the water even less than B8. B11 is not used everywhere because it has half the resolution of B8. Land areas need their tonal levels adjusted to match the water areas after sun glint correction. Ideally this would be achieved using an atmospheric correction that compensates for the contrast loss due to haze in the atmosphere. Complex models for atmospheric correction consider the elevation of the surface (higher areas have less atmosphere to pass through) and the weather conditions. Since this dataset is focused on coral reef areas, elevation compensation is unnecessary due to the very low and flat land features being imaged. Additionally, the focus of the dataset is on marine features, so only a basic atmospheric correction is needed. Land areas (as determined by very bright B8 areas) were assigned a fixed, smaller correction factor to approximate atmospheric correction. This fixed correction was determined iteratively so that land areas matched the tonal value of shallow and water areas.

Image selection: Available Sentinel 2 images with a cloud cover of less than 0.5% were manually reviewed using a Google Earth Engine app (01-select-sentinel2-images.js). Where there were few images available (fewer than 30), the cloud cover threshold was raised to increase the set of candidate images. Images were excluded from the composites primarily due to two factors: sun glint and fine scattered clouds. Images were excluded if there was any significant uncorrected sun glint, i.e. the brightness of the sun glint exceeded the sun glint correction. Fine scattered clouds over reef areas were also a strong factor in downgrading the quality rating of an image. As each satellite image was reviewed it was…
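The B8 subtraction at the heart of this sun glint correction is simple enough to sketch directly; the B11 transition for shallow water and the land correction described above are omitted here.

```python
# Minimal sketch of the B8-based sun glint subtraction: the NIR band
# only sees the water surface, so subtracting it removes most glint
# from the UV/visible channels. Shallow-water (B11) and land handling
# are omitted.
import ee

def remove_glint(img):
    img = ee.Image(img)
    b8 = img.select('B8')
    # Subtract the single B8 band from each of B1-B4.
    corrected = img.select(['B1', 'B2', 'B3', 'B4']).subtract(b8)
    return img.addBands(corrected, overwrite=True)
```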

  13. B

    Brazil Satellite Imagery Services Market Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Apr 27, 2025
    Cite
    Market Report Analytics (2025). Brazil Satellite Imagery Services Market Report [Dataset]. https://www.marketreportanalytics.com/reports/brazil-satellite-imagery-services-market-88900
    Explore at:
doc, pdf, ppt
Available download formats
    Dataset updated
    Apr 27, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Brazil
    Variables measured
    Market Size
    Description

The Brazil satellite imagery services market, valued at an estimated $X million in 2025, is projected to experience robust growth, driven by a Compound Annual Growth Rate (CAGR) of 9.62% from 2025 to 2033. This expansion is fueled by several key factors. The increasing adoption of satellite imagery across diverse sectors is a major contributor, including geospatial data acquisition and mapping for infrastructure development, natural resource management for precision agriculture and deforestation monitoring, and surveillance and security applications for enhancing public safety. Furthermore, the Brazilian government's ongoing investments in infrastructure projects and its commitment to sustainable development initiatives further bolster market demand. Specific applications, such as precision agriculture in the expansive agricultural sector and improved disaster management in a region prone to natural calamities, are driving significant growth. Competitive landscape analysis reveals key players such as ESRI, Airbus Group Inc and Maxar Technologies, among others, actively vying for market share. These companies are constantly innovating to provide higher resolution imagery, advanced analytical tools, and cloud-based solutions, ensuring a dynamic and evolving market with continuous technological advancements benefiting overall growth.

However, market growth may face some challenges. High initial investment costs for satellite technology and data processing infrastructure can act as a restraint, particularly for smaller companies. Moreover, data security concerns and regulations surrounding the use of satellite imagery could impact market expansion. Despite these potential restraints, the overall market outlook remains positive, driven by increasing government funding, expanding private sector investment, and a growing awareness of the value of satellite imagery across a wide spectrum of applications. The diverse end-user segments, including government, construction, transportation, and agriculture, provide a broad base for sustained market growth in the coming years. The market is segmented by application (geospatial data acquisition, natural resource management, etc.) and end-user (government, construction, etc.), providing a nuanced view of market dynamics and growth potential within each segment.

Recent developments include: July 2023: NASA, the American space agency, extended its satellite technology to Brazil in support of Amazon rainforest conservation efforts. NASA's contribution is geared towards the SERVIR Amazonia project, which aims to provide timely Earth science data to researchers and decision-makers within the Amazon region. This initiative facilitates the monitoring of environmental changes in almost real-time, aiding in the prediction of climate-related threats such as deforestation and food insecurity. Additionally, it equips emergency responders with crucial data during natural disasters. November 2022: In a collaborative effort between Google and the Geological Service of Brazil (SGB), unveiled in Florianópolis, Santa Catarina state, a system was developed to issue river flood alerts throughout the country. This system integrates river water level data, meteorological indicators, and satellite imagery to provide real-time information to residents in over 60 locations. In the upcoming months, the alerts and forecasts are expected to expand their coverage to more regions. Users can access river condition alerts and forecasts seamlessly while using Google Maps, conducting search queries, or utilizing the new platform, FloodHub.

Key drivers for this market are: Increasing Adoption of Location-based Services; Satellite data usage is increasing. Potential restraints include: Increasing Adoption of Location-based Services; Satellite data usage is increasing. Notable trends are: Natural Resource Management is Expected to Hold a Significant Share.

  14. r

    North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile...

    • researchdata.edu.au
    Updated Nov 30, 2021
    Cite
    Lawrey, Eric; Hammerton, Marc (2021). North Australia Sentinel 2 Satellite Composite Imagery - 15th percentile true colour (NESP MaC 3.17, AIMS) [Dataset]. http://doi.org/10.26274/HD2Z-KM55
    Explore at:
    Dataset updated
    Nov 30, 2021
    Dataset provided by
    Australian Ocean Data Network
    Authors
    Lawrey, Eric; Hammerton, Marc
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jun 27, 2015 - May 31, 2024
    Area covered
    Description

This dataset is true colour cloud-free composite satellite imagery optimised for mapping shallow marine habitats in northern Australia, based on 10 m resolution Sentinel 2 data collected from 2015 to 2024. It contains composite imagery for 333 Sentinel 2 tiles of northern Australia and the Great Barrier Reef. This dataset offers improved visual clarity of shallow water features compared to existing satellite imagery, allowing deeper marine features to be observed. These composites were specifically designed to address challenges such as sun glint, clouds and turbidity that typically hinder marine environment analyses. No tides were considered in the selection of the imagery, so this imagery corresponds to an 'All tide' image, approximating mean sea level.

This dataset is an updated version (Version 2), published in July 2024, which supersedes the initial draft version (Version 1, published in March 2024). The current version spans imagery from 2015–2024, an extension of the earlier 2018–2022 timeframe. The longer temporal range results in cleaner imagery with lower noise, allowing deeper marine features to be visible. The deprecated draft version was removed from online download to save on storage space and is now only available on request.

While the final imagery corresponds to true colour, based primarily on Sentinel 2 bands B2 (blue), B3 (green) and B4 (red), the near infrared band (B8) was used as part of the sun glint correction and the automated selection of low noise imagery.

Contrast enhancement was applied to the imagery to compress the original 12 bit per channel Sentinel 2 imagery into the final 8-bit per channel GeoTiffs. Black and white point correction was used to enhance the contrast as much as possible without excessive clipping of the darkest and lightest marine features. Gamma correction of 2 (red), 2 (green) and 2.3 (blue) was applied to allow a wider dynamic range to be represented in the 8-bit data, helping to ensure that little precision was lost in representing darker marine features. As a result, the image brightness is not linearly scaled. Further details of the corrections applied are available from https://github.com/eatlas/AU_NESP-MaC-3-17_AIMS_S2-comp/blob/main/src/processors/s2processor.py.
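As a rough illustration of this enhancement, the sketch below applies a black/white-point stretch and per-channel gamma to compress 12-bit data to 8 bits. The gamma values follow the text; the black and white points are illustrative assumptions, not the values used for this dataset (see the linked s2processor.py for those).

```python
# Minimal numpy sketch: black/white-point stretch plus gamma correction,
# mapping 12-bit channels (0-4095) to 8-bit. Black/white points are
# placeholder values.
import numpy as np

def to_8bit(channel, black, white, gamma):
    x = np.clip((channel.astype(np.float64) - black) / (white - black), 0, 1)
    return np.round(255 * x ** (1.0 / gamma)).astype(np.uint8)

# Per-channel gammas from the text: red 2.0, green 2.0, blue 2.3, e.g.
# red8 = to_8bit(red12, black=50, white=3000, gamma=2.0)
# blue8 = to_8bit(blue12, black=50, white=3000, gamma=2.3)
```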


    Methods:

    The satellite image composites were created by combining multiple Sentinel 2 images using the Google Earth Engine. The core algorithm was:
    1. For each Sentinel 2 tile filter the "COPERNICUS/S2_HARMONIZED" image collection by
    - tile ID
    - maximum cloud cover 20%
    - date between '2015-06-27' and '2024-05-31'
    - asset_size > 100000000 (remove small fragments of tiles)
    Note: A maximum cloud cover of 20% was used to improve the processing times. In most cases this filtering does not have an effect on the final composite as images with higher cloud coverage mostly result in higher noise levels and are not used in the final composite.
    2. Split images by "SENSING_ORBIT_NUMBER" (see "Using SENSING_ORBIT_NUMBER for a more balanced composite" for more information).
    3. For each SENSING_ORBIT_NUMBER collection filter out all noise-adding images:
3.1 Calculate the image noise level for each image in the collection (see "Image noise level calculation" for more information) and sort the collection by noise level.
    3.2 Remove all images with a very high noise index (>15).
3.3 Calculate a baseline noise level using a minimum number of images (min_images_in_collection=30). This minimum number of images is needed to ensure a smooth composite where cloud "holes" in one image are covered by other images.
    3.4 Iterate over remaining images (images not used in base noise level calculation) and check if adding image to the composite adds to or reduces the noise. If it reduces the noise add it to the composite. If it increases the noise stop iterating over images.
    4. Combine SENSING_ORBIT_NUMBER collections into one image collection.
    5. Remove sun-glint (true colour only) and apply atmospheric correction on each image (see "Sun-glint removal and atmospheric correction" for more information).
    6. Duplicate image collection to first create a composite image without cloud masking and using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
    7. Apply cloud masking to all images in the original image collection (see "Cloud Masking" for more information) and create a composite by using the 30th percentile of the images in the collection (i.e. for each pixel the 30th percentile value of all images is used).
    8. Combine the two composite images (no cloud mask composite and cloud mask composite). This solves the problem of some coral cays and islands being misinterpreted as clouds and therefore creating holes in the composite image. These holes are "plugged" with the underlying composite without cloud masking. (Lawrey et al. 2022)
9. The final composite was exported as a cloud-optimized 8-bit GeoTIFF. (A minimal sketch of the filtering and percentile steps is given after the note below.)

    Note: The following tiles were generated with no "maximum cloud cover" as they did not have enough images to create a composite with the standard settings: 46LGM, 46LGN, 46LHM, 50KKD, 50KPG, 53LMH, 53LMJ, 53LNH, 53LPH, 53LPJ, 54LVP, 57JVH, 59JKJ.
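The following is a minimal sketch, using the Earth Engine Python API, of the filtering in step 1 and the percentile composite in steps 6-7. The noise-based selection, orbit splitting, sun-glint correction and cloud masking steps are omitted; the tile ID is taken from the note above for illustration.

```python
# Minimal sketch: filter the harmonized Sentinel 2 collection as in
# step 1 and reduce to a per-pixel percentile composite (steps 6-7).
import ee

ee.Initialize()

col = (ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
       .filter(ee.Filter.eq('MGRS_TILE', '46LGM'))
       .filterDate('2015-06-27', '2024-05-31')
       .filter(ee.Filter.lte('CLOUDY_PIXEL_PERCENTAGE', 20))
       .filter(ee.Filter.gt('system:asset_size', 100000000)))

# The text cites both the 15th and 30th percentiles at different stages;
# 15 is used here for illustration.
composite = col.reduce(ee.Reducer.percentile([15]))
```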

    Compositing Process:

The dataset was created using a multi-step compositing process. A percentile-based image compositing technique was employed, with the 15th percentile chosen as the optimal value for most regions. This percentile was identified as the most effective in minimizing noise and enhancing key features such as coral reefs, islands, and other shallow water habitats. The 15th percentile was chosen as a trade-off between the desire to select darker pixels, which typically correspond to clearer water, and avoiding very dark values (often occurring at the 10th percentile) that correspond to cloud shadows.

The cloud masking predictor would often misinterpret very pale areas, such as cays and beaches, as clouds. To overcome this limitation a dual-image compositing method was used: a primary composite was generated with cloud masks applied, and a secondary composite without cloud masking was layered beneath to fill in potential gaps (or "holes") caused by cloud masking mistakes.
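A minimal sketch of this dual-composite "hole plugging", assuming two pre-built Earth Engine collections (one cloud-masked, one not), might look like the following; mosaic() draws later images on top where they have valid pixels, so holes in the masked composite fall through to the unmasked one.

```python
# Minimal sketch: layer the cloud-masked composite over the unmasked
# composite so mis-masked cays and beaches are filled from beneath.
# 'masked_col' and 'unmasked_col' are assumed, pre-filtered collections.
import ee

def plug_holes(masked_col, unmasked_col, pct=15):
    reducer = ee.Reducer.percentile([pct])
    with_mask = masked_col.reduce(reducer)        # may contain holes
    without_mask = unmasked_col.reduce(reducer)   # gap-free fallback
    return ee.ImageCollection([without_mask, with_mask]).mosaic()
```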

    Image noise level calculation:

The noise level for each image in this dataset is calculated to ensure high-quality composites by minimizing the inclusion of noisy images. This process begins by creating a water mask using the Normalized Difference Water Index (NDWI) derived from the NIR and green bands. High reflectance areas in the NIR and SWIR bands, indicative of sun glint, are identified and restricted to the water mask to focus on water areas affected by sun glint. The proportion of high sun-glint pixels within these water areas is calculated and amplified to compute a noise index. If no water pixels are detected, a high noise index value is assigned.
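The noise index could be sketched along the following lines; note that the reflectance threshold and the amplification factor here are guesses for illustration, not the values in the linked noise_predictor.py.

```python
# Minimal sketch: water mask from NDWI (green vs NIR), then the fraction
# of water pixels that are glint-bright in B8, scaled to a noise index.
# The 3000 DN threshold and x100 scaling are illustrative assumptions.
import ee

def noise_index(img):
    img = ee.Image(img)
    water = img.normalizedDifference(['B3', 'B8']).gt(0)  # NDWI > 0
    glint = img.select('B8').gt(3000).updateMask(water)
    frac = glint.reduceRegion(
        reducer=ee.Reducer.mean(),       # mean of 0/1 = glint fraction
        geometry=img.geometry(),
        scale=100,
        maxPixels=1e9).get('B8')
    return ee.Number(frac).multiply(100)
```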

    In any set of satellite images, some will be taken under favourable conditions (low wind, low sun-glint, and minimal cloud cover), while others will be affected by high sun-glint or cloud. Combining multiple images into a composite reduces noise by averaging out these fluctuations.

When all images have the same noise level, increasing the number of images in the composite reduces the overall noise. However, in practice, there is a mix of high and low noise images. The optimal composite is created by including as many low-noise images as possible while excluding high-noise ones. The challenge lies in determining the acceptable noise threshold for a given scene, as some areas are more affected by cloud and sun glint than others.

    To address this, we rank the available Sentinel 2 images for each scene by their noise index, from lowest to highest. The goal is to determine the ideal number of images (N) to include in the composite to minimize overall noise. For each N, we use the lowest noise images and estimate the final composite noise based on the noise index. This is repeated for all values of N up to a maximum of 200 images, and we select the N that results in the lowest noise.

This approach has some limitations. It estimates noise based on sun glint and residual clouds (after cloud masking) using NIR bands, without accounting for image turbidity. The final composite noise is not directly measured, as this would be computationally expensive; it is instead estimated by dividing the average noise of the selected images by the square root of the number of images. We found this method tends to underestimate the ideal image count, so we adjusted the noise estimates, scaling them by the inverse of their ranking, to favour larger sets of images. The algorithm is not fully optimized, and further refinement is needed to improve accuracy.

    Full details of the algorithm can be found in https://github.com/eatlas/AU_NESP-MaC-3-17_AIMS_S2-comp/blob/main/src/utilities/noise_predictor.py
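Stripped of the ranking-based rescaling, the core of this selection reduces to a few lines; the sketch below is an interpretation of the description above, not code from the linked script.

```python
# Minimal sketch: pick the image count N that minimises the estimated
# composite noise, mean(noise of best N) / sqrt(N).
import numpy as np

def best_image_count(noise_indices, max_n=200):
    noise = np.sort(np.asarray(noise_indices, dtype=float))
    n_max = min(max_n, noise.size)
    counts = np.arange(1, n_max + 1)
    estimates = np.cumsum(noise[:n_max]) / counts / np.sqrt(counts)
    return int(counts[np.argmin(estimates)])
```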

    Sun glint removal and atmospheric correction:

    Sun glint was removed from the images using the infrared B8 band to estimate the reflection off the water from the sun glint. B8 penetrates water less than 0.5 m and so in water areas it only detects reflections off the surface of the water. The sun glint detected by B8 correlates very highly with the sun glint experienced by the visible channels (B2, B3 and B4) and so the sun glint in these channels can be removed by subtracting B8 from these channels.

Eric Lawrey developed this algorithm by fine-tuning the scaling between the B8 channel and each individual visible channel (B2, B3 and B4) so that the maximum level of sun glint would be removed. This work was based on a representative set of images, aiming to determine a set of values that represent a good compromise across different water surface…
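In outline, this per-band scaling amounts to subtracting a tuned multiple of B8 from each visible band, as in the sketch below; the scale factors shown are placeholders, not the tuned values from this dataset.

```python
# Minimal sketch: per-band sun-glint scaling. Each visible band
# subtracts its own tuned multiple of the B8 (NIR) glint signal.
import ee

GLINT_SCALE = {'B2': 0.85, 'B3': 0.90, 'B4': 0.95}  # hypothetical factors

def deglint(img):
    img = ee.Image(img)
    b8 = img.select('B8')
    bands = [img.select(b).subtract(b8.multiply(s))
             for b, s in GLINT_SCALE.items()]
    return img.addBands(ee.Image.cat(bands), overwrite=True)
```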

  15. h

    Airplanes_Detection_Satellite_Images

    • huggingface.co
    Updated Aug 31, 2025
    Cite
    Marcos García (2025). Airplanes_Detection_Satellite_Images [Dataset]. https://huggingface.co/datasets/mrcsgh/Airplanes_Detection_Satellite_Images
    Explore at:
    Dataset updated
    Aug 31, 2025
    Authors
    Marcos García
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset Summary

More than 250 satellite images taken from Google Maps for airplane detection.

Uses

The data was collected for educational purposes. I learned how to label images with labelImg, and it can also be used to test pre-trained models like YOLO.

Dataset Structure

The dataset contains 272 labeled images divided into training and validation sets. It also includes the configuration file dataset.yaml.

Dataset Creation

1. Image Capture… See the full description on the dataset page: https://huggingface.co/datasets/mrcsgh/Airplanes_Detection_Satellite_Images.
  16. D

    NSW Landuse 2017 v1.5

    • data.nsw.gov.au
    arcgis rest service +4
    Updated May 9, 2025
    + more versions
    Cite
    NSW Department of Climate Change, Energy, the Environment and Water (2025). NSW Landuse 2017 v1.5 [Dataset]. https://data.nsw.gov.au/data/dataset/nsw-landuse-2017-v1p5-f0ed-clone-a95d
    Explore at:
arcgis rest service, pdf, wms, zip, wmts
Available download formats
    Dataset updated
    May 9, 2025
    Dataset provided by
    NSW Department of Climate Change, Energy, the Environment and Water
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New South Wales
    Description

    The 2017 Landuse captures how the landscape in NSW is being used for food production, forestry, nature conservation, infrastructure and urban development. It can be used to monitor changes in the landscape and identify impacts on biodiversity values and individual ecosystems.

    The NSW 2017 Landuse mapping is dated September 2017.

    This is version 1.5 of the dataset, published December 2023.

    Version 1.5 of the 2017 Landuse incorporates the following updates:

Previous Versions:
* Version 1.4 internal update (not published)
* Version 1.3 internal update (not published)
* Version 1.2 published 24 June 2020 - Fine scale update to Greater Sydney Metropolitan Area
* Version 1 published August 2019

The 2017 Landuse is based on aerial and satellite imagery available for NSW. These include, but are not limited to: digital aerial imagery (ADS) captured by NSW Department of Customer Service (DCS); high resolution urban (Conurbation) digital aerial imagery captured on behalf of DCS; and SPOT 5, 6 & 7 (Airbus), Planet™, Sentinel 2 (European Space Agency) and LANDSAT (NASA) satellite imagery. Mapping also includes commercially available imagery from Nearmap™ and Google Earth™, along with Google Street View™.

Mapping takes into consideration ancillary datasets such as tenure (for example National Parks and State forests), cadastre, road parcels, land zoning, topographic information and Google Maps, in conjunction with visual interpretation and field validation of patterns and features on the ground.

The 2017 Landuse was captured on screen using ArcGIS (geographic information system software) at a scale of 1:8,000 (or better), with features mapped down to 2 hectares in size. Exceptions were made for targeted Landuse classes such as horticulture, intensive animal husbandry and urban environments (including the Greater Sydney Metropolitan region), which were mapped at a finer scale.

The 2017 Landuse has complete coverage of NSW. It also includes updates to the fine scale horticulture mapping for the east coast of NSW (Newcastle to the Queensland border) and the Murray-Riverina Region. This horticultural mapping includes operations to the commodity level, based on field work and high-resolution imagery interpretation.

    Landuse classes assigned are based on activities that have occurred in the last 5-10 years that may be part of a rotational practice. Time-series LANDSAT information has been used in conjunction with more recent Satellite Imagery to determine whether grasslands have been disturbed or subject to ongoing land management activities over the past 30 years.


    The reliability scale of the dataset is 1:10,000.

    Mapping has been subject to a peer review and quality assurance process.

    Land use information has been captured in accordance with standards set by the Australian Collaborative Land Use Mapping Program (ACLUMP) and using the Australian Land Use and Management ALUM Classification Version 8. The ALUM classification is based upon the modified Baxter & Russell classification and presented according to the specifications contained in http://www.agriculture.gov.au/abares/aclump/land-use/alum-classification.

    This product will be incorporated in the National Catchment scale land use product 2020 that will be available as a 50m raster - Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES) http://www.agriculture.gov.au/abares/aclump/land-use/data-download

    The Department of Planning, Industry and Environment (DPIE) will continue to complete land use mapping at approximately 5-year intervals.

    The 2017 Landuse product is considered as a benchmark product that can be used for Landuse change reporting. Ongoing improvements to the 2017 Landuse product will be undertaken to correct errors or additional improvements to the mapping.

  17. HRPlanesv2 - High Resolution Satellite Imagery for Aircraft Detection

    • zenodo.org
    zip
    Updated Apr 24, 2025
    Cite
    Dilsad Unsal; Dilsad Unsal (2025). HRPlanesv2 - High Resolution Satellite Imagery for Aircraft Detection [Dataset]. http://doi.org/10.5281/zenodo.7331974
    Explore at:
zip
Available download formats
    Dataset updated
    Apr 24, 2025
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Dilsad Unsal; Dilsad Unsal
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The HRPlanesv2 dataset contains 2120 VHR Google Earth images. To further improve experiment results, images of airports from many different regions with various uses (civil/military/joint) were selected and labelled. A total of 14,335 aircraft have been labelled. Each image is stored as a ".jpg" file of size 4800 x 2703 pixels and each label is stored in YOLO ".txt" format. The dataset has been split into three parts: 70% train, 20% validation and 10% test. In the train and validation sets, each labelled aircraft is at least 80% contained within the image.
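Since the labels are normalized, converting a YOLO line to pixel coordinates for these 4800 x 2703 images is straightforward; a minimal sketch:

```python
# Minimal sketch: convert one normalized YOLO label line to a pixel
# bounding box, assuming the stated 4800 x 2703 image size.
def yolo_to_pixels(line, width=4800, height=2703):
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    left = (xc - w / 2) * width
    top = (yc - h / 2) * height
    return int(cls), left, top, w * width, h * height

# e.g. yolo_to_pixels("0 0.5 0.5 0.1 0.2") -> (0, 2160.0, 1081.2, 480.0, 540.6)
```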

  18. Real and Synthetic Overhead Images of Wind Turbines in the US

    • figshare.com
    zip
    Updated May 16, 2021
    Cite
    Duke Bass Connections Deep Learning for Rare Energy Infrastructure 2020-2021 (2021). Real and Synthetic Overhead Images of Wind Turbines in the US [Dataset]. http://doi.org/10.6084/m9.figshare.14551464.v1
    Explore at:
zip
Available download formats
    Dataset updated
    May 16, 2021
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Duke Bass Connections Deep Learning for Rare Energy Infrastructure 2020-2021
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

Overview

This dataset contains real overhead images of wind turbines in the US collected through the National Agriculture Imagery Program (NAIP), as well as synthetic overhead images of wind turbines created to be similar to the real images. All of these images are 608x608 pixels. For more details on the methodology and data, please read the sections below, or look at our website: Locating Energy Infrastructure with Deep Learning (duke-bc-dl-for-energy-infrastructure.github.io).

Real Data

The real data consists of images.zip and labels.zip. There are 1,742 images in images.zip, and for each image in this folder there is a corresponding label with the same name but a different extension. Some images do not have labels, meaning there are no wind turbines in those images. Many of these overhead images of wind turbines were collected from the Power Plant Satellite Imagery Dataset (figshare.com) and then hand labeled. Others were collected using Google Earth Engine or EarthOnDemand and then labeled. All of the images are from NAIP, and all are 608x608 pixels. The labels are in YOLOv3 format, meaning each line in the text file corresponds to one wind turbine. Each line is formatted as: class x_center y_center width height. Since there is only one class, class is always zero, and the x, y, width, and height are relative to the size of the image and are between 0-1.

The image_locations.csv file contains the latitude and longitude for each image, along with the image's geographic domain that we defined. Our data comes from four regions that we defined - Northeast (NE), Eastern Midwest (EM), Northwest (NW), and Southwest (SW) - and these are included in the csv file for each image. These regions are defined by groups of states, so any data in WA, ID, or MT would be in the Northwest region.

Synthetic Data

The synthetic data consists of synthetic_images.zip and synthetic_labels.zip. These images and labels were automatically generated using CityEngine. Again, all images are 608x608, and the format of the labels is the same. There are 943 images total, with at least 200 images for each of the four geographic domains we defined in the US (Northwest, Southwest, Eastern Midwest, Northeast). The generation of these images consisted of the software selecting a background image, generating 3D models of turbines on top of that background image, and then positioning a simulated camera overhead to capture an image. The background images were collected near the locations of the testing images.

Experimentation

Our Duke Bass Connections 2020-2021 team performed many experiments using this data to test whether the synthetic imagery could improve the performance of our object detection model. We designed experiments where we would take a baseline dataset of just real imagery, train and test an object detection model on it, then add synthetic imagery to the dataset, train the object detection model on the new dataset, and compare its performance with the baseline. For more information on the experiments and methodology, please visit our website: Locating Energy Infrastructure with Deep Learning (duke-bc-dl-for-energy-infrastructure.github.io).
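Given the convention that a missing (or empty) label file means an image contains no turbines, reading the labels might look like the following minimal sketch; the directory names are assumptions based on the zip names above.

```python
# Minimal sketch: pair an image with its optional YOLOv3 label file.
# Each label line is "class x_center y_center width height", with
# coordinates normalized to 0-1. No label file = no turbines.
from pathlib import Path

def load_labels(image_path, labels_dir=Path('labels')):
    label_path = labels_dir / (Path(image_path).stem + '.txt')
    if not label_path.exists():
        return []  # image contains no wind turbines
    boxes = []
    for line in label_path.read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes
```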

  19. G

    High Resolution Digital Elevation Model (HRDEM) - CanElevation Series

    • open.canada.ca
    • catalogue.arctic-sdi.org
    esri rest, geotif +5
    Updated Jun 17, 2025
    + more versions
    Cite
    Natural Resources Canada (2025). High Resolution Digital Elevation Model (HRDEM) - CanElevation Series [Dataset]. https://open.canada.ca/data/en/dataset/957782bf-847c-4644-a757-e383c0057995
    Explore at:
shp, geotif, html, pdf, esri rest, json, kmz
Available download formats
    Dataset updated
    Jun 17, 2025
    Dataset provided by
    Natural Resources Canada
    License

Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Description

The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The complete coverage of the Canadian territory is gradually being established. It includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, the derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps; for DSM datasets, the derived data available are shaded relief, color relief and color shaded relief maps.

The productive forest line is used to separate the northern and the southern parts of the country. This line is approximate and may change based on requirements. In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. The datasets at a 1 m resolution cover an area of 10 km x 10 km, while datasets at a 2 m resolution cover an area of 20 km by 20 km.

In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum or the UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km by 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data; in this case, these products are generated with the same specifications as those generated from airborne LiDAR in the southern part of the country.

The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada. Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is acquired by project, there is no integration or edgematching done between projects; the tiles are aligned within each project.

The HRDEM product is part of the CanElevation Series created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of the National Elevation Data Strategy. Refer to the "Supporting Document" section to access the list of the different partners, including links to their respective data.

  20. g

    Vietnam Special Economic Zone Satellite Imagery Dataset

    • search.gesis.org
    Updated Aug 15, 2025
    Cite
    Tafese, Tevin; Lay, Jann; Tran, Van (2025). Vietnam Special Economic Zone Satellite Imagery Dataset [Dataset]. https://search.gesis.org/research_data/SDN-10.7802-2762
    Explore at:
    Dataset updated
    Aug 15, 2025
    Dataset provided by
    GESIS search
    German Institute for Global and Area Studies (GIGA)
    Authors
    Tafese, Tevin; Lay, Jann; Tran, Van
    License

https://www.gesis.org/en/institute/data-usage-terms

    Area covered
    Vietnam
    Description

The dataset contains detailed geographic information on more than 600 Special Economic Zones (SEZs) in Vietnam. Specifically, it captures the exact location (latitude and longitude) as well as changes in the built-up area of each SEZ over time. The change in built-up area over time is calculated from manually drawn polygons around the built-up area of each SEZ on historical satellite imagery from Google Earth Pro. We provide access to these polygons for each SEZ in the KML file ‘Vietnam SEZ Polygons.klm’ and the built-up area over time for each SEZ in the STATA file ‘Vietnam SEZ Built-up Areas.dta’. We also provide additional metadata on each SEZ, including zone type, focus sector, and year of approval/establishment, in the Excel file ‘Vietnam SEZ Metadata.xlsx’.
