Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
This image service contains high resolution satellite imagery for selected regions throughout the Yukon. Imagery is 1 m pixel resolution or better. Imagery was supplied by the Government of Yukon and the Canadian Department of National Defence. All the imagery in this service is licensed. If you have any questions about Yukon government satellite imagery, please contact Geomatics.Help@gov.yk.ca. This service is managed by Geomatics Yukon.
Satellite images are, in effect, eyes in the sky. Some recent satellites, such as WorldView-3, provide images with a spatial resolution of *** meters. With a revisit time of under ** hours, such a satellite can capture a new image of the same location on every pass.
Spatial resolution explained
Spatial resolution is the size of the physical area represented by a single pixel of the image. In other words, it is a measure of the smallest object the sensor can resolve, expressed in meters. Generally, spatial resolution can be divided into three categories:
– Low resolution: over 60m/pixel. (useful for regional perspectives such as monitoring larger forest areas)
– Medium resolution: 10‒30m/pixel. (Useful for monitoring crop fields or smaller forest patches)
– High to very high resolution: ****‒5m/pixel. (Useful for monitoring smaller objects like buildings, narrow streets, or vehicles)
The resolution can be chosen based on the intended application of the final product, since the labor required, from person-hours to computing power, increases with the resolution of the imagery.
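The three categories above can be expressed as a simple lookup. This is a minimal sketch using only the thresholds stated in the text; note the source elides the exact lower bound of the high-resolution class, and the 5‒10 m and 30‒60 m bands are not named, so they fall through to an "unclassified" bucket here:

```python
def resolution_category(gsd_m: float) -> str:
    """Classify imagery by ground sample distance (metres per pixel),
    using only the thresholds given in the text above."""
    if gsd_m > 60:
        return "low"        # regional perspectives, e.g. large forest areas
    if 10 <= gsd_m <= 30:
        return "medium"     # crop fields, smaller forest patches
    if gsd_m <= 5:
        return "high"       # buildings, narrow streets, vehicles
    return "unclassified"   # gaps between the stated bands

# e.g. Sentinel-2 at 10 m/pixel falls in the medium category
```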
What is this dataset?
Nearly 10,000 km² of free high-resolution and matched low-resolution satellite imagery of unique locations, ensuring stratified representation of all types of land use across the world: from agriculture to ice caps, from forests to multiple urbanization densities.
Those locations are also enriched with typically under-represented locations in ML datasets: sites of humanitarian interest, illegal mining sites, and settlements of persons at risk.
Each high-resolution image (1.5 m/pixel) comes with multiple temporally-matched low-resolution images from the freely accessible lower-resolution Sentinel-2 satellites (10 m/pixel).
We accompany this dataset with a paper, datasheet for datasets and an open-source Python package to: rebuild or extend the WorldStrat dataset, train and infer baseline algorithms, and learn with abundant tutorials, all compatible with the popular EO-learn toolbox.
Why make this?
We hope to foster broad-spectrum applications of ML to satellite imagery, and possibly to achieve, from free public low-resolution Sentinel-2 imagery, the same analytical power currently afforded only by costly private high-resolution imagery. We illustrate this specific point by training and releasing several highly compute-efficient baselines on the task of Multi-Frame Super-Resolution.
Licences
This data set contains high-resolution QuickBird imagery and geospatial data for the entire Barrow QuickBird image area (156.15° W - 157.07° W, 71.15° N - 71.41° N) and Barrow B4 Quadrangle (156.29° W - 156.89° W, 71.25° N - 71.40° N), for use in Geographic Information Systems (GIS) and remote sensing software. The original QuickBird data sets were acquired by DigitalGlobe from 1 to 2 August 2002, and consist of orthorectified satellite imagery. Federal Geographic Data Committee (FGDC)-compliant metadata for all value-added data sets are provided in text, HTML, and XML formats. Accessory layers include: 1:250,000- and 1:63,360-scale USGS Digital Raster Graphic (DRG) mosaic images (GeoTIFF format); 1:250,000- and 1:63,360-scale USGS quadrangle index maps (ESRI Shapefile format); an index map for the 62 QuickBird tiles (ESRI Shapefile format); and a simple polygon layer of the extent of the Barrow QuickBird image area and the Barrow B4 quadrangle area (ESRI Shapefile format). Unmodified QuickBird data comprise 62 data tiles in Universal Transverse Mercator (UTM) Zone 4 in GeoTIFF format. Standard release files describing the QuickBird data are included, along with the DigitalGlobe license agreement and product handbooks. The baseline geospatial data support education, outreach, and multi-disciplinary research of environmental change in Barrow, which is an area of focused scientific interest. Data are provided on four DVDs. This product is available only to investigators funded specifically from the National Science Foundation (NSF), Office of Polar Programs (OPP), Arctic Sciences Section. An NSF OPP award number must be provided when ordering this data.
Open Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The satellite image of Canada is a composite of several individual satellite images from the Advanced Very High Resolution Radiometer (AVHRR) sensor on board various NOAA satellites. The colours reflect differences in the density of vegetation cover: bright green for dense vegetation in humid southern regions; yellow for semi-arid and mountainous regions; brown for the north, where vegetation cover is very sparse; and white for snow and ice. An inset map shows a satellite image mosaic of North America with 35 land cover classes, based on data from the SPOT satellite VGT (vegetation) sensor.
Satellite sensor artifacts can negatively impact the interpretation of satellite data. One such artifact is linear features in imagery, which can be caused by a variety of sensor issues and can present either as wide, consistent features called banding, or as narrow, inconsistent features called striping. This study used high-resolution data from DigitalGlobe's WorldView-3 satellite collected at Lake Okeechobee, Florida, on 30 August 2017. Although WorldView-3 was primarily designed as a land sensor, this study investigated the impact of vertical artifacts on both at-sensor radiance and a spectral index for an aquatic target. This dataset is not publicly accessible because NGA NextView license agreements prohibit the distribution of original data files from WorldView due to copyright, and National Geospatial-Intelligence Agency contract details prevent distribution of Maxar data. Questions regarding NextView can be sent to NGANextView_License@nga.mil. Questions regarding the NASA Commercial Data Buy can be sent to yvonne.ivey@nasa.gov. Format: high-resolution data from DigitalGlobe's WorldView-3 satellite. This dataset is associated with the following publication: Coffer, M., P. Whitman, B. Schaeffer, V. Hill, R. Zimmerman, W. Salls, M. Lebrasse, and D. Graybill. Vertical artifacts in high-resolution WorldView-2 and WorldView-3 satellite imagery of aquatic systems. International Journal of Remote Sensing. Taylor & Francis, Philadelphia, PA, USA, 43(4): 1199-1225, (2022).
Declassified satellite images provide an important worldwide record of land-surface change. With the success of the first release of classified satellite photography in 1995, images from U.S. military intelligence satellites KH-7 and KH-9 were declassified in accordance with Executive Order 12951 in 2002. The data were originally used for cartographic information and reconnaissance for U.S. intelligence agencies. Since the images could be of historical value for global change research and were no longer critical to national security, the collection was made available to the public. Keyhole (KH) satellite systems KH-7 and KH-9 acquired photographs of the Earth’s surface with a telescopic camera system and returned the exposed film in recovery capsules. The capsules, or buckets, were de-orbited and retrieved in mid-air by aircraft as they parachuted toward Earth. The exposed film was developed and the images were analyzed for a range of military applications. The KH-7 surveillance system was a high-resolution imaging system that was operational from July 1963 to June 1967. Approximately 18,000 black-and-white images and 230 color images are available from the 38 missions flown during this program. Key features of this program were a larger area of coverage and improved ground resolution. The cameras acquired imagery in continuous lengthwise sweeps of the terrain. KH-7 images are 9 inches wide, vary in length from 4 inches to 500 feet, and have a resolution of 2 to 4 feet. The KH-9 mapping program was operational from March 1973 to October 1980 and was designed to support mapping requirements and exact positioning of geographical points for the military. This was accomplished by using image overlap for stereo coverage and by using a camera system with a reseau grid to correct image distortion. The KH-9 framing cameras produced 9 x 18 inch imagery at a resolution of 20-30 feet. Approximately 29,000 mapping images were acquired from 12 missions.
The original film sources are maintained by the National Archives and Records Administration (NARA). Duplicate film sources held in the USGS EROS Center archive are used to produce digital copies of the imagery.
Cotton root rot is a century-old cotton disease that now can be effectively controlled with Topguard Terra fungicide. Because this disease tends to occur in the same general areas within fields in recurring years, site-specific application of the fungicide only to infested areas can be as effective as and considerably more economical than uniform application. The overall objective of this research was to demonstrate how site-specific fungicide application could be implemented based on historical remote sensing imagery and using variable-rate technology. Procedures were developed for creating binary prescription maps from historical airborne and high-resolution satellite imagery. Two different variable-rate liquid control systems were adapted to two existing cotton planters, respectively, for site-specific fungicide application at planting. One system was used for site-specific application on multiple fields in 2015 and 2016 near Edroy, Texas, and the other system was used on multiple fields in both years near San Angelo, Texas. Airborne multispectral imagery taken during the two growing seasons was used to monitor the performance of the site-specific treatments. Results based on prescription maps derived from historical airborne and satellite imagery of two fields in 2015 and one field in 2016 are reported in this article. Two years of field experiments showed that the prescription maps and the variable-rate systems performed well and that site-specific fungicide treatments effectively controlled cotton root rot. Reduction in fungicide use was 41%, 43%, and 63% for the three fields, respectively. The methodologies and results of this research will provide cotton growers, crop consultants, and agricultural dealers with practical guidelines for implementing site-specific fungicide application using historical imagery and variable-rate technology for effective management of cotton root rot. 
Resources in this dataset:
– Resource Title: A ground picture of cotton root rot. File Name: IMG_0124.JPG. Resource Description: A cotton root rot-infested area in a cotton field near Edroy, TX.
– Resource Title: An aerial image of a cotton field. File Name: Color-infrared image of a field.jpg. Resource Description: Aerial color-infrared (CIR) image of a cotton field infested with cotton root rot.
– Resource Title: As-applied fungicide application data. File Name: Jim Ermis-Farm 1-Field 11 Fungicide Application.csv. Resource Description: As-applied fungicide application rates for variable-rate application of Topguard to a cotton field infested with cotton root rot.
https://artefacts.ceda.ac.uk/licences/specific_licences/msg.pdf
The Meteosat Second Generation (MSG) satellites, operated by EUMETSAT (The European Organisation for the Exploitation of Meteorological Satellites), provide almost continuous imagery to meteorologists and researchers in Europe and around the world. These include visible, infra-red, water vapour, High Resolution Visible (HRV) images and derived cloud top height, cloud top temperature, fog, snow detection and volcanic ash products. These images are available for a range of geographical areas.
This dataset contains high resolution visible images from MSG satellites over the UK area. Imagery is available from March 2005 onwards at a frequency of 15 minutes (some are hourly) and is at least 24 hours old.
The geographic extent for images within this dataset is available via the linked documentation 'MSG satellite imagery product geographic area details'. Each MSG imagery product area can be referenced from the third and fourth characters of the image product name given in the filename. E.g., for EEAO11 the corresponding geographic details can be found under the entry for area code 'AO' (i.e., West Africa).
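Extracting the area code from a product name can be sketched as below. The 'EEAO11' example and the 'AO' = West Africa mapping come from the text; any other code-to-area mapping would need the linked geographic area documentation:

```python
def msg_area_code(product_name: str) -> str:
    """Return the two-character geographic area code of an MSG image
    product name: its third and fourth characters."""
    return product_name[2:4]

# e.g. msg_area_code("EEAO11") gives "AO", the code for West Africa
```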
Suggested use: use the tiled map service for large-scale mapping when high resolution color imagery is needed. A web app to view tile and block metadata, such as year, sensor, and cloud cover, is also available.
– Coverage: State of Alaska
– Product Type: Tile Cache
– Image Bands: RGB
– Spatial Resolution: 50 cm
– Accuracy: 5 m CE90 or better
– Cloud Cover: <10% overall
– Off Nadir Angle: <30 degrees
– Sun Elevation: >30 degrees
WMS version of this data: https://geoportal.alaska.gov/arcgis/services/ahri_2020_rgb_cache/MapServer/WMSServer?request=GetCapabilities&service=WMS
WMTS version of this data: https://geoportal.alaska.gov/arcgis/rest/services/ahri_2020_rgb_cache/MapServer/WMTS/1.0.0/WMTSCapabilities.xml
The data are 475 thematic land cover rasters at 2 m resolution. Land cover was classified into the classes Tree (1), Water (2), Barren (3), Other Vegetation (4), and Ice & Snow (8). Cloud cover and shadow were sometimes coded as Cloud (5) and Shadow (6); however, for any land cover application these would be considered NoData. Some rasters may have Cloud and Shadow pixels coded or recoded to NoData already. Commercial high-resolution satellite data was used to create the classifications. Usable image data for the target year (2010) was acquired for 475 of the 500 primary sample locations, with 90% of images acquired within ±2 years of the 2010 target. The remaining 25 of the 500 sample blocks had no usable data and so could not be mapped. Tabular data is included with the raster classifications indicating the specific high-resolution sensor and date of acquisition for the source imagery, as well as the stratum to which each sample block belonged. Methods for this classification are described in Pengra et al. (2015). A 1-stage cluster sampling design was used in which 500 (475 usable) 5 km x 5 km sample blocks were the primary sampling units (note: the nominal block size was 5 km x 5 km, but some blocks deviate in dimensions due to only partial coverage with usable imagery). Sample blocks were selected using stratified random sampling within a sample frame stratified by a modification of the Köppen climate/vegetation classification and population density (Olofsson et al., 2012). Secondary sampling units are each of the classified 2 m pixels of the raster. This design satisfies the criteria that define a probability sampling design and thus serves as the basis to support rigorous design-based statistical inference (Stehman, 2000).
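Recoding Cloud (5) and Shadow (6) to NoData, as the description recommends for land cover applications, can be sketched as follows. The class codes are from the text; the NoData value of 255 is an assumption for illustration, and the actual rasters may use a different sentinel:

```python
# Class codes from the dataset documentation.
TREE, WATER, BARREN, OTHER_VEG, CLOUD, SHADOW, ICE_SNOW = 1, 2, 3, 4, 5, 6, 8
NODATA = 255  # assumed NoData value; check the raster's own NoData setting

def recode_cloud_shadow(raster):
    """Recode Cloud (5) and Shadow (6) pixels to NoData for land cover use.
    `raster` is a 2-D list of integer class codes."""
    return [[NODATA if v in (CLOUD, SHADOW) else v for v in row]
            for row in raster]
```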
https://data.linz.govt.nz/license/attribution-4-0-international/https://data.linz.govt.nz/license/attribution-4-0-international/
This dataset provides a seamless cloud-free 10m resolution satellite imagery layer of the New Zealand mainland and offshore islands.
The imagery was captured by the European Space Agency Sentinel-2 satellites between September 2020 and April 2021.
Technical specifications:
This is a visual product only. The data has been downsampled from 12-bits to 8-bits, and the original values of the images have been modified for visualisation purposes.
This data set contains a time series of snow depth maps and related intermediary snow-on and snow-off DEMs for Grand Mesa and the Banded Peak Ranch areas of Colorado derived from very-high-resolution (VHR) satellite stereo images and lidar point cloud data. Two of the snow depth maps coincide temporally with the 2017 NASA SnowEx Grand Mesa field campaign, providing a comparison between the satellite derived snow depth and in-situ snow depth measurements. The VHR stereo images were acquired each year between 2016 and 2022 during the approximate timing of peak snow depth by the Maxar WorldView-2, WorldView-3, and CNES/Airbus Pléiades-HR 1A and 1B satellites, while lidar data was sourced from the USGS 3D Elevation Program.
World Imagery provides one meter or better satellite and aerial imagery in many parts of the world and lower resolution satellite imagery worldwide. The map includes 15m TerraColor imagery at small and mid-scales (~1:591M down to ~1:72k) and 2.5m SPOT Imagery (~1:288k to ~1:72k) for the world. The map features 0.5m resolution imagery in the continental United States and parts of Western Europe from DigitalGlobe. Additional DigitalGlobe sub-meter imagery is featured in many parts of the world. In the United States, 1 meter or better resolution NAIP imagery is available in some areas. In other parts of the world, imagery at different resolutions has been contributed by the GIS User Community. In select communities, very high resolution imagery (down to 0.03m) is available down to ~1:280 scale. You can contribute your imagery to this map and have it served by Esri via the Community Maps Program. View the list of Contributors for the World Imagery Map.
Coverage: View the links below to learn more about recent updates and map coverage: What's new in World Imagery; World coverage map.
Citations: This layer includes imagery provider, collection date, resolution, accuracy, and source of the imagery. With the Identify tool in ArcGIS Desktop or the ArcGIS Online Map Viewer you can see imagery citations. Citations returned apply only to the available imagery at that location and scale. You may need to zoom in to view the best available imagery. Citations can also be accessed in the World Imagery with Metadata web map.
Use: You can add this layer to the ArcGIS Online Map Viewer, ArcGIS Desktop, or ArcGIS Pro. To view this layer with a useful reference overlay, open the Imagery Hybrid web map. A similar raster web map, Imagery with Labels, is also available.
Feedback: Have you ever seen a problem in the Esri World Imagery Map that you wanted to report? You can use the Imagery Map Feedback web map to provide comments on issues.
The feedback will be reviewed by the ArcGIS Online team and considered for one of our updates.
QuickBird high resolution optical products are available as part of the Maxar Standard Satellite Imagery products from the QuickBird, WorldView-1/-2/-3/-4, and GeoEye-1 satellites. All details about the data provision, data access conditions, and quota assignment procedure are described in the Terms of Applicability available in the Resources section.
In particular, QuickBird offers archive panchromatic products up to 0.60 m GSD resolution and 4-Bands Multispectral products up to 2.4 m GSD resolution.
Data processing levels and resolutions (band combination: panchromatic and 4-bands):
– Standard (2A) / View-Ready Standard (OR2A): 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm
– View-Ready Stereo: 30 cm, 40 cm, 50/60 cm
– Map-Ready (Ortho) 1:12,000 Orthorectified: 15 cm HD, 30 cm HD, 30 cm, 40 cm, 50/60 cm
4-Bands being an option from:
– 4-Band Multispectral (BLUE, GREEN, RED, NIR1)
– 4-Band Pan-sharpened (BLUE, GREEN, RED, NIR1)
– 4-Band Bundle (PAN, BLUE, GREEN, RED, NIR1)
– 3-Band Natural Colour (pan-sharpened BLUE, GREEN, RED)
– 3-Band Coloured Infrared (pan-sharpened GREEN, RED, NIR1)
– Natural Colour / Coloured Infrared (3-Band pan-sharpened)
Native 30 cm and 50/60 cm resolution products are processed with Maxar HD Technology to generate the 15 cm HD and 30 cm HD products, respectively: the initial spatial resolution (GSD) is unchanged, but the HD technique intelligently increases the number of pixels and improves the visual clarity, achieving aesthetically refined imagery with precise edges and well-reconstructed details.
High resolution orthorectified images combine the image characteristics of an aerial photograph with the geometric qualities of a map. An orthoimage is a uniform-scale image where corrections have been made for feature displacement such as building tilt and for scale variations caused by terrain relief, sensor geometry, and camera tilt. A mathematical equation based on ground control points, sensor calibration information, and a digital elevation model is applied to each pixel to rectify the image to obtain the geometric qualities of a map.
A digital orthoimage may be created from several photographs mosaicked to form the final image. The source imagery may be black-and-white, natural color, or color infrared with a pixel resolution of 1-meter or finer. With orthoimagery, the resolution refers to the distance on the ground represented by each pixel.
https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image ‘tiles’ and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).
Methods
The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively ‘screenshots’ (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limited human intervention. Sloan et al. (2023) present a map indicating the various areas of Equatorial Asia from which these images were sourced.
IMAGE NAMING CONVENTION
A common naming convention applies to satellite images’ file names:
XX##.png
where:
XX – denotes the geographical region / major island of Equatorial Asia of the image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])
INTERPRETING ROAD FEATURES IN THE IMAGES For each of the 200 input satellite images, its road was visually interpreted and manually digitized to create a reference image dataset by which to train, validate, and test AI road-mapping models, as detailed in Sloan et al. (2023). The reference dataset of road features was digitized using the ‘pen tool’ in Adobe Photoshop. The pen’s ‘width’ was held constant over varying scales of observation (i.e., image ‘zoom’) during digitization. Consequently, at relatively small scales at least, digitized road features likely incorporate vegetation immediately bordering roads. The resultant binary (Road / Not Road) reference images were saved as PNG images with the same image dimensions as the original 200 images.
IMAGE TILES AND REFERENCE DATA FOR MODEL DEVELOPMENT
The 200 satellite images and the corresponding 200 road-reference images were both subdivided (aka ‘sliced’) into thousands of smaller image ‘tiles’ of 256x256 pixels each. Subsequent to image subdivision, subdivided images were also rotated by 90, 180, or 270 degrees to create additional, complementary image tiles for model development. In total, 8904 image tiles resulted from image subdivision and rotation. These 8904 image tiles are the main data of interest disseminated here. Each image tile entails the true-colour satellite image (256x256 pixels) and a corresponding binary road reference image (Road / Not Road).
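The subdivision step can be sketched by stepping a 256-pixel window across each 1920x886 image. The origin values this produces (0, 256, ..., 1792 horizontally; 0, 256, 512, 768 vertically) match the 'pixel coordinate' values that appear in tile names; note that windows starting at x=1792 or y=768 extend past the image edge and would be partial unless padded, and the text does not say how the authors handled that:

```python
def tile_origins(width: int, height: int, tile: int = 256):
    """Top-left (x, y) origins of tile-sized windows covering a
    width x height image, stepping by the tile size."""
    return [(x, y)
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

origins = tile_origins(1920, 886)
# 8 x-origins (0..1792) by 4 y-origins (0..768) = 32 windows per image
```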
Of these 8904 image tiles, Sloan et al. (2023) randomly selected 80% for model training (during which a model ‘learns’ to recognize road features in the input imagery), 10% for model validation (during which model parameters are iteratively refined), and 10% for final model testing (during which the final accuracy of the output road map is assessed). Here we present these data in two folders accordingly:
– ‘Training’ – contains 7124 image tiles used for model training in Sloan et al. (2023), i.e., 80% of the original pool of 8904 image tiles.
– ‘Testing’ – contains 1780 image tiles used for model validation and model testing in Sloan et al. (2023), i.e., the combined 20% of the original pool of 8904 image tiles.
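The 80/10/10 partition can be sketched as a seeded random split. This is an illustration only, not the authors' code; the exact rounding (and therefore the 7124/1780 counts reported above) may differ slightly from this sketch:

```python
import random

def split_80_10_10(tile_ids, seed=42):
    """Shuffle tile identifiers and partition them into
    training / validation / testing subsets."""
    ids = list(tile_ids)
    random.Random(seed).shuffle(ids)  # seeded for reproducibility
    n = len(ids)
    n_train, n_val = round(0.8 * n), round(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]
```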
IMAGE TILE NAMING CONVENTION A common naming convention applies to image tiles’ directories and file names, in both the ‘training’ and ‘testing’ folders: XX##_A_B_C_DrotDDD where
XX – denotes the geographical region / major island of Equatorial Asia of the original input 1920x886 pixel image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])
A, B, C and D – can all be ignored. These values, each one of 0, 256, 512, 768, 1024, 1280, 1536, and 1792, are effectively ‘pixel coordinates’ in the corresponding original 1920x886-pixel input image. They were recorded within the names of image tiles’ sub-directories and file names merely to ensure that file and directory names were unique.
rot – implies an image rotation. Not all image tiles are rotated, so ‘rot’ will appear only occasionally.
DDD – denotes the degree of image-tile rotation, e.g., 90, 180, 270. Not all image tiles are rotated, so ‘DDD’ will appear only occasionally.
Note that the designator ‘XX##’ is directly equivalent to the filenames of the corresponding 1920x886-pixel input satellite images, detailed above. Therefore, each image tile can be ‘matched’ with its parent full-scale satellite image. For example, in the ‘training’ folder, the subdirectory ‘Bo12_0_0_256_256’ indicates that the image tile therein (also named ‘Bo12_0_0_256_256’) was sourced from the full-scale image ‘Bo12.png’.
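The naming convention can be parsed with a regular expression. A sketch, assuming the region code may be capitalized (as in ‘Bo12’) and that the rotation suffix, when present, is one of 90, 180, 270:

```python
import re

TILE_NAME = re.compile(
    r"(?P<region>[A-Za-z]{2})(?P<image>\d+)"         # XX## - parent image
    r"_(?P<a>\d+)_(?P<b>\d+)_(?P<c>\d+)_(?P<d>\d+)"  # pixel coordinates
    r"(?:rot(?P<rot>\d+))?"                          # optional rotation suffix
)

def parse_tile_name(name: str) -> dict:
    """Split an image-tile name into region, image number, pixel
    coordinates, and rotation (0 if no 'rot' suffix is present)."""
    m = TILE_NAME.fullmatch(name)
    if m is None:
        raise ValueError(f"not a tile name: {name!r}")
    d = m.groupdict()
    d["rot"] = int(d["rot"]) if d["rot"] else 0
    return d

# parse_tile_name("Bo12_0_0_256_256") -> region "Bo", parent image Bo12.png
```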
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of annotated high-resolution aerial imagery of roof materials in Bonn, Germany, in the Ultralytics YOLO instance segmentation dataset format. Aerial imagery was sourced from OpenAerialMap, specifically from the Maxar Open Data Program. Roof material labels and building outlines were sourced from OpenStreetMap. Images and labels are split into training, validation, and test sets, intended for training future machine learning models for both building segmentation and roof type classification. The dataset is intended for applications such as informing studies on thermal efficiency, roof durability, heritage conservation, or socioeconomic analyses. There are six roof material types: roof tiles, tar paper, metal, concrete, gravel, and glass. Note: the data is in a .zip due to file upload limits. Please find a more detailed dataset description in the README.md.
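In the Ultralytics YOLO instance segmentation format, each line of a label file is a class index followed by a polygon of normalized x y coordinates. A minimal parser is sketched below; the mapping of class indices to the six roof materials is an assumption for illustration and should be confirmed against the dataset's README.md:

```python
# Assumed class order for illustration; verify against the README.
ROOF_CLASSES = ["roof tiles", "tar paper", "metal", "concrete", "gravel", "glass"]

def parse_yolo_seg_line(line: str):
    """Parse one YOLO segmentation label line:
    '<class> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    polygon = list(zip(coords[0::2], coords[1::2]))  # pair up (x, y)
    return cls, polygon
```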
Yukon high resolution satellite imagery is distributed from the Government of Yukon imagery repository. This is a dynamic service containing satellite imagery for locations in the Yukon, Canada.
This data is hosted in Yukon Albers equal area projection. It can be viewed and queried in the GeoYukon application: https://mapservices.gov.yk.ca/GeoYukon.
For more information contact geomatics.help@yukon.ca.