Meet Earth Engine

Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities, and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

Global-scale insight: the interactive Timelapse viewer travels back in time to show how the world has changed over the past twenty-nine years, one example of how Earth Engine can help gain insight into petabyte-scale datasets.

Ready-to-use datasets: the public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily, with over twenty petabytes of geospatial data instantly available for analysis.

Simple, yet powerful API: the Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google's cloud for your own geospatial analysis.

Convenient tools: the web-based code editor enables fast, interactive algorithm development with instant access to petabytes of data.

Scientific and humanitarian impact: scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.

"Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet, Google Earth Engine is a great blessing!" (Dr. Andrew Steer, President and CEO of the World Resources Institute)
After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the monitoring of vegetation, soil, and water cover, as well as observation of inland waterways and coastal areas. The Sentinel-2 data contain 13 UINT16 spectral bands representing TOA reflectance scaled by 10000; see the Sentinel-2 User Handbook for details. QA60 is a bitmask band that contained rasterized cloud mask polygons until February 2022, when these polygons stopped being produced. Starting in February 2024, legacy-consistent QA60 bands are constructed from the MSK_CLASSI cloud classification bands; for more details, see the full explanation of how cloud masks are computed. Each Sentinel-2 product (zip archive) may contain multiple granules, and each granule becomes a separate Earth Engine asset. EE asset IDs for Sentinel-2 assets have the following format: COPERNICUS/S2/20151128T002653_20151128T102149_T56MNN. Here the first numeric part represents the sensing date and time, the second numeric part represents the product generation date and time, and the final 6-character string is a unique granule identifier indicating its UTM grid reference (see MGRS). The Level-2 data produced by ESA can be found in the collection COPERNICUS/S2_SR. For datasets to assist with cloud and/or cloud shadow detection, see COPERNICUS/S2_CLOUD_PROBABILITY and GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED. For more details on Sentinel-2 radiometric resolution, see this page.
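A minimal Earth Engine (JavaScript) sketch of working with the harmonized collection, masking clouds via the QA60 bitmask (bit 10 flags opaque clouds, bit 11 flags cirrus) and rescaling DNs to TOA reflectance; the region and dates are illustrative placeholders:

```javascript
// Load the harmonized Sentinel-2 TOA collection for a small area and window.
var s2 = ee.ImageCollection('COPERNICUS/S2_HARMONIZED')
    .filterDate('2022-06-01', '2022-07-01')
    .filterBounds(ee.Geometry.Point(6.75, 46.5));

function maskS2clouds(image) {
  var qa = image.select('QA60');
  var cloudBitMask = 1 << 10;   // opaque clouds
  var cirrusBitMask = 1 << 11;  // cirrus
  var mask = qa.bitwiseAnd(cloudBitMask).eq(0)
      .and(qa.bitwiseAnd(cirrusBitMask).eq(0));
  // Divide by 10000 to convert the scaled DNs to TOA reflectance.
  return image.updateMask(mask).divide(10000);
}

var composite = s2.map(maskS2clouds).median();
Map.addLayer(composite, {bands: ['B4', 'B3', 'B2'], min: 0, max: 0.3}, 'S2 RGB');
```

Because the HARMONIZED collection already shifts post-baseline-04.00 scenes back into the legacy DN range, the same divide-by-10000 rescaling applies uniformly across the archive.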
Are you looking to identify B2B leads to promote your business, product, or service? Outscraper Google Maps Scraper might just be the tool you've been searching for. This powerful software enables you to extract business data directly from Google's extensive database, which spans millions of businesses across countless industries worldwide.
Outscraper Google Maps Scraper is a tool built with advanced technology that lets you scrape a wealth of valuable information about businesses from Google's database. This information includes, but is not limited to, business names, addresses, contact information, website URLs, reviews, ratings, and operational hours.
Whether you are a small business trying to make a mark or a large enterprise exploring new territories, the data obtained from the Outscraper Google Maps Scraper can be a treasure trove. This tool provides a cost-effective, efficient, and accurate method to generate leads and gather market insights.
By using Outscraper, you'll gain a significant competitive edge as it allows you to analyze your market and find potential B2B leads with precision. You can use this data to understand your competitors' landscape, discover new markets, or enhance your customer database. The tool offers the flexibility to extract data based on specific parameters like business category or geographic location, helping you to target the most relevant leads for your business.
In a world that's growing increasingly data-driven, a tool like Outscraper Google Maps Scraper could be instrumental to your business's success. If you're looking to get ahead in your market and find B2B leads more efficiently and precisely, Outscraper is worth considering. It streamlines the data collection process, allowing you to focus on what truly matters: using the data to grow your business.
https://outscraper.com/google-maps-scraper/
As a result of the Google Maps scraping, your data file will contain the following details:
Query, Name, Site, Type, Subtypes, Category, Phone, Full Address, Borough, Street, City, Postal Code, State, US State, Country, Country Code, Latitude, Longitude, Time Zone, Plus Code, Rating, Reviews, Reviews Link, Reviews Per Scores, Photos Count, Photo, Street View, Working Hours, Working Hours Old Format, Popular Times, Business Status, About, Range, Posts, Verified, Owner ID, Owner Title, Owner Link, Reservation Links, Booking Appointment Link, Menu Link, Order Links, Location Link, Place ID, Google ID, Reviews ID.
If you want to enrich your datasets with social media accounts and many more details, you can combine Google Maps Scraper with Domain Contact Scraper.
Domain Contact Scraper can scrape these details:
Email, Facebook, GitHub, Instagram, LinkedIn, Phone, Twitter, YouTube.
The Google Satellite Embedding dataset is a global, analysis-ready collection of learned geospatial embeddings. Each 10-meter pixel in this dataset is a 64-dimensional representation, or "embedding vector," that encodes temporal trajectories of surface conditions at and around that pixel as measured by various Earth observation instruments and datasets, over a …
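As a hedged sketch of how such embeddings might be pulled into Earth Engine: the asset ID 'GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL' and the expectation of exactly 64 bands are assumptions here, so check the catalog entry for the authoritative identifiers:

```javascript
// Sketch only: the collection ID below is an assumption, not confirmed by
// the description above. Each pixel holds a 64-dimensional embedding vector.
var embedding = ee.ImageCollection('GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL')
    .filterDate('2022-01-01', '2023-01-01')
    .filterBounds(ee.Geometry.Point(-122.4, 37.8))
    .first();
print('Embedding band count:', embedding.bandNames().size());  // expect 64
```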
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Ships Google Earth is a dataset for object detection tasks - it contains Ships annotations for 794 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
https://spdx.org/licenses/CC0-1.0.html
For the purposes of training AI-based models to identify (map) road features in rural/remote tropical regions on the basis of true-colour satellite imagery, and subsequently testing the accuracy of these AI-derived road maps, we produced a dataset of 8904 satellite image ‘tiles’ and their corresponding known road features across Equatorial Asia (Indonesia, Malaysia, Papua New Guinea).

METHODS
The main dataset shared here was derived from a set of 200 input satellite images, also provided here. These 200 images are effectively ‘screenshots’ (i.e., reduced-resolution copies) of high-resolution true-colour satellite imagery (~0.5-1m pixel resolution) observed using the Elvis Elevation and Depth spatial data portal (https://elevation.fsdf.org.au/), which here is functionally equivalent to the more familiar Google Earth. Each of these original images was initially acquired at a resolution of 1920x886 pixels. Actual image resolution was coarser than the native high-resolution imagery. Visual inspection of these 200 images suggests a pixel resolution of ~5 meters, given the number of pixels required to span features of familiar scale, such as roads and roofs, as well as the ready discrimination of specific land uses, vegetation types, etc. These 200 images generally spanned either forest-agricultural mosaics or intact forest landscapes with limited human intervention. Sloan et al. (2023) present a map indicating the various areas of Equatorial Asia from which these images were sourced.
IMAGE NAMING CONVENTION
A common naming convention applies to satellite images’ file names:
XX##.png
where:
XX – denotes the geographical region / major island of Equatorial Asia of the image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])
INTERPRETING ROAD FEATURES IN THE IMAGES

For each of the 200 input satellite images, road features were visually interpreted and manually digitized to create a reference image dataset with which to train, validate, and test AI road-mapping models, as detailed in Sloan et al. (2023). The reference dataset of road features was digitized using the ‘pen tool’ in Adobe Photoshop. The pen’s ‘width’ was held constant over varying scales of observation (i.e., image ‘zoom’) during digitization. Consequently, at relatively small scales at least, digitized road features likely incorporate vegetation immediately bordering roads. The resultant binary (Road / Not Road) reference images were saved as PNG images with the same image dimensions as the original 200 images.
IMAGE TILES AND REFERENCE DATA FOR MODEL DEVELOPMENT
The 200 satellite images and the corresponding 200 road-reference images were both subdivided (aka ‘sliced’) into thousands of smaller image ‘tiles’ of 256x256 pixels each. Subsequent to subdivision, the subdivided images were also rotated by 90, 180, or 270 degrees to create additional, complementary image tiles for model development. In total, 8904 image tiles resulted from image subdivision and rotation. These 8904 image tiles are the main data of interest disseminated here. Each image tile comprises a true-colour satellite image (256x256 pixels) and a corresponding binary road-reference image (Road / Not Road).
Of these 8904 image tiles, Sloan et al. (2023) randomly selected 80% for model training (during which a model ‘learns’ to recognize road features in the input imagery), 10% for model validation (during which model parameters are iteratively refined), and 10% for final model testing (during which the final accuracy of the output road map is assessed). Here we present these data in two folders accordingly:
'Training’ – contains 7124 image tiles used for model training in Sloan et al. (2023), i.e., 80% of the original pool of 8904 image tiles.

‘Testing’ – contains 1780 image tiles used for model validation and model testing in Sloan et al. (2023), i.e., 20% of the original pool of 8904 image tiles, being the combined set of image tiles for validation and testing.
IMAGE TILE NAMING CONVENTION

A common naming convention applies to image tiles’ directories and file names, in both the ‘training’ and ‘testing’ folders:

XX##_A_B_C_DrotDDD

where:
XX – denotes the geographical region / major island of Equatorial Asia of the original input 1920x886 pixel image, as follows: ‘bo’ (Borneo), ‘su’ (Sumatra), ‘sl’ (Sulawesi), ‘pn’ (Papua New Guinea), ‘jv’ (Java), ‘ng’ (New Guinea [i.e., Papua and West Papua provinces of Indonesia])
A, B, C and D – can all be ignored. These values, which are one of 0, 256, 512, 768, 1024, 1280, 1536, and 1792, are effectively ‘pixel coordinates’ in the corresponding original 1920x886-pixel input image. They were recorded within the names of image tiles’ sub-directories and file names merely to ensure that file and directory names were unique.
rot – implies an image rotation. Not all image tiles are rotated, so ‘rot’ will appear only occasionally.
DDD – denotes the degree of image-tile rotation, e.g., 90, 180, 270. Not all image tiles are rotated, so ‘DDD’ will appear only occasionally.
Note that the designator ‘XX##’ is directly equivalent to the filename of the corresponding 1920x886-pixel input satellite image, detailed above. Therefore, each image tile can be ‘matched’ with its parent full-scale satellite image. For example, in the ‘training’ folder, the subdirectory ‘Bo12_0_0_256_256’ indicates that the image tile therein (also named ‘Bo12_0_0_256_256’) was sourced from the full-scale image ‘Bo12.png’.
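Because the ‘XX##’ designator survives in every tile name, the mapping from tile back to parent image can be scripted. A small illustrative JavaScript sketch (not part of the dataset) that parses the naming convention described above:

```javascript
// Parse a tile name of the form XX##_A_B_C_D or XX##_A_B_C_DrotDDD.
function parseTileName(name) {
  var m = name.match(/^([A-Za-z]{2}\d+)_(\d+)_(\d+)_(\d+)_(\d+)(?:rot(\d+))?$/);
  if (!m) return null;
  return {
    parentImage: m[1] + '.png',                         // e.g. 'Bo12.png'
    pixelCoords: [m[2], m[3], m[4], m[5]].map(Number),  // A, B, C, D
    rotationDegrees: m[6] ? Number(m[6]) : 0            // 0 if no 'rot' suffix
  };
}

console.log(parseTileName('Bo12_0_0_256_256'));
// -> { parentImage: 'Bo12.png', pixelCoords: [0, 0, 256, 256], rotationDegrees: 0 }
```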
AID is a new large-scale aerial image dataset built by collecting sample images from Google Earth imagery. Note that although the Google Earth images are post-processed RGB renderings of the original optical aerial images, it has been shown that there is no significant difference between Google Earth images and real optical aerial images, even for pixel-level land use/cover mapping. Thus, Google Earth images can also be used as aerial images for evaluating scene classification algorithms.
The new dataset is made up of the following 30 aerial scene types: airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks, and viaduct. All the images are labelled by specialists in the field of remote sensing image interpretation, and some samples of each class are shown in Fig. 1. In all, the AID dataset contains 10000 images across 30 classes.
The images in AID are multi-source, as Google Earth images come from different remote imaging sensors. This brings more challenges for scene classification than single-source datasets like the UC-Merced dataset. Moreover, the sample images in each class of AID are carefully chosen from different countries and regions around the world, mainly China, the United States, England, France, Italy, Japan, and Germany, and they are extracted at different times and seasons under different imaging conditions, which increases the intra-class diversity of the data.
https://brightdata.com/license
The Google Maps dataset is ideal for getting extensive information on businesses anywhere in the world. Easily filter by location, business type, and other factors to get the exact data you need. The Google Maps dataset includes all major data points: timestamp, name, category, address, description, open website, phone number, open_hours, open_hours_updated, reviews_count, rating, main_image, reviews, url, lat, lon, place_id, country, and more.
The Digital Geomorphic-GIS Map of Gulf Islands National Seashore (5-meter accuracy and 1-foot resolution 2006-2007 mapping), Mississippi and Florida is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) a 10.1 file geodatabase (guis_geomorphology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a 2.2 KMZ/KML file for use in Google Earth; note that this format version of the map is limited in the data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with 1.) an ArcGIS Pro map file (.mapx) (guis_geomorphology.mapx) and individual Pro layer (.lyrx) files (for each GIS data layer), as well as with 2.) a 10.1 ArcMap (.mxd) map document (guis_geomorphology.mxd) and individual 10.1 layer (.lyr) files (for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI 10.1 shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a GIS readme file (guis_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (guis_geomorphology.pdf), which contains geologic unit descriptions, as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (guis_geomorphology_metadata_faq.pdf). Please read the guis_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (guis_geomorphology_metadata.txt or guis_geomorphology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset.
Based on the source map scale of 1:26,000 and United States National Map Accuracy Standards, features are within (horizontally) 13.2 meters or 43.3 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS, QGIS, or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3 (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Fast flood extent monitoring with SAR change detection using Google Earth Engine. This dataset develops a tool for near real-time flood monitoring by combining multi-temporal and multi-source remote sensing data. We use a SAR change detection and thresholding method, applying sensitivity analytics and threshold calibration with SAR-based and optical-based indices in a format that is streamlined, reproducible, and geographically agile. We leverage the massive repository of satellite imagery and planetary-scale geospatial analysis tools of GEE to devise a flood inundation extent model that is both scalable and replicable. The flood extents from the 2021 Hurricane Ida and the 2017 Hurricane Harvey were selected to test the approach. The methodology provides a fast, automatable, and geographically reliable tool for assisting decision-makers and emergency planners using near real-time multi-temporal satellite SAR datasets. GEE code was developed by Ebrahim Hamidi and reviewed by Brad G. Peter; figures were created by Brad G. Peter. This tool accompanies a publication, Hamidi et al., 2023: E. Hamidi, B. G. Peter, D. F. Muñoz, H. Moftakhari and H. Moradkhani, "Fast Flood Extent Monitoring with SAR Change Detection Using Google Earth Engine," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2023.3240097. (Figures for the GEE input datasets, methodology flowchart, and sensitivity analysis accompany the publication.) GEE code (multi-source and multi-temporal flood monitoring): https://code.earthengine.google.com/7f4942ab0c73503e88287ad7e9187150 The threshold sensitivity analysis is automated in the following GEE code: https://code.earthengine.google.com/a3fbfe338c69232a75cbcd0eb6bc0c8e The above scripts can be run independently. The threshold automation code identifies the optimal threshold values for use in the flood monitoring procedure.
GEE code for Hurricane Harvey, east of Houston (JavaScript):

// Study Area Boundaries
var bounds = /* color: #d63000 */ ee.Geometry.Polygon(
    [[[-94.5214452285728, 30.165244882083663],
      [-94.5214452285728, 29.56024879238989],
      [-93.36650748443218, 29.56024879238989],
      [-93.36650748443218, 30.165244882083663]]], null, false);

// [before_start, before_end, after_start, after_end, k_ndfi, k_ri, k_diff, mndwi_threshold]
var params = ['2017-06-01','2017-06-15','2017-08-01','2017-09-10',1.0,0.25,0.8,0.4]

// SAR Input Data
var before_start = params[0]
var before_end = params[1]
var after_start = params[2]
var after_end = params[3]
var polarization = "VH"
var pass_direction = "ASCENDING"

// k Coefficient Values for NDFI, RI and DII SAR Indices (Flooded Pixel Thresholding; Equation 4)
var k_ndfi = params[4]
var k_ri = params[5]
var k_diff = params[6]

// MNDWI Flooded Pixel Threshold Criterion
var mndwi_threshold = params[7]

// Datasets -----------------------------------
var dem = ee.Image("USGS/3DEP/10m").select('elevation')
var slope = ee.Terrain.slope(dem)
var swater = ee.Image('JRC/GSW1_0/GlobalSurfaceWater').select('seasonality')

var collection = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', polarization))
  .filter(ee.Filter.eq('orbitProperties_pass', pass_direction))
  .filter(ee.Filter.eq('resolution_meters', 10))
  .filterBounds(bounds)
  .select(polarization)

var before = collection.filterDate(before_start, before_end)
var after = collection.filterDate(after_start, after_end)
print("before", before)
print("after", after)

// Generating Reference and Flood Multi-temporal SAR Data ------------------------
// Mean Before and Min After ------------------------
var mean_before = before.mean().clip(bounds)
var min_after = after.min().clip(bounds)
var max_after = after.max().clip(bounds)
var mean_after = after.mean().clip(bounds)

Map.addLayer(mean_before, {min: -29.264204107025904, max: -8.938093778644141, palette: []}, "mean_before", 0)
Map.addLayer(min_after, {min: -29.29334290990966, max: -11.928313976797138, palette: []}, "min_after", 1)

// Flood Identification ------------------------
// NDFI ------------------------
var ndfi = mean_before.abs().subtract(min_after.abs())
  .divide(mean_before.abs().add(min_after.abs()))
var ndfi_filtered = ndfi.focal_mean({radius: 50, kernelType: 'circle', units: 'meters'})

// NDFI Normalization -----------------------
var ndfi_min = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.min(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
})
var ndfi_max = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.max(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
})
var ndfi_rang = ee.Number(ndfi_max.get('VH')).subtract(ee.Number(ndfi_min.get('VH')))
var ndfi_subtctMin = ndfi_filtered.subtract(ee.Number(ndfi_min.get('VH')))
var ndfi_norm = ndfi_subtctMin.divide(ndfi_rang)
Map.addLayer(ndfi_norm, {min: 0.3862747346632676, max: 0.7632898395906615}, "ndfi_norm", 0)

var histogram = ui.Chart.image.histogram({
  image: ndfi_norm,
  region: bounds,
  scale: 10,
  maxPixels: 1e13
})...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive contains native-resolution and super-resolution (SR) Landsat imagery, derivative lake shorelines, and previously published lake shorelines derived from airborne remote sensing, used here for comparison. Landsat images are from 1985 (Landsat 5) and 2017 (Landsat 8), cropped to the study areas used in the corresponding paper and converted to 8-bit format. SR images were created using the model of Lezine et al. (2021a, 2021b), which outputs imagery at 10x finer resolution; they have the same extent and bit depth as the native-resolution scenes included. Reference shoreline datasets are from Kyzivat et al. (2019a, 2019b) for the year 2017 and Walter Anthony et al. (2021a, 2021b) for Fairbanks, AK, USA in 1985. All derived and comparison shoreline datasets are cropped to the same extent, filtered to a common minimum lake size (40 m2 for 2017; 13 m2 for 1985), and smoothed via 10 m morphological closing. The SR-derived lakes were determined to have F1 scores of 0.75 (2017 data) and 0.60 (1985 data) relative to reference lakes larger than 500 m2, and accuracy is worse for smaller lakes. More details are in the forthcoming accompanying publication.
All raster images are in cloud-optimized GeoTIFF (COG) format (.tif), with file naming shown in Table 1. Vector shoreline datasets are in ESRI shapefile format (.shp, .dbf, etc.), and file names use the abbreviations LR for low (native) resolution, SR for super resolution, and GT for “ground truth” comparison airborne-derived datasets.
Landsat-5 and Landsat-8 images courtesy of the U.S. Geological Survey
For an interactive map demo of these datasets via Google Earth Engine Apps, visit: https://ekyzivat.users.earthengine.app/view/super-resolution-demo
Table 1: File naming scheme based on region, with some regions requiring two-scene mosaics.
Region | Landsat ID | Mosaic name
---|---|---
Yukon Flats Basin | LC08_L2SP_068014_20170708_20200903_02_T1 | LC08_20170708_yflats_cog.tif
“ | LC08_L2SP_068013_20170708_20201015_02_T1 | “
Old Crow Flats | LC08_L2SP_067012_20170903_20200903_02_T1 | -
Mackenzie River Delta | LC08_L2SP_064011_20170728_20200903_02_T1 | LC08_20170728_inuvik_cog.tif
“ | LC08_L2SP_064012_20170728_20200903_02_T1 | “
Canadian Shield Margin | LC08_L2SP_050015_20170811_20200903_02_T1 | LC08_20170811_cshield-margin_cog.tif
“ | LC08_L2SP_048016_20170829_20200903_02_T1 | “
Canadian Shield near Baker Creek | LC08_L2SP_046016_20170831_20200903_02_T1 | -
Canadian Shield near Daring Lake | LC08_L2SP_045015_20170723_20201015_02_T1 | -
Peace-Athabasca Delta | LC08_L2SP_043019_20170810_20200903_02_T1 | -
Prairie Potholes North 1 | LC08_L2SP_041021_20170812_20200903_02_T1 | LC08_20170812_potholes-north1_cog.tif
“ | LC08_L2SP_041022_20170812_20200903_02_T1 | “
Prairie Potholes North 2 | LC08_L2SP_038023_20170823_20200903_02_T1 | -
Prairie Potholes South | LC08_L2SP_031027_20170907_20200903_02_T1 | -
Fairbanks | LT05_L2SP_070014_19850831_20200918_02_T1 | -
References:
Kyzivat, E. D., Smith, L. C., Pitcher, L. H., Fayne, J. V., Cooley, S. W., Cooper, M. G., Topp, S. N., Langhorst, T., Harlan, M. E., Horvat, C., Gleason, C. J., & Pavelsky, T. M. (2019b). A high-resolution airborne color-infrared camera water mask for the NASA ABoVE campaign. Remote Sensing, 11(18), 2163. https://doi.org/10.3390/rs11182163
Kyzivat, E.D., L.C. Smith, L.H. Pitcher, J.V. Fayne, S.W. Cooley, M.G. Cooper, S. Topp, T. Langhorst, M.E. Harlan, C.J. Gleason, and T.M. Pavelsky. 2019a. ABoVE: AirSWOT Water Masks from Color-Infrared Imagery over Alaska and Canada, 2017. ORNL DAAC, Oak Ridge, Tennessee, USA. https://doi.org/10.3334/ORNLDAAC/1707
Lezine, E. M. D., Kyzivat, E. D., & Smith, L. C. (2021a). Super-resolution surface water mapping on the Canadian Shield using Planet CubeSat images and a generative adversarial network. Canadian Journal of Remote Sensing, 47(2), 261–275. https://doi.org/10.1080/07038992.2021.1924646

Lezine, E. M. D., Kyzivat, E. D., & Smith, L. C. (2021b). Super-resolution surface water mapping on the Canadian Shield using Planet CubeSat images and a generative adversarial network. Canadian Journal of Remote Sensing, 47(2), 261–275. https://doi.org/10.1080/07038992.2021.1924646
Walter Anthony, K., Lindgren, P., Hanke, P., Engram, M., Anthony, P., Daanen, R. P., Bondurant, A., Liljedahl, A. K., Lenz, J., Grosse, G., Jones, B. M., Brosius, L., James, S. R., Minsley, B. J., Pastick, N. J., Munk, J., Chanton, J. P., Miller, C. E., & Meyer, F. J. (2021a). Decadal-scale hotspot methane ebullition within lakes following abrupt permafrost thaw. Environ. Res. Lett., 16, 035010. https://doi.org/10.1088/1748-9326/abc848
Walter Anthony, K., and P. Lindgren. 2021b. ABoVE: Historical Lake Shorelines and Areas near Fairbanks, Alaska, 1949-2009. ORNL DAAC, Oak Ridge, Tennessee, USA. https://doi.org/10.3334/ORNLDAAC/1859
The MOD11A1 V6.1 product provides daily land surface temperature (LST) and emissivity values in a 1200 x 1200 kilometer grid. The temperature value is derived from the MOD11_L2 swath product. Above 30 degrees latitude, some pixels may have multiple observations where the criteria for clear sky are met; when this occurs, the pixel value is the average of all qualifying observations. Provided along with the day-time and night-time surface temperature bands and their quality indicator layers are MODIS bands 31 and 32 and six observation layers. Documentation: User's Guide, Algorithm Theoretical Basis Document (ATBD), General Documentation.
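A minimal Earth Engine (JavaScript) sketch of reading the daily day-time LST band and converting it from scaled Kelvin to degrees Celsius; the 0.02 scale factor is documented in the product user guide, and the dates and visualization parameters are placeholders:

```javascript
// Monthly mean day-time LST in degrees Celsius from MOD11A1 V6.1.
var lst = ee.ImageCollection('MODIS/061/MOD11A1')
    .filterDate('2021-07-01', '2021-08-01')
    .select('LST_Day_1km')
    .mean()
    .multiply(0.02)      // apply scale factor -> Kelvin
    .subtract(273.15);   // Kelvin -> degrees Celsius
Map.addLayer(lst, {min: 0, max: 40, palette: ['blue', 'yellow', 'red']}, 'LST (°C)');
```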
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Region of Interest (ROI) comprises Belgium, the Netherlands, and Luxembourg.
We use the communes administrative division, which is standardized across Europe by EUROSTAT: https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units This is roughly equivalent to the notion of municipalities in most countries.
From the link above, commune definitions are taken from COMM_RG_01M_2016_4326.shp and country borders are taken from NUTS_RG_01M_2021_3035.shp.
images: Sentinel-2 RGB from 2020-01-01 to 2020-12-31, with cloudy pixels filtered out according to the QA60 band, following the example given on the GEE dataset info page: https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S2_SR_HARMONIZED (a compositing sketch is given at the end of this entry)
see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/sentinel2rgbmedian2020.py
labels: Global Human Settlement Layers, Population Grid 2015
Labels range from 0 to 31, with the following meaning:

label value | original value in GEE dataset
---|---
0 | 0
1 | 1-10
2 | 11-20
3 | 21-30
... | ...
31 | >=291
see https://developers.google.com/earth-engine/datasets/catalog/JRC_GHSL_P2016_POP_GPW_GLOBE_V1
see also https://github.com/rramosp/geetiles/blob/main/geetiles/defs/humanpop2015.py
_aschips.geojson: the image chip geometries, along with label proportions, for easy visualization with QGIS, GeoPandas, etc.

_communes.geojson: the commune geometries, with their label proportions, for easy visualization with QGIS, GeoPandas, etc.

splits.csv: contains two splits of the image chips into train, test, and val: one with geographical bands at 45° angles in the NW-SE direction, and the same split reorganized so that all chips within the same commune fall within the same split.

data/: a pickle file for each image chip containing a dict with the 100x100 RGB Sentinel-2 chip image, the 100x100 chip-level labels, the label proportions of the chip, and the aggregated label proportions of the commune the chip belongs to.
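For reference, a minimal Earth Engine (JavaScript) sketch of the image recipe described above: a 2020 median RGB composite from COPERNICUS/S2_SR_HARMONIZED with QA60-based cloud masking, in the spirit of the example on the linked dataset page (the metadata pre-filter threshold of 20% is an illustrative choice):

```javascript
// 2020 median RGB composite with QA60 cloud masking.
var rgb2020 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
    .filterDate('2020-01-01', '2020-12-31')
    .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
    .map(function(image) {
      var qa = image.select('QA60');
      var mask = qa.bitwiseAnd(1 << 10).eq(0)   // no opaque clouds
          .and(qa.bitwiseAnd(1 << 11).eq(0));   // no cirrus
      return image.updateMask(mask).divide(10000);
    })
    .select(['B4', 'B3', 'B2'])
    .median();
Map.addLayer(rgb2020, {min: 0, max: 0.3}, 'Sentinel-2 RGB median 2020');
```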
We sampled Google Earth aerial images to obtain a representative and globally distributed dataset of treeline locations. Google Earth images are available to everyone but may not be automatically downloaded and processed under Google's license terms. Since we only wanted to detect individual trees, we evaluated the aerial images manually.
To do so, we scaled Google Earth’s GUI to a buffer size of approximately 6000 m, viewed from a perspective of 100 m (+/- 20 m) above Earth’s surface. Within this buffer zone, we took the coordinates and elevation of the highest realized treeline locations. In some remote areas of Russia and Canada, individual trees were not identifiable due to insufficient image resolution. In such cases, no treeline was sampled unless we detected another visible treeline within the 6,000 m buffer, in which case we took this next-highest treeline. We did not apply an automated image processing approach. We calculated mass elevation effect as the distance to t...
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
SEPAL (https://sepal.io/) is a free and open source cloud computing platform for geo-spatial data access and processing. It empowers users to quickly process large amounts of data on their computer or mobile device. Users can create custom analysis ready data using freely available satellite imagery, generate and improve land use maps, analyze time series, run change detection and perform accuracy assessment and area estimation, among many other functionalities in the platform. Data can be created and analyzed for any place on Earth using SEPAL.
Figure 1: Best pixel mosaic of Landsat 8 data for 2020 over Cambodia (image: https://data.apps.fao.org/catalog/dataset/9c4d7c45-7620-44c4-b653-fbe13eb34b65/resource/63a3efa0-08ab-4ad6-9d4a-96af7b6a99ec/download/cambodia_mosaic_2020.png)
SEPAL reaches over 5000 users in 180 countries for the creation of custom data products from freely available satellite data. SEPAL was developed as part of the Open Foris suite, a set of free and open-source software platforms and tools that facilitate flexible and efficient data collection, analysis, and reporting. SEPAL combines and integrates modern geospatial data infrastructures and supercomputing power, available through Google Earth Engine and Amazon Web Services, with powerful open-source data processing software such as R, ORFEO, GDAL, Python, and Jupyter Notebooks. Users can easily access the archive of satellite imagery from NASA and the European Space Agency (ESA), as well as high spatial and temporal resolution data from Planet Labs, and turn such images into data that can be used for reporting and better decision making.
National Forest Monitoring Systems in many countries have been strengthened by SEPAL, which provides technical government staff with computing resources and cutting-edge technology to accurately map and monitor their forests. The platform was originally developed for monitoring forest carbon stock and stock changes for reducing emissions from deforestation and forest degradation (REDD+). The applications of the tools on the platform now reach far beyond forest monitoring, providing different stakeholders access to cloud-based image processing tools, remote sensing, and machine learning for any application. Presently, users work on SEPAL for various applications related to land monitoring, land cover/use, land productivity, ecological zoning, ecosystem restoration monitoring, forest monitoring, near-real-time alerts for forest disturbances and fire, flood mapping, mapping impact of disasters, peatland rewetting status, and many others.
The Hand-in-Hand initiative enables countries that generate data through SEPAL to disseminate their data widely through the platform and to combine their data with the numerous other datasets available through Hand-in-Hand.
Figure 2: Image classification module for land monitoring and mapping; probability classification over Zambia (image: https://data.apps.fao.org/catalog/dataset/9c4d7c45-7620-44c4-b653-fbe13eb34b65/resource/868e59da-47b9-4736-93a9-f8d83f5731aa/download/probability_classification_over_zambia.png)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SEN12TP dataset (Sentinel-1 and -2 imagery, timely paired) contains 2319 scenes of Sentinel-1 radar and Sentinel-2 optical imagery, together with elevation and land cover information, for 1236 distinct ROIs acquired between 28 March 2017 and 31 December 2020. Each scene has a size of 20 km x 20 km at 10 m pixel spacing. The time difference between optical and radar images is at most 12 h, but for almost all scenes it is around 6 h, since the orbits of Sentinel-1 and -2 are offset by roughly that interval. Next to the \(\sigma^\circ\) radar backscatter, the radiometric terrain corrected \(\gamma^\circ\) radar backscatter is also calculated and included. \(\gamma^\circ\) values are calculated using the volumetric model presented by Vollrath et al. 2020.
The uncompressed dataset has a size of 222 GB and is split spatially into a train (~90%) and a test set (~10%). For easier download the train set is split into four separate zip archives.
Please cite the following paper when using the dataset; it details the dataset's design and creation:
T. Roßberg and M. Schmitt. A globally applicable method for NDVI estimation from Sentinel-1 SAR backscatter using a deep neural network and the SEN12TP dataset. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 2023. https://doi.org/10.1007/s41064-023-00238-y.
The file sen12tp-metadata.json includes metadata for the selected scenes: for each scene, its geometry, an ID for the ROI and the scene, the climate and land cover information used when sampling the central point, the timestamps (in ms) when the Sentinel-1 and -2 images were taken, the month of the year, and the EPSG code of the local UTM grid (e.g. EPSG:32643 - WGS 84 / UTM zone 43N).
Naming scheme: The images are contained in directories called {roi_id}_{scene_id}, as for some unique regions image pairs of multiple dates are included. In each directory are six files for the different modalities with the naming {scene_id}_{modality}.tif. Multiple modalities are included: radar backscatter and multispectral optical images, the elevation as DSM (digital surface model) and different land cover maps.
name | Modality | GEE collection |
---|---|---|
s1 | Sentinel-1 radar backscatter | COPERNICUS/S1_GRD |
s2 | Sentinel-2 Level-2A (Bottom of atmosphere, BOA) multispectral optical data with added cloud probability band | COPERNICUS/S2_SR COPERNICUS/S2_CLOUD_PROBABILITY |
dsm | 30m digital surface model | JAXA/ALOS/AW3D30/V3_2 |
worldcover | land cover, 10m resolution | ESA/WorldCover/v100 |
The following bands are included in the tif files; for further explanation see the documentation on GEE. All bands are resampled to 10 m resolution and reprojected to the coordinate reference system of the Sentinel-2 image.
Modality | Band count | Band names in tif file | Notes |
---|---|---|---|
s1 | 5 | VV_sigma0, VH_sigma0, VV_gamma0flat, VH_gamma0flat, incAngle | VV/VH_sigma0 are the \(\sigma^\circ\) values, VV/VH_gamma0flat are the radiometric terrain corrected \(\gamma^\circ\) backscatter values, and incAngle is the incident angle |
s2 | 13 | B1, B2, B3, B4, B5, B6, B7, B8, B8A, B9, B11, B12, cloud_probability | Multispectral optical bands plus the probability that a pixel is cloudy, calculated with the sentinel2-cloud-detector library; optical reflectances are bottom-of-atmosphere (BOA) reflectances calculated using sen2cor |
dsm | 1 | DSM | Height above sea level, signed 16 bits; elevation (in meters) converted from the ellipsoidal height based on ITRF97 and GRS80, using the EGM96 geoid model |
worldcover | 1 | Map | Land cover class |
Checking the file integrity
After downloading and decompressing, the file integrity can be checked using the provided file of MD5 checksums.
Under Linux: md5sum --check --quiet md5sums.txt
References:
Vollrath, Andreas, Adugna Mullissa, and Johannes Reiche (2020). "Angular-Based Radiometric Slope Correction for Sentinel-1 on Google Earth Engine". In: Remote Sensing 12.11, Art. no. 1867. https://doi.org/10.3390/rs12111867.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
PLEASE NOTE: GEEBAM is an interim product and there is no ground truthing or assessment of accuracy. Fire Extent and Severity Mapping (FESM) data should be used for accurate information on fire severity and loss of biomass in relation to bushfires.

The intention of this dataset was to provide a rapid assessment of fire impact.

In collaboration with the University of NSW, the NSW Department of Planning, Industry and Environment (DPIE) Remote Sensing and Landscape Science team has developed a rapid mapping approach to find out where wildfires in NSW have affected vegetation. We call it the Google Earth Engine Burnt Area Map (GEEBAM) and it relies on Sentinel 2 satellite imagery. The product output is a TIFF image with a resolution of 15m.

Burnt Area Classes:

1. Little change observed between pre and post fire
2. Canopy unburnt: a green canopy within the fire ground that may act as refugia for native fauna; may be affected by fire
3. Canopy partially affected: a mix of burnt and unburnt canopy vegetation
4. Canopy fully affected: the canopy and understorey are most likely burnt

Using GEEBAM at a local scale requires visual interpretation with reference to satellite imagery. This will ensure the best results for each fire or vegetation class.

Important note: GEEBAM is an interim product and there is no ground truthing or assessment of accuracy. It is updated fortnightly.

Please see the Google Earth Engine Burnt Area Factsheet.
This dataset contains atmospherically corrected surface reflectance and land surface temperature derived from the data produced by the Landsat 7 ETM+ sensor. These images contain 4 visible and near-infrared (VNIR) bands and 2 short-wave infrared (SWIR) bands processed to orthorectified surface reflectance, and one thermal infrared (TIR) band processed to orthorectified surface temperature. They also contain intermediate bands used in calculation of the ST products, as well as QA bands. Landsat 7 SR products are created with the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) algorithm (version 3.4.0). All Collection 2 ST products are created with a single-channel algorithm jointly created by the Rochester Institute of Technology (RIT) and the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL). Strips of collected data are packaged into overlapping "scenes" covering approximately 170 km x 183 km using a standardized reference grid. Some assets have only SR data, in which case ST bands are present but empty. For assets with both ST and SR bands, 'PROCESSING_LEVEL' is set to 'L2SP'. For assets with only SR bands, 'PROCESSING_LEVEL' is set to 'L2SR'. Additional documentation and usage examples.

Data provider notes: data products must contain both optical and thermal data to be successfully processed to surface temperature, as ASTER NDVI is required to temporally adjust the ASTER GED product to the target Landsat scene. Therefore, night-time acquisitions cannot be processed to surface temperature. A known error exists in the surface temperature retrievals relative to clouds and possibly cloud shadows; the characterization of these issues has been documented by Cook et al. (2014). ASTER GED contains areas of missing mean emissivity data required for successful ST product generation. If there is missing ASTER GED information, there will be missing ST data in those areas. The ASTER GED dataset is created from all clear-sky pixels of ASTER scenes acquired from 2000 through 2008. While this dataset has a global spatial extent, there are areas missing mean emissivity information due to persistent cloud contamination in the ASTER measurements. The USGS further screens unphysical values (emissivity < 0.6) in ASTER GED to remove any emissivity underestimation due to undetected clouds. For any given pixel with no ASTER GED input or an unphysical emissivity value, the resulting Landsat ST products have missing pixels. The missing Landsat ST pixels will be consistent through time (1982-present) given the static nature of ASTER GED mean climatology data. For more information refer to landsat-collection-2-surface-temperature-data-gaps-due-missing. Note that Landsat 7's orbit has been drifting to an earlier acquisition time since 2017.
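A brief Earth Engine (JavaScript) sketch of applying the published Collection 2 Level 2 scale factors (0.0000275 and -0.2 for surface reflectance; 0.00341802 and 149.0 for surface temperature); the date range is a placeholder:

```javascript
// Rescale Landsat 7 C2 L2 surface reflectance and surface temperature bands.
function applyScaleFactors(image) {
  var optical = image.select('SR_B.').multiply(0.0000275).add(-0.2);
  var thermal = image.select('ST_B6').multiply(0.00341802).add(149.0);  // Kelvin
  return image.addBands(optical, null, true).addBands(thermal, null, true);
}

var l7 = ee.ImageCollection('LANDSAT/LE07/C02/T1_L2')
    .filterDate('2005-06-01', '2005-09-01')
    .map(applyScaleFactors);
Map.addLayer(l7.median(), {bands: ['SR_B3', 'SR_B2', 'SR_B1'], min: 0, max: 0.3}, 'L7 SR');
```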
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spatiotemporal patterns of global forest net primary productivity (NPP) are pivotal for understanding the interaction between the climate and the terrestrial carbon cycle. In this study, we use Google Earth Engine (GEE), a powerful cloud platform, to study the dynamics of global forest NPP with remote sensing and climate datasets. In contrast with traditional analyses that divide forest areas according to geographical location or climate type to retrieve general conclusions, we categorize forest regions based on their NPP levels. Nine categories of forests are obtained with the self-organizing map (SOM) method, and eight relevant factors are considered in the analysis. We found that although forests can achieve higher NPP with taller, denser, and more broad-leaved trees, the influence of the climate on NPP is stronger; for the high-NPP categories, precipitation shows a weak or negative correlation with vegetation greenness, while a lack of water may correspond to a decrease in productivity for the low-NPP categories. The low-NPP categories responded to the La Niña event mainly with an increase in NPP, while the NPP of the high-NPP categories increased at the onset of the El Niño event and decreased soon afterwards when the warm phase of the El Niño-Southern Oscillation (ENSO) wore off. The influence of the ENSO changes correspondingly with different NPP levels, which suggests that the pattern of climate oscillation and forest growth conditions have some degree of synchronization. These findings may facilitate the understanding of global forest NPP variation from a different perspective.
The MODIS Surface Reflectance products provide an estimate of the surface spectral reflectance as it would be measured at ground level in the absence of atmospheric scattering or absorption. Low-level data are corrected for atmospheric gases and aerosols. MOD09GQ version 6.1 provides bands 1 and 2 at a 250m resolution in a daily gridded L2G product in the Sinusoidal projection, including a QC and five observation layers. This product is meant to be used in conjunction with MOD09GA, where important quality and viewing geometry information is stored. Documentation: User's Guide, Algorithm Theoretical Basis Document (ATBD), General Documentation.
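A minimal Earth Engine (JavaScript) sketch of a common use of this product: computing 250 m NDVI from band 1 (red) and band 2 (NIR). The 0.0001 reflectance scale factor cancels in the normalized difference, and the date range is a placeholder:

```javascript
// One-week mean NDVI at 250 m from MOD09GQ V6.1.
var ndvi = ee.ImageCollection('MODIS/061/MOD09GQ')
    .filterDate('2021-06-01', '2021-06-08')
    .mean()
    .normalizedDifference(['sur_refl_b02', 'sur_refl_b01']);  // (NIR - red) / (NIR + red)
Map.addLayer(ndvi, {min: -0.2, max: 0.9, palette: ['brown', 'yellow', 'green']}, 'NDVI');
```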