Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.
Explore our interactive timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.
The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.
The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google’s cloud for your own geospatial analysis.
Use our web-based code editor for fast, interactive algorithm development with instant access to petabytes of data.
Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Spatiotemporal patterns of global forest net primary productivity (NPP) are pivotal for understanding the interaction between climate and the terrestrial carbon cycle. In this study, we use Google Earth Engine (GEE), a powerful cloud platform, to study the dynamics of global forest NPP with remote sensing and climate datasets. In contrast with traditional analyses that divide forest areas according to geographical location or climate type to retrieve general conclusions, we categorize forest regions based on their NPP levels. Nine categories of forests are obtained with the self-organizing map (SOM) method, and eight relevant factors are considered in the analysis. We found that although forests can achieve higher NPP with taller, denser, and more broad-leaved trees, the influence of climate on NPP is stronger; for the high-NPP categories, precipitation shows a weak or negative correlation with vegetation greenness, while a lack of water may correspond to a decrease in productivity for the low-NPP categories. The low-NPP categories responded to the La Niña event mainly with an increase in NPP, while the NPP of the high-NPP categories increased at the onset of the El Niño event and decreased soon afterwards, when the warm phase of the El Niño-Southern Oscillation (ENSO) wore off. The influence of the ENSO changes correspondingly with different NPP levels, which implies that the pattern of climate oscillation and forest growth conditions are synchronized to some degree. These findings may facilitate the understanding of global forest NPP variation from a different perspective.
The Land Use/Cover Area frame Survey (LUCAS) in the European Union (EU) was set up to provide statistical information. It represents a triennial in-situ land-cover and land-use data-collection exercise that extends over the whole of the EU's territory. LUCAS collects information on land cover and land use, agro-environmental variables, soil, and grassland. The surveys also provide spatial information to analyse the mutual influences between agriculture, environment, and countryside, such as irrigation and land management. The dataset presented here is the harmonized version of all yearly LUCAS surveys with a total of 106 attributes. Each point's location is given by the fields 'th_lat' and 'th_lon', that is, the LUCAS theoretical location (THLOC), as prescribed by the LUCAS grid. For more information please see Citations. Note that not every field is present for every year - see the "Years" section in property descriptions. The text "C1 (Instructions)" in the table schema descriptions refers to this document. See also the 2018 LUCAS polygons dataset.
This dataset contains maps of the location and temporal distribution of surface water from 1984 to 2019 and provides statistics on the extent and change of those water surfaces. For more information see the associated journal article: High-resolution mapping of global surface water and its long-term changes (Nature, 2016) and the online Data Users Guide. These data were generated using 4,185,439 scenes from Landsat 5, 7, and 8 acquired between 16 March 1984 and 31 December 2019. Each pixel was individually classified into water / non-water using an expert system, and the results were collated into a monthly history for the entire time period and two epochs (1984-1999, 2000-2019) for change detection. This mapping-layers product consists of a single image containing 7 bands. It maps different facets of the spatial and temporal distribution of surface water over the last 35 years. Areas where water has never been detected are masked.
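As a hedged illustration of how this product can be opened in the Earth Engine Code Editor, the sketch below loads the mapping layers and displays the water-occurrence band. The asset ID JRC/GSW1_4/GlobalSurfaceWater and the band names are assumptions based on the public GEE catalog, not part of this dataset description; earlier versions of the asset use the same band structure.

// Illustrative sketch: loading the Global Surface Water mapping layers.
// Asset ID and band names are assumptions based on the public GEE catalog.
var gsw = ee.Image('JRC/GSW1_4/GlobalSurfaceWater');

// A few of the seven bands described above.
var occurrence = gsw.select('occurrence');   // % of valid observations classified as water
var change = gsw.select('change_abs');       // absolute change in occurrence between the two epochs
var seasonality = gsw.select('seasonality'); // number of months per year water is present
var transition = gsw.select('transition');   // class of change between first and last year
var maxExtent = gsw.select('max_extent');    // 1 where water was ever detected

// Visualize water occurrence; areas where water was never detected remain masked.
Map.addLayer(occurrence.updateMask(occurrence.gt(0)),
             {min: 0, max: 100, palette: ['lightblue', 'blue']},
             'Water occurrence (1984-2019)');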
This archived Paleoclimatology Study is available from the NOAA National Centers for Environmental Information (NCEI), under the World Data Service (WDS) for Paleoclimatology. The associated NCEI study type is Tree Ring. The data include tree-ring parameters with a geographic location of Arkansas, United States of America. The time period coverage is from 350 to -53 in calendar years before present (BP). See the metadata information for parameter and study location details. Please cite this study when using the data.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Fast flood extent monitoring with SAR change detection using Google Earth Engine. This dataset develops a tool for near real-time flood monitoring through a novel combination of multi-temporal and multi-source remote sensing data. We use a SAR change detection and thresholding method, and apply sensitivity analysis and threshold calibration, using SAR-based and optical-based indices in a format that is streamlined, reproducible, and geographically agile. We leverage the massive repository of satellite imagery and the planetary-scale geospatial analysis tools of GEE to devise a flood inundation extent model that is both scalable and replicable. The flood extents of the 2021 Hurricane Ida and the 2017 Hurricane Harvey were selected to test the approach. The methodology provides a fast, automatable, and geographically reliable tool for assisting decision-makers and emergency planners using near real-time multi-temporal satellite SAR data sets. The GEE code was developed by Ebrahim Hamidi and reviewed by Brad G. Peter; figures were created by Brad G. Peter. This tool accompanies the publication Hamidi et al., 2023: E. Hamidi, B. G. Peter, D. F. Muñoz, H. Moftakhari and H. Moradkhani, "Fast Flood Extent Monitoring with SAR Change Detection Using Google Earth Engine," in IEEE Transactions on Geoscience and Remote Sensing, doi: 10.1109/TGRS.2023.3240097. GEE code (multi-source and multi-temporal flood monitoring): https://code.earthengine.google.com/7f4942ab0c73503e88287ad7e9187150 The threshold sensitivity analysis is automated in the following GEE code: https://code.earthengine.google.com/a3fbfe338c69232a75cbcd0eb6bc0c8e The two scripts can be run independently. The threshold automation code identifies the optimal threshold values for use in the flood monitoring procedure.
GEE code for Hurricane Harvey, east of Houston (JavaScript):

// Study Area Boundaries
var bounds = /* color: #d63000 */ ee.Geometry.Polygon(
    [[[-94.5214452285728, 30.165244882083663],
      [-94.5214452285728, 29.56024879238989],
      [-93.36650748443218, 29.56024879238989],
      [-93.36650748443218, 30.165244882083663]]], null, false)

// [before_start, before_end, after_start, after_end, k_ndfi, k_ri, k_diff, mndwi_threshold]
var params = ['2017-06-01','2017-06-15','2017-08-01','2017-09-10',1.0,0.25,0.8,0.4]

// SAR Input Data
var before_start = params[0]
var before_end = params[1]
var after_start = params[2]
var after_end = params[3]
var polarization = "VH"
var pass_direction = "ASCENDING"

// k Coefficient Values for NDFI, RI and DII SAR Indices (Flooded Pixel Thresholding; Equation 4)
var k_ndfi = params[4]
var k_ri = params[5]
var k_diff = params[6]

// MNDWI Flooded Pixel Threshold Criterion
var mndwi_threshold = params[7]

// Datasets -----------------------------------
var dem = ee.Image("USGS/3DEP/10m").select('elevation')
var slope = ee.Terrain.slope(dem)
var swater = ee.Image('JRC/GSW1_0/GlobalSurfaceWater').select('seasonality')
var collection = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', polarization))
  .filter(ee.Filter.eq('orbitProperties_pass', pass_direction))
  .filter(ee.Filter.eq('resolution_meters', 10))
  .filterBounds(bounds)
  .select(polarization)

var before = collection.filterDate(before_start, before_end)
var after = collection.filterDate(after_start, after_end)
print("before", before)
print("after", after)

// Generating Reference and Flood Multi-temporal SAR Data ------------------------
// Mean Before and Min After ------------------------
var mean_before = before.mean().clip(bounds)
var min_after = after.min().clip(bounds)
var max_after = after.max().clip(bounds)
var mean_after = after.mean().clip(bounds)

Map.addLayer(mean_before, {min: -29.264204107025904, max: -8.938093778644141, palette: []}, "mean_before", 0)
Map.addLayer(min_after, {min: -29.29334290990966, max: -11.928313976797138, palette: []}, "min_after", 1)

// Flood identification ------------------------
// NDFI ------------------------
var ndfi = mean_before.abs().subtract(min_after.abs())
  .divide(mean_before.abs().add(min_after.abs()))
var ndfi_filtered = ndfi.focal_mean({radius: 50, kernelType: 'circle', units: 'meters'})

// NDFI Normalization -----------------------
var ndfi_min = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.min(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
})
var ndfi_max = ndfi_filtered.reduceRegion({
  reducer: ee.Reducer.max(),
  geometry: bounds,
  scale: 10,
  maxPixels: 1e13
})
var ndfi_rang = ee.Number(ndfi_max.get('VH')).subtract(ee.Number(ndfi_min.get('VH')))
var ndfi_subtctMin = ndfi_filtered.subtract(ee.Number(ndfi_min.get('VH')))
var ndfi_norm = ndfi_subtctMin.divide(ndfi_rang)
Map.addLayer(ndfi_norm, {min: 0.3862747346632676, max: 0.7632898395906615}, "ndfi_norm", 0)

var histogram = ui.Chart.image.histogram({
  image: ndfi_norm,
  region: bounds,
  scale: 10,
  maxPixels: 1e13
})...
Sentinel2GlobalLULC is a deep learning-ready dataset of RGB images from the Sentinel-2 satellites designed for global land use and land cover (LULC) mapping. Sentinel2GlobalLULC v2.1 contains 194,877 images in GeoTiff and JPEG format corresponding to 29 broad LULC classes. Each image has 224 x 224 pixels at 10 m spatial resolution and was produced by assigning the 25th percentile of all available observations in the Sentinel-2 collection between June 2015 and October 2020 in order to remove atmospheric effects (e.g., clouds, aerosols, shadows, snow). A spatial purity value was assigned to each image based on the consensus across 15 different global LULC products available in Google Earth Engine (GEE). Our dataset is structured into 3 main zip-compressed folders, an Excel file with a dictionary for class names and descriptive statistics per LULC class, and a Python script to convert RGB GeoTiff images into JPEG format. The first folder, "Sentinel2LULC_GeoTiff.zip", contains 29 zip-compressed subfolders where each one corresponds to a specific LULC class with hundreds to thousands of GeoTiff Sentinel-2 RGB images. The second folder, "Sentinel2LULC_JPEG.zip", contains 29 zip-compressed subfolders with a JPEG-formatted version of the same images provided in the first main folder. The third folder, "Sentinel2LULC_CSV.zip", includes 29 zip-compressed CSV files with as many rows as provided images and with 12 columns containing the following metadata (this same metadata is provided in the image filenames):
Land Cover Class ID: the identification number of each LULC class
Land Cover Class Short Name: the short name of each LULC class
Image ID: the identification number of each image within its corresponding LULC class
Pixel Purity Value: the spatial purity of each pixel for its corresponding LULC class, calculated as the spatial consensus across up to 15 land-cover products
GHM Value: the spatial average of the Global Human Modification index (gHM) for each image
Latitude: the latitude of the center point of each image
Longitude: the longitude of the center point of each image
Country Code: the Alpha-2 country code of each image as described in the ISO 3166 international standard; to understand the country codes, see https://www.iban.com/country-codes
Administrative Department Level1: the administrative level 1 name to which each image belongs
Administrative Department Level2: the administrative level 2 name to which each image belongs
Locality: the name of the locality to which each image belongs
Number of S2 images: the number of instances found in the corresponding Sentinel-2 image collection between June 2015 and October 2020 when compositing and exporting the corresponding image tile
For seven LULC classes, we could not export from GEE all images that fulfilled a spatial purity of 100%, since there were millions of them. In this case, we exported a stratified random sample of 14,000 images and provided an additional CSV file with the images actually contained in our dataset. That is, for these seven LULC classes, we provide these 2 CSV files: a CSV file that contains all exported images for this class, and a CSV file that contains all images available for this class at a spatial purity of 100%, both the ones exported and the ones not exported, in case the user wants to export them.
These CSV filenames end with "including_non_downloaded_images". To clearly state the geographical coverage of images available in this dataset, we included in version v2.1 a compressed folder called "Geographic_Representativeness.zip". This zip-compressed folder contains a CSV file for each LULC class that provides the complete list of countries represented in that class. Each CSV file has two columns: the first gives the country code and the second gives the number of images provided in that country for that LULC class. In addition to these 29 CSV files, we provide another CSV file that maps each ISO Alpha-2 country code to its original full country name. © Sentinel2GlobalLULC Dataset by Yassir Benhammou, Domingo Alcaraz-Segura, Emilio Guirado, Rohaifa Khaldi, Boujemâa Achchab, Francisco Herrera & Siham Tabik is marked with Attribution 4.0 International (CC-BY 4.0)
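As a hedged illustration of the compositing step described above (assigning the 25th percentile of all Sentinel-2 observations between June 2015 and October 2020), the sketch below shows how such an RGB composite could be built in the Earth Engine Code Editor; the collection ID, example region, and visualization parameters are illustrative assumptions and do not reproduce the exact script used to build the dataset.

// Illustrative sketch only: a per-pixel 25th-percentile RGB composite of
// Sentinel-2 imagery for June 2015 - October 2020 over an example region.
// COPERNICUS/S2 (Level-1C) is assumed here; the dataset authors may have used
// different filtering and export parameters.
var region = ee.Geometry.Rectangle([-3.7, 37.0, -3.5, 37.2]);  // hypothetical example tile

var composite = ee.ImageCollection('COPERNICUS/S2')
  .filterDate('2015-06-01', '2020-10-31')
  .filterBounds(region)
  .select(['B4', 'B3', 'B2'])                 // RGB bands at 10 m
  .reduce(ee.Reducer.percentile([25]))        // 25th percentile suppresses clouds and shadows
  .clip(region);

Map.centerObject(region, 12);
Map.addLayer(composite,
             {bands: ['B4_p25', 'B3_p25', 'B2_p25'], min: 0, max: 3000},
             'S2 25th-percentile RGB composite');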
This data set contains shapefiles of termini traces from 294 Greenland glaciers, derived using a deep learning algorithm (AutoTerm) applied to satellite imagery. The model functions as a pipeline, ingesting publicly available satellite imagery from Google Earth Engine (GEE) and outputting shapefiles of glacial termini positions for each image. Also available are supplementary data, including temporal coverage of termini traces, time series data of termini variations, and updated land, ocean, and ice masks derived from the Greenland Ice Sheet Mapping Project (GrIMP) ice masks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides an accurate global impervious surface map at 30-m resolution for 2015, produced by combining Landsat-8 OLI optical images, Sentinel-1 SAR images, and VIIRS NTL (nighttime light) images on the Google Earth Engine (GEE) platform.
This dataset provides geospatial location data and scripts used to analyze the relationship between MODIS-derived NDVI and solar and sensor angles in a pinyon-juniper ecosystem in Grand Canyon National Park. The data are provided in support of the following publication: "Solar and sensor geometry, not vegetation response, drive satellite NDVI phenology in widespread ecosystems of the western United States". The data and scripts allow users to replicate, test, or further explore results. The file GrcaScpnModisCellCenters.csv contains locations (latitude-longitude) of all the 250-m MODIS (MOD09GQ) cell centers associated with the Grand Canyon pinyon-juniper ecosystem that the Southern Colorado Plateau Network (SCPN) is monitoring through its land surface phenology and integrated upland monitoring programs. The file SolarSensorAngles.csv contains MODIS angle measurements for the pixel at the phenocam location plus a random 100-point subset of pixels within the GRCA-PJ ecosystem. The script files (folder: 'Code') consist of 1) a Google Earth Engine (GEE) script used to download MODIS data through the GEE JavaScript interface, and 2) a script used to calculate derived variables and to test relationships between solar and sensor angles and NDVI using the statistical software package 'R'. The file Fig_8_NdviSolarSensor.JPG shows NDVI dependence on solar and sensor geometry demonstrated for both a single pixel/year and for multiple pixels over time. (Left) MODIS NDVI versus solar-to-sensor angle for the Grand Canyon phenocam location in 2018, the year for which there is corresponding phenocam data. (Right) Modeled r-squared values by year for 100 randomly selected MODIS pixels in the SCPN-monitored Grand Canyon pinyon-juniper ecosystem. The model for forward-scatter MODIS-NDVI is log(NDVI) ~ solar-to-sensor angle. The model for back-scatter MODIS-NDVI is log(NDVI) ~ solar-to-sensor angle + sensor zenith angle. Boxplots show interquartile ranges; whiskers extend to 10th and 90th percentiles. The horizontal line marking the average median value for forward-scatter r-squared (0.835) is nearly indistinguishable from the back-scatter line (0.833). The dataset folder also includes supplemental R-project and packrat files that allow the user to apply the workflow by opening a project that will use the same package versions used in this study (e.g., the folders Rproj.user and packrat, and the files .RData and PhenocamPR.Rproj). The empty folder GEE_DataAngles is included so that the user can save the data files from the Google Earth Engine scripts to this location, where they can then be incorporated into the R processing scripts without needing to change folder names. To successfully use the packrat information to replicate the exact processing steps that were used, the user should refer to the packrat documentation available at https://cran.r-project.org/web/packages/packrat/index.html and at https://www.rdocumentation.org/packages/packrat/versions/0.5.0. Alternatively, the user may also use the descriptive phenopix package documentation and the description/references provided in the associated journal article to process the data and achieve the same results using newer packages or other software programs.
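As a hedged sketch of the kind of GEE download step described above, the snippet below computes daily NDVI from MOD09GQ surface reflectance and attaches the matching solar and sensor angle bands from MOD09GA at one example point; the collection IDs and band names follow the public GEE catalog, the point is hypothetical, and the study's exact filtering and export logic is not reproduced here.

// Illustrative sketch: daily MODIS NDVI plus solar/sensor geometry at a point.
// MOD09GQ provides 250 m red/NIR reflectance; MOD09GA provides the 1 km angle bands.
var point = ee.Geometry.Point([-112.1, 36.1]);   // hypothetical cell center

var gq = ee.ImageCollection('MODIS/006/MOD09GQ').filterDate('2018-01-01', '2019-01-01');
var ga = ee.ImageCollection('MODIS/006/MOD09GA').filterDate('2018-01-01', '2019-01-01');

// Pair the two daily collections by their shared system:index (the acquisition date).
var withAngles = gq.map(function (img) {
  var angles = ee.Image(ga.filter(ee.Filter.eq('system:index', img.get('system:index'))).first());
  var ndvi = img.normalizedDifference(['sur_refl_b02', 'sur_refl_b01']).rename('NDVI');
  return ndvi.addBands(angles.select(['SolarZenith', 'SensorZenith',
                                      'SolarAzimuth', 'SensorAzimuth']))
             .copyProperties(img, ['system:time_start']);
});

// Time series of NDVI at the point; the Console chart can be saved as CSV.
print(ui.Chart.image.series(withAngles.select('NDVI'), point, ee.Reducer.first(), 250));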
This code creates a daily F-MOD_V product as outlined in the “Comprehensive Accuracy Assessment of MODIS Daily Snow Cover Products and Gap Filling Methods” paper published in (TBD). This code can be copy-pasted into the Google Earth Engine JavaScript code editor (https://code.earthengine.google.com/) to create a gap-filled snow cover dataset.
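The F-MOD_V code itself is not reproduced here. Purely as a hedged illustration of one common gap-filling ingredient (filling cloud-obscured Terra MOD10A1 pixels with the same-day Aqua MYD10A1 observation), the sketch below shows how such a Terra-Aqua combination might look in GEE; it is not the method of the paper cited above.

// Illustrative sketch only - not the F-MOD_V algorithm. A common first step in
// gap filling daily MODIS snow cover: fill masked (e.g. cloudy) Terra pixels
// with the same-day Aqua observation.
var start = '2020-01-01', end = '2020-02-01';
var terra = ee.ImageCollection('MODIS/006/MOD10A1').select('NDSI_Snow_Cover').filterDate(start, end);
var aqua  = ee.ImageCollection('MODIS/006/MYD10A1').select('NDSI_Snow_Cover').filterDate(start, end);

var combined = terra.map(function (t) {
  // Same-day Aqua image(s), matched via the shared date-based system:index.
  var sameDayAqua = aqua.filter(ee.Filter.eq('system:index', t.get('system:index')));
  // mosaic() draws later images on top, so Terra keeps priority and Aqua only
  // shows through where Terra is masked.
  return sameDayAqua.merge(ee.ImageCollection([t])).mosaic()
    .copyProperties(t, ['system:time_start']);
});
print('Terra-Aqua combined snow cover', combined);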
Top of Atmosphere (TOA) reflectance data in bands from the USGS Landsat 5 and Landsat 8 satellites were accessed via Google Earth Engine. CANUE staff used Google Earth Engine functions to create cloud-free mean growing season composites, mask water features, and then export the resulting band data. Growing season is defined as May 1st through August 31st. NDVI indices were then calculated as (Band 4 - Band 3)/(Band 4 + Band 3) for Landsat 5 data, and as (Band 5 - Band 4)/(Band 5 + Band 4) for Landsat 8 data. No data were available for 2012, due to the decommissioning of Landsat 5 in 2011 prior to the start of Landsat 8 in 2013. No cross-calibration between the sensors was performed; please be aware there may be small bias differences between NDVI values calculated using Landsat 5 and Landsat 8. Final NDVI metrics were linked to all 6-digit DMTI Spatial single link postal code locations in Canada, and for surrounding areas within 100m, 250m, 500m, and 1km.
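A minimal sketch of this kind of computation in the Earth Engine Code Editor, assuming the Landsat 8 TOA collection LANDSAT/LC08/C02/T1_TOA and the simpleCloudScore helper for cloud masking (for Landsat 5, the analogous collection is LANDSAT/LT05/C02/T1_TOA with bands B4/B3). The CANUE workflow itself (water masking, postal-code linkage) is not reproduced.

// Minimal sketch (assumptions noted above): growing-season mean NDVI from
// Landsat 8 TOA reflectance with a simple cloud mask.
var l8toa = ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
  .filterDate('2018-05-01', '2018-08-31');          // May 1 - Aug 31 growing season

var ndviComposite = l8toa.map(function (img) {
  // simpleCloudScore adds a 'cloud' band (0-100); keep reasonably clear pixels.
  var cloud = ee.Algorithms.Landsat.simpleCloudScore(img).select('cloud');
  var ndvi = img.normalizedDifference(['B5', 'B4']).rename('NDVI'); // (B5 - B4)/(B5 + B4)
  return ndvi.updateMask(cloud.lt(20));
}).mean();                                           // mean growing-season composite

Map.addLayer(ndviComposite, {min: 0, max: 1, palette: ['white', 'green']}, 'L8 growing-season NDVI');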
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SEN12TP dataset (Sentinel-1 and -2 imagery, timely paired) contains 2319 scenes of Sentinel-1 radar and Sentinel-2 optical imagery, together with elevation and land cover information, for 1236 distinct ROIs acquired between 28 March 2017 and 31 December 2020. Each scene has a size of 20 km x 20 km at 10 m pixel spacing. The time difference between the optical and radar images is at most 12 h, but for almost all scenes it is around 6 h, since the orbits of Sentinel-1 and -2 are offset by roughly that amount. In addition to the \(\sigma^\circ\) radar backscatter, the radiometrically terrain-corrected \(\gamma^\circ\) radar backscatter is also calculated and included. \(\gamma^\circ\) values are calculated using the volumetric model presented by Vollrath et al. (2020).
The uncompressed dataset has a size of 222 GB and is split spatially into a train set (~90%) and a test set (~10%). For easier download, the train set is split into four separate zip archives.
Please cite the following paper when using the dataset; it details the dataset's design and creation:
T. Roßberg and M. Schmitt. A globally applicable method for NDVI estimation from Sentinel-1 SAR backscatter using a deep neural network and the SEN12TP dataset. PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, 2023. https://doi.org/10.1007/s41064-023-00238-y.
The file sen12tp-metadata.json includes metadata for the selected scenes. It includes, for each scene, the geometry, an ID for the ROI and the scene, the climate and land cover information used when sampling the central point, the timestamps (in ms) at which the Sentinel-1 and -2 images were taken, the month of the year, and the EPSG code of the local UTM grid (e.g. EPSG:32643 - WGS 84 / UTM zone 43N).
Naming scheme: The images are contained in directories called {roi_id}_{scene_id}, since for some regions image pairs from multiple dates are included. Each directory contains six files for the different modalities, named {scene_id}_{modality}.tif. The modalities include radar backscatter and multispectral optical images, the elevation as a DSM (digital surface model), and different land cover maps.
name | Modality | GEE collection |
---|---|---|
s1 | Sentinel-1 radar backscatter | COPERNICUS/S1_GRD |
s2 | Sentinel-2 Level-2A (Bottom of atmosphere, BOA) multispectral optical data with added cloud probability band | COPERNICUS/S2_SR COPERNICUS/S2_CLOUD_PROBABILITY |
dsm | 30m digital surface model | JAXA/ALOS/AW3D30/V3_2 |
worldcover | land cover, 10m resolution | ESA/WorldCover/v100 |
The following bands are included in the tif files; for a further explanation see the documentation on GEE. All bands are resampled to 10 m resolution and reprojected to the coordinate reference system of the Sentinel-2 image. (A short GEE sketch assembling these collections follows the band table below.)
Modality | Band count | Band names in tif file | Notes |
s1 | 5 | VV_sigma0, VH_sigma0, VV_gamma0flat, VH_gamma0flat, incAngle | VV/VH_sigma0 are the \(\sigma^\circ\) values; VV/VH_gamma0flat are the radiometrically terrain-corrected \(\gamma^\circ\) backscatter values; incAngle is the incidence angle |
s2 | 13 | B1, B2, B3, B4, B5, B6, B7, B8, B8A, B9, B11, B12, cloud_probability | multispectral optical bands and the probability that a pixel is cloudy, calculated with the sentinel2-cloud-detector library; optical reflectances are bottom-of-atmosphere (BOA) reflectances calculated using sen2cor |
dsm | 1 | DSM | Height above sea level. Signed 16 bits. Elevation (in meters) converted from the ellipsoidal height based on ITRF97 and GRS80, using the EGM96 geoid model. |
worldcover | 1 | Map | Landcover class |
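As referenced above, here is a hedged sketch of how the four listed GEE collections could be gathered for one 20 km x 20 km ROI; the ROI, date range, and filters are illustrative assumptions and do not reproduce the SEN12TP export pipeline (in particular, the \(\gamma^\circ\) terrain flattening of Vollrath et al. 2020 is omitted).

// Illustrative sketch: gathering the SEN12TP source collections for one example ROI.
var roi = ee.Geometry.Rectangle([10.0, 48.0, 10.26, 48.18]);   // roughly 20 km x 20 km (assumed)
var start = '2019-06-01', end = '2019-06-30';                  // example date range

var s1 = ee.ImageCollection('COPERNICUS/S1_GRD')               // sigma0 backscatter (VV/VH)
  .filterBounds(roi).filterDate(start, end)
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .select(['VV', 'VH']);

var s2 = ee.ImageCollection('COPERNICUS/S2_SR')                // BOA reflectance (sen2cor)
  .filterBounds(roi).filterDate(start, end);
var s2clouds = ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
  .filterBounds(roi).filterDate(start, end);

var dsm = ee.ImageCollection('JAXA/ALOS/AW3D30/V3_2')          // 30 m digital surface model
  .select('DSM').mosaic().clip(roi);
var worldcover = ee.ImageCollection('ESA/WorldCover/v100')     // 10 m land cover map
  .first().clip(roi);

print('S1 scenes:', s1.size(), 'S2 scenes:', s2.size(), 'S2 cloud prob:', s2clouds.size());
Map.centerObject(roi, 11);
Map.addLayer(worldcover, {}, 'ESA WorldCover');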
Checking the file integrity
After downloading and decompressing, file integrity can be checked using the provided file of MD5 checksums.
Under Linux: md5sum --check --quiet md5sums.txt
References:
Vollrath, Andreas, Adugna Mullissa, Johannes Reiche (2020). "Angular-Based Radiometric Slope Correction for Sentinel-1 on Google Earth Engine". In: Remote Sensing 12.11, Art. no. 1867. https://doi.org/10.3390/rs12111867.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Machine learning algorithms have been widely adopted in ecosystem monitoring. British Columbia suffers from grassland degradation, but the province does not have an accurate spatial database for effective grassland management. Moreover, computational power and storage space remain two of the limiting factors in developing such a database. In this study, we leverage supervised machine learning algorithms in Google Earth Engine to improve the annual grassland inventory through an automated process. The pilot study was conducted over the Rocky Mountain district. We compared two classification algorithms: the random forest and the support vector machine. Training data were sampled through stratified and gridded sampling. 19 predictor variables were chosen from Sentinel-1 and Sentinel-2 imagery and relevant topographic derivatives, spectral indices, and textural indices using a wrapper-based feature selection method. The resultant map was post-processed to remove land features that were confounded with grasslands. Random forest was chosen as the prototype because the algorithm predicted features relevant to the project's scope at relatively higher accuracy (67%-86%) than its counterpart (50%-76%). The prototype was good at delineating the boundaries between treed and non-treed areas and ferreting out open patches among closed forests. These open patches are usually disregarded by the VRI, but they are deemed essential to grassland stewardship and to wildlife ecologists. The prototype demonstrated the feasibility of automating grassland delineation with a random forest classifier in Google Earth Engine. Furthermore, grassland stewards can use the product to identify monitoring and restoration areas strategically in the future.
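A hedged sketch of what a supervised random forest classification of Sentinel-2 predictors looks like in the Earth Engine Code Editor; the band list, number of trees, training asset, and class property below are illustrative assumptions and do not reproduce the 19-variable, wrapper-selected feature set or the sampling design of this study.

// Illustrative sketch only: supervised random forest classification in GEE.
// 'users/example/grassland_training' is a hypothetical FeatureCollection of
// labeled samples with a 'landcover' property.
var roi = ee.Geometry.Rectangle([-115.8, 49.3, -115.2, 49.7]);   // example area
var trainingPoints = ee.FeatureCollection('users/example/grassland_training');  // hypothetical asset

// Median Sentinel-2 composite with a few example predictor bands plus NDVI.
var s2 = ee.ImageCollection('COPERNICUS/S2_SR')
  .filterBounds(roi).filterDate('2020-06-01', '2020-09-01')
  .median();
var predictors = s2.select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12'])
  .addBands(s2.normalizedDifference(['B8', 'B4']).rename('NDVI'));

// Sample predictor values at the training points and fit a random forest.
var training = predictors.sampleRegions({
  collection: trainingPoints, properties: ['landcover'], scale: 10});
var classifier = ee.Classifier.smileRandomForest(100)
  .train({features: training, classProperty: 'landcover',
          inputProperties: predictors.bandNames()});

// Classify the composite and display the result.
var classified = predictors.classify(classifier);
Map.centerObject(roi, 10);
Map.addLayer(classified.clip(roi), {min: 0, max: 5}, 'RF classification');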
The International Best Track Archive for Climate Stewardship (IBTrACS) provides location and intensity for global tropical cyclones. The data span from the 1840s to present, generally providing data at 3-hour intervals. While the best track data is focused on position and intensity (maximum sustained wind speed or minimum central pressure), other parameters are provided by some agencies (e.g., radius of maximum winds, environmental pressure, radius of hurricane force winds, etc.) and are likewise provided in IBTrACS. Files are available subset by Basin or time period, where basins include: East Pacific, North Atlantic, North Indian, South Atlantic, South Indian, South Pacific, and the West Pacific.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Stereospondyls underwent a global radiation in the Early Triassic, including an abundance of small-bodied taxa, which are otherwise rare throughout the Mesozoic. Lapillopsidae is one such clade and is presently known only from Australia and India. This clade’s phylogenetic position, initially interpreted as micropholid dissorophoids and later as early diverging stereospondyls, remains uncertain. Although the latter interpretation is now widely accepted, lapillopsids’ specific relationship to other Early Triassic clades remains unresolved; in particular, recent work suggested that Lapillopsidae nests within Lydekkerinidae. Here we describe Rhigerpeton isbelli, gen. et sp. nov., based on a partial skull from the lower Fremouw Formation of Antarctica that is diagnosed by a combination of features shared with at least some lapillopsids, such as a longitudinal ridge on the dorsal surface of the tabular, and features not found in lapillopsids but shared with some lydekkerinids, such as the retention of pterygoid denticles and a parachoanal tooth row (as in Lydekkerina, for example). A series of phylogenetic analyses confirm the lapillopsid affinities of R. isbelli but provide conflicting results regarding the polyphyly and/or paraphyly of Lydekkerinidae with respect to lapillopsids. The position of Lapillopsidae within Temnospondyli is highly sensitive to taxon sampling of other predominantly Early Triassic temnospondyls. The occurrence of a lapillopsid in Antarctica brings the documented temnospondyl diversity more in line with historically well-sampled portions of southern Pangea, but robust biogeographic comparisons remain hindered by the inability to resolve many historic Antarctic temnospondyl records to the finer taxonomic scales needed for robust biostratigraphy.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks annual distribution of students across grade levels in Gee Compass Academy
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This JavaScript code has been developed to retrieve NDSI_Snow_Cover from MODIS version 6 for SNOTEL sites using the Google Earth Engine platform. To successfully run the code, you should have a Google Earth Engine account. An input file, called NWM_grid_Western_US_polygons_SNOTEL_ID.zip, is required to run the code. This input file includes 1 km grid cells of the NWM containing SNOTEL sites. You need to upload this input file to the Assets tab in the Google Earth Engine code editor. You also need to import the MOD10A1.006 Terra Snow Cover Daily Global 500m collection into the Google Earth Engine code editor. You may do this by searching for the product name in the search bar of the code editor.
The JavaScript works for a specified time range. We found that the best period is a month, which is the maximum allowable time range for doing the computation for all SNOTEL sites on Google Earth Engine. The script consists of two main loops. The first loop retrieves data from the first day of a month up to day 28 through five periods. The second loop retrieves data from day 28 to the beginning of the next month. The results will be shown as graphs on the right-hand side of the Google Earth Engine code editor under the Console tab. To save results as CSV files, open each time series by clicking on the button located at each graph's top right corner. From the new web page, you can click on the Download CSV button on top.
Here is the link to the script path: https://code.earthengine.google.com/?scriptPath=users%2Figarousi%2Fppr2-modis%3AMODIS-monthly
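As a hedged sketch of the retrieval step described above (not the authors' full two-loop script), the snippet below charts one month of NDSI_Snow_Cover from MOD10A1.006 over an uploaded grid-cell asset; the asset path is a placeholder for the user's own upload of NWM_grid_Western_US_polygons_SNOTEL_ID.

// Illustrative sketch only: mean daily NDSI_Snow_Cover over SNOTEL grid cells for one month.
// 'users/your_username/NWM_grid_SNOTEL' is a placeholder for the uploaded asset.
var snotelCells = ee.FeatureCollection('users/your_username/NWM_grid_SNOTEL');

var snow = ee.ImageCollection('MODIS/006/MOD10A1')
  .select('NDSI_Snow_Cover')
  .filterDate('2020-01-01', '2020-02-01');          // one month, as recommended above

// One series per grid cell; in the Console, each chart can be opened and saved as CSV.
var chart = ui.Chart.image.seriesByRegion(
  snow, snotelCells, ee.Reducer.mean(),
  'NDSI_Snow_Cover', 500, 'system:time_start', 'system:index');
print(chart);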
Then, run the Jupyter Notebook (merge_downloaded_csv_files.ipynb) to merge the downloaded CSV files (stored, for example, in a folder called output/from_GEE) into a single CSV file, merged.csv. The Jupyter Notebook then applies some preprocessing steps, and the final output is NDSI_FSCA_MODIS_C6.csv.
This dataset provides information about the number of properties, residents, and average property values for Gee Avenue cross streets in Gloucester, MA.
This dataset contains the BA mortality data derived from GEE data for the Beachie Creek, Lionshead, and Holiday Farm fires. Data is currently split into seven classes (0% BA mortality, 0-10%, 10-25%, 25-50%, 50-75%, 75-90%, 90-100%).