Meet Earth Engine

Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities, and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

Global-scale insight: Explore the interactive Timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.

Ready-to-use datasets: The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.

Simple, yet powerful API: The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google's cloud for your own geospatial analysis.

"Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet, Google Earth Engine is a great blessing!" - Dr. Andrew Steer, President and CEO of the World Resources Institute

Convenient tools: Use the web-based Code Editor for fast, interactive algorithm development with instant access to petabytes of data.

Scientific and humanitarian impact: Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geospatial raster data and vector data created in the frame of the study "Mapping Arctic Lake Ice Backscatter Anomalies using Sentinel-1 Time Series on Google Earth Engine" submitted to the journal "Remote Sensing" and Python code to reproduce the results.
In addition to the full repository (Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies.zip), two reduced alternatives are available due to the large file size of the full repository:
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_without_IW_result_data.zip contains the same data and Python scripts as the full repository, but results based on IW data and tiled EW delta sigma0 images directly exported from Google Earth Engine have been removed. The merged data (from tiled EW delta sigma0 images) and all other results deduced thereof are included.
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_scripts_and_reference_data_only.zip contains only the Python scripts and reference data. The directory structure was retained for better reproducibility.
Please see the associated README-files for details.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
GEE-TED: A tsetse ecological distribution model for Google Earth Engine

Please refer to the associated publication: Fox, L., Peter, B.G., Frake, A.N. and Messina, J.P., 2023. A Bayesian maximum entropy model for predicting tsetse ecological distributions. International Journal of Health Geographics, 22(1), p.31. https://link.springer.com/article/10.1186/s12942-023-00349-0

Description: GEE-TED is a Google Earth Engine (GEE; Gorelick et al. 2017) adaptation of the tsetse ecological distribution (TED) model developed by DeVisser et al. (2010), which was designed for use in ESRI's ArcGIS. TED uses time-series climate and land-use/land-cover (LULC) data to predict the probability of tsetse presence across space based on species habitat preferences (in this case Glossina morsitans). Model parameterization includes (1) day and night temperatures (MODIS Land Surface Temperature; MOD11A2), (2) available moisture/humidity using a vegetation index as a proxy (MODIS NDVI; MOD13Q1), (3) LULC (MODIS Land Cover Type 1; MCD12Q1), (4) year selections, and (5) fly movement rate (meters/16-days). TED has also been used as a basis for the development of an agent-based model by Lin et al. (2015) and in a cost-benefit analysis of tsetse control in Tanzania by Yang et al. (2017).

Parameterization in Fox et al. (2023): Suitable LULC types and climate thresholds used here are specific to Glossina morsitans in Kenya and are based on the parameterization selections in DeVisser et al. (2010) and DeVisser and Messina (2009). Suitable temperatures range from 17-40°C during the day and 10-40°C at night, and available moisture is characterized as NDVI > 0.39. Suitable LULC comprises predominantly woody vegetation; a complete list of suitable categories is available in DeVisser and Messina (2009). In the Fox et al. (2023) publication, two versions of MCD12Q1 were used to assess suitable LULC types: Versions 051 and 006.
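The habitat-suitability rule described above can be sketched as a plain JavaScript function. The function name and argument layout are illustrative only, not part of the GEE-TED code, which applies these thresholds as per-pixel raster operations:

```javascript
// Sketch of the per-pixel suitability test: day temperature 17-40 C,
// night temperature 10-40 C, NDVI > 0.39, and a suitable LULC class
// (thresholds from DeVisser et al. 2010 / Fox et al. 2023).
function isSuitable(dayTempC, nightTempC, ndvi, lulcSuitable) {
  var tempOk = dayTempC >= 17 && dayTempC <= 40 &&
               nightTempC >= 10 && nightTempC <= 40;
  var moistureOk = ndvi > 0.39;
  return tempOk && moistureOk && lulcSuitable;
}
```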
The GeoTIFF supplied in this dataset entry (GEE-TED_Kenya_2016-2017.tif) uses the aforementioned parameters to show the probable tsetse distribution across Kenya for the years 2016-2017. A static graphic of this GEE-TED output is shown below, and an interactive version can be viewed at: https://cartoscience.users.earthengine.app/view/gee-ted. [Figure associated with Fox et al. (2023)]

GEE code: The code supplied below is generalizable across geographies and species; however, it is highly recommended that parameterization be given considerable attention to produce reliable results. Note that on-the-fly output visualization will take some time, and it is recommended that results be exported as an asset within GEE or as a GeoTIFF.

Note: Since completing the Fox et al. (2023) manuscript, GEE has removed Version 051 per NASA's deprecation of the product. The current release of GEE-TED now uses only MCD12Q1 Version 006; however, alternative LULC data selections can be used with minimal modification to the code.
// Input options
var tempMin = 10 // Temperature thresholds in degrees Celsius
var tempMax = 40
var ndviMin = 0.39 // NDVI thresholds; proxy for available moisture/humidity
var ndviMax = 1
var movement = 500 // Fly movement rate in meters/16-days
var startYear = 2008 // The first 2 years will be used for model initialization
var endYear = 2019 // Computed probability is based on startYear+2 to endYear
var country = 'KE' // Country codes - https://en.wikipedia.org/wiki/List_of_FIPS_country_codes
var crs = 'EPSG:32737' // See https://epsg.io/ for appropriate country UTM zone
var rescale = 250 // Output spatial resolution
var labelSuffix = '02052020' // For file export labeling only

// [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] MODIS/006/MCD12Q1
var lulcOptions006 = [1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0] // 1 = suitable, 0 = unsuitable

// No more input required ------------------------------ //

var region = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017")
  .filterMetadata('country_co', 'equals', country)

// Input parameter modifications
var tempMinMod = (tempMin+273.15)/0.02
var tempMaxMod = (tempMax+273.15)/0.02
var ndviMinMod = ndviMin*10000
var ndviMaxMod = ndviMax*10000
var ndviResolution = 250
var movementRate = movement+(ndviResolution/2)

// Loading image collections
var lst = ee.ImageCollection('MODIS/006/MOD11A2').select('LST_Day_1km', 'LST_Night_1km')
  .filter(ee.Filter.calendarRange(startYear,endYear,'year'))
var ndvi = ee.ImageCollection('MODIS/006/MOD13Q1').select('NDVI')
  .filter(ee.Filter.calendarRange(startYear,endYear,'year'))
var lulc006 = ee.ImageCollection('MODIS/006/MCD12Q1').select('LC_Type1')

// LULC mode and boolean reclassification
var lulcMask = lulc006.mode().remap([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17],lulcOptions006)
  .eq(1).rename('remapped').clip(region)

// Merge NDVI and LST image collections
var combined = ndvi.combine(lst, true)
var combinedList = combined.toList(10000)

// Boolean reclassifications (suitable/unsuitable) for day/night temperatures and ndvi
var con =...
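The "input parameter modifications" near the top of the script encode the MODIS scale factors: MOD11A2 stores land surface temperature in Kelvin with a 0.02 scale factor, and MOD13Q1 stores NDVI scaled by 10,000. As standalone helpers (the function names are illustrative):

```javascript
// Convert a Celsius threshold to a MOD11A2 LST digital number:
// DN = Kelvin / 0.02, where Kelvin = Celsius + 273.15.
function lstCelsiusToDN(celsius) {
  return (celsius + 273.15) / 0.02;
}

// Convert an NDVI threshold to a MOD13Q1 digital number (scale factor 10000).
function ndviToDN(ndvi) {
  return ndvi * 10000;
}
```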
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Hakai Google Earth Engine Kelp tool (GEEK tool) was developed through a collaboration between the Hakai Institute, the University of Victoria, and the Department of Fisheries and Oceans to leverage cloud computing capabilities for analyzing Landsat (30 m) satellite imagery and extracting the extent of canopy-forming kelp. The original methodology is described in Nijland et al. 2019*.

Note: This dataset is intended as "read only", as we continue to improve the results. It is meant to demonstrate the utility of the Landsat archive for mapping kelp. These data can be viewed on the GEEK web map available here.

This data package contains two datasets:

- Annual maximum summer extent of canopy-forming kelp (1984-2019) as rasters.
- Decadal maximum extent of canopy-forming kelp (1984-1990, 1991-2000, 2001-2010, 2011-2020)

This dataset was generated following modifications to the original GEEK methodologies. The parameters used to generate the rasters were image scenes with:
- Image scene month range = May 1 - September 30
- Maximum cloud cover in scene = 80%
- Maximum tide = 3.2 m (+0.5 MWL of central coast tides per KIM-1 methods)
- Minimum tide = 0 m
- Shoreline buffer applied to the land mask = 1 pixel (30 m)
- Minimum NDVI* (for an individual pixel to be classified as kelp) = -0.05
- Minimum number of times an individual kelp pixel must be detected as kelp within a single year = 30% of all detections for a given year
- Minimum mean K (mean NDVI for all pixels at a given location detected as kelp) = -0.05

*NDVI = Normalized Difference Vegetation Index.
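The per-pixel kelp rule amounts to requiring both a minimum detection frequency and a minimum mean NDVI at a location. A rough plain-JavaScript sketch (the function and its exact aggregation are illustrative approximations, not the GEEK implementation):

```javascript
// Approximate the kelp-pixel decision for one location over a year:
// keep the pixel if at least `minFraction` of observations exceed
// `ndviMin` (-0.05 above) and the mean NDVI also exceeds `ndviMin`.
function isKelpPixel(ndviSeries, ndviMin, minFraction) {
  var detections = ndviSeries.filter(function (v) { return v >= ndviMin; });
  var mean = ndviSeries.reduce(function (a, b) { return a + b; }, 0) /
             ndviSeries.length;
  return detections.length / ndviSeries.length >= minFraction &&
         mean >= ndviMin;
}
```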
These parameters were chosen based on an accuracy assessment using a kelp extent derived from WorldView-2 (2 m) imagery from July and August 2014, resampled to 30 m. Although many iterations of the tool produced very similar results, the selected parameters were those that maximized kelp accuracy for the 2014 comparison.

The accuracy assessment results were: 50% commission error and 25% omission error.

Put simply, the current methods produce a high level of "false positives", but they accurately capture kelp extent relative to the validation dataset. This error can be attributed to the sensitivity of using a single NDVI threshold to detect kelp; we observe variations in NDVI thresholds both within a single scene and between scenes.

The decadal time-series dataset is intended to account for some of this error, as pixels detected only once per decade are removed.

This dataset is part of the Hakai Habitat Mapping Program, whose primary objective is to generate spatial inventories of coastal habitats, study how these habitats change over time, and identify the drivers of that change.

*Nijland, W., Reshitnyk, L. and Rubidge, E. (2019). Satellite remote sensing of canopy-forming kelp on a complex coastline: A novel procedure using the Landsat image archive. Remote Sensing of Environment, 220, 41-50. doi:10.1016/j.rse.2018.10.032
After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes. Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the …
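The harmonization described above reduces to removing a constant offset from newer scenes. A minimal sketch of the idea (the real HARMONIZED collection operates on whole images in GEE, not scalar values; the function name is illustrative):

```javascript
// Scenes with PROCESSING_BASELINE '04.00' or above have their DN range
// shifted by 1000; shift them back so all scenes share one range.
function harmonizeDN(dn, processingBaseline) {
  return parseFloat(processingBaseline) >= 4.0 ? dn - 1000 : dn;
}
```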
In 2023, Google Maps was the most downloaded map and navigation app in the United States, despite being a standard pre-installed app on Android smartphones. Waze followed, with 9.89 million downloads in the examined period. The app, which offers maps and access to traffic information via user reports, was developed in 2006 by the company of the same name, which Google acquired in 2013.
Usage of navigation apps in the U.S. As of 2021, less than two in 10 U.S. adults were using a voice assistant in their cars, in order to place voice calls or follow voice directions to a destination. Navigation apps generally offer the possibility for users to download maps to access when offline. Native iOS app Apple Maps, which does not offer this possibility, was by far the navigation app with the highest data consumption, while Google-owned Waze used only 0.23 MB per 20 minutes.
Usage of navigation apps worldwide In July 2022, Google Maps was the second most popular Google-owned mobile app, with 13.35 million downloads from global users during the examined month. In China, the Gaode Map app, which is operated along with other navigation services by the Alibaba owned AutoNavi, had approximately 730 million monthly active users as of September 2022.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Forest cover is rapidly changing at the global scale as a result of land-use change (principally deforestation in many tropical regions and afforestation in many temperate regions) and climate change. However, a detailed map of global forest gain is still lacking at fine spatial and temporal resolutions. In this study, we developed a new automatic framework to map annual forest gain across the globe, based on Landsat time series, the LandTrendr algorithm, and the Google Earth Engine (GEE) platform. First, samples of stable forest collected from the Global Forest Change product (GFC) were used to determine annual Normalized Burn Ratio (NBR) thresholds for forest gain detection. Second, with the NBR time series from 1982 to 2020 and the LandTrendr algorithm, we produced a dataset of global forest gain year from 1984 to 2020 based on a set of decision rules. Our results reveal that large areas of forest gain occurred in China, Russia, Brazil, and North America, and that the vast majority of global forest gain has occurred since 2000. The new dataset was consistent in both spatial extent and year of forest gain with data from field inventories and alternative remote sensing products. Our dataset is valuable for policy-relevant research on the net impact of forest cover change on the global carbon cycle and provides an efficient and transferable approach for monitoring other types of land cover dynamics.
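The Normalized Burn Ratio used for the time series follows the standard definition, NBR = (NIR - SWIR) / (NIR + SWIR), computed from surface reflectance. As a one-line sketch:

```javascript
// Normalized Burn Ratio from near-infrared and shortwave-infrared
// reflectance; ranges from -1 to 1, higher over healthy vegetation.
function nbr(nir, swir) {
  return (nir - swir) / (nir + swir);
}
```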
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET)

This is a Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET), developed by the Surface Dynamics and Modeling Lab at the University of Alabama, that calculates flood depth using a flood extent layer and a digital elevation model. This research is made possible by the CyberSeed Program at the University of Alabama. Project name: WaterServ: A Cyberinfrastructure for Analysis, Visualization and Sharing of Hydrological Data. GitHub repository (ArcMap and QGIS implementations): https://github.com/csdms-contrib/fwdet

Cohen, S., A. Raney, D. Munasinghe, J.D. Loftis, A. Molthan, J. Bell, L. Rogers, J. Galantowicz, G.R. Brakenridge, A.J. Kettner, Y. Huang, Y. Tsang (2019). The Floodwater Depth Estimation Tool (FwDET v2.0) for Improved Remote Sensing Analysis of Coastal Flooding. Natural Hazards and Earth System Sciences, 19, 2053-2065. https://doi.org/10.5194/nhess-19-2053-2019

Cohen, S., G.R. Brakenridge, A. Kettner, B. Bates, J. Nelson, R. McDonald, Y. Huang, D. Munasinghe, and J. Zhang (2018). Estimating Floodwater Depths from Flood Inundation Maps and Topography. Journal of the American Water Resources Association, 54(4), 847-858. https://doi.org/10.1111/1752-1688.12609

Sample products and data availability:
https://sdml.ua.edu/models/fwdet/
https://sdml.ua.edu/michigan-flood-may-2020/
https://cartoscience.users.earthengine.app/view/fwdet-gee-mi
https://alabama.app.box.com/s/31p8pdh6ngwqnbcgzlhyk2gkbsd2elq0

GEE implementation output: fwdet_gee_brazos.tif
ArcMap implementation output (see Cohen et al. 2019): fwdet_v2_brazos.tif
iRIC validation layer (see Nelson et al. 2010): iric_brazos_hydraulic_model_validation.tif

Brazos River inundation polygon access in GEE:
var brazos = ee.FeatureCollection('users/cartoscience/FwDET-GEE-Public/Brazos_River_Inundation_2016')

Nelson, J.M., Shimizu, Y., Takebayashi, H. and McDonald, R.R., 2010.
The international river interface cooperative: public domain software for river modeling. In 2nd Joint Federal Interagency Conference, Las Vegas, June (Vol. 27).

Google Earth Engine Code

/* ----------------------------------------------------------------------------------------------------------------------
# FwDET-GEE calculates floodwater depth from a floodwater extent layer and a DEM
Authors: Brad G. Peter, Sagy Cohen, Ronan Lucey, Dinuke Munasinghe, Austin Raney
Emails: bpeter@ua.edu, sagy.cohen@ua.edu, ronan.m.lucey@nasa.gov, dsmunasinghe@crimson.ua.edu, aaraney@crimson.ua.edu
Organizations: BP, SC, DM, AR - University of Alabama; RL - University of Alabama in Huntsville
Last Modified: 10/08/2020

To cite this code use: Peter, Brad; Cohen, Sagy; Lucey, Ronan; Munasinghe, Dinuke; Raney, Austin, 2020, "A Google Earth
Engine implementation of the Floodwater Depth Estimation Tool (FwDET-GEE)", https://doi.org/10.7910/DVN/JQ4BCN,
Harvard Dataverse, V2
-------------------------------------------------------------------------------------------------------------------------
This is a Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDETv2.0) [1] developed by the
Surface Dynamics and Modeling Lab at the University of Alabama that calculates flood depth using a flood extent layer
and a digital elevation model. This research is made possible by the CyberSeed Program at the University of Alabama.
Project name: WaterServ: A Cyberinfrastructure for Analysis, Visualization and Sharing of Hydrological Data.
GitHub Repository (ArcMap and QGIS implementations): https://github.com/csdms-contrib/fwdet
-------------------------------------------------------------------------------------------------------------------------
How to run this code with your flood extent GEE asset: Users of this script will need to update the path to the flood
extent (line 32 or 33) and select from the processing options. Available DEM options (1) are USGS/NED (U.S.) and
USGS/SRTMGL1_003 (global). Other options include (2) running the elevation outlier filtering algorithm, (3) adding
water body data to the inundation extent, (4) adding a water body data layer uploaded by the user rather than using
the JRC global surface water data, (5) masking out regular water body data, (6) masking out 0 m depths, (7) choosing
whether or not to export, (8) exporting additional data layers, and (9) setting an export file name. The simpleVis
option (10) bypasses the time-consuming processes and is meant for visualization only; set this option to false to
complete the entire process and enable exporting.
-------------------------------------------------------------------------------------------------------------------------
••••••••••••••••••••••••••••••••••••••••••• USER OPTIONS ••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
Load flood extent layer | Flood extent layer must be uploaded to GEE first as an asset. If the flood extent is a
shapefile, upload as a FeatureCollection; otherwise, if the flood extent layer is a raster, upload it as an image.
A raster layer may be required if the flood extent is a highly complex geometry
-------------------------------------- */

var flood = ee.FeatureCollection('users/username/folder/flood_extent') // comment out this line if using an Image
// var flood = ee.Image('users/username/folder/flood_extent') // comment out this line if using a FeatureCollection
var waterExtent = ee.FeatureCollection('users/username/folder/water_extent') // OPTIONAL: comment out this line if using an Image
// var waterExtent = ee.Image('users/username/folder/water_extent') // OPTIONAL: comment out this line if using a FeatureCollection

// Processing options - refer to the directions above
/*1*/ var demSource = 'USGS/NED' // 'USGS/NED' or 'USGS/SRTMGL1_003'
/*2*/ var outlierTest = 'TRUE' // 'TRUE' (default) or 'FALSE'
/*3*/ var addWater = 'TRUE' // 'TRUE' (default) or 'FALSE'
/*4*/ var userWater = 'FALSE' // 'TRUE' or 'FALSE' (default)
/*5*/ var maskWater = 'FALSE' // 'TRUE' or 'FALSE' (default)
/*6*/ var maskZero = 'FALSE' // 'TRUE' or 'FALSE' (default)
/*7*/ var exportLayer = 'TRUE' // 'TRUE' (default) or 'FALSE'
/*8*/ var exportAll = 'FALSE' // 'TRUE' or 'FALSE' (default)
/*9*/ var outputName = 'FwDET_GEE' // text string for naming export file
/*10*/ var simpleVis = 'FALSE' // 'TRUE' or 'FALSE' (default)

// ••••••••••••••••••••••••••••••••• NO USER INPUT BEYOND THIS POINT ••••••••••••••••••••••••••••••••••••••••••••••••••••

// Create buffer around flood area to use for clipping other layers
var area = flood.geometry().bounds().buffer(1000).bounds()

// Load DEM and grab projection info
var dem = ee.Image(demSource).select('elevation').clip(area) // [2,3]
var projection = dem.projection()
var resolution = projection.nominalScale().getInfo()

// Load global surface water layer
var jrc = ee.Image('JRC/GSW1_1/GlobalSurfaceWater').select('occurrence').clip(area) // [4]
var water_image = jrc

// User uploaded flood extent layer
// Identify if a raster or vector layer is being used and proceed with appropriate process
if ( flood.name() == 'FeatureCollection' ) {
  var addProperty = function(feature) { return feature.set('val',0); };
  var flood_image = flood.map(addProperty).reduceToImage(['val'],ee.Reducer.first())
    .rename('flood')
} else {
  var flood_image = flood.multiply(0)
}

// Optional user uploaded water extent layer
if ( userWater == 'TRUE' ) {
  // Identify if a raster or vector layer is being used and proceed with appropriate process
  if ( waterExtent.name() == 'FeatureCollection' ) {
    var addProperty = function(feature) { return feature.set('val',0); };
    var water_image = waterExtent.map(addProperty).reduceToImage(['val'],ee.Reducer.first())
      .rename('flood')
  } else {
    var water_image = waterExtent.multiply(0)
  }
}

// Add water bodies to flood extent if 'TRUE' is selected
if ( addWater == 'TRUE' ) {
  var w = water_image.reproject(projection)
  var waterFill = flood_image.mask().where(w.gt(0),1)
  flood_image = waterFill.updateMask(waterFill.eq(1)).multiply(0)
}

// Change processing options if 'TRUE' is selected
if ( simpleVis == 'FALSE' ) {
  flood_image = flood_image.reproject(projection)
} else {
  outlierTest = 'FALSE'
  exportLayer = 'FALSE'
}

// Run the outlier filtering process if 'TRUE' is selected
if ( outlierTest == 'TRUE' ) {
  // Outlier detection and filling on complete DEM using the modified z-score and a median filter [5]
  var kernel = ee.Kernel.fixed(3,3,[[1,1,1],[1,1,1],[1,1,1]])
  var kernel_weighted = ee.Kernel.fixed(3,3,[[1,1,1],[1,0,1],[1,1,1]])
  var median = dem.focal_median({kernel:kernel}).reproject(projection)
  var median_weighted = dem.focal_median({kernel:kernel_weighted}).reproject(projection)
  var diff = dem.subtract(median)
  var mzscore = diff.multiply(0.6745).divide(diff.abs().focal_median({kernel:kernel}).reproject(projection))
  var fillDEM = dem.where(mzscore.gt(3.5),median_weighted)

  // Outlier detection and filling on the flood extent border pixels
  var expand = flood_image.focal_max({kernel: ee.Kernel.square({
    radius: projection.nominalScale(), units: 'meters'
  })}).reproject(projection)
  var demMask = fillDEM.updateMask(flood_image.mask().eq(0))
  var boundary = demMask.add(expand)
  var medianBoundary = boundary.focal_median({kernel:kernel}).reproject(projection)
  var medianWeightedBoundary = boundary.focal_median({kernel:kernel_weighted}).reproject(projection)
  var diffBoundary = boundary.subtract(medianBoundary)
  var mzscoreBoundary = diffBoundary.multiply(0.6745).divide(diffBoundary.abs().focal_median({kernel:kernel}).reproject(projection))
  var fill =
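The outlier filter in the script above is built on the modified z-score, 0.6745 * (x - median) / MAD, flagging values above 3.5. The script computes this with 3x3 focal medians in GEE; the statistic itself can be sketched over a plain 1-D array (illustrative only, not the focal version):

```javascript
// Median of an array (copy, sort numerically, take the middle value).
function median(values) {
  var s = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Modified z-score: 0.6745 * (x - median) / MAD, where MAD is the
// median absolute deviation from the median.
function modifiedZScores(values) {
  var m = median(values);
  var mad = median(values.map(function (v) { return Math.abs(v - m); }));
  return values.map(function (v) { return 0.6745 * (v - m) / mad; });
}

// The script treats |z| > 3.5 as an outlier to be filled.
function isOutlier(z) {
  return Math.abs(z) > 3.5;
}
```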
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Find alternative fueling stations near an address or ZIP code or along a route in the United States. Enter a state to see a station count.

## Data Collection Methods ##

The data in the Alternative Fueling Station Locator are gathered and verified through a variety of methods. The National Renewable Energy Laboratory (NREL) obtains information about new stations from trade media, Clean Cities coordinators, an Add a Station form on the Alternative Fuels Data Center (AFDC) website, and through collaborating with infrastructure equipment and fuel providers. NREL regularly compares its station data with those of other relevant trade organizations and websites. Differences in methodologies and inclusion criteria may result in slight differences between NREL's database and those maintained by other organizations. NREL also collaborates with alternative fuel industry groups to maintain the data. NREL and its data collection subcontractor are currently collaborating with natural gas, electric drive, biodiesel, ethanol, and propane industry groups to establish best practices for identifying new stations in the most timely manner possible and to develop a more rigorous network for the future.

## Station Update Schedule ##

Existing stations in the database are contacted at least once a year on an established schedule to verify they are still operational and dispensing the fuel specified. Based on an established data collection schedule, the database is updated once a month, with the exception of electric vehicle supply equipment (EVSE) data, which are updated twice a month. Stations that are no longer operational or no longer provide alternative fuel are removed from the database on a monthly basis or as they are identified.

## Mapping and Counting Methods ##

Each point on the map is counted as one station in the station count. A station appears as one point on the map, regardless of the number of fuel dispensers or charging outlets at that location. Station addresses are geocoded and mapped using an automatic geocoding application. The geocoding application returns the most accurate location based on the provided address. Station locations may also be provided by external sources (e.g., station operators) and/or verified in a geographic information system (GIS) tool like Google Earth, Google Maps, or Google Street View. This information is considered highly accurate, and these coordinates override any information generated using the geocoding application.

## Notes about Specific Station Types ##

### Private Stations ###

Stations with an Access of "Private - Fleet customers only" may allow other entities to fuel through a business-to-business arrangement. For more information, fleet customers should refer to the information listed in the details section for that station to contact the station directly.

### Biodiesel Stations ###

The Alternative Fueling Station Locator only includes stations offering biodiesel blends of 20% (B20) and above.

### Electric Vehicle Supply Equipment (EVSE) ###

An electric charging station, or EVSE, appears as one point on the map, regardless of the number of charging outlets at that location. The number and type of charging outlets available are displayed as additional details when the station location is selected. Each point on the map is counted as one station in the station count. To see a total count of EVSE for all outlets available, go to the Alternative Fueling Station Counts by State table. Residential EVSE locations are not included in the Alternative Fueling Station Locator.

### Liquefied Petroleum Gas (Propane) Stations ###

Because many propane stations serve customers other than drivers and fleets, NREL collaborated with the industry to effectively represent the differences. Each propane station is designated as a 'primary' or 'secondary' service type. Both types are able to fuel vehicles. However, locations with a 'primary' designation offer vehicle services and fuel priced specifically for use in vehicles. The details page for each station lists its service designation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Snow, ice, and permafrost constitute ‘frozen commons,’ or common-pool resources that are collectively used and managed. This study examines the state and uses of snow and ice commons in two remote communities: Bayanzürkh in Khövsgöl Aimag, Mongolia, and McGrath, Alaska. Regional climate analyses indicate air temperatures warmed more than 2.2°C in both locales since the mid-twentieth century, compounding their similar accessibility challenges and dependence on natural resources. Warming affects transit timing and duration over snow and ice, impacting Mongolian herding and Alaskan subsistence hunting. Snow cover duration in both communities was calculated by classifying MODIS Snow Cover imagery utilizing the Google Earth Engine JavaScript API. Annual lake and river ice breakup timing and safe travel days were quantified from winter MODIS imagery from 2002 to 2023 for Lake Khövsgöl in Mongolia and the Kuskokwim River near McGrath, Alaska. Snow and ice duration did not significantly change over the 21 years examined. Relatively high map accuracies allowed discussion of interannual variability impacts on subsistence, transportation, and tourism. Daily snow and ice mapping in Google Earth Engine is a cost-effective and rapid method for quantifying environmental change impacting frozen commons, and therefore a tool for community decision-making and communication.
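Once daily imagery is classified as snow/no-snow, the snow-cover-duration calculation reduces to a threshold-and-count over the year. A minimal sketch, assuming a binary classification from an NDSI threshold (the 0.4 value is a common MODIS snow-mapping convention, not stated in the abstract):

```javascript
// Count snow-covered days in a daily NDSI series: a day counts as
// snow-covered when its NDSI meets or exceeds the threshold.
function snowCoverDuration(dailyNdsi, threshold) {
  return dailyNdsi.filter(function (v) { return v >= threshold; }).length;
}
```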
Used within the Travellers Road Information Portal Interactive Map to convey transportation-related information in both official languages. Contains information on any event known in advance, including maintenance, construction, and special events. This data is best viewed using Google Earth or similar Keyhole Markup Language (KML) compatible software. For instructions on how to use Google Earth, read the Google Earth tutorial.
The MOD13Q1 V6.1 product provides a Vegetation Index (VI) value on a per-pixel basis. There are two primary vegetation layers. The first is the Normalized Difference Vegetation Index (NDVI), which is referred to as the continuity index to the existing National Oceanic and Atmospheric Administration-Advanced Very High Resolution Radiometer (NOAA-AVHRR) derived NDVI. The second vegetation layer is the Enhanced Vegetation Index (EVI), which minimizes canopy background variations and maintains sensitivity over dense vegetation conditions. The EVI also uses the blue band to remove residual atmospheric contamination caused by smoke and sub-pixel thin clouds. The MODIS NDVI and EVI products are computed from atmospherically corrected bi-directional surface reflectances that have been masked for water, clouds, heavy aerosols, and cloud shadows.

Documentation: User's Guide, Algorithm Theoretical Basis Document (ATBD), General Documentation.
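The two indices follow their standard definitions: NDVI = (NIR - Red) / (NIR + Red), and EVI = G * (NIR - Red) / (NIR + C1*Red - C2*Blue + L) with the MODIS coefficients G = 2.5, C1 = 6, C2 = 7.5, L = 1. As plain functions over surface reflectance:

```javascript
// Normalized Difference Vegetation Index.
function ndvi(nir, red) {
  return (nir - red) / (nir + red);
}

// Enhanced Vegetation Index with the standard MODIS coefficients;
// the blue band corrects residual atmospheric contamination.
function evi(nir, red, blue) {
  return 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1);
}
```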
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains Sentinel 2 and Landsat 8 cloud free composite satellite images of the Coral Sea reef areas and some parts of the Great Barrier Reef. It also contains raw depth contours derived from the satellite imagery. This dataset was developed as the base information for mapping the boundaries of reefs and coral cays in the Coral Sea. It is likely that the satellite imagery is useful for numerous other applications. The full source code is available and can be used to apply these techniques to other locations.
This dataset contains two sets of raw satellite derived bathymetry polygons for 5 m, 10 m and 20 m depths based on both the Landsat 8 and Sentinel 2 imagery. These are intended to be post-processed using clipping and manual clean up to provide an estimate of the top structure of reefs. This dataset also contains select scenes on the Great Barrier Reef and Shark Bay in Western Australia that were used to calibrate the depth contours. Areas in the GBR were compared with the GA GBR30 2020 (Beaman, 2017) bathymetry dataset, and the imagery in Shark Bay was used to tune and verify the Satellite Derived Bathymetry algorithm's handling of dark substrates such as seagrass meadows. This dataset also contains a couple of small Sentinel 3 images that were used to check the presence of reefs in the Coral Sea outside the bounds of the Sentinel 2 and Landsat 8 imagery.
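A common way to derive such depth estimates from optical imagery is the band log-ratio method of Stumpf et al. (2003). This description does not state the exact formulation used here, so the sketch below is illustrative only, with placeholder calibration constants m1 and m0:

```python
import math

def stumpf_depth(blue, green, m1=50.0, m0=20.0, n=1000.0):
    """Log-ratio satellite derived bathymetry (Stumpf et al., 2003).
    m1 and m0 must be calibrated against known depths (e.g. reference
    areas like GBR30 or Shark Bay); the values here are placeholders."""
    return m1 * math.log(n * blue) / math.log(n * green) - m0

# Deeper water attenuates green faster than blue, raising the log ratio
print(round(stumpf_depth(0.10, 0.05), 1))  # 38.9 (with these placeholder constants)
```

The ratio is insensitive to bottom albedo to first order, which is why dark substrates such as seagrass are the main failure mode that needs manual tuning.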
The Sentinel 2 and Landsat 8 imagery was prepared using Google Earth Engine, followed by post-processing in Python and GDAL. The processing code is available on GitHub (https://github.com/eatlas/CS_AIMS_Coral-Sea-Features_Img).
This collection contains composite imagery for Sentinel 2 tiles (59 in Coral Sea, 8 in GBR) and Landsat 8 tiles (12 in Coral Sea, 4 in GBR and 1 in WA). For each Sentinel tile there are 3 different colour and contrast enhancement styles intended to highlight different features. These include:
- TrueColour
- Bands: B2 (blue), B3 (green), B4 (red): True colour imagery. This is useful for identifying shallow features and for mapping the vegetation on cays.
- DeepFalse
- Bands: B1 (ultraviolet), B2 (blue), B3 (green): False colour image that shows deep marine features to 50 - 60 m depth. This imagery exploits the clear waters of the Coral Sea to allow the ultraviolet band to provide a much deeper view of coral reefs than is typically achievable with true colour imagery. A high level of contrast enhancement is applied, so the imagery appears noisier (in particular showing artefacts from clouds) than the TrueColour styling.
- Shallow
- Bands: B5 (red edge), B8 (near infrared), B11 (shortwave infrared): This false colour imagery focuses on identifying very shallow and dry regions in the imagery. It exploits the property that the longer wavelength bands penetrate water progressively less. B5 penetrates the water approximately 3 - 5 m, B8 approximately 0.5 m and B11 < 0.1 m. Features less than a couple of metres deep appear dark blue, and dry areas appear white. This imagery is intended to help identify coral cay boundaries.
For Landsat 8 imagery only the TrueColour and DeepFalse stylings were rendered.
All Sentinel 2 and Landsat 8 imagery has Satellite Derived Bathymetry (SDB) depth contours.
- Depth5m
- This corresponds to an estimate of the area above 5 m depth (Mean Sea Level).
- Depth10m
- This corresponds to an estimate of the area above 10 m depth (Mean Sea Level).
- Depth20m
- This corresponds to an estimate of the area above 20 m depth (Mean Sea Level).
For most Sentinel and some Landsat tiles there are two versions of the DeepFalse imagery based on different collections (dates). The R1 imagery is a composite made from the best available imagery, while the R2 imagery uses the next best set of imagery. Splitting the imagery in this way allows two composites to be created from the pool of available imagery, so that any mapped features can be checked against two images. Typically the R2 imagery has more artefacts from clouds. In one Sentinel 2 tile a third image was created to help with mapping the reef platform boundary.
The satellite imagery was processed in tiles (approximately 100 x 100 km for Sentinel 2 and 200 x 200 km for Landsat 8) to keep each final image small enough to manage. These tiles were not merged into a single mosaic, as keeping them separate allowed better individual image contrast enhancement when mapping deep features. The dataset only covers the portion of the Coral Sea where there are shallow coral reefs and where there might have been potential new reef platforms indicated by existing bathymetry datasets and the AHO Marine Charts. The extent of the imagery was limited to that available through Google Earth Engine.
# Methods:
The Sentinel 2 imagery was created using Google Earth Engine. The core algorithm was:
1. For each Sentinel 2 tile, images from 2015 – 2021 were reviewed manually after first filtering to remove cloudy scenes. The allowable cloud cover was adjusted so that at least the 50 least cloudy images were reviewed. The typical cloud cover threshold was 1%. Where very few images were available the cloud cover filter threshold was raised to 100% and all images were reviewed. The Google Earth Engine image IDs of the best images were recorded, along with notes to help sort the images based on those with the clearest water, lowest waves, lowest cloud, and lowest sun glint. Images with no or few clouds over the known coral reefs were preferred. No consideration of tides was used in the image selection process. The collection of usable images was grouped into two sets to be combined into composite images. The best images were added to the R1 composite, and the next best images to the R2 composite. Consideration was given to whether each image would improve the resultant composite or make it worse. Adding clear images to the collection reduces the visual noise in the composite, allowing deeper features to be observed. Adding images with clouds introduces small artefacts, which are magnified by the high contrast stretching applied to the imagery. Where there were few images, all available imagery was typically used.
2. Sunglint was removed from the imagery using estimates of the glint derived from two of the infrared bands (described in detail in the section on Sun glint removal and atmospheric correction).
3. A composite image was created from the best images by taking the statistical median of the stack of images selected in the previous stage, after masking out clouds and their shadows (described in detail later).
4. The brightness of the composite image was normalised so that all tiles would have a similar average brightness for deep water areas. This correction was applied to allow more consistent contrast enhancement. Note: this brightness adjustment was applied as a single offset across all pixels in the tile and so this does not correct for finer spatial brightness variations.
5. The contrast of the images was enhanced to create a series of products for different uses. The TrueColour image retained the full range of tones visible, so that bright sand cays still retain detail. The DeepFalse style was optimised to see features at depth, and the Shallow style provides access to far red and infrared bands for assessing shallow features, such as cays and islands.
6. The various contrast enhanced composite images were exported from Google Earth Engine and optimised using Python and GDAL. This optimisation added internal tiling and overviews to the imagery. The depth polygons from each tile were merged into shapefiles covering the whole dataset extent, one for each depth.
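The masked median compositing at the heart of steps 2 - 3 can be sketched with plain numpy (an illustration only; the actual processing ran in Earth Engine, and the function names here are mine):

```python
import numpy as np

def median_composite(images, cloud_masks):
    """Per-pixel median of an image stack, ignoring pixels flagged as
    cloud or shadow (True = masked), as in step 3 above."""
    stack = np.stack(images).astype(float)
    stack[np.stack(cloud_masks)] = np.nan   # drop masked pixels
    return np.nanmedian(stack, axis=0)      # median of what remains

imgs  = [np.array([[10.0]]), np.array([[20.0]]), np.array([[90.0]])]
masks = [np.array([[False]]), np.array([[False]]), np.array([[True]])]  # 90 is cloud
print(median_composite(imgs, masks))  # [[15.]]
```

Masking before taking the median (rather than after) is what lets a handful of clear images suppress cloud artefacts: the median simply never sees the masked outliers.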
## Cloud Masking
Prior to combining the best images, each image was processed to mask out clouds and their shadows.
The cloud masking uses the COPERNICUS/S2_CLOUD_PROBABILITY dataset developed by SentinelHub (Google, n.d.; Zupanc, 2017). The mask includes the cloud areas, plus a mask to remove cloud shadows. The cloud shadows were estimated by projecting the cloud mask in the direction opposite the angle to the sun. The shadow distance was estimated in two parts.
A low cloud mask was created based on the assumption that small clouds have a small shadow distance. These clouds were detected using a 40% cloud probability threshold, projected over 400 m, and then expanded with a 150 m buffer to form the final mask.
A high cloud mask was created to cover longer shadows created by taller, larger clouds. These clouds were detected based on an 80% cloud probability threshold, followed by an erosion and dilation of 300 m to remove small clouds. These were then projected over a 1.5 km distance followed by a 300 m buffer.
The buffering was applied as the cloud masking would often miss significant portions of the edges of clouds and their shadows. The buffering allowed a higher percentage of the cloud to be excluded, whilst retaining as much of the original imagery as possible.
The parameters for the cloud masking (probability threshold, projection distance and buffer radius) were determined through trial and error on a small number of scenes. The algorithm used is significantly better than the default Sentinel 2 cloud masking and slightly better than the COPERNICUS/S2_CLOUD_PROBABILITY cloud mask because it masks out shadows; however, there are potentially significant improvements that could be made to the method in the future.
Erosion, dilation and buffer operations were performed at a lower image resolution than the native satellite image resolution to improve the computational speed. The resolution of these operations was adjusted so that they were performed with approximately a 4 pixel resolution. This made the cloud mask
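The projection-and-buffer scheme can be illustrated with plain numpy (a sketch only: the dataset's processing used Earth Engine operations at reduced resolution, the helper names here are invented, and distances are in pixels rather than metres):

```python
import numpy as np

def project_shadow(cloud, dy, dx, steps):
    """OR together copies of the cloud mask shifted step by step away
    from the sun (here: by (dy, dx) pixels per step, assumed >= 0)."""
    out = cloud.copy()
    rows, cols = cloud.shape
    for k in range(1, steps + 1):
        shifted = np.zeros_like(cloud)
        shifted[k * dy:, k * dx:] = cloud[:rows - k * dy, :cols - k * dx]
        out |= shifted
    return out

def buffer_mask(mask, r):
    """Dilate the mask by r pixels (square structuring element), covering
    the cloud edges that a probability threshold tends to miss."""
    out = mask.copy()
    rows, cols = mask.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out[max(0, dy):rows + min(0, dy), max(0, dx):cols + min(0, dx)] |= \
                mask[max(0, -dy):rows - max(0, dy), max(0, -dx):cols - max(0, dx)]
    return out

cloud = np.zeros((5, 5), dtype=bool)
cloud[1, 1] = True                       # a single cloudy pixel
shadow = project_shadow(cloud, dy=1, dx=0, steps=2)
print(int(shadow.sum()))                 # 3: the cloud plus two projected pixels
print(int(buffer_mask(cloud, 1).sum()))  # 9: a 3 x 3 neighbourhood
```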
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/3.1/customlicense?persistentId=doi:10.7910/DVN/UFC6B5
Web-based GIS for spatiotemporal crop climate niche mapping. Interactive Google Earth Engine Application, Version 2, July 2020. https://cropniche.cartoscience.com https://cartoscience.users.earthengine.app/view/crop-niche

Google Earth Engine Code:

```javascript
/* ----------------------------------------------------------------------------
# CropSuit-GEE
Authors: Brad G. Peter (bpeter@ua.edu), Joseph P. Messina, and Zihan Lin
Organizations: BGP, JPM - University of Alabama; ZL - Michigan State University
Last Modified: 06/28/2020
To cite this code use: Peter, B. G.; Messina, J. P.; Lin, Z., 2019, "Web-based
GIS for spatiotemporal crop climate niche mapping",
https://doi.org/10.7910/DVN/UFC6B5, Harvard Dataverse, V1
-------------------------------------------------------------------------------
This is a Google Earth Engine crop climate suitability geocommunication and map
export tool designed to support agronomic development and deployment of
improved crop system technologies. This content is made possible by the support
of the American People provided to the Feed the Future Innovation Lab for
Sustainable Intensification through the United States Agency for International
Development (USAID). The contents are the sole responsibility of the authors
and do not necessarily reflect the views of USAID or the United States
Government. Program activities are funded by USAID under Cooperative Agreement
No. AID-OAA-L-14-00006.
-------------------------------------------------------------------------------
Summarization of input options: There are 14 user options available. The first
is a country of interest selection using a 2-digit FIPS code (link available
below). This selection is used to produce a rectangular bounding box for
export; however, other geometries can be selected with minimal modification to
the code. Options 2 and 3 specify the complete temporal range for aggregation
(averaged across seasons; single seasons may also be selected). Options 4-7
specify the growing season for calculating total seasonal rainfall and average
season temperatures and NDVI (NDVI is for export only and is not used in
suitability determination). Options 8-11 specify the climate parameters for
the crop of interest (rainfall and temperature max/min). Option 12 enables
masking to agriculture, 13 enables exporting of all data layers, and 14 is a
text string for naming export files.
----------------------------------- USER OPTIONS ---------------------------- */

// CHIRPS data availability: https://developers.google.com/earth-engine/datasets/catalog/UCSB-CHG_CHIRPS_PENTAD
// MOD11A2 data availability: https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD11A2
var country = 'MI'        // [1] https://en.wikipedia.org/wiki/List_of_FIPS_country_codes
var startRange = 2001     // [2]
var endRange = 2017       // [3]
var startSeasonMonth = 11 // [4]
var startSeasonDay = 1    // [5]
var endSeasonMonth = 4    // [6]
var endSeasonDay = 30     // [7]
var precipMin = 750       // [8]
var precipMax = 1200      // [9]
var tempMin = 22          // [10]
var tempMax = 32          // [11]
var maskToAg = 'TRUE'     // [12] 'TRUE' (default) or 'FALSE'
var exportLayers = 'TRUE' // [13] 'TRUE' (default) or 'FALSE'
var exportNameHeader = 'crop_suit_maize' // [14] text string for naming export file

// ------------------------- NO USER INPUT BEYOND THIS POINT -------------------

// Access precipitation and temperature ImageCollections and a global countries FeatureCollection
var region = ee.FeatureCollection('USDOS/LSIB_SIMPLE/2017')
    .filterMetadata('country_co','equals',country)
var precip = ee.ImageCollection('UCSB-CHG/CHIRPS/PENTAD').select('precipitation')
var temp = ee.ImageCollection('MODIS/006/MOD11A2').select(['LST_Day_1km','LST_Night_1km'])
var ndvi = ee.ImageCollection('MODIS/006/MOD13Q1').select(['NDVI'])

// Create layers for masking to agriculture and masking out water bodies
var waterMask = ee.Image('UMD/hansen/global_forest_change_2015').select('datamask').eq(1)
var agModis = ee.ImageCollection('MODIS/006/MCD12Q1').select('LC_Type1').mode()
    .remap([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17],
           [0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0])
var agGC = ee.Image('ESA/GLOBCOVER_L4_200901_200912_V2_3').select('landcover')
    .remap([11,14,20,30,40,50,60,70,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230],
           [1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
var cropland = ee.Image('USGS/GFSAD1000_V1').neq(0)
var agMask = agModis.add(agGC).add(cropland).gt(0).eq(1)

// Modify user input options for processing with raw data
var years = ee.List.sequence(startRange,endRange)
var bounds = region.geometry().bounds()
var tMinMod = (tempMin+273.15)/0.02
var tMaxMod = (tempMax+273.15)/0.02
// ...
```
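One detail worth flagging in the CropSuit-GEE code: MOD11A2 stores land surface temperature in Kelvin with a 0.02 scale factor, which is why the tMinMod / tMaxMod lines convert the Celsius thresholds into raw digital numbers. A quick check in Python:

```python
def lst_celsius_to_dn(t_celsius, scale=0.02):
    """Convert a Celsius threshold into raw MOD11A2 LST digital numbers
    (Kelvin divided by the 0.02 scale factor), mirroring tMinMod/tMaxMod."""
    return (t_celsius + 273.15) / scale

print(round(lst_celsius_to_dn(22), 1))  # 14757.5
print(round(lst_celsius_to_dn(32), 1))  # 15257.5
```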
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
It comprises end-user discussions on six similar topics related to the Google Maps application. This small dataset of user discussions about the Google Maps application is intended for validating argumentation-based research approaches. A Python script extracts end-user feedback from the Reddit forum while preserving the argumentative order of discussions (comment-reply).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Grasslands in British Columbia (BC) play a pivotal role in biodiversity, supporting over 30% of the region's endangered species. However, rapid urbanization and forest encroachment threaten these habitats. This study addresses the urgent need for an accurate, automated method for delineating and monitoring BC's grasslands by employing Geographic Object-Based Image Analysis (GEOBIA) within the Google Earth Engine platform, utilizing high-resolution Sentinel-2 satellite imagery. The approach innovates by integrating superpixel segmentation based on Simple Non-Iterative Clustering (SNIC) with Random Forest classification, aimed at overcoming the mixed pixel effect prevalent in pixel-based methods. The methodology demonstrates a significant improvement in the accuracy of grassland delineation, achieving an overall classification accuracy of 96%. Specifically, the accuracy for grassland identification increased by 26.6% compared to the previous study, underscoring the effectiveness of GEOBIA for environmental monitoring. This advancement offers a promising tool for the conservation and management of grassland ecosystems in BC, suggesting a scalable model for similar ecological studies worldwide. The findings advocate for the adoption of GEOBIA in remote sensing practices, potentially transforming how grasslands are monitored and conserved, thereby contributing to the preservation of biodiversity.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Detecting Landscape Objects on Satellite Images with Artificial Intelligence In recent years, there has been a significant increase in the use of artificial intelligence (AI) for image recognition and object detection. This technology has proven to be useful in a wide range of applications, from self-driving cars to facial recognition systems. In this project, the focus lies on using AI to detect landscape objects in satellite images (aerial photography angle), with the goal of creating an annotated map of The Netherlands with the coordinates of the given landscape objects.
Background Information
Problem Statement One of the things that Naturalis does is conduct research into the distribution of wild bees (Naturalis, n.d.). For their research they use a model that predicts whether or not a certain species can occur at a given location. There is at the moment no way to generate an inventory of landscape features, such as the presence of trees, ponds and hedges, with their precise locations on a digital map. The current models rely on species observation data and climate variables, but it is expected that adding detailed physical landscape information could increase the prediction accuracy. Common maps do not contain this level of detail, but high-resolution satellite images do.
Possible opportunities Following from the problem statement, Naturalis does not currently have a map with the level of detail needed to detect small landscape elements. The idea emerged that it should be possible to use satellite images to find the locations of small landscape elements and produce an annotated map. By refining the accuracy of the current prediction model, researchers can gain a deeper understanding of wild bees in the Netherlands, with the goal of taking effective measures to protect wild bees and their living environment.
Goal of project The goal of the project is to develop an artificial intelligence model for landscape detection on satellite images, to create an annotated map of The Netherlands and thereby increase the prediction accuracy of the current model used at Naturalis. The project addresses the lack of detailed landscape maps, which could transform the way Naturalis conducts its research on wild bees. The ultimate long-term aim of the project is to use this comprehensive knowledge to protect both the wild bee population and their natural habitats in the Netherlands.
Data Collection Google Earth One of the main challenges of this project was the difficulty in obtaining a suitable dataset (with or without data annotation). Obtaining high-quality satellite images for the project presents challenges in terms of cost and time. The cost of obtaining high-quality satellite images of the Netherlands is $1,038,575 in total (see the further details on the costs of satellite images). On top of that, the acquisition process for such images involves numerous protocols and processes, from the initial request to the actual delivery of the images.
After conducting further research, the best available solution was to use Google Earth as the primary source of data. While Google Earth may not be used for commercial or promotional purposes, this project is for research purposes only, supporting Naturalis' research on wild bees, hence the restriction does not apply in this case.
Used within the Travellers Road Information Portal Interactive Map to convey transportation-related information in both official languages. Contains information about major construction projects, including restrictions and delays. This data is best viewed using Google Earth or similar Keyhole Markup Language (KML) compatible software. For instructions on how to use Google Earth, read the Google Earth tutorial.
MSZSI: Multi-Scale Zonal Statistics [AgriClimate] Inventory

MSZSI is a data extraction tool for Google Earth Engine that aggregates time-series remote sensing information to multiple administrative levels using the FAO GAUL data layers. The code at the bottom of this page (metadata) can be pasted into the Google Earth Engine JavaScript code editor and run at https://code.earthengine.google.com/.

Input options:
[1] Country of interest
[2] Start and end year
[3] Start and end month
[4] Option to mask data to a specific land-use/land-cover type
[5] Land-use/land-cover type code from CGLS LULC
[6] Image collection for data aggregation
[7] Desired band from the image collection
[8] Statistics type for the zonal aggregations
[9] Statistic to use for annual aggregation
[10] Scaling options
[11] Export folder and label suffix

Output: Two CSVs containing zonal statistics for each of the FAO GAUL administrative level boundaries. Output fields: system:index, 0-ADM0_CODE, 0-ADM0_NAME, 0-ADM1_CODE, 0-ADM1_NAME, 0-ADMN_CODE, 0-ADMN_NAME, 1-AREA_PERCENT_LULC, 1-AREA_SQM_LULC, 1-AREA_SQM_ZONE, 2-X_2001, 2-X_2002, 2-X_2003, ..., 2-X_2020, .geo

PREPROCESSED DATA DOWNLOAD
The datasets available for download contain zonal statistics at 2 administrative levels (FAO GAUL levels 1 and 2). Select countries from Southeast Asia and Sub-Saharan Africa (Cambodia, Indonesia, Lao PDR, Myanmar, Philippines, Thailand, Vietnam, Burundi, Kenya, Malawi, Mozambique, Rwanda, Tanzania, Uganda, Zambia, Zimbabwe) are included in the current version, with plans to extend the dataset to contain global metrics. Each zip file is described below and two example NDVI tables are available for preview.
Key: [source, data, units, temporal range, aggregation, masking, zonal statistic, notes]
Currently available:
MSZSI-V2_V-NDVI-MEAN.tar: [NASA-MODIS, NDVI, index, 2001–2020, annual mean, agriculture, mean, n/a]
MSZSI-V2_T-LST-DAY-MEAN.tar: [NASA-MODIS, LST Day, °C, 2001–2020, annual mean, agriculture, mean, n/a]
MSZSI-V2_T-LST-NIGHT-MEAN.tar: [NASA-MODIS, LST Night, °C, 2001–2020, annual mean, agriculture, mean, n/a]
MSZSI-V2_R-PRECIP-SUM.tar: [UCSB-CHG-CHIRPS, Precipitation, mm, 2001–2020, annual sum, agriculture, mean, n/a]
MSZSI-V2_S-BDENS-MEAN.tar: [OpenLandMap, Bulk density, g/cm3, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-ORGC-MEAN.tar: [OpenLandMap, Organic carbon, g/kg, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-PH-MEAN.tar: [OpenLandMap, pH in H2O, pH, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-WATER-MEAN.tar: [OpenLandMap, Soil water, % at 33kPa, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-SAND-MEAN.tar: [OpenLandMap, Sand, %, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-SILT-MEAN.tar: [OpenLandMap, Silt, %, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_S-CLAY-MEAN.tar: [OpenLandMap, Clay, %, static, n/a, agriculture, mean, at depths 0-10-30-60-100-200]
MSZSI-V2_E-ELEV-MEAN.tar: [MERIT, [elevation, slope, flowacc, HAND], [m, degrees, km2, m], static, n/a, agriculture, mean, n/a]
Coming soon:
MSZSI-V2_C-STAX-MEAN.tar: [OpenLandMap, Soil taxonomy, category, static, n/a, agriculture, area sum, n/a]
MSZSI-V2_C-LULC-MEAN.tar: [CGLS-LC100-V3, LULC, category, 2015–2019, mode, none, area sum, n/a]
Data sources:
https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD13Q1
https://developers.google.com/earth-engine/datasets/catalog/MODIS_006_MOD11A2
https://developers.google.com/earth-engine/datasets/catalog/UCSB-CHG_CHIRPS_PENTAD
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_BULKDENS-FINEEARTH_USDA-4A1H_M_v02
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_ORGANIC-CARBON_USDA-6A1C_M_v02
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_PH-H2O_USDA-4C1A2A_M_v02
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_WATERCONTENT-33KPA_USDA-4B1C_M_v01
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_CLAY-WFRACTION_USDA-3A1A1A_M_v02
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_SAND-WFRACTION_USDA-3A1A1A_M_v02
https://developers.google.com/earth-engine/datasets/catalog/OpenLandMap_SOL_SOL_GRTGROUP_USDA-SOILTAX_C_v01
https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_Landcover_100m_Proba-V-C3_Global
https://developers.google.com/earth-engine/datasets/catalog/MERIT_Hydro_v1_0_1
https://developers.google.com/earth-engine/datasets/catalog/FAO_GAUL_2015_level0
https://developers.google.com/earth-engine/datasets/catalo...
Visit https://dataone.org/datasets/sha256%3A1844d916f64551cf0a8e0fe8d71474912d22e43d77c43c848aa8fac7e7e02f29 for complete metadata about this dataset.
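The zonal aggregation itself is conceptually simple: average every raster cell that falls inside each administrative unit. A toy numpy version (illustrative only; MSZSI performs this with Earth Engine reducers over the FAO GAUL polygons):

```python
import numpy as np

def zonal_mean(values, zones):
    """Mean of the raster `values` within each zone id in `zones`
    (a rasterised admin-boundary layer of the same shape)."""
    return {int(z): float(values[zones == z].mean()) for z in np.unique(zones)}

values = np.array([[1.0, 2.0],
                   [3.0, 5.0]])   # e.g. annual mean NDVI
zones  = np.array([[1, 1],
                   [2, 2]])       # e.g. FAO GAUL level-2 unit ids
print(zonal_mean(values, zones))  # {1: 1.5, 2: 4.0}
```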
ERA5-Land is a reanalysis dataset providing a consistent view of the evolution of land variables over several decades at an enhanced resolution compared to ERA5. ERA5-Land has been produced by replaying the land component of the ECMWF ERA5 climate reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics. Reanalysis produces data that goes several decades back in time, providing an accurate description of the climate of the past. This dataset includes all 50 variables as available on CDS. ERA5-Land data is available from 1950 to three months from real-time.

Please consult the ERA5-Land "Known Issues" section. In particular, note that three components of the total evapotranspiration have their values swapped as follows: variable "Evaporation from bare soil" (mars parameter code 228101 (evabs)) has the values corresponding to "Evaporation from vegetation transpiration" (mars parameter code 228103 (evavt)); variable "Evaporation from open water surfaces excluding oceans" (mars parameter code 228102 (evaow)) has the values corresponding to "Evaporation from bare soil" (mars parameter code 228101 (evabs)); and variable "Evaporation from vegetation transpiration" (mars parameter code 228103 (evavt)) has the values corresponding to "Evaporation from open water surfaces excluding oceans" (mars parameter code 228102 (evaow)).

The asset is a daily aggregate of ECMWF ERA5-Land hourly assets, including both flow and non-flow bands. Flow bands are formed by taking the first hour of the following day, which holds the aggregated sum of the previous day, while the non-flow bands are created by averaging all hourly data of the day. The flow bands are labeled with the "_sum" identifier, an approach that differs from the daily data produced by the Copernicus Climate Data Store, where flow bands are averaged as well.
Daily aggregates have been pre-calculated to facilitate the many applications requiring easy and fast access to the data. Precipitation and other flow (accumulated) bands might occasionally have negative values, which doesn't make physical sense. At other times their values might be excessively high. This problem is due to how the GRIB format saves data: it simplifies or "packs" the data into smaller, less precise numbers, which can introduce errors. These errors get worse when the data varies a lot. Because of this, when we look at the data for a whole day to compute daily totals, sometimes the highest amount of rainfall recorded at one time can seem larger than the total rainfall measured for the entire day. To learn more, please see: "Why are there sometimes small negative precipitation accumulations"
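The flow/non-flow convention described above can be condensed into a short sketch (function and variable names are mine, not ECMWF's):

```python
import numpy as np

def daily_aggregate(hourly_values, first_hour_next_day, is_flow):
    """Daily aggregation as described for this asset: flow (accumulated)
    bands take the first hour of the following day, which already holds
    the previous day's sum; non-flow bands average the 24 hourly values."""
    if is_flow:
        return float(first_hour_next_day)  # e.g. a "_sum" precipitation band
    return float(np.mean(hourly_values))   # e.g. 2 m temperature

hourly_t2m = np.array([10.0] * 12 + [14.0] * 12)
print(daily_aggregate(hourly_t2m, None, is_flow=False))  # 12.0
print(daily_aggregate(None, 3.5, is_flow=True))          # 3.5
```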