Meet Earth Engine

Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities, and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.

Global-scale insight: Explore the interactive Timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.

Ready-to-use datasets: The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.

Simple, yet powerful API: The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google's cloud for your own geospatial analysis.

"Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet, Google Earth Engine is a great blessing!" -Dr. Andrew Steer, President and CEO of the World Resources Institute

Convenient tools: Use the web-based Code Editor for fast, interactive algorithm development with instant access to petabytes of data.

Scientific and humanitarian impact: Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
GEE-TED: A tsetse ecological distribution model for Google Earth Engine

Please refer to the associated publication: Fox, L., Peter, B.G., Frake, A.N. and Messina, J.P., 2023. A Bayesian maximum entropy model for predicting tsetse ecological distributions. International Journal of Health Geographics, 22(1), p.31. https://link.springer.com/article/10.1186/s12942-023-00349-0

Description

GEE-TED is a Google Earth Engine (GEE; Gorelick et al. 2017) adaptation of a tsetse ecological distribution (TED) model developed by DeVisser et al. (2010), which was designed for use in ESRI's ArcGIS. TED uses time-series climate and land-use/land-cover (LULC) data to predict the probability of tsetse presence across space based on species habitat preferences (in this case Glossina morsitans). Model parameterization includes (1) day and night temperatures (MODIS Land Surface Temperature; MOD11A2), (2) available moisture/humidity using a vegetation index as a proxy (MODIS NDVI; MOD13Q1), (3) LULC (MODIS Land Cover Type 1; MCD12Q1), (4) year selections, and (5) fly movement rate (meters/16 days). TED has also been used as a basis for the development of an agent-based model by Lin et al. (2015) and in a cost-benefit analysis of tsetse control in Tanzania by Yang et al. (2017).

Parameterization in Fox et al. (2023): Suitable LULC types and climate thresholds used here are specific to Glossina morsitans in Kenya and are based on the parameterization selections in DeVisser et al. (2010) and DeVisser and Messina (2009). Suitable temperatures range from 17–40°C during the day and 10–40°C at night, and available moisture is characterized as NDVI > 0.39. Suitable LULC comprises predominantly woody vegetation; a complete list of suitable categories is available in DeVisser and Messina (2009). In the Fox et al. (2023) publication, two versions of MCD12Q1 were used to assess suitable LULC types: Versions 051 and 006.
The GeoTIFF supplied in this dataset entry (GEE-TED_Kenya_2016-2017.tif) uses the aforementioned parameters to show the probable tsetse distribution across Kenya for the years 2016–2017. A static graphic of this GEE-TED output is shown below, and an interactive version can be viewed at: https://cartoscience.users.earthengine.app/view/gee-ted. Figure associated with Fox et al. (2023).

GEE code

The code supplied below is generalizable across geographies and species; however, it is highly recommended that parameterization be given considerable attention to produce reliable results. Note that on-the-fly output visualization will take some time, and it is recommended that results be exported as an asset within GEE or as a GeoTIFF. Note: Since completing the Fox et al. (2023) manuscript, GEE has removed Version 051 per NASA's deprecation of the product. The current release of GEE-TED now uses only MCD12Q1 Version 006; however, alternative LULC data selections can be used with minimal modification to the code.
// Input options
var tempMin = 10     // Temperature thresholds in degrees Celsius
var tempMax = 40
var ndviMin = 0.39   // NDVI thresholds; proxy for available moisture/humidity
var ndviMax = 1
var movement = 500   // Fly movement rate in meters/16-days
var startYear = 2008 // The first 2 years will be used for model initialization
var endYear = 2019   // Computed probability is based on startYear+2 to endYear
var country = 'KE'   // Country codes - https://en.wikipedia.org/wiki/List_of_FIPS_country_codes
var crs = 'EPSG:32737'        // See https://epsg.io/ for appropriate country UTM zone
var rescale = 250             // Output spatial resolution
var labelSuffix = '02052020'  // For file export labeling only

// [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] MODIS/006/MCD12Q1
var lulcOptions006 = [1,1,1,1,1,1,1,1,1,0,1,0,0,0,0,0,0] // 1 = suitable, 0 = unsuitable

// No more input required ------------------------------ //

var region = ee.FeatureCollection("USDOS/LSIB_SIMPLE/2017")
  .filterMetadata('country_co', 'equals', country)

// Input parameter modifications
var tempMinMod = (tempMin+273.15)/0.02
var tempMaxMod = (tempMax+273.15)/0.02
var ndviMinMod = ndviMin*10000
var ndviMaxMod = ndviMax*10000
var ndviResolution = 250
var movementRate = movement+(ndviResolution/2)

// Loading image collections
var lst = ee.ImageCollection('MODIS/006/MOD11A2').select('LST_Day_1km', 'LST_Night_1km')
  .filter(ee.Filter.calendarRange(startYear,endYear,'year'))
var ndvi = ee.ImageCollection('MODIS/006/MOD13Q1').select('NDVI')
  .filter(ee.Filter.calendarRange(startYear,endYear,'year'))
var lulc006 = ee.ImageCollection('MODIS/006/MCD12Q1').select('LC_Type1')

// LULC mode and boolean reclassification
var lulcMask = lulc006.mode().remap([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17],lulcOptions006)
  .eq(1).rename('remapped').clip(region)

// Merge NDVI and LST image collections
var combined = ndvi.combine(lst, true)
var combinedList = combined.toList(10000)

// Boolean reclassifications (suitable/unsuitable) for day/night temperatures and ndvi
var con = ...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Hakai Google Earth Engine Kelp tool (GEEK tool) was developed through a collaboration between the Hakai Institute, the University of Victoria, and Fisheries and Oceans Canada to leverage cloud computing capabilities for analyzing Landsat (30 m) satellite imagery in order to extract the extent of canopy-forming kelp. The original methodology is described in Nijland et al. 2019*.

Note: This dataset is intended as "read only," as we continue to improve the outputs. It is meant to demonstrate the utility of the Landsat archive for mapping kelp. These data can be viewed on the GEEK web map available here.

This data package contains two datasets:

Annual maximum summer extent of canopy-forming kelp (1984-2019) as rasters.
Decadal maximum extent of canopy-forming kelp (1984-1990, 1991-2000, 2001-2010, 2011-2020).

This dataset was generated following modifications to the original GEEK methodology. The parameters used to generate the rasters were image scenes with:

Image scene month range = May 1 - September 30
Maximum cloud cover in scene = 80%
Maximum tide = 3.2 m (+0.5 m MWL of central coast tides per KIM-1 methods)
Minimum tide = 0 m
Shoreline buffer applied to the land mask = 1 pixel (30 m)
Minimum NDVI* (for an individual pixel to be classified as kelp) = -0.05
Minimum number of times an individual kelp pixel must be detected as kelp within a single year = 30% of all detections in a given year
Minimum mean K (mean NDVI for all pixels at a given location detected as kelp) = -0.05

*NDVI = Normalized Difference Vegetation Index.
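Interpreted as per-pixel decision rules, the NDVI, detection-frequency, and mean-K thresholds above can be sketched as follows. This is a hedged illustration in Python; the function name, the n_scenes bookkeeping, and the exact way the rules combine are assumptions for demonstration, not the GEEK tool's actual code.

```python
def classify_kelp_pixel(ndvi_values, n_scenes,
                        ndvi_min=-0.05, min_fraction=0.30, mean_k_min=-0.05):
    """Apply the per-pixel kelp thresholds listed above to one pixel-year.

    `ndvi_values` holds the NDVI of this pixel in each usable scene that
    year; `n_scenes` is the number of usable scenes. Illustrative sketch
    of the published thresholds, not the GEEK implementation itself.
    """
    # Rule: a scene counts as a kelp detection when NDVI >= -0.05.
    detections = [v for v in ndvi_values if v >= ndvi_min]
    if not detections:
        return False
    # Rule: detected as kelp in at least 30% of that year's scenes (assumed
    # reading of "30% of all detections in a given year").
    if len(detections) / n_scenes < min_fraction:
        return False
    # Rule: mean NDVI over the detections (mean K) must also pass -0.05.
    mean_k = sum(detections) / len(detections)
    return mean_k >= mean_k_min

print(classify_kelp_pixel([0.1, 0.0, -0.02], n_scenes=6))  # True
print(classify_kelp_pixel([0.1], n_scenes=6))              # False
```

Under these toy rules, a pixel detected in three of six scenes with a positive mean NDVI is kept, while a single detection is discarded as a likely false positive.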
These parameters were chosen based on an accuracy assessment using a kelp extent derived from WorldView-2 (2 m) imagery from July and August 2014, resampled to 30 m. While many iterations of the tool produced very similar results, the parameters selected were those that maximized kelp accuracy for the 2014 comparison.

The accuracy assessment results were: 50% commission error and 25% omission error.

Simply put, the current methods produce a high rate of "false positives," but they accurately capture kelp extent relative to the validation dataset. This error can be attributed to the sensitivity of using a single NDVI threshold to detect kelp; we observe variation in NDVI thresholds both within a single scene and between scenes.

The decadal time-series dataset is intended to account for some of this error, as pixels detected only once per decade are removed.

This dataset is part of the Hakai Habitat Mapping Program, whose primary goal is to generate spatial inventories of coastal habitats and to study how these habitats change over time and the drivers of that change.

*Nijland, W., Reshitnyk, L., and Rubidge, E. (2019). Satellite remote sensing of canopy-forming kelp on a complex coastline: A novel procedure using the Landsat image archive. Remote Sensing of Environment, 220, 41-50. doi:10.1016/j.rse.2018.10.032
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Forest cover is rapidly changing at the global scale as a result of land-use change (principally deforestation in many tropical regions and afforestation in many temperate regions) and climate change. However, a detailed map of global forest gain is still lacking at fine spatial and temporal resolutions. In this study, we developed a new automatic framework to map annual forest gain across the globe, based on Landsat time series, the LandTrendr algorithm, and the Google Earth Engine (GEE) platform. First, samples of stable forest collected from the Global Forest Change (GFC) product were used to determine annual Normalized Burn Ratio (NBR) thresholds for forest gain detection. Second, with the NBR time series from 1982 to 2020 and the LandTrendr algorithm, we produced a dataset of global forest gain year from 1984 to 2020 based on a set of decision rules. Our results reveal that large areas of forest gain occurred in China, Russia, Brazil, and North America, and that the vast majority of global forest gain has occurred since 2000. The new dataset was consistent in both spatial extent and year of forest gain with data from field inventories and alternative remote sensing products. Our dataset is valuable for policy-relevant research on the net impact of forest cover change on the global carbon cycle and provides an efficient and transferable approach for monitoring other types of land cover dynamics.
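The kind of threshold-plus-persistence decision rule described above can be sketched per pixel as follows. This is a hedged illustration: the threshold value, the persistence window, and the function itself are invented for demonstration and are not the paper's exact LandTrendr-based rules.

```python
def detect_gain_year(years, nbr, threshold=0.3, persistence=3):
    """Return the first year in which NBR rises above `threshold` and
    stays above it for `persistence` consecutive years, else None.

    Simplified stand-in for a forest-gain decision rule on an annual
    NBR series; `threshold` and `persistence` are illustrative only.
    """
    for i in range(len(nbr) - persistence + 1):
        window = nbr[i:i + persistence]
        if all(v > threshold for v in window):
            # Require the pixel to have been below the threshold before,
            # so stable forest is not flagged as gain.
            if i > 0 and nbr[i - 1] <= threshold:
                return years[i]
    return None

years = list(range(1984, 1994))
nbr = [0.1, 0.12, 0.15, 0.2, 0.35, 0.4, 0.45, 0.5, 0.5, 0.5]
print(detect_gain_year(years, nbr))  # 1988
```

A pixel that recovers past the threshold in 1988 and stays there is labeled with gain year 1988, while a pixel that is always above the threshold (stable forest) returns None.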
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This small dataset comprises end-user discussions on six similar topics related to the Google Maps application, collected for validating argumentation-based research approaches. It also includes a Python script for extracting end-user feedback from the Reddit forum while preserving the argumentative order of discussions (comment-reply).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET)

This is a Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET), developed by the Surface Dynamics and Modeling Lab at the University of Alabama, which calculates flood depth using a flood extent layer and a digital elevation model. This research is made possible by the CyberSeed Program at the University of Alabama. Project name: WaterServ: A Cyberinfrastructure for Analysis, Visualization and Sharing of Hydrological Data.

Please see the associated publications:

1. Peter, B.G., Cohen, S., Lucey, R., Munasinghe, D., Raney, A. and Brakenridge, G.R., 2020. Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET-GEE) for rapid and large scale flood analysis. IEEE Geoscience and Remote Sensing Letters, 19, pp. 1-5. https://ieeexplore.ieee.org/abstract/document/9242297
2. Cohen, S., Peter, B.G., Haag, A., Munasinghe, D., Moragoda, N., Narayanan, A. and May, S., 2022. Sensitivity of remote sensing floodwater depth calculation to boundary filtering and digital elevation model selections. Remote Sensing, 14(21), p. 5313. https://github.com/csdms-contrib/fwdet
3. Cohen, S., Raney, A., Munasinghe, D., Loftis, J.D., Molthan, A., Bell, J., Rogers, L., Galantowicz, J., Brakenridge, G.R., Kettner, A.J., Huang, Y. and Tsang, Y., 2019. The Floodwater Depth Estimation Tool (FwDET v2.0) for improved remote sensing analysis of coastal flooding. Natural Hazards and Earth System Sciences, 19, 2053-2065. https://doi.org/10.5194/nhess-19-2053-2019
4. Cohen, S., Brakenridge, G.R., Kettner, A., Bates, B., Nelson, J., McDonald, R., Huang, Y., Munasinghe, D. and Zhang, J., 2018. Estimating floodwater depths from flood inundation maps and topography. Journal of the American Water Resources Association, 54(4), 847-858. https://doi.org/10.1111/1752-1688.12609

Sample products and data availability:
https://sdml.ua.edu/models/fwdet/
https://sdml.ua.edu/michigan-flood-may-2020/
https://cartoscience.users.earthengine.app/view/fwdet-gee-mi
https://alabama.app.box.com/s/31p8pdh6ngwqnbcgzlhyk2gkbsd2elq0

GEE implementation output: fwdet_gee_brazos.tif
ArcMap implementation output (see Cohen et al. 2019): fwdet_v2_brazos.tif
iRIC validation layer (see Nelson et al. 2010): iric_brazos_hydraulic_model_validation.tif

Brazos River inundation polygon access in GEE:
var brazos = ee.FeatureCollection('users/cartoscience/FwDET-GEE-Public/Brazos_River_Inundation_2016')

Nelson, J.M., Shimizu, Y., Takebayashi, H. and McDonald, R.R., 2010. The international river interface cooperative: public domain software for river modeling. In 2nd Joint Federal Interagency Conference, Las Vegas, June (Vol. 27).

Google Earth Engine Code

/* ----------------------------------------------------------------------
FwDET-GEE calculates floodwater depth from a floodwater extent layer and a DEM.
Authors: Brad G. Peter, Sagy Cohen, Ronan Lucey, Dinuke Munasinghe, Austin Raney
Emails: bpeter@ua.edu, sagy.cohen@ua.edu, ronan.m.lucey@nasa.gov, dsmunasinghe@crimson.ua.edu, aaraney@crimson.ua.edu
Organizations: BP, SC, DM, AR - University of Alabama; RL - University of Alabama in Huntsville
Last Modified: 10/08/2020
To cite this code use: Peter, Brad; Cohen, Sagy; Lucey, Ronan; Munasinghe, Dinuke; Raney, Austin, 2020, "A Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDET-GEE)", https://doi.org/10.7910/DVN/JQ4BCN, Harvard Dataverse, V2
------------------------------------------------------------------------
This is a Google Earth Engine implementation of the Floodwater Depth Estimation Tool (FwDETv2.0) [1] developed by the Surface Dynamics and Modeling Lab at the University of Alabama that calculates flood depth using a flood extent layer and a digital elevation model. This research is made possible by the CyberSeed Program at the University of Alabama. Project name: WaterServ: A Cyberinfrastructure for Analysis, Visualization and Sharing of Hydrological Data. GitHub repository (ArcMap and QGIS implementations): https://github.com/csdms-contrib/fwdet
------------------------------------------------------------------------
How to run this code with your flood extent GEE asset: Users of this script will need to update the path to the flood extent (line 32 or 33) and select from the processing options. Available DEM options (1) are USGS/NED (U.S.) and USGS/SRTMGL1_003 (global). Other options include (2) running the elevation outlier filtering algorithm, (3) adding water body data to the inundation extent, (4) adding a user-uploaded water body data layer rather than using the JRC global surface water data, (5) masking out regular water bodies, (6) masking out 0 m depths, (7) choosing whether or not to export, (8) exporting additional data layers, and (9) setting an export file name. ...
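The core FwDET idea, flood depth as the difference between a locally estimated water surface and the ground elevation, can be sketched in NumPy/SciPy. This is a hedged simplification that assigns each flooded cell the DEM elevation of its nearest flood-boundary cell; the real tool applies additional filtering and runs in GEE, so treat this as an illustration of the principle only.

```python
import numpy as np
from scipy import ndimage

def fwdet_depth(dem, flood_mask):
    """Illustrative floodwater-depth calculation in the spirit of FwDET:
    each flooded cell takes the DEM elevation of its nearest
    flood-boundary cell as the local water surface, and depth is that
    surface minus the ground elevation (negatives clipped to 0).
    """
    # Boundary cells: flooded cells touching at least one dry cell.
    dry = ~flood_mask
    boundary = ndimage.binary_dilation(dry) & flood_mask
    # For every cell, find the indices of its nearest boundary cell.
    iy, ix = ndimage.distance_transform_edt(
        ~boundary, return_distances=False, return_indices=True)
    water_surface = dem[iy, ix]
    return np.where(flood_mask, np.maximum(water_surface - dem, 0.0), 0.0)

# Toy 3x5 valley: flooded center column sits 1 m below its banks.
dem = np.array([[12., 11., 10., 11., 12.],
                [12., 10.,  9., 10., 12.],
                [12., 11., 10., 11., 12.]])
flood = np.array([[0, 1, 1, 1, 0],
                  [0, 1, 1, 1, 0],
                  [0, 1, 1, 1, 0]], dtype=bool)
print(fwdet_depth(dem, flood))
```

In this toy case the boundary cells get zero depth and the lower center column gets 1 m of water, matching the intuition that depth grows toward the channel.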
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Find alternative fueling stations near an address or ZIP code or along a route in the United States. Enter a state to see a station count.

Data Collection Methods

The data in the Alternative Fueling Station Locator are gathered and verified through a variety of methods. The National Renewable Energy Laboratory (NREL) obtains information about new stations from trade media, Clean Cities coordinators, an Add a Station form on the Alternative Fuels Data Center (AFDC) website, and through collaborating with infrastructure equipment and fuel providers. NREL regularly compares its station data with those of other relevant trade organizations and websites. Differences in methodologies and inclusion criteria may result in slight differences between NREL's database and those maintained by other organizations. NREL also collaborates with alternative fuel industry groups to maintain the data. NREL and its data collection subcontractor are currently collaborating with natural gas, electric drive, biodiesel, ethanol, and propane industry groups to establish best practices for identifying new stations in the most timely manner possible and to develop a more rigorous network for the future.

Station Update Schedule

Existing stations in the database are contacted at least once a year on an established schedule to verify they are still operational and dispensing the fuel specified. Based on an established data collection schedule, the database is updated once a month, with the exception of electric vehicle supply equipment (EVSE) data, which are updated twice a month. Stations that are no longer operational or no longer provide alternative fuel are removed from the database on a monthly basis or as they are identified.

Mapping and Counting Methods

Each point on the map is counted as one station in the station count. A station appears as one point on the map, regardless of the number of fuel dispensers or charging outlets at that location. Station addresses are geocoded and mapped using an automatic geocoding application. The geocoding application returns the most accurate location based on the provided address. Station locations may also be provided by external sources (e.g., station operators) and/or verified in a geographic information system (GIS) tool like Google Earth, Google Maps, or Google Street View. This information is considered highly accurate, and these coordinates override any information generated using the geocoding application.

Notes about Specific Station Types

Private Stations: Stations with an access of "Private - Fleet customers only" may allow other entities to fuel through a business-to-business arrangement. For more information, fleet customers should refer to the information listed in the details section for that station to contact the station directly.

Biodiesel Stations: The Alternative Fueling Station Locator only includes stations offering biodiesel blends of 20% (B20) and above.

Electric Vehicle Supply Equipment (EVSE): An electric charging station, or EVSE, appears as one point on the map, regardless of the number of charging outlets at that location. The number and type of charging outlets available are displayed as additional details when the station location is selected. Each point on the map is counted as one station in the station count. To see a total count of EVSE for all outlets available, go to the Alternative Fueling Station Counts by State table. Residential EVSE locations are not included in the Alternative Fueling Station Locator.

Liquefied Petroleum Gas (Propane) Stations: Because many propane stations serve customers other than drivers and fleets, NREL collaborated with the industry to effectively represent the differences. Each propane station is designated as a 'primary' or 'secondary' service type. Both types are able to fuel vehicles. However, locations with a 'primary' designation offer vehicle services and fuel priced specifically for use in vehicles. The details page for each station lists its service designation.
Accurate land use land cover (LULC) maps that delineate built infrastructure are useful for numerous applications, from urban planning, humanitarian response, and disaster management to informing decision-making for reducing human exposure to natural hazards such as wildfire. Existing products lack sufficient spatial, temporal, and thematic resolution, omitting critical information needed to capture LULC trends accurately over time. Advancements in remote sensing imagery, open-source software, and cloud computing offer opportunities to address these challenges. Using Google Earth Engine, we developed a novel built infrastructure detection method for semi-arid systems by applying a random forest classifier to a fusion of Sentinel-1 and Sentinel-2 time series. Our classifier performed well, differentiating three built environment types: residential, infrastructure, and paved, with overall accuracies ranging from 90 to 96%. Producer accuracies were highest for the infrastructure class (98-99%)...

Mapped built infrastructure (MBI)

These data are annual maps of built infrastructure, with six classes, spanning the Snake River Plain ecoregion in southern Idaho. The products are ready to use and can be imported into any geospatial software for analysis. They were generated from a fusion of Sentinel-1 radar and Sentinel-2 multispectral imagery. The final MBI products are annual rasters, that is, pixelated categorical data with six classes: 1. Residential, 2. Infrastructure, 3. Paved, 4. Agriculture, 5. Vegetation, and 6. Range/Scrub.

To generate or reproduce these products for a similar area, Google Earth Engine and QGIS are required. The user must have an account with Google Earth Engine (GEE), load the MBI scripts into their repository, and run the code. For applying this model outside of the Snake River Plain Level III ecoregion, new training data must be...
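The classification step described above can be sketched in miniature with scikit-learn. This is a hedged toy example: the feature stack, labels, and parameter choices are synthetic inventions for demonstration, not the authors' model or training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a fused per-pixel feature stack: Sentinel-1
# backscatter statistics plus a Sentinel-2 vegetation index (all values
# and the label rule below are illustrative assumptions only).
rng = np.random.default_rng(42)
n = 300
X = np.column_stack([
    rng.normal(-12, 2, n),   # S1 VV annual mean (dB)
    rng.normal(-19, 2, n),   # S1 VH annual mean (dB)
    rng.uniform(0, 1, n),    # S2 NDVI annual median
])
y = (X[:, 2] > 0.5).astype(int)  # toy labels: 1 = vegetated, 0 = built

# Random forest classification on the fused features (parameters assumed).
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(round(clf.score(X, y), 2))
```

In practice the training pixels, feature engineering, and accuracy assessment all happen inside GEE; this snippet only shows the shape of the classifier call.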
Used within the Travellers Road Information Portal Interactive Map to convey transportation-related information in both official languages. Contains information about major construction projects, including restrictions and delays. This data is best viewed using Google Earth or similar Keyhole Markup Language (KML) compatible software. For instructions on how to use Google Earth, read the Google Earth tutorial.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Snow, ice, and permafrost constitute ‘frozen commons,’ or common-pool resources that are collectively used and managed. This study examines the state and uses of snow and ice commons in two remote communities: Bayanzürkh in Khövsgöl Aimag, Mongolia, and McGrath, Alaska. Regional climate analyses indicate air temperatures warmed more than 2.2°C in both locales since the mid-twentieth century, compounding their similar accessibility challenges and dependence on natural resources. Warming affects transit timing and duration over snow and ice, impacting Mongolian herding and Alaskan subsistence hunting. Snow cover duration in both communities was calculated by classifying MODIS Snow Cover imagery utilizing the Google Earth Engine JavaScript API. Annual lake and river ice breakup timing and safe travel days were quantified from winter MODIS imagery from 2002 to 2023 for Lake Khövsgöl in Mongolia and the Kuskokwim River near McGrath, Alaska. Snow and ice duration did not significantly change over the 21 years examined. Relatively high map accuracies allowed discussion of interannual variability impacts on subsistence, transportation, and tourism. Daily snow and ice mapping in Google Earth Engine is a cost-effective and rapid method for quantifying environmental change impacting frozen commons, and therefore a tool for community decision-making and communication.
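Per pixel, the duration and breakup metrics described above reduce to simple counts over a daily classification series. The sketch below is a hedged illustration; the function names and the way breakup day is derived from the last ice detection are assumptions, not the study's exact procedure.

```python
def snow_cover_duration(daily_snow):
    """Count days flagged as snow-covered in one season.

    `daily_snow` is a sequence of booleans (one per day) from a daily
    snow/no-snow classification, e.g. thresholded MODIS snow cover.
    """
    return sum(1 for d in daily_snow if d)

def breakup_day(daily_ice):
    """Day-of-season index of the first day after which ice is never
    detected again: a simple stand-in for lake/river ice breakup timing."""
    last_ice = max((i for i, d in enumerate(daily_ice) if d), default=None)
    return None if last_ice is None else last_ice + 1

season = [True] * 120 + [False] * 60
print(snow_cover_duration(season))  # 120
print(breakup_day(season))          # 120
```

Running such counts per pixel and per year is what makes interannual variability in safe travel days directly comparable between the two communities.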
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geospatial raster data and vector data created in the frame of the study "Mapping Arctic Lake Ice Backscatter Anomalies using Sentinel-1 Time Series on Google Earth Engine" submitted to the journal "Remote Sensing" and Python code to reproduce the results.
In addition to the full repository (Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies.zip), two reduced alternatives are available due to the large file size of the full repository:
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_without_IW_result_data.zip contains the same data and Python scripts as the full repository, but results based on IW data and tiled EW delta sigma0 images directly exported from Google Earth Engine have been removed. The merged data (from tiled EW delta sigma0 images) and all other results deduced thereof are included.
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_scripts_and_reference_data_only.zip contains only the Python scripts and reference data. The directory structure was retained for better reproducibility.
Please see the associated README-files for details.
https://www.archivemarketresearch.com/privacy-policy
The Geospatial Analytics Market was valued at USD 98.93 billion in 2023 and is projected to reach USD 227.04 billion by 2032, exhibiting a CAGR of 12.6% during the forecast period. The geospatial analytics market covers technologies and approaches for processing geographic and spatial data for intelligence and decision-making purposes. It comprises mapping tools and software, spatial data, and geographic information systems (GIS) used in fields including urban planning, environmental monitoring, transport, and defence. Uses range from inventory tracking and control to route optimization and assessment of environmental change. Other trends include the growth of big data and machine learning to improve predictive methods, improved real-time data processing, and the combination of geographic data with other technologies such as IoT and cloud computing. Factors fuelling demand for GIS solutions include the increasing importance of place-specific information, the growing possibilities for data collection, and the need to properly manage spatial information.

Recent developments include:

In May 2023, Google launched Google Geospatial Creator, a powerful tool that allows users to create immersive AR experiences that are both accurate and visually stunning. It is powered by Photorealistic 3D Tiles and ARCore from Google Maps Platform and can be used with Unity or Adobe Aero. Geospatial Creator provides a 3D view of the world, allowing users to place their digital content in the real world, similar to Google Earth and Google Street View.

In April 2023, Hexagon AB launched the HxGN AgrOn Control Room, a mobile app that allows managers and directors of agricultural companies to monitor all field operations in real time. It helps managers identify and address problems quickly, saving time and money. Additionally, the app can help improve safety by providing managers with a way to monitor the location and status of field workers.

In December 2022, ESRI India announced the availability of Indo ArcGIS offerings on Indian public clouds and services to provide better management, collection, forecasting, and analysis of location-based data.

In May 2022, Trimble announced the launch of the Trimble R12i GNSS receiver, which has a powerful tilt compensation feature. It enables land surveyors to concentrate on the task and finish it more quickly and precisely.

In May 2021, Foursquare purchased Unfolded, a US-based provider of location-based services and goods, including data enrichment analytics and geographic data visualization. With this acquisition, Foursquare aims to give its users access to various first- and third-party data sets and integrate them with geographical characteristics.

In January 2021, ESRI, a U.S.-based geospatial image analytics solutions provider, introduced the ArcGIS Platform, which operates on a cloud consumption model. App developers generally use this technology to include location capabilities in their apps, business operations, and products. It helps make geospatial technologies accessible.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Used within the Travellers Road Information Portal Interactive Map to convey transportation-related information in both official languages. Contains a list of ferry service locations operated by the Ministry of Transportation within Ontario. This data is best viewed using Google Earth or similar Keyhole Markup Language (KML) compatible software. For instructions on how to use Google Earth, read the Google Earth tutorial. This data set is now available via the Ontario 511 Developer API.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Blue economies are measured by mining national statistics or by economic modeling, both of which require substantial capability and quality data that are not universally available. The lack of harmonized methods hampers international comparisons, and results are usually only attributable at the national scale. An alternative method is described here that leverages an open computing environment and open data to quantify blue economies using marine night lights, producing measurements that are intercomparable and scalable from national to regional to global levels.
Map showing the Maine DEP Biomonitoring Program's wetland and stream sample stations. This map is meant to be the replacement for the Maine DEP Biomonitoring Program's Google Earth mapping project.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The broad importance of land use and land cover information has been established and confirmed across many applications. Therefore, many land cover products have been developed at various scales (i.e., spatial resolutions) and extents (i.e., local, national, regional, and global). Several studies have reported inconsistencies among global land cover (GLC) products, whose accuracy consequently differs between regions. Recently, this issue has received a new level of attention, because many studies have pointed out that the inaccuracy of land cover products at the regional scale can have a huge impact on the results of other applications relying on GLC products (hereafter, downstream applications). Therefore, developing a method that can be easily and quickly applied to many different regions yet produce highly accurate land cover information is of utmost importance. To meet the first two criteria, several studies have successfully used existing GLC products to automatically generate samples. However, none of these studies focused on the quality of the samples, which directly and largely affects the classification results. In this context, and taking Mongolia as a case study, we propose a simple, fast, and accurate method to produce annual land cover maps at 250 m spatial resolution for all of Mongolia over a period of 20 years, from 2001 to 2020. The maps are based on MODIS data (products MOD13Q1 and MCD12Q1, version 6) and produced using a modern machine learning technique (Random Forest) on Google Earth Engine. Training samples were selected with a semi-random approach which ensures that samples are spatially well distributed, that the number of samples per class is of a similar order irrespective of the dominance of the land cover classes, and that samples are sufficiently far apart to reduce spatial autocorrelation.
It is worth noting that we selected Mongolia because of the low accuracy of GLC products in this vast and remote country. Our results show that the accuracy of the new land cover maps improved relative to the corresponding MODIS products, also when compared visually against Landsat images acquired at the same time. Overall accuracy from the validation data was approximately 90% for all new maps, compared to 75% for the existing MODIS product. […]
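The semi-random sample selection described above can be sketched in plain Python, outside Earth Engine; the class cap, minimum-distance threshold, and candidate grid below are illustrative stand-ins, not the study's actual parameters:

```python
import random

def semi_random_samples(pixels, per_class=100, min_dist=2.0, seed=42):
    """Select class-balanced, spatially separated training samples.

    pixels: iterable of (x, y, class_label) candidates drawn from an
    existing land cover product. Candidates are visited in random order
    and accepted only if (a) their class has not yet reached per_class
    samples and (b) they lie at least min_dist from every sample
    accepted so far, which reduces spatial autocorrelation.
    """
    rng = random.Random(seed)
    candidates = list(pixels)
    rng.shuffle(candidates)
    chosen, counts = [], {}
    for x, y, label in candidates:
        if counts.get(label, 0) >= per_class:
            continue  # this class already has enough samples
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_dist ** 2
               for cx, cy, _ in chosen):
            chosen.append((x, y, label))
            counts[label] = counts.get(label, 0) + 1
    return chosen
```

Shuffling keeps the selection spatially well distributed on average, while the per-class cap evens out sample counts regardless of how dominant each land cover class is.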
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
1 - OVERVIEW
This dataset contains overhead images of wind turbines from three regions of the United States – the Eastern Midwest (EM), Northwest (NW), and Southwest (SW). The images come from the National Agriculture Imagery Program (NAIP) and were extracted using Google Earth Engine and wind turbine latitude-longitude coordinates from the U.S. Wind Turbine Database. Overall, there are 2003 NAIP-collected images, of which 988 contain wind turbines and the other 1015 are background images (not containing wind turbines) collected from regions near the wind turbines. Labels are provided for all images containing wind turbines. We welcome uses of this dataset for object detection or other research purposes.
2 - DATA DETAILS
Each image is 608 x 608 pixels, with a GSD of 1 m, so each image represents a frame of approximately 608 m x 608 m. Because images were collected directly over the exact wind turbine coordinates, they would otherwise be almost perfectly centered on the turbines. To avoid this issue, images were randomly shifted by up to 75 m in two directions.
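A minimal sketch of that offset, assuming a uniform random shift along each axis (the source states only "up to 75 m in two directions"; the helper name and the distribution are ours):

```python
import random

def jitter_center(x_m, y_m, max_shift=75.0, rng=None):
    """Shift an image centre by up to max_shift metres along each axis,
    so turbines are not perfectly centred in every frame."""
    rng = rng or random.Random()
    return (x_m + rng.uniform(-max_shift, max_shift),
            y_m + rng.uniform(-max_shift, max_shift))
```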
We refer to images without turbines as "background images", and split the images with turbines into training and testing sets. We call the training images with turbines "real images" and the testing images "test images".
Distribution of gathered images by region and type:
Domain | Real | Test | Background
EM     | 267  | 100  | 244
NW     | 213  | 100  | 415
SW     | 208  | 100  | 356
Note that this dataset is part of a larger research project by Duke's 2021-2022 Bass Connections team, Creating Artificial Worlds with AI to Improve Energy Access Data. Our research proposes a technique to synthetically generate images with implanted energy infrastructure objects. We include the synthetic images we generated along with the NAIP-collected images above. Generating synthetic images requires a training and a testing domain, so for each pair of domains we include 173 synthetically generated images. For a fuller picture of our research, including additional image data from the domain adaptation techniques we benchmark our method against, visit our GitHub: https://github.com/energydatalab/closing-the-domain-gap. If you use this dataset, please cite the citation found in our GitHub README.
3 - NAVIGATING THE DATASET
Once the data is unzipped, you will see that the base level of the dataset contains an images folder and a labels folder, which have exactly the same structure. Here is how the images directory is divided:
| - images
| | - SW
| | | - Background
| | | - Test
| | | - Real
| | - EM
| | | - Background
| | | - Test
| | | - Real
| | - NW
| | | - Background
| | | - Test
| | | - Real
| | - Synthetic
| | | - s_EM_t_NW
| | | - s_SW_t_NW
| | | - s_NW_t_NW
| | | - s_NW_t_EM
| | | - s_SW_t_EM
| | | - s_EM_t_SW
| | | - s_NW_t_SW
| | | - s_EM_t_EM
| | | - s_SW_t_SW
For example, images/SW/Real has the 208 .jpg images from the Southwest that contain turbines. The Synthetic subdirectory is structured such that, for example, images/Synthetic/s_EM_t_NW contains synthetic images using a source domain of Eastern Midwest and a target domain of Northwest, meaning the images were stylized to artificially look like Northwest images.
Note that we also provide a domain_overview.json file at the top level to help you navigate the directory. The domain_overview.json file navigates the directory with keys, so if you load the file as f, then f['images']['SW']['Background'] should list all the background photos from the SW. The keys in the domain json are ordered in the order we used the images for our experiments. So if our experiment used 100 SW background images, we used the images corresponding to the first 100 keys.
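A short sketch of that navigation; the toy dictionary below stands in for domain_overview.json, whose real contents we have not inspected, so the image names are hypothetical:

```python
import json

# Toy stand-in for the contents of domain_overview.json: the real file
# nests keys the same way (top folder -> domain -> subset -> image name).
toy_file_contents = json.dumps({
    "images": {
        "SW": {
            "Background": {
                "SW_2246_background.jpg": {},
                "SW_1010_NW_background.jpg": {},
            }
        }
    }
})

f = json.loads(toy_file_contents)  # with the real file: json.load(open(...))

# dict keys keep insertion order, matching the order the images were used
# in experiments, so "the first 100 keys" selects an experiment's images.
sw_background = list(f["images"]["SW"]["Background"])
first = sw_background[:1]
```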
Naming conventions:
1 - Real and Test images:
{DOMAIN}_{UNIQUE ID}.jpg
For example 'EM_136.jpg' with corresponding label file 'EM_136.txt' refers to an image from the Eastern Midwest with unique ID 136.
2 - Background images:
Background images were collected in 3 waves with the purpose of creating a set of images visually similar to the real images, just without turbines:
The first wave came from NAIP images at U.S. Wind Turbine Database coordinates where no wind turbine was present in the snapshot (NAIP images span a relatively long time period, so wind turbines may be missing from some images). These images are labeled {DOMAIN}_{UNIQUE ID}_background.jpg, for example 'EM_1612_background.jpg'.
In the second wave, using wind turbine coordinates, images were randomly collected either 4000 m Southeast or Northwest of a turbine. These images are labeled {DOMAIN}_{UNIQUE_ID}_{SHIFT DIRECTION (SE or NW)}_background.jpg. For example, 'NW_12750_SE_background.jpg' refers to an image from the Northwest without turbines captured at a shift of 4000 m Southeast from the wind turbine with unique ID 12750.
In the third wave, using wind turbine coordinates, images were randomly collected either 6000 m Southeast or Northwest of a turbine. These images are labeled {DOMAIN}_{UNIQUE_ID}_{SHIFT DIRECTION (SE or NW)}_6000_background.jpg, for example 'NW_12937_NW_6000_background.jpg'.
3 - Synthetic images
Each synthetic image is built from labeled wind turbine examples from the source domain, a background image from the target domain, and a mask. The mask determines where the wind turbine examples are placed, and GP-GAN blends them onto the background image. Thus, the naming convention for synthetic images is:
{BACKGROUND IMAGE NAME FROM TARGET DOMAIN}_{MASK NUMBER}.jpg.
For example, images/Synthetic/s_NW_t_SW/SW_2246_m15.jpg corresponds to a synthetic image created using labeled wind turbine examples from the Northwest and stylized in the image of the Southwest using Southwest background image SW_2246 and mask 15.
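The conventions above can be checked mechanically; the regular expressions below are our reading of the stated patterns and examples, not an official specification:

```python
import re

PATTERNS = {
    # e.g. EM_136.jpg  (real/test images with turbines)
    "real_or_test": re.compile(r"^(EM|NW|SW)_(\d+)\.jpg$"),
    # e.g. EM_1612_background.jpg, NW_12750_SE_background.jpg,
    #      NW_12937_NW_6000_background.jpg  (the three background waves)
    "background": re.compile(
        r"^(EM|NW|SW)_(\d+)(?:_(SE|NW))?(?:_(6000))?_background\.jpg$"),
    # e.g. SW_2246_m15.jpg under images/Synthetic/s_NW_t_SW
    "synthetic": re.compile(r"^(EM|NW|SW)_(\d+)_m(\d+)\.jpg$"),
}

def classify_name(name):
    """Return (image kind, captured fields) for a dataset file name."""
    for kind, pattern in PATTERNS.items():
        match = pattern.match(name)
        if match:
            return kind, match.groups()
    return None, ()
```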
For any remaining questions, please reach out to the author point of contact at caleb.kornfein@gmail.com.
ERA5-Land is a reanalysis dataset providing a consistent view of the evolution of land variables over several decades at an enhanced resolution compared to ERA5. ERA5-Land has been produced by replaying the land component of the ECMWF ERA5 climate reanalysis. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using the laws of physics, producing data that go back several decades and provide an accurate description of the past climate. This dataset includes all 50 variables available on CDS. ERA5-Land data are available from 1950 to within three months of real time. Please consult the ERA5-Land "Known Issues" section. In particular, note that three components of the total evapotranspiration have their values swapped as follows: the variable "Evaporation from bare soil" (mars parameter code 228101, evabs) holds the values of "Evaporation from vegetation transpiration" (228103, evavt); the variable "Evaporation from open water surfaces excluding oceans" (228102, evaow) holds the values of "Evaporation from bare soil" (228101, evabs); and the variable "Evaporation from vegetation transpiration" (228103, evavt) holds the values of "Evaporation from open water surfaces excluding oceans" (228102, evaow). The asset is a daily aggregate of the ECMWF ERA5-Land hourly assets and includes both flow and non-flow bands. Flow bands are formed by taking the first hour of the following day, which holds the accumulated sum for the previous day, while non-flow bands are created by averaging all hourly values of the day. Flow bands are labeled with the "_sum" suffix; this approach differs from the daily data produced by the Copernicus Climate Data Store, where flow bands are averaged as well.
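In post-processing, the documented swap can be undone by relabeling the three affected bands. How band arrays are stored (plain dicts here) is an assumption; the short names follow the mars parameter mnemonics quoted above:

```python
# Map each mislabeled band name to the variable its values really represent:
#   evabs = evaporation from bare soil (param 228101)
#   evaow = evaporation from open water surfaces excl. oceans (param 228102)
#   evavt = evaporation from vegetation transpiration (param 228103)
SWAP_FIX = {
    "evabs": "evavt",  # "bare soil" band actually holds transpiration values
    "evaow": "evabs",  # "open water" band actually holds bare-soil values
    "evavt": "evaow",  # "transpiration" band actually holds open-water values
}

def relabel(bands):
    """Return a dict whose keys name what each band truly contains."""
    return {SWAP_FIX.get(name, name): values for name, values in bands.items()}
```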
Daily aggregates have been pre-calculated to facilitate the many applications requiring easy and fast access to the data. Precipitation and other flow (accumulated) bands may occasionally have negative values, which makes no physical sense, and at other times their values may be excessively high. This is due to how the GRIB format stores data: it simplifies or "packs" the data into smaller, less precise numbers, which can introduce errors, and these errors get worse when the data vary a lot. As a result, when hourly data are aggregated into daily totals, the highest amount of rainfall recorded in a single hour can sometimes appear larger than the total rainfall measured for the entire day. To learn more, please see "Why are there sometimes small negative precipitation accumulations".
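The precision loss can be illustrated with a toy linear quantizer; real GRIB simple packing uses reference values and binary scale factors, but the effect is the same, and the bit width and rainfall values here are made up:

```python
def pack(values, nbits=8):
    """Linearly quantize values into nbits-wide integers, GRIB-style."""
    vmin, vmax = min(values), max(values)
    scale = (vmax - vmin) / (2 ** nbits - 1) or 1.0  # guard constant fields
    return [round((v - vmin) / scale) for v in values], vmin, scale

def unpack(packed, vmin, scale):
    return [p * scale + vmin for p in packed]

# One large hourly total widens the range, so the quantization step
# (about scale/2) swamps the small accumulations entirely.
hourly = [0.0, 0.02, 31.7, 0.05, 0.0]  # mm of precipitation
packed, vmin, scale = pack(hourly, nbits=4)
restored = unpack(packed, vmin, scale)
```

With a nonzero reference value, the same rounding can also push true zeros slightly below zero, which is how small negative accumulations arise.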
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Grasslands in British Columbia (BC) play a pivotal role in biodiversity, supporting over 30% of the region's endangered species. However, rapid urbanization and forest encroachment threaten these habitats. This study addresses the urgent need for an accurate, automated method for delineating and monitoring BC's grasslands by employing Geographic Object-Based Image Analysis (GEOBIA) within the Google Earth Engine platform, utilizing high-resolution Sentinel-2 satellite imagery. The approach innovates by integrating superpixel segmentation based on Simple Non-Iterative Clustering (SNIC) with Random Forest classification, aimed at overcoming the mixed-pixel effect prevalent in pixel-based methods. The methodology demonstrates a significant improvement in the accuracy of grassland delineation, achieving an overall classification accuracy of 96%. Specifically, the accuracy of grassland identification increased by 26.6% compared to the previous study, underscoring the effectiveness of GEOBIA for environmental monitoring. This advancement offers a promising tool for the conservation and management of grassland ecosystems in BC, suggesting a scalable model for similar ecological studies worldwide. The findings advocate for the adoption of GEOBIA in remote sensing practices, potentially transforming how grasslands are monitored and conserved, thereby contributing to the preservation of biodiversity.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In an era of climate and biodiversity crises, ecosystem rehabilitation is critical to the ongoing wellbeing of humans and the environment. Coastal ecosystem rehabilitation is particularly important, as these ecosystems sequester large quantities of carbon (known in marine ecosystems as “blue carbon”) thereby mitigating climate change effects while also providing ecosystem services and biodiversity benefits. The recent formal accreditation of blue carbon services is producing a proliferation of rehabilitation projects, which must be monitored and quantified over time and space to assess on-ground outcomes. Consequently, remote sensing techniques such as drone surveys, and machine learning techniques such as image classification, are increasingly being employed to monitor wetlands. However, few projects, if any, have tracked blue carbon restoration across temporal and spatial scales at an accuracy that could be used to adequately map species establishment with low-cost methods. This study presents an open-source, user-friendly workflow, using object-based image classification and a random forest classifier in Google Earth Engine, to accurately classify 4 years of multispectral and photogrammetrically derived digital elevation model drone data at a saltmarsh rehabilitation site on the east coast of Australia (Hunter River estuary, NSW). High classification accuracies were achieved, with >90% accuracy at 0.1 m resolution. At the study site, saltmarsh colonised most suitable areas, increasing by 142% and resulting in 56 tonnes of carbon sequestered, within a 4-year period, providing insight into blue carbon regeneration trajectories. Saltmarsh growth patterns were species-specific, influenced by species’ reproductive and dispersal strategies. Our findings suggested that biotic factors and interactions were important in influencing species’ distributions and succession trajectories. 
This work can help improve the efficiency and effectiveness of restoration planning and monitoring at coastal wetlands and similar ecosystems worldwide, with the potential to apply this approach to other types of remote sensing imagery and to calculate other rehabilitation co-benefits. Importantly, the method can be used to calculate blue carbon habitat creation following tidal restoration of coastal wetlands.