Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities, making them available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface.
Explore our interactive timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.
The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.
The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google’s cloud for your own geospatial analysis.
Use our web-based code editor for fast, interactive algorithm development with instant access to petabytes of data.
Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
Dynamic World is a 10 m near-real-time (NRT) Land Use/Land Cover (LULC) dataset that includes class probabilities and label information for nine classes. Dynamic World predictions are available for the Sentinel-2 L1C collection from 2015-06-27 to present. The revisit frequency of Sentinel-2 is between 2 and 5 days depending on latitude. Dynamic World predictions are generated for Sentinel-2 L1C images with CLOUDY_PIXEL_PERCENTAGE <= 35%. Predictions are masked to remove clouds and cloud shadows using a combination of S2 Cloud Probability, the Cloud Displacement Index, and the Directional Distance Transform.

Images in the Dynamic World collection have names matching the individual Sentinel-2 L1C asset names from which they were derived; e.g., ee.Image('COPERNICUS/S2/20160711T084022_20160711T084751_T35PKT') has a matching Dynamic World image named ee.Image('GOOGLE/DYNAMICWORLD/V1/20160711T084022_20160711T084751_T35PKT'). The probability bands (all bands except the "label" band) collectively sum to 1. To learn more about the Dynamic World dataset and see examples for generating composites, calculating regional statistics, and working with the time series, see the Introduction to Dynamic World tutorial series.

Because Dynamic World class estimations are derived from single images using spatial context from a small moving window, top-1 "probabilities" for predicted land covers that are in part defined by cover over time, like crops, can be comparatively low in the absence of obvious distinguishing features. High-return surfaces in arid climates, sand, sunglint, etc., may also exhibit this phenomenon. To select only pixels that confidently belong to a Dynamic World class, it is recommended to mask Dynamic World outputs by thresholding the estimated "probability" of the top-1 prediction.
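The recommended top-1 thresholding can be sketched with plain NumPy on a toy probability stack. The three-class subset, the pixel values, and the 0.5 cutoff below are all illustrative choices, not values from the dataset documentation:

```python
import numpy as np

# Hypothetical 2x2 patch with probabilities for three of the nine
# Dynamic World classes (real images carry all nine probability bands).
probs = np.array([
    [[0.70, 0.55], [0.30, 0.20]],   # water
    [[0.20, 0.35], [0.50, 0.45]],   # trees
    [[0.10, 0.10], [0.20, 0.35]],   # crops
])

top1 = probs.max(axis=0)        # probability of the most likely class
label = probs.argmax(axis=0)    # index of the most likely class

# Keep only pixels whose top-1 probability clears the threshold;
# everything else is set to a sentinel "unclassified" value of -1.
confident = top1 >= 0.5
masked_label = np.where(confident, label, -1)
```

In Earth Engine itself the same idea would be expressed with image operations on the probability bands rather than NumPy, but the masking logic is identical.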
Welcome to Apiscrapy, your ultimate destination for comprehensive location-based intelligence. As an AI-driven web scraping and automation platform, Apiscrapy excels in converting raw web data into polished, ready-to-use data APIs. With a unique capability to collect Google Address Data, Google Address API, Google Location API, Google Map, and Google Location Data with 100% accuracy, we redefine possibilities in location intelligence.
Key Features:
Unparalleled Data Variety: Apiscrapy offers a diverse range of address-related datasets, including Google Address Data and Google Location Data. Whether you seek B2B address data or detailed insights for various industries, we cover it all.
Integration with Google Address API: Seamlessly integrate our datasets with the powerful Google Address API. This collaboration ensures not just accessibility but a robust combination that amplifies the precision of your location-based insights.
Business Location Precision: Experience a new level of precision in business decision-making with our address data. Apiscrapy delivers accurate and up-to-date business locations, enhancing your strategic planning and expansion efforts.
Tailored B2B Marketing: Customize your B2B marketing strategies with precision using our detailed B2B address data. Target specific geographic areas, refine your approach, and maximize the impact of your marketing efforts.
Use Cases:
Location-Based Services: Companies use Google Address Data to provide location-based services such as navigation, local search, and location-aware advertisements.
Logistics and Transportation: Logistics companies utilize Google Address Data for route optimization, fleet management, and delivery tracking.
E-commerce: Online retailers integrate address autocomplete features powered by Google Address Data to simplify the checkout process and ensure accurate delivery addresses.
Real Estate: Real estate agents and property websites leverage Google Address Data to provide accurate property listings, neighborhood information, and proximity to amenities.
Urban Planning and Development: City planners and developers utilize Google Address Data to analyze population density, traffic patterns, and infrastructure needs for urban planning and development projects.
Market Analysis: Businesses use Google Address Data for market analysis, including identifying target demographics, analyzing competitor locations, and selecting optimal locations for new stores or offices.
Geographic Information Systems (GIS): GIS professionals use Google Address Data as a foundational layer for mapping and spatial analysis in fields such as environmental science, public health, and natural resource management.
Government Services: Government agencies utilize Google Address Data for census enumeration, voter registration, tax assessment, and planning public infrastructure projects.
Tourism and Hospitality: Travel agencies, hotels, and tourism websites incorporate Google Address Data to provide location-based recommendations, itinerary planning, and booking services for travelers.
Discover the difference with Apiscrapy – where accuracy meets diversity in address-related datasets, including Google Address Data, Google Address API, Google Location API, and more. Redefine your approach to location intelligence and make data-driven decisions with confidence. Revolutionize your business strategies today!
After 2022-01-25, Sentinel-2 scenes with PROCESSING_BASELINE '04.00' or above have their DN (value) range shifted by 1000. The HARMONIZED collection shifts data in newer scenes to be in the same range as in older scenes.

Sentinel-2 is a wide-swath, high-resolution, multi-spectral imaging mission supporting Copernicus Land Monitoring studies, including the monitoring of vegetation, soil and water cover, as well as observation of inland waterways and coastal areas. The Sentinel-2 data contain 13 UINT16 spectral bands representing TOA reflectance scaled by 10000. See the Sentinel-2 User Handbook for details.

QA60 is a bitmask band that contained rasterized cloud mask polygons until February 2022, when these polygons stopped being produced. Starting in February 2024, legacy-consistent QA60 bands are constructed from the MSK_CLASSI cloud classification bands. For more details, see the full explanation of how cloud masks are computed.

Each Sentinel-2 product (zip archive) may contain multiple granules. Each granule becomes a separate Earth Engine asset. EE asset IDs for Sentinel-2 assets have the following format: COPERNICUS/S2/20151128T002653_20151128T102149_T56MNN. Here the first numeric part represents the sensing date and time, the second numeric part represents the product generation date and time, and the final 6-character string is a unique granule identifier indicating its UTM grid reference (see MGRS).

The Level-2 data produced by ESA can be found in the collection COPERNICUS/S2_SR. For datasets to assist with cloud and/or cloud shadow detection, see COPERNICUS/S2_CLOUD_PROBABILITY and GOOGLE/CLOUD_SCORE_PLUS/V1/S2_HARMONIZED. For more details on Sentinel-2 radiometric resolution, see this page.
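The harmonization shift and the 10000 reflectance scale factor can be illustrated with a small sketch. The function name and the treatment of PROCESSING_BASELINE as a parseable number are assumptions for illustration; the HARMONIZED collection applies this shift for you:

```python
def harmonize_dn(dn, processing_baseline):
    """Shift a digital number from a baseline-04.00+ scene back into the
    pre-2022 value range, as the HARMONIZED collection does."""
    if float(processing_baseline) >= 4.0:
        return dn - 1000
    return dn

# TOA reflectance is the (harmonized) DN divided by the 10000 scale factor.
reflectance = harmonize_dn(2500, "04.00") / 10000
```

A scene from an older baseline passes through unchanged, so time series built across the 2022-01-25 boundary stay in one consistent range.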
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sentinel2GlobalLULC is a deep learning-ready dataset of RGB images from the Sentinel-2 satellites designed for global land use and land cover (LULC) mapping. Sentinel2GlobalLULC v2.1 contains 194,877 images in GeoTiff and JPEG format corresponding to 29 broad LULC classes. Each image has 224 x 224 pixels at 10 m spatial resolution and was produced by assigning the 25th percentile of all available observations in the Sentinel-2 collection between June 2015 and October 2020 in order to remove atmospheric effects (i.e., clouds, aerosols, shadows, snow, etc.). A spatial purity value was assigned to each image based on the consensus across 15 different global LULC products available in Google Earth Engine (GEE).
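The per-pixel 25th-percentile compositing just described can be sketched with NumPy on a toy time series; the band values and the bright "cloudy" observation below are invented for illustration (the dataset authors computed this over the actual Sentinel-2 collection in GEE):

```python
import numpy as np

# Five observations of a single 2x2 band over time; the bright outlier
# stands in for a cloudy acquisition.
stack = np.array([
    [[120, 130], [110, 140]],
    [[125, 128], [115, 138]],
    [[900, 905], [880, 910]],   # cloudy observation (bright outlier)
    [[118, 131], [108, 136]],
    [[122, 129], [112, 139]],
])

# Per-pixel 25th percentile across time: high outliers such as clouds
# (and low ones such as shadows) are pushed out of the composite.
composite = np.percentile(stack, 25, axis=0)
```

Because the 25th percentile sits well below the cloudy values, the composite stays close to the clear-sky observations at every pixel.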
Our dataset is structured into 3 main zip-compressed folders, an Excel file with a dictionary for class names and descriptive statistics per LULC class, and a Python script to convert RGB GeoTiff images into JPEG format. The first folder, "Sentinel2LULC_GeoTiff.zip", contains 29 zip-compressed subfolders, each corresponding to a specific LULC class with hundreds to thousands of GeoTiff Sentinel-2 RGB images. The second folder, "Sentinel2LULC_JPEG.zip", contains 29 zip-compressed subfolders with a JPEG-formatted version of the same images provided in the first main folder. The third folder, "Sentinel2LULC_CSV.zip", includes 29 zip-compressed CSV files with as many rows as provided images and 12 columns containing the following metadata (this same metadata is provided in the image filenames):
For seven LULC classes, we could not export from GEE all images that fulfilled a spatial purity of 100% since there were millions of them. In this case, we exported a stratified random sample of 14,000 images and provided an additional CSV file with the images actually contained in our dataset. That is, for these seven LULC classes, we provide these 2 CSV files:
To clearly state the geographical coverage of images available in this dataset, we included in version v2.1 a zip-compressed folder called "Geographic_Representativeness.zip". This folder contains a CSV file for each LULC class that provides the complete list of countries represented in that class. Each CSV file has two columns: the first gives the country code and the second gives the number of images provided in that country for that LULC class. In addition to these 29 CSV files, we provide another CSV file that maps each ISO Alpha-2 country code to its original full country name.
© Sentinel2GlobalLULC Dataset by Yassir Benhammou, Domingo Alcaraz-Segura, Emilio Guirado, Rohaifa Khaldi, Boujemâa Achchab, Francisco Herrera & Siham Tabik is marked with Attribution 4.0 International (CC-BY 4.0)
In 2023, Google Maps was the most downloaded map and navigation app in the United States, despite being a standard pre-installed app on Android smartphones. Waze followed, with 9.89 million downloads in the examined period. The app, which comes with maps and access to user-reported traffic information, was developed in 2006 by the company of the same name, which Google acquired in 2013.
Usage of navigation apps in the U.S.
As of 2021, fewer than two in 10 U.S. adults were using a voice assistant in their cars, in order to place voice calls or follow voice directions to a destination. Navigation apps generally offer the possibility for users to download maps to access when offline. Native iOS app Apple Maps, which does not offer this possibility, was by far the navigation app with the highest data consumption, while Google-owned Waze used only 0.23 MB per 20 minutes.
Usage of navigation apps worldwide
In July 2022, Google Maps was the second most popular Google-owned mobile app, with 13.35 million downloads from global users during the examined month. In China, the Gaode Map app, which is operated along with other navigation services by the Alibaba-owned AutoNavi, had approximately 730 million monthly active users as of September 2022.
The AIMS Google Earth Catalogue contains lists of KML/KMZ files, created by AIMS staff, that can be loaded into Google Earth and some other 3D programs. Maps may be used as is, or customized in Google Earth for your specific purposes. Files in the catalogue have been created for a variety of purposes, such as providing high-resolution imagery of islands and reefs and mapping study sites. Staff are encouraged to add their own files to the catalogue; the application contains instructions on how to add and document files to share internally. If you are familiar with RSS feeds, syndication, or news feeds, you might be interested in adding the RSS URL to your feed reader in your web browser or email client. The AIMS Google Earth Catalogue is an initiative of the AIMS Data Centre to provide a facility for sharing KML/KMZ files between AIMS staff.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Google Earth Engine code used to compute the NDVI statistics added to Globe-LFMC. The input of the program is a point shapefile (“samplePlotsShapefile”, extensions .cpg, .dbf, .prj, .shp, .shx) representing the location of each Globe-LFMC site. This shapefile is available as additional data in figshare (see Code Availability). To run this GEE code, the shapefile needs to be uploaded into the GEE Assets and then imported into the Code Editor with the name “plots” (without quotation marks).

Change Notice - GEE_script_for_GlobeLFMC_ndvi_stats_v2.js: The following acknowledgements have been added at the beginning of the code: “Portions of the following code are modifications based on work created and shared by Google in Earth Engine Data Catalog and Earth Engine Guides under the Apache 2.0 License. https://www.apache.org/licenses/LICENSE-2.0”

Change Notice - samplePlotsShapefile_v2: The shapefile describing the database sites has been corrected and updated with the correct coordinates.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.7910/DVN/OGTUVN
MODIS product version comparison application for Google Earth Engine

This is associated with an article published by IEEE in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing on 20 March 2019, available online at doi.org/10.1109/JSTARS.2019.2901404.

Reference: Peter, B.G. and Messina, J.P., 2019. Errors in Time-Series Remote Sensing and an Open Access Application for Detecting and Visualizing Spatial Data Outliers Using Google Earth Engine. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(4), pp.1165-1174.

Link to manuscript: https://ieeexplore.ieee.org/abstract/document/8672086
Interactive Google Earth Engine Application: https://cartoscience.users.earthengine.app/view/versions

Google Earth Engine Code:

// Version 1.1
Map.setCenter(30, 20, 2.5).setOptions('HYBRID').style().set('cursor', 'crosshair');
var countryList = ee.FeatureCollection('USDOS/LSIB_SIMPLE/2017');
var stats = function(year) {
  Map.layers().reset();
  var countrySelected = app.country.countrySelect.getValue();
  var region = countryList.filterMetadata('Country', 'equals', countrySelected).geometry();
  var versionOne = app.inputBox.productBox.getValue();
  var versionTwo = app.inputBox.productBoxTwo.getValue();
  var band = app.inputBox.bandBox.getValue();
  var bandTwo = app.inputBox.bandBoxTwo.getValue();
  if (app.inputBox.customCheckbox.getValue() === true) {
    var latCoord = ee.Number.parse(app.inputBox.latCoordBox.getValue()).getInfo();
    var lonCoord = ee.Number.parse(app.inputBox.lonCoordBox.getValue()).getInfo();
    var distBuffer = ee.Number.parse(app.inputBox.distBox.getValue()).getInfo();
    var distNum = distBuffer*1000;
    region = ee.Geometry.Point([lonCoord,latCoord]).buffer(distNum).bounds();
  }
  var modisCollectionOne = ee.ImageCollection(versionOne).select(band);
  var modisCollectionTwo = ee.ImageCollection(versionTwo).select(bandTwo);
  var imageOne = modisCollectionOne.filter(ee.Filter.calendarRange(year,year,'year')).mean();
  var imageTwo = modisCollectionTwo.filter(ee.Filter.calendarRange(year,year,'year')).mean();
  var abs = imageOne.select(band).subtract(imageTwo.select(bandTwo)).abs().rename("difference");
  var percentilesOne = imageOne.reduceRegion({
    reducer: ee.Reducer.percentile([10,90]), geometry: region, scale: 250, maxPixels: 1e13
  });
  var percentilesTwo = imageTwo.reduceRegion({
    reducer: ee.Reducer.percentile([10,90]), geometry: region, scale: 250, maxPixels: 1e13
  });
  var percentilesAbs = abs.reduceRegion({
    reducer: ee.Reducer.percentile([10,90]), geometry: region, scale: 250, maxPixels: 1e13
  });
  var minOne = ee.Number(percentilesOne.get(band+'_p10')).getInfo();
  var maxOne = ee.Number(percentilesOne.get(band+'_p90')).getInfo();
  var minTwo = ee.Number(percentilesTwo.get(bandTwo+'_p10')).getInfo();
  var maxTwo = ee.Number(percentilesTwo.get(bandTwo+'_p90')).getInfo();
  var minBoth = Math.min(minOne,minTwo);
  var maxBoth = Math.max(maxOne,maxTwo);
  var minAbs = ee.Number(percentilesAbs.get('difference_p10')).getInfo();
  var maxAbs = ee.Number(percentilesAbs.get('difference_p90')).getInfo();
  var grayscale = ['f7f7f7', 'cccccc', '969696', '525252', '141414'];
  Map.addLayer(imageOne.select(band).rename(band+'_'+versionOne).clip(region),
    {min: minBoth, max: maxBoth, palette: grayscale}, band+' • '+versionOne, false);
  Map.addLayer(imageTwo.select(bandTwo).rename(bandTwo+'_'+versionTwo).clip(region),
    {min: minBoth, max: maxBoth, palette: grayscale}, band+' • '+versionTwo, false);
  Map.addLayer(abs.clip(region), {min: minAbs, max: maxAbs, palette: grayscale}, "Difference");
  var options = {title: year+' Histogram', fontSize: 11, legend: {position: 'none'}, series: {0: {color: '7100AA'}}};
  var histogram = ui.Chart.image.histogram(imageOne, region, 10000).setOptions(options);
  var optionsTwo = {title: year+' Histogram', fontSize: 11, legend: {position: 'none'}, series: {0: {color: '0071AA'}}};
  var histogramTwo = ui.Chart.image.histogram(imageTwo, region, 10000).setOptions(optionsTwo);
  var clickLabel = ui.Label('Click map to get pixel time-series',
    {fontWeight: '300', fontSize: '13px', margin: '10px 10px 15px 30px'});
  var clickLabelTwo = ui.Label('Click map to get pixel time-series',
    {fontWeight: '300', fontSize: '13px', margin: '10px 10px 15px 30px'});
  app.rootPanels.panelOne.widgets().set(1, ui.Label('temp'));
  app.rootPanels.panelTwo.widgets().set(1, ui.Label('temp'));
  app.rootPanels.panelOne.widgets().set(1, histogram);
  app.rootPanels.panelOne.widgets().set(2, clickLabel);
  app.rootPanels.panelTwo.widgets().set(1, histogramTwo);
  app.rootPanels.panelTwo.widgets().set(2, clickLabelTwo);
  Map.centerObject(region);
  Map.setOptions('HYBRID');
  Map.onClick(function(coords) {
    var point = ee.Geometry.Point(coords.lon, coords.lat);
    var dot = ui.Map.Layer(point, {color: 'AA0000'}, "Inspector");
    Map.layers().set(3, dot);
    var clickChart = ui.Chart.image.series(modisCollectionOne, point, ee.Reducer.mean(), 10000);
    clickChart.setOptions({
      title: 'Pixel | X: ' + coords.lon.toFixed(2) + ', ' + 'Y: ' + coords.lat.toFixed(2), ...
The Sentinel-1 mission provides data from a dual-polarization C-band Synthetic Aperture Radar (SAR) instrument operating at 5.405 GHz. This collection includes the S1 Ground Range Detected (GRD) scenes, processed using the Sentinel-1 Toolbox to generate a calibrated, ortho-corrected product. The collection is updated daily; new assets are ingested within two days after they become available.

This collection contains all of the GRD scenes. Each scene has one of 3 resolutions (10, 25 or 40 meters), 4 band combinations (corresponding to scene polarization) and 3 instrument modes. Use of the collection in a mosaic context will likely require filtering down to a homogeneous set of bands and parameters. See this article for details of collection use and preprocessing.

Each scene contains either 1 or 2 out of 4 possible polarization bands, depending on the instrument's polarization settings. The possible combinations are:

- VV: single co-polarization, vertical transmit/vertical receive
- HH: single co-polarization, horizontal transmit/horizontal receive
- VV + VH: dual-band cross-polarization, vertical transmit/horizontal receive
- HH + HV: dual-band cross-polarization, horizontal transmit/vertical receive

Each scene also includes an additional 'angle' band that contains the approximate incidence angle from ellipsoid in degrees at every point. This band is generated by interpolating the 'incidenceAngle' property of the 'geolocationGridPoint' gridded field provided with each asset.

Each scene was pre-processed with the Sentinel-1 Toolbox using the following steps:

- Thermal noise removal
- Radiometric calibration
- Terrain correction using SRTM 30, or the ASTER DEM for areas above 60 degrees latitude, where SRTM is not available

The final terrain-corrected values are converted to decibels via log scaling (10*log10(x)).
For more information about these pre-processing steps, please refer to the Sentinel-1 Pre-processing article. For further advice on working with Sentinel-1 imagery, see Guido Lemoine's tutorial on SAR basics and Mort Canty's tutorial on SAR change detection. This collection is computed on-the-fly. If you want to use the underlying collection with raw power values (which is updated faster), see COPERNICUS/S1_GRD_FLOAT.
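The decibel scaling applied in the final pre-processing step, and its inverse for recovering linear power values, can be written as a two-line sketch (the function names are illustrative, not Sentinel-1 Toolbox APIs):

```python
import math

def power_to_db(x):
    """Convert a linear backscatter power value to decibels (10*log10(x))."""
    return 10 * math.log10(x)

def db_to_power(db):
    """Invert the log scaling to recover the linear power value."""
    return 10 ** (db / 10)
```

This is why averaging Sentinel-1 values should be done with care: a mean of dB values is a geometric, not arithmetic, mean of powers, which is one reason the raw-power COPERNICUS/S1_GRD_FLOAT collection exists.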
Nighttime satellite imagery was accessed via Google Earth Engine. Version 4 of the DMSP-OLS Nighttime Lights Time Series consists of cloud-free composites made using all the available archived DMSP-OLS smooth-resolution data for calendar years. In cases where two satellites were collecting data, two composites were produced. The products are 30 arc second grids, spanning -180 to 180 degrees longitude and -65 to 75 degrees latitude. Several attributes are included; we used stable_lights, which represents lights from cities, towns, and other sites with persistent lighting, including gas flares. Ephemeral events, such as fires, have been discarded, and background noise was identified and replaced with values of zero. These data were provided to Google Earth Engine by the National Centers for Environmental Information - National Oceanic and Atmospheric Administration of the United States (see Supporting Documentation). CANUE staff exported the annual data and extracted values of annual mean nighttime brightness for all postal codes in Canada for each year from 1992 to 2013 (DMTI Spatial, 2015).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geospatial raster data and vector data created in the frame of the study "Mapping Arctic Lake Ice Backscatter Anomalies using Sentinel-1 Time Series on Google Earth Engine" submitted to the journal "Remote Sensing" and Python code to reproduce the results.
In addition to the full repository (Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies.zip), two reduced alternatives of this repository are available due to large file size of the full repository:
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_without_IW_result_data.zip contains the same data and Python scripts as the full repository, but results based on IW data and tiled EW delta sigma0 images directly exported from Google Earth Engine have been removed. The merged data (from tiled EW delta sigma0 images) and all other results deduced thereof are included.
Supplement_to_RS_Arctic_Lake_Ice_Backscatter_Anomalies_scripts_and_reference_data_only.zip contains only the Python scripts and reference data. The directory structure was retained for better reproducibility.
Please see the associated README-files for details.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
TerraMetrics, Inc., proposes a Phase II R/R&D program to implement the TerraBlocksTM Server architecture that provides geospatial data authoring, storage and delivery capabilities. TerraBlocks enables successful deployment, display and visual interaction of diverse, massive, multi-dimensional science datasets within popular web-based geospatial platforms like Google Earth and NASA World Wind.
TerraBlocks is a wavelet-encoded data storage technology and server architecture for NASA science data deployment into widely available web-based geospatial applications. The TerraBlocks approach provides dynamic geospatial data services with an emphasis on 1) server and data storage efficiency, 2) maintaining server-to-client science data integrity and 3) offering client-specific delivery of large Earth science geospatial datasets. The TerraBlocks approach bridges the gap between inflexible, but fast, pre-computed tile delivery approaches and highly flexible, but slower, map services approaches.
The pursued technology exploits the use of a network-friendly, wavelet-compressed data format and server architecture that extracts and delivers appropriately-sized blocks of multi-resolution geospatial data to geospatial client applications on demand and in interactive real time.
The Phase II project objective is to provide a complete and fully-functional prototype TerraBlocks data authoring and server software package delivery to NASA and simultaneously set the stage for commercial availability. The Phase III objective is to commercially deploy the TerraBlocks technology, with the collaboration of our commercial and government partners, to provide the enabling basis for widely available third-party data authoring and web-based geospatial application data services.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Many challenges prevail in cropland mapping over large areas, including dealing with massive volumes of datasets and computing capabilities. Accordingly, new opportunities have been opened at a breakneck pace with the launch of new satellites, the continuous improvements in data retrieval technology, and the upsurge of cloud computing solutions such as Google Earth Engine (GEE). Therefore, the present work is an attempt to automate the extraction of multi-year (2016–2020) cropland phenological metrics on GEE and use them as inputs with environmental covariates in a trained machine-learning model to generate high-resolution cropland and crop field-probabilities maps in Morocco. The comparison of our phenological retrievals against the MODIS phenology product shows very close agreement, implying that the suggested approach accurately captures crop phenology dynamics, which allows better cropland classification. The entire country is mapped using a large volume of reference samples collected and labelled with a visual interpretation of high-resolution imagery on Collect-Earth-Online, an online platform for systematically collecting geospatial data. The cropland classification product for the nominal year 2019–2020 showed an overall accuracy of 97.86% with a Kappa of 0.95. When compared to Morocco’s utilized agricultural land (SAU) areas, the cropland probabilities maps demonstrated the ability to accurately estimate sub-national SAU areas with an R-value of 0.9. Furthermore, analyzing cropland dynamics reveals a dramatic decrease in the 2019–2020 season by 2% since the 2018–2019 season and by 5% between 2016 and 2020, which is partly driven by climate conditions, but even more so by the novel coronavirus disease 2019 (COVID-19) that impacted the planting and managing of crops due to government measures taken at the national level, like complete lockdown. 
Such a result proves how critical these methods and associated maps are for scientific studies and decision-making related to food security and agriculture.
https://doi.org/10.3390/rs13214378
This resource includes Jupyter Notebooks that combine (merge) model results with observations. There are four folders:
NWM_SnowAssessment: This folder includes codes required for combining model results with observations. It also has an output folder that contains outputs of running five Jupyter Notebooks within the code folder. The order to run the Jupyter Notebooks is as follows. First run Combine_obs_mod_[*].ipynb where [*] is P (precipitation), SWE (snow water equivalent), TAir (air temperature), and FSNO (snow covered area fraction). This combines the model outputs and observations for each variable. Then, run Combine_obs_mod_P_SWE_TAir_FSNO.ipynb.
NWM_Reanalysis: This folder contains the National Water Model version 2 retrospective simulations that were retrieved and pre-processed at SNOTEL sites using https://doi.org/10.4211/hs.3d4976bf6eb84dfbbe11446ab0e31a0a and https://doi.org/10.4211/hs.1b66a752b0cc467eb0f46bda5fdc4b34.
SNOTEL: This folder contains preprocessed SNOTEL observations that were created using https://doi.org/10.4211/hs.d1fe0668734e4892b066f198c4015b06.
GEE: This folder contains MODIS observations that we downloaded using https://doi.org/10.4211/hs.d287f010b2dd48edb0573415a56d47f8. Note that the existing CSV file is the merged file of the downloaded CSV files.
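The combine step performed by such notebooks can be sketched with a pandas merge; the column names, site codes, and values below are hypothetical, not taken from the files in these folders:

```python
import pandas as pd

# Hypothetical model output and observations for two SNOTEL-like sites.
model = pd.DataFrame({
    "site": ["A", "A", "B"],
    "date": ["2020-01-01", "2020-01-02", "2020-01-01"],
    "swe_mod": [10.0, 12.0, 5.0],
})
obs = pd.DataFrame({
    "site": ["A", "A", "B"],
    "date": ["2020-01-01", "2020-01-03", "2020-01-01"],
    "swe_obs": [9.5, 13.0, 5.2],
})

# Align model and observations on site and date, keeping only
# timestamps present in both (an inner join).
combined = model.merge(obs, on=["site", "date"], how="inner")
```

An inner join keeps the comparison honest: rows where only the model or only the observation exists are dropped rather than padded.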
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Rangeland Department in the Kamloops District from the Government of British Columbia has recently raised concerns regarding the observation on the reduction of the number and the surface area of the grassland ponds in the Lac du Bois Grasslands Protected Area. This study aims to distinguish between the ponds with stable groundwater inputs (i.e. connected ponds) and the ponds with unstable groundwater inputs (i.e. perched ponds) to assist the government in determining reliable water sources. This research started by categorizing ponds with different surface areas as either low resilience or threatened resilience. Different terrain models were created using Light Detection and Ranging (LiDAR) data in addition to the calculation of the topographic wetness index (TWI). The classifications were validated using Google Earth and drone imagery. An overall of 121 ponds was discovered with 86 of them considered as low resilience, while the remaining 27 ponds being threatened resilience. For the low resilience ponds, 19 of them were identified as perched ponds, 47 as connected ponds, and 20 as intermediate ponds with the risk of having unstable groundwater connection that requires further analysis in the field. For the threatened resilience ponds, 5 of them were found to be perched ponds, 17 as connected ponds, and 5 as intermediate ponds. The outcome of the pond distribution indicates that the perched ponds were more likely to be found in an area with a flat slope, surrounded by grass, and low canopy coverage. Additionally, the calculated TWI was unable to differentiate between the pond types as the median groundwater levels are spatially dependent on the local topographic features.
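The topographic wetness index mentioned above is commonly defined as TWI = ln(a / tan(b)), where a is the upslope contributing area per unit contour width and b is the local slope. A one-function sketch of this definition (ours, not the study's exact implementation):

```python
import math

def twi(specific_catchment_area, slope_radians):
    """Topographic wetness index: ln(a / tan(b)).

    Flat, high-accumulation cells score high (potentially wet);
    steep, low-accumulation cells score low (dry).
    """
    return math.log(specific_catchment_area / math.tan(slope_radians))
```

As the study found, a high TWI alone does not distinguish connected from perched ponds when groundwater levels depend on local topographic features rather than surface accumulation.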
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This project aims to use remote sensing data from the Landsat database in Google Earth Engine to evaluate changes in the spatial extent of Bear Lake, located between the US states of Utah and Idaho. This work is part of a term project submitted to Dr. Alfonso Torres-Rua as a requirement to pass the Remote Sensing of Land Surfaces class (CEE6003). More information about the course is provided below. This project uses the geemap Python package (https://github.com/giswqs/geemap) for dealing with the Google Earth Engine datasets. The content of this notebook can be used to:
learn how to retrive the Landsat 8 remote sensed data. The same functions and methodology can also be used to get the data of other Landsat satallites and other satallites such as Sentinel-2, Sentinel-3 and many others. However, slight changes might be required when dealing with other satallites then Landsat. Learn how to create time lapse images that visulaize changes in some parameters over time. Learn how to use supervised classification to track the changes in the spatial extent of water bodies such as Bear Lake that is located between the US states of Utah and Idaho. Learn how to use different functions and tools that are part of the geemap Python package. More information about the geemap Pyhton package can be found at https://github.com/giswqs/geemap and https://github.com/diviningwater/RS_of_Land_Surfaces_laboratory Course information:
Name: Remote Sensing of Land Surfaces (CEE6003)
Instructor: Alfonso Torres-Rua (alfonso.torres@usu.edu)
School: Utah State University
Semester: Spring 2023
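The notebook itself runs supervised classification through geemap and Earth Engine, which requires authentication and cannot be reproduced offline. As a stand-in, the sketch below illustrates the underlying idea of delineating water extent with a spectral water index: it computes McFeeters' NDWI = (Green − NIR) / (Green + NIR) and thresholds it, using NumPy on toy reflectance values. The band values, the zero threshold, and the 30 m Landsat pixel size used for the area conversion are assumptions for illustration, not the notebook's actual workflow.

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR); water pixels tend to be > 0."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids division by zero

def water_mask(green, nir, threshold=0.0):
    """Boolean mask of likely water pixels."""
    return ndwi(green, nir) > threshold

# Toy example: one water-like pixel (high green, low NIR) and one land-like pixel.
mask = water_mask([[0.30, 0.05]], [[0.10, 0.40]])
# Convert pixel count to area, assuming 30 m Landsat pixels:
area_m2 = mask.sum() * 30 * 30
```

In the real notebook this step would operate on Landsat 8 bands pulled from Earth Engine (e.g. via ee.ImageCollection filtering), with the mask summed over the lake region for each acquisition date to build the extent time series.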
The Alabama Department of Transportation (ALDOT) and the U.S. Geological Survey (USGS) studied several sites in the northern East Gulf Coastal Plain of Alabama to investigate the effects of newly installed box culverts on the natural conditions of the streams they traverse (Pugh and Gill, 2021). Data collection for the study spanned approximately 10 years and included before-, during-, and after-construction phases of box culvert installations at selected stream sites. The objectives of the project were to (1) assess the degree and extent of changes in geomorphic conditions, suspended-sediment concentrations, turbidity, and benthic macroinvertebrate populations at selected small streams following box culvert installation and (2) identify any substantial relationships between observed changes in geomorphology and benthic macroinvertebrate populations. Aerial imagery for each study site, taken before, during, and after culvert construction, was downloaded from Google Earth (https://earth.google.com/web/) and is presented as separate Portable Document Format (PDF) files labeled by site name and imagery date. The aerial imagery was examined for natural or anthropogenic changes in the areas surrounding the study sites. For example, examination of the High Log Creek imagery from 2013 and 2015 shows that the forested area northwest of the study site was clear cut and that culvert construction began sometime between when the two images were taken.
The S2 cloud probability is created with the sentinel2-cloud-detector library (using LightGBM). All bands are upsampled to 10 m resolution using bilinear interpolation before the gradient boosting algorithm is applied. The resulting 0..1 floating-point probability is scaled to 0..100 and stored as a UINT8. Areas missing any or all …
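The scaling step described above is simple but worth making concrete, since the stored UINT8 band must be rescaled before comparing it against a 0..1 probability threshold. The sketch below is a minimal NumPy illustration of that encoding; the function names and the clipping of out-of-range values are assumptions, not the library's actual implementation.

```python
import numpy as np

def pack_probability(prob):
    """Encode a 0..1 float cloud probability as the 0..100 UINT8 band format."""
    prob = np.clip(np.asarray(prob, dtype=float), 0.0, 1.0)
    return np.round(prob * 100).astype(np.uint8)

def unpack_probability(stored):
    """Decode the stored UINT8 band back to a 0..1 float probability."""
    return np.asarray(stored).astype(float) / 100.0
```

Note that the encoding quantizes probabilities to 1% steps, so a round trip through pack/unpack is accurate only to about 0.005.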