35 datasets found
  1. Data from: Google Earth Engine (GEE)

    • hub.arcgis.com
    • data.amerigeoss.org
    • +5more
    Updated Nov 28, 2018
    Cite
    AmeriGEOSS (2018). Google Earth Engine (GEE) [Dataset]. https://hub.arcgis.com/items/bb1b131beda24006881d1ab019205277
    Explore at:
    Dataset updated
    Nov 28, 2018
    Dataset authored and provided by
    AmeriGEOSS
    Description

    Meet Earth Engine. Google Earth Engine combines a multi-petabyte catalog of satellite imagery and geospatial datasets with planetary-scale analysis capabilities and makes it available for scientists, researchers, and developers to detect changes, map trends, and quantify differences on the Earth's surface. Satellite imagery + your algorithms + real-world applications.

    Global-scale insight: Explore the interactive Timelapse viewer to travel back in time and see how the world has changed over the past twenty-nine years. Timelapse is one example of how Earth Engine can help gain insight into petabyte-scale datasets.

    Ready-to-use datasets: The public data archive includes more than thirty years of historical imagery and scientific datasets, updated and expanded daily. It contains over twenty petabytes of geospatial data instantly available for analysis.

    Simple, yet powerful API: The Earth Engine API is available in Python and JavaScript, making it easy to harness the power of Google's cloud for your own geospatial analysis.

    "Google Earth Engine has made it possible for the first time in history to rapidly and accurately process vast amounts of satellite imagery, identifying where and when tree cover change has occurred at high resolution. Global Forest Watch would not exist without it. For those who care about the future of the planet Google Earth Engine is a great blessing!" - Dr. Andrew Steer, President and CEO of the World Resources Institute.

    Convenient tools: Use the web-based Code Editor for fast, interactive algorithm development with instant access to petabytes of data.

    Scientific and humanitarian impact: Scientists and non-profits use Earth Engine for remote sensing research, predicting disease outbreaks, natural resource management, and more.
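    The API summary above is only descriptive; as a hedged illustration (not part of this dataset entry), a minimal Earth Engine Python sketch might look like the following. The collection ID and band names are assumptions about the public Landsat 8 Collection 2 surface reflectance archive.

```python
# Minimal Earth Engine Python sketch (illustrative; assumes an authenticated account).
import ee

ee.Authenticate()   # opens a browser sign-in flow on first use
ee.Initialize()

# Assumed public collection ID: Landsat 8 Collection 2, Level 2 surface reflectance.
point = ee.Geometry.Point(-122.4, 37.8)
composite = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterBounds(point)
    .filterDate("2020-01-01", "2020-12-31")
    .median()
)

# NDVI from the assumed SR band names (SR_B5 = NIR, SR_B4 = red).
ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
print(ndvi.getInfo())  # prints image metadata, not pixel values
```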

  2. Super resolution enhancement of Landsat imagery and detections of high-latitude lakes

    • data.niaid.nih.gov
    Updated Jul 15, 2024
    Cite
    Ethan D. Kyzivat (2024). Super resolution enhancement of Landsat imagery and detections of high-latitude lakes [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7306218
    Explore at:
    Dataset updated
    Jul 15, 2024
    Dataset authored and provided by
    Ethan D. Kyzivat
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This archive contains native resolution and super resolution (SR) Landsat imagery, derivative lake shorelines, and previously published lake shorelines derived from airborne remote sensing, used here for comparison. Landsat images are from 1985 (Landsat 5) and 2017 (Landsat 8); they are cropped to the study areas used in the corresponding paper and converted to 8-bit format. SR images were created using the model of Lezine et al. (2021a, 2021b), which outputs imagery at 10x finer resolution; the SR images have the same extent and bit depth as the included native-resolution scenes. Reference shoreline datasets are from Kyzivat et al. (2019a, 2019b) for the year 2017 and Walter Anthony et al. (2021a, 2021b) for Fairbanks, AK, USA in 1985. All derived and comparison shoreline datasets are cropped to the same extent, filtered to a common minimum lake size (40 m2 for 2017; 13 m2 for 1985), and smoothed via 10 m morphological closing. Compared to reference lakes larger than 500 m2, the SR-derived lakes have F-1 scores of 0.75 (2017 data) and 0.60 (1985 data); accuracy is worse for smaller lakes. More details are in the forthcoming accompanying publication.

    All raster images are in cloud-optimized GeoTIFF (COG) format (.tif) with file naming shown in Table 1. Vector shoreline datasets are in ESRI shapefile format (.shp, .dbf, etc.), and file names use the abbreviations LR for low resolution (native), SR for super resolution, and GT for "ground truth" comparison airborne-derived datasets.
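    A hedged sketch of how one might open these files in Python (rasterio for the COGs, geopandas for the shapefiles). The raster name is taken from Table 1 below; the shapefile name is a placeholder, not a documented file name.

```python
# Hedged sketch for opening one COG and one shapefile from this archive.
import rasterio
import geopandas as gpd

with rasterio.open("LC08_20170708_yflats_cog.tif") as src:   # name from Table 1
    print(src.crs, src.res, src.count)   # projection, pixel size, band count
    band1 = src.read(1)                  # 8-bit array per the description

shorelines = gpd.read_file("example_SR_shorelines.shp")       # hypothetical file name
print(len(shorelines), "lake polygons")
```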

    Landsat-5 and Landsat-8 images courtesy of the U.S. Geological Survey

    For an interactive map demo of these datasets via Google Earth Engine Apps, visit: https://ekyzivat.users.earthengine.app/view/super-resolution-demo

    Table 1: File naming scheme based on region, with some regions requiring two-scene mosaics.

    Region | Landsat ID | Mosaic name
    Yukon Flats Basin | LC08_L2SP_068014_20170708_20200903_02_T1; LC08_L2SP_068013_20170708_20201015_02_T1 | LC08_20170708_yflats_cog.tif
    Old Crow Flats | LC08_L2SP_067012_20170903_20200903_02_T1 | -
    Mackenzie River Delta | LC08_L2SP_064011_20170728_20200903_02_T1; LC08_L2SP_064012_20170728_20200903_02_T1 | LC08_20170728_inuvik_cog.tif
    Canadian Shield Margin | LC08_L2SP_050015_20170811_20200903_02_T1; LC08_L2SP_048016_20170829_20200903_02_T1 | LC08_20170811_cshield-margin_cog.tif
    Canadian Shield near Baker Creek | LC08_L2SP_046016_20170831_20200903_02_T1 | -
    Canadian Shield near Daring Lake | LC08_L2SP_045015_20170723_20201015_02_T1 | -
    Peace-Athabasca Delta | LC08_L2SP_043019_20170810_20200903_02_T1 | -
    Prairie Potholes North 1 | LC08_L2SP_041021_20170812_20200903_02_T1; LC08_L2SP_041022_20170812_20200903_02_T1 | LC08_20170812_potholes-north1_cog.tif
    Prairie Potholes North 2 | LC08_L2SP_038023_20170823_20200903_02_T1 | -
    Prairie Potholes South | LC08_L2SP_031027_20170907_20200903_02_T1 | -
    Fairbanks | LT05_L2SP_070014_19850831_20200918_02_T1 | -

    References:

    Kyzivat, E. D., Smith, L. C., Pitcher, L. H., Fayne, J. V., Cooley, S. W., Cooper, M. G., Topp, S. N., Langhorst, T., Harlan, M. E., Horvat, C., Gleason, C. J., & Pavelsky, T. M. (2019b). A high-resolution airborne color-infrared camera water mask for the NASA ABoVE campaign. Remote Sensing, 11(18), 2163. https://doi.org/10.3390/rs11182163

    Kyzivat, E.D., L.C. Smith, L.H. Pitcher, J.V. Fayne, S.W. Cooley, M.G. Cooper, S. Topp, T. Langhorst, M.E. Harlan, C.J. Gleason, and T.M. Pavelsky. 2019a. ABoVE: AirSWOT Water Masks from Color-Infrared Imagery over Alaska and Canada, 2017. ORNL DAAC, Oak Ridge, Tennessee, USA. https://doi.org/10.3334/ORNLDAAC/1707

    Lezine, E. M. D., Kyzivat, E. D., & Smith, L. C. (2021a). Super-resolution surface water mapping on the Canadian Shield using Planet CubeSat images and a generative adversarial network. Canadian Journal of Remote Sensing, 47(2), 261–275. https://doi.org/10.1080/07038992.2021.1924646

    Lezine, E. M. D., Kyzivat, E. D., & Smith, L. C. (2021b). Super-resolution surface water mapping on the Canadian Shield using Planet CubeSat images and a generative adversarial network. Canadian Journal of Remote Sensing, 47(2), 261–275. https://doi.org/10.1080/07038992.2021.1924646

    Walter Anthony, K., Lindgren, P., Hanke, P., Engram, M., Anthony, P., Daanen, R. P., Bondurant, A., Liljedahl, A. K., Lenz, J., Grosse, G., Jones, B. M., Brosius, L., James, S. R., Minsley, B. J., Pastick, N. J., Munk, J., Chanton, J. P., Miller, C. E., & Meyer, F. J. (2021a). Decadal-scale hotspot methane ebullition within lakes following abrupt permafrost thaw. Environ. Res. Lett., 16, 35010. https://doi.org/10.1088/1748-9326/abc848

    Walter Anthony, K., and P. Lindgren. 2021b. ABoVE: Historical Lake Shorelines and Areas near Fairbanks, Alaska, 1949-2009. ORNL DAAC, Oak Ridge, Tennessee, USA. https://doi.org/10.3334/ORNLDAAC/1859

  3. Southern California 60-cm Urban Land Cover Classification

    • data.mendeley.com
    Updated Nov 2, 2022
    Cite
    Red Willow Coleman (2022). Southern California 60-cm Urban Land Cover Classification [Dataset]. http://doi.org/10.17632/zykyrtg36g.2
    Explore at:
    Dataset updated
    Nov 2, 2022
    Authors
    Red Willow Coleman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Southern California Air Basin (SoCAB), California
    Description

    This dataset represents a high resolution urban land cover classification map across the southern California Air Basin (SoCAB) with a spatial resolution of 60 cm in urban regions and 10 m in non-urban regions. This map was developed to support NASA JPL-based urban biospheric CO2 modeling in Los Angeles, CA. Land cover classification was derived from a novel fusion of Sentinel-2 (10-60 m x 10-60 m) and 2016 NAIP (60 cm x 60 cm) imagery and provides identification of impervious surface, non-photosynthetic vegetation, shrub, tree, grass, pools and lakes.

    Land cover classes in the .tif file:
    0: Impervious surface
    1: Tree (mixed evergreen/deciduous)
    2: Grass (assumed irrigated)
    3: Shrub
    4: Non-photosynthetic vegetation
    5: Water (masked using MNDWI/NDWI)

    Google Earth Engine interactive app displaying this map: https://wcoleman.users.earthengine.app/view/socab-irrigated-classification
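    As a hedged illustration of working with the classes listed above, the following Python sketch tabulates pixel counts per class code; the GeoTIFF file name is a placeholder, not the actual file name in this record.

```python
# Hedged sketch: tabulate the class codes listed above from the classification GeoTIFF.
import numpy as np
import rasterio

with rasterio.open("socab_landcover_60cm.tif") as src:   # hypothetical file name
    classes = src.read(1)

labels = {0: "Impervious surface", 1: "Tree", 2: "Grass", 3: "Shrub",
          4: "Non-photosynthetic vegetation", 5: "Water"}
values, counts = np.unique(classes, return_counts=True)
for v, c in zip(values, counts):
    print(labels.get(int(v), f"class {v}"), c)
```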

    A portion of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Support from the Earth Science Division OCO-2 program is acknowledged. Copyright 2020. All rights reserved.

  4. Aridity Index Mapper Google Earth Engine App

    • hydroshare.org
    • beta.hydroshare.org
    • +1more
    zip
    Updated Feb 21, 2024
    Cite
    Fitsume T. Wolkeba; Brad Peter (2024). Aridity Index Mapper Google Earth Engine App [Dataset]. http://doi.org/10.4211/hs.e5c0e11d49d24762a7edc82e1adea70c
    Explore at:
    zip (7.7 KB)
    Dataset updated
    Feb 21, 2024
    Dataset provided by
    HydroShare
    Authors
    Fitsume T. Wolkeba; Brad Peter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2016 - Dec 31, 2021
    Area covered
    Contiguous United States (CONUS)
    Description

    The aridity index, also known as the dryness index, is the ratio of potential evapotranspiration to precipitation. It indicates water deficiency and is used to classify locations as humid or dry. The evaporation ratio (evaporation index), on the other hand, indicates the availability of water in watersheds and is inversely proportional to water availability. Over long periods, renewable water resource availability is the residual precipitation after evaporation losses are deducted. These two ratios therefore provide very useful information about water availability. Recognizing the potential of the aridity index and evaporation ratio, this app was developed in Google Earth Engine using NLDAS-2 and MODIS products to map the temporal variability of the aridity index and evaporation ratio over CONUS. The app can be found at https://cartoscience.users.earthengine.app/view/aridity-index.
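    A minimal sketch of the two ratios described above (plain Python with illustrative values; this is not the app's Earth Engine code):

```python
# Hedged sketch of the two ratios described above.
def aridity_index(pet_mm: float, precip_mm: float) -> float:
    """Aridity (dryness) index = potential evapotranspiration / precipitation."""
    return pet_mm / precip_mm

def evaporation_ratio(et_mm: float, precip_mm: float) -> float:
    """Evaporation ratio = evapotranspiration / precipitation."""
    return et_mm / precip_mm

# Made-up annual totals in mm: a larger aridity index means a drier location.
print(aridity_index(pet_mm=1400.0, precip_mm=600.0))    # ~2.33 -> water-deficient
print(evaporation_ratio(et_mm=450.0, precip_mm=600.0))  # 0.75 -> 25% of P remains
```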

  5. Google Earth Images for Ships

    • kaggle.com
    Updated Jul 11, 2021
    Cite
    Albert Guo (2021). Google Earth Images for Ships [Dataset]. https://www.kaggle.com/realgjl/google-earth-images-for-ships/tasks
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 11, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Albert Guo
    Description

    Google Earth Data for Ships

    The original data set is from Tom Lutherborrow. On top of that, I used Roboflow to auto-orient the images and resize them (stretched to 416x416).
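    For illustration, an equivalent stretch-resize in Python might look like this (Roboflow performed the actual preprocessing; the file name below is hypothetical):

```python
# Hedged sketch of the stretch-resize step described above.
from PIL import Image

img = Image.open("ship_example.jpg")      # hypothetical image from the dataset
resized = img.resize((416, 416))          # stretch to 416x416, ignoring aspect ratio
resized.save("ship_example_416.jpg")
```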

    Acknowledgements

    We wouldn't be here without the help of Tom Lutherborrow who provided data at Kaggle ( https://www.kaggle.com/tomluther/ships-in-google-earth ). If you owe any attributions or thanks, include them here along with any citations of past research.

  6. Google Location History (GLH) mobility dataset

    • zenodo.org
    Updated Jan 4, 2024
    Cite
    Thiago Andrade; Thiago Andrade (2024). Google Location History (GLH) mobility dataset [Dataset]. http://doi.org/10.5281/zenodo.8349569
    Explore at:
    Dataset updated
    Jan 4, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Thiago Andrade; Thiago Andrade
    Description

    This is a GPS dataset acquired from Google.

    Google tracks the user’s device location through Google Maps, which works on Android devices, the iPhone, and the web.
    It is possible to see the Timeline from the user’s settings in the Google Maps app on Android or directly from the Google Timeline website.
    It has detailed information such as when an individual is walking, driving, or flying.
    Such tracking can be enabled or disabled on demand by the user, directly from the smartphone or via the website.
    Google offers a Takeout service where users can download all their data, or select which data to download from the Google products they use.
    The dataset contains 120,847 instances from a single user, covering a period of 9 months (253 unique days) from February 2019 to October 2019.
    Each record comprises a (latitude, longitude) pair and a timestamp.
    All the data was delivered in a single CSV file.
    Because the locations in this dataset are well known to the researchers, it can serve as ground truth in many mobility studies.
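    A hedged sketch for loading the CSV with pandas; the file name and column names ("timestamp", "latitude", "longitude") are assumptions, since the exact schema is not documented here.

```python
# Hedged sketch: load and inspect the single CSV described above.
import pandas as pd

df = pd.read_csv("glh_mobility.csv", parse_dates=["timestamp"])  # hypothetical names
df = df.sort_values("timestamp")
print(df.shape)                                  # expected on the order of 120,847 rows
print(df[["latitude", "longitude"]].describe())  # spatial extent of the trajectory
```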

    Please cite the following papers in order to use the datasets:

    T. Andrade, B. Cancela, and J. Gama, "Discovering locations and habits from human mobility data," Annals of Telecommunications, vol. 75, no. 9, pp. 505–521, 2020.
    10.1007/s12243-020-00807-x (DOI)
    and
    T. Andrade, B. Cancela, and J. Gama, "From mobility data to habits and common pathways," Expert Systems, vol. 37, no. 6, p. e12627, 2020.
    10.1111/exsy.12627 (DOI)

  7. Dynamic Science Data Services for Display, Analysis and Interaction in Widely-Accessible, Web-Based Geospatial Platforms Project

    • datadiscoverystudio.org
    Updated Mar 12, 2015
    Cite
    (2015). Dynamic Science Data Services for Display, Analysis and Interaction in Widely-Accessible, Web-Based Geospatial Platforms Project [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/fccf86ecd23d4e5899f4bd1f9ce54050/html
    Explore at:
    Dataset updated
    Mar 12, 2015
    Description

    TerraMetrics, Inc., proposes an SBIR Phase I R/R&D program to investigate and develop a key web services architecture that provides data processing, storage and delivery capabilities and enables successful deployment, display and visual interaction of diverse, massive, multi-dimensional science datasets within popular web-based geospatial platforms like Google Earth, Google Maps, NASA's World Wind and others. The proposed innovation exploits the use of a wired and wireless, network-friendly, wavelet-compressed data format and server architecture that extracts and delivers appropriately-sized blocks of multi-resolution geospatial data to client applications on demand and in real time. The resulting format and architecture intelligently delivers client-required data from a server, or multiple distributed servers, to a wide range of networked client applications that can build a composite, user-interactive 3D visualization of fused, disparate, geospatial datasets. The proposed innovation provides a highly scalable approach to data storage and management while offering geospatial data services to client science applications and a wide range of client and connection types from broadband-connected desktop computers to wireless cell phones. TerraMetrics offers to research the feasibility of the proposed innovation and demonstrate it within the context of a live, server-supported, Google Earth-compatible client application with high-density, multi-dimensional NASA science data.

  8. Sentiment Analysis Dataset of the Sheikh Zayed Grand Mosque’s Visitor Reviews on Google Maps

    • data.mendeley.com
    Updated Apr 29, 2024
    Cite
    Elinda Elinda (2024). Sentiment Analysis Dataset of the Sheikh Zayed Grand Mosque’s Visitor Reviews on Google Maps [Dataset]. http://doi.org/10.17632/hxbbfwtygn.1
    Explore at:
    Dataset updated
    Apr 29, 2024
    Authors
    Elinda Elinda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was taken from Google Maps user reviews at the URL https://maps.app.goo.gl/sdiqePFZGBZsSFQz9. It consists of three Excel files named original_dataset, cleaned_data, and labeled_data. The original dataset contained 2,743 reviews; after preprocessing, 2,643 reviews remained, which were then processed and labeled.

  9. Data from: A dataset of spatiotemporally sampled MODIS Leaf Area Index with corresponding Landsat surface reflectance over the contiguous US

    • agdatacommons.nal.usda.gov
    application/csv
    Updated May 1, 2025
    Cite
    Yanghui Kang; Mutlu Ozdogan; Feng Gao; Martha C. Anderson; William A. White; Yun Yang; Yang Yang; Tyler A. Erickson (2025). A dataset of spatiotemporally sampled MODIS Leaf Area Index with corresponding Landsat surface reflectance over the contiguous US [Dataset]. http://doi.org/10.15482/USDA.ADC/1521097
    Explore at:
    application/csv
    Dataset updated
    May 1, 2025
    Dataset provided by
    Ag Data Commons
    Authors
    Yanghui Kang; Mutlu Ozdogan; Feng Gao; Martha C. Anderson; William A. White; Yun Yang; Yang Yang; Tyler A. Erickson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Contiguous United States, United States
    Description

    Leaf Area Index (LAI) is a fundamental vegetation structural variable that drives energy and mass exchanges between the plant and the atmosphere. Moderate-resolution (300 m – 7 km) global LAI data products have been widely applied to track global vegetation changes, drive Earth system models, monitor crop growth and productivity, etc. Yet, cutting-edge applications in climate adaptation, hydrology, and sustainable agriculture require LAI information at higher spatial resolution (< 100 m) to model and understand heterogeneous landscapes. This dataset was built to assist a machine-learning-based approach for mapping LAI from 30 m resolution Landsat images across the contiguous US (CONUS). The data was derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) Version 6 LAI/FPAR, Landsat Collection 1 surface reflectance, and NLCD Land Cover datasets over 2006 – 2018 using Google Earth Engine. Each record/sample/row includes a MODIS LAI value, corresponding Landsat surface reflectance in green, red, NIR, and SWIR1 bands, a land cover (biome) type, geographic location, and other auxiliary information. Each sample represents a MODIS LAI pixel (500 m) within which a single biome type dominates 90% of the area. The spatial homogeneity of the samples was further controlled by a screening process based on the coefficient of variation of the Landsat surface reflectance. In total, there are approximately 1.6 million samples, stratified by biome, Landsat sensor, and saturation status from the MODIS LAI algorithm. This dataset can be used to train machine learning models and generate LAI maps for Landsat 5, 7, and 8 surface reflectance images within CONUS. Detailed information on the sample generation and quality control can be found in the related journal article.

    Resources in this dataset:

    Resource Title: README. File Name: LAI_train_samples_CONUS_README.txt. Resource Description: Description and metadata of the main dataset. Resource Software Recommended: Notepad, url: https://www.microsoft.com/en-us/p/windows-notepad/9msmlrh6lzf3?activetab=pivot:overviewtab

    Resource Title: LAI_training_samples_CONUS. File Name: LAI_train_samples_CONUS_v0.1.1.csv. Resource Description: This CSV file consists of the training samples for estimating Leaf Area Index based on Landsat surface reflectance images (Collection 1, Tier 1). Each sample has a MODIS LAI value and corresponding surface reflectance derived from Landsat pixels within the MODIS pixel. Contact: Yanghui Kang (kangyanghui@gmail.com)
    Column description

    UID: Unique identifier. Format: LATITUDE_LONGITUDE_SENSOR_PATHROW_DATE
    Landsat_ID: Landsat image ID
    Date: Landsat image date in "YYYYMMDD"
    Latitude: Latitude (WGS84) of the MODIS LAI pixel center
    Longitude: Longitude (WGS84) of the MODIS LAI pixel center
    MODIS_LAI: MODIS LAI value in "m2/m2"
    MODIS_LAI_std: MODIS LAI standard deviation in "m2/m2"
    MODIS_LAI_sat: 0 - MODIS Main (RT) method used, no saturation; 1 - MODIS Main (RT) method with saturation
    NLCD_class: Majority class code from the National Land Cover Dataset (NLCD)
    NLCD_frequency: Percentage of the area covered by the majority class from NLCD
    Biome: Biome type code mapped from NLCD (see below for more information)
    Blue: Landsat surface reflectance in the blue band
    Green: Landsat surface reflectance in the green band
    Red: Landsat surface reflectance in the red band
    Nir: Landsat surface reflectance in the near-infrared band
    Swir1: Landsat surface reflectance in the shortwave infrared 1 band
    Swir2: Landsat surface reflectance in the shortwave infrared 2 band
    Sun_zenith: Solar zenith angle from the Landsat image metadata (scene-level value)
    Sun_azimuth: Solar azimuth angle from the Landsat image metadata (scene-level value)
    NDVI: Normalized Difference Vegetation Index computed from Landsat surface reflectance
    EVI: Enhanced Vegetation Index computed from Landsat surface reflectance
    NDWI: Normalized Difference Water Index computed from Landsat surface reflectance
    GCI: Green Chlorophyll Index = Nir/Green - 1

    Biome code

    1 - Deciduous Forest
    2 - Evergreen Forest
    3 - Mixed Forest
    4 - Shrubland
    5 - Grassland/Pasture
    6 - Cropland
    7 - Woody Wetland
    8 - Herbaceous Wetland
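    A hedged Python sketch using the documented file and column names: it loads the CSV, maps the biome codes above to labels, and recomputes GCI = Nir/Green - 1 as a consistency check.

```python
# Hedged sketch: inspect the training samples using the documented columns.
import pandas as pd

df = pd.read_csv("LAI_train_samples_CONUS_v0.1.1.csv")

biome_names = {1: "Deciduous Forest", 2: "Evergreen Forest", 3: "Mixed Forest",
               4: "Shrubland", 5: "Grassland/Pasture", 6: "Cropland",
               7: "Woody Wetland", 8: "Herbaceous Wetland"}
df["Biome_name"] = df["Biome"].map(biome_names)

# GCI should match the documented definition Nir/Green - 1.
gci_check = df["Nir"] / df["Green"] - 1.0
print((gci_check - df["GCI"]).abs().max())        # ~0 if columns are as documented
print(df.groupby("Biome_name")["MODIS_LAI"].mean())
```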

    Reference Datasets: All data was accessed through Google Earth Engine (Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment).

    MODIS Version 6 Leaf Area Index/FPAR 4-day L5 Global 500m: Myneni, R., Y. Knyazikhin, T. Park. MOD15A2H MODIS/Terra Leaf Area Index/FPAR 8-Day L4 Global 500m SIN Grid V006. 2015, distributed by NASA EOSDIS Land Processes DAAC, https://doi.org/10.5067/MODIS/MOD15A2H.006

    Landsat 5/7/8 Collection 1 Surface Reflectance: Landsat Level-2 Surface Reflectance Science Product courtesy of the U.S. Geological Survey. Masek, J.G., Vermote, E.F., Saleous, N.E., Wolfe, R., Hall, F.G., Huemmrich, K.F., Gao, F., Kutler, J., and Lim, T-K. (2006). A Landsat surface reflectance dataset for North America, 1990–2000. IEEE Geoscience and Remote Sensing Letters 3(1):68-72. http://dx.doi.org/10.1109/LGRS.2005.857030. Vermote, E., Justice, C., Claverie, M., & Franch, B. (2016). Preliminary analysis of the performance of the Landsat 8/OLI land surface reflectance product. Remote Sensing of Environment. http://dx.doi.org/10.1016/j.rse.2016.04.008.

    National Land Cover Dataset (NLCD): Yang, Limin, Jin, Suming, Danielson, Patrick, Homer, Collin G., Gass, L., Bender, S.M., Case, Adam, Costello, C., Dewitz, Jon A., Fry, Joyce A., Funk, M., Granneman, Brian J., Liknes, G.C., Rigge, Matthew B., Xian, George. A new generation of the United States National Land Cover Database—Requirements, research priorities, design, and implementation strategies: ISPRS Journal of Photogrammetry and Remote Sensing, v. 146, p. 108–123, https://doi.org/10.1016/j.isprsjprs.2018.09.006

    Resource Software Recommended: Microsoft Excel, url: https://www.microsoft.com/en-us/microsoft-365/excel

  10. Data from: Soil and Land Information

    • data.nsw.gov.au
    • researchdata.edu.au
    • +1more
    html, pdf +1
    Updated Mar 13, 2024
    Cite
    NSW Department of Climate Change, Energy, the Environment and Water (2024). Soil and Land Information [Dataset]. https://data.nsw.gov.au/data/dataset/soil-and-land-information
    Explore at:
    spatial viewer, html, pdf
    Dataset updated
    Mar 13, 2024
    Dataset provided by
    NSW Department of Climate Change, Energy, the Environment and Water
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Statewide soil and land information can be discovered and viewed through eSPADE or SEED. Datasets include soil profiles, soil landscapes, soil and land resources, acid sulfate soil risk mapping, hydrogeological landscapes, land systems and land use. There are also various statewide coverages of specific soil and land characteristics, such as soil type, land and soil capability, soil fertility, soil regolith, soil hydrology and modelled soil properties.

    Both eSPADE and SEED enable soil and land data to be viewed on a map. SEED focuses more on the holistic approach by enabling you to add other environmental layers such as mining boundaries, vegetation or water monitoring points. SEED also provides access to metadata and data quality statements for layers.

    eSPADE provides greater functionality and allows you to drill down into soil points or maps to access detailed information such as reports and images. You can navigate to a specific location, then search and select multiple objects and access detailed information about them. You can also export spatial information for use in other applications such as Google Earth™ and GIS software.

    eSPADE is a free internet information system that works on desktop computers, laptops and mobile devices such as smartphones and tablets, and uses a Google Maps-based platform familiar to most users. It has over 42,000 soil profile descriptions and approximately 4,000 soil landscape descriptions. This includes the maps and descriptions from the Soil Landscape Mapping program. eSPADE also includes the base maps underpinning Biophysical Strategic Agricultural Land (BSAL).

    For more information on eSPADE visit: https://www.environment.nsw.gov.au/topics/land-and-soil/soil-data/espade

  11. Data from: Sightings Map of Invasive Plants in Portugal

    • gbif.org
    Updated Jan 20, 2021
    Cite
    Hélia Marchante; Maria Cristina Morais; Hélia Marchante; Maria Cristina Morais (2021). Sightings Map of Invasive Plants in Portugal [Dataset]. http://doi.org/10.15468/ic8tid
    Explore at:
    Dataset updated
    Jan 20, 2021
    Dataset provided by
    Global Biodiversity Information Facility (https://www.gbif.org/)
    CFE - Centre for Functional Ecology, Department of Life Sciences, University of Coimbra
    Authors
    Hélia Marchante; Maria Cristina Morais; Hélia Marchante; Maria Cristina Morais
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 22, 2013 - Feb 15, 2020
    Area covered
    Portugal (mainland and the Archipelagos of Madeira and the Azores)
    Description

    The dataset available through the Sightings Map of Invasive Plants in Portugal results from the Citizen Science platform INVASORAS.PT, which records sightings of invasive plants in Portugal (mainland and the Archipelagos of Madeira and the Azores). The platform was originally created in 2013, in the context of the project “Plantas Invasoras: uma ameaça vinda de fora” (Media Ciência nº 16905), developed by researchers from the Centre for Functional Ecology of the University of Coimbra and the Coimbra College of Agriculture of the Polytechnic Institute of Coimbra. That project has since ended, but the platform is maintained by the same team. Sightings are reported by users who register on the platform and submit them either directly on the website (https://invasoras.pt/pt/mapeamento) or using an app for Android (https://play.google.com/store/apps/details?id=pt.uc.invasoras2) and iOS (https://apps.apple.com/pt/app/plantas-invasoras-em-portugal/id1501776731) devices. Only validated sightings are available in the dataset. Validation is performed by experts from the INVASORAS.PT team, based on the photographs submitted along with the sightings. As with all citizen science projects, there is some risk of erroneous records and duplication of sightings.

  12. Tarik Tyane Annotations Dataset

    • universe.roboflow.com
    zip
    Updated Nov 11, 2021
    Cite
    ismailiadam1296@gmail.com (2021). Tarik Tyane Annotations Dataset [Dataset]. https://universe.roboflow.com/ismailiadam1296-gmail-com/tarik-tyane-annotations
    Explore at:
    zip
    Dataset updated
    Nov 11, 2021
    Dataset provided by
    Gmail (http://gmail.com/)
    Authors
    ismailiadam1296@gmail.com
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    ParkingSpots Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Smart Parking Assistance: Utilize the Tarik-Tyane-annotations model to develop an application that assists drivers in locating available parking spots, including valid and invalid spots like handicapped and fire hydrant zones, in real-time, making the parking experience more convenient and efficient.

    2. Traffic Management: Integrate the model into city traffic management systems to monitor and manage parking spaces in high traffic areas. By identifying improper parking, such as in fire hydrant zones, bus stops, or handicapped spots, authorities can enforce parking regulations proactively.

    3. Urban Planning and Analysis: Use the Tarik-Tyane-annotations model to analyze public parking data, identifying patterns and trends related to parking spots' utilization. This can help city planners make informed decisions on the allocation and distribution of parking spaces, optimizing infrastructure for future growth.

    4. Navigation App Integration: Enhance navigation applications like Google Maps or Waze with the model's parking spot information, allowing users to find not only available parking spaces nearby but also information on rules and restrictions in real-time, avoiding fines or inconveniences.

    5. Emergency Response: Equip emergency response vehicles, such as fire trucks and ambulances, with the Tarik-Tyane-annotations model to identify fire hydrant locations or restricted parking areas quickly. This can help emergency services to navigate congested areas and ensure unblocked access to critical infrastructure during emergencies.

  13. Data from: Code for: Mapping 33 years of sugarcane evolution in São Paulo state, Brazil, using Landsat imagery and generalized space-time classifiers

    • dataverse.cirad.fr
    • dataverse-qualification.cirad.fr
    txt
    Updated Apr 15, 2022
    Cite
    CIRAD Dataverse (2022). Code for: Mapping 33 years of sugarcane evolution in São Paulo state, Brazil, using Landsat imagery and generalized space-time classifiers [Dataset]. http://doi.org/10.18167/DVN1/DQXFZG
    Explore at:
    txt (5130)
    Dataset updated
    Apr 15, 2022
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    State of São Paulo, Brazil
    Dataset funded by
    CNPq
    FAPESP
    CAPES
    UNDP
    Description

    This dataset includes the Google Earth Engine (GEE) code used to produce the results of the following scientific article: Luciano, A.C.S, Campagnuci, B.C.G., le Maire, G., Mapping 33 years of sugarcane evolution in São Paulo state, Brazil, using Landsat imagery and generalized space-time classifiers, Remote Sensing Applications: Society and Environment (accepted for publication)

  14. Niagara Open Data

    • catalog.civicdataecosystem.org
    Updated May 13, 2025
    Cite
    (2025). Niagara Open Data [Dataset]. https://catalog.civicdataecosystem.org/dataset/niagara-open-data
    Explore at:
    Dataset updated
    May 13, 2025
    Description

    The Ontario government generates and maintains thousands of datasets. Since 2012, we have shared data with Ontarians via a data catalogue. Open data is data that is shared with the public. Click here to learn more about open data and why Ontario releases it.

    Ontario’s Open Data Directive states that all data must be open, unless there is good reason for it to remain confidential. Ontario’s Chief Digital and Data Officer also has the authority to make certain datasets available publicly. Datasets listed in the catalogue that are not open will have one of the following labels:

    If you want to use data you find in the catalogue, that data must have a licence – a set of rules that describes how you can use it. A licence: Most of the data available in the catalogue is released under Ontario’s Open Government Licence. However, each dataset may be shared with the public under other kinds of licences or no licence at all. If a dataset doesn’t have a licence, you don’t have the right to use the data. If you have questions about how you can use a specific dataset, please contact us.

    The Ontario Data Catalogue endeavors to publish open data in a machine readable format. For machine readable datasets, you can simply retrieve the file you need using the file URL. The Ontario Data Catalogue is built on CKAN, which means the catalogue has the following features you can use when building applications. APIs (application programming interfaces) let software applications communicate directly with each other. If you are using the catalogue in a software application, you might want to extract data from the catalogue through the catalogue API. Note: All Datastore API requests to the Ontario Data Catalogue must be made server-side. The catalogue's collection of dataset metadata (and dataset files) is searchable through the CKAN API. The Ontario Data Catalogue has more than just CKAN's documented search fields; you can also search these custom fields. You can also use the CKAN API to retrieve metadata about a particular dataset and check for updated files. Read the complete documentation for CKAN's API. Some of the open data in the Ontario Data Catalogue is available through the Datastore API. You can also search and access the machine-readable open data that is available in the catalogue. How to use the API feature: Read the complete documentation for CKAN's Datastore API.

    The Ontario Data Catalogue contains a record for each dataset that the Government of Ontario possesses. Some of these datasets will be available to you as open data. Others will not be available to you. This is because the Government of Ontario is unable to share data that would break the law or put someone's safety at risk. You can search for a dataset with a word that might describe a dataset or topic. Use words like “taxes” or “hospital locations” to discover what datasets the catalogue contains. You can search for a dataset from 3 spots on the catalogue: the homepage, the dataset search page, or the menu bar available across the catalogue. On the dataset search page, you can also filter your search results. You can select filters on the left hand side of the page to limit your search for datasets with your favourite file format, datasets that are updated weekly, datasets released by a particular organization, or datasets that are released under a specific licence. Go to the dataset search page to see the filters that are available to make your search easier.
    You can also do a quick search by selecting one of the catalogue’s categories on the homepage. These categories can help you see the types of data we have on key topic areas. When you find the dataset you are looking for, click on it to go to the dataset record. Each dataset record will tell you whether the data is available, and, if so, tell you about the data available. An open dataset might contain several data files. These files might represent different periods of time, different sub-sets of the dataset, different regions, language translations, or other breakdowns. You can select a file and either download it or preview it. Make sure to read the licence agreement to make sure you have permission to use it the way you want. Read more about previewing data. A non-open dataset may not be available for many reasons. Read more about non-open data. Read more about restricted data. Data that is non-open may still be subject to freedom of information requests.

    The catalogue has tools that enable all users to visualize the data in the catalogue without leaving the catalogue – no additional software needed. Have a look at our walk-through of how to make a chart in the catalogue. Get automatic notifications when datasets are updated. You can choose to get notifications for individual datasets, an organization’s datasets or the full catalogue. You don’t have to provide any personal information – just subscribe to our feeds using any feed reader you like using the corresponding notification web addresses. Copy those addresses and paste them into your reader. Your feed reader will let you know when the catalogue has been updated.

    The catalogue provides open data in several file formats (e.g., spreadsheets, geospatial data, etc). Learn about each format and how you can access and use the data each file contains.

    A file that has a list of items and values separated by commas without formatting (e.g. colours, italics, etc.) or extra visual features. This format provides just the data that you would display in a table. XLSX (Excel) files may be converted to CSV so they can be opened in a text editor. How to access the data: Open with any spreadsheet software application (e.g., Open Office Calc, Microsoft Excel) or text editor. Note: This format is considered machine-readable; it can be easily processed and used by a computer. Files that have visual formatting (e.g. bolded headers and colour-coded rows) can be hard for machines to understand; these elements make a file more human-readable and less machine-readable.

    A file that provides information without formatted text or extra visual features that may not follow a pattern of separated values like a CSV. How to access the data: Open with any word processor or text editor available on your device (e.g., Microsoft Word, Notepad).

    A spreadsheet file that may also include charts, graphs, and formatting. How to access the data: Open with a spreadsheet software application that supports this format (e.g., Open Office Calc, Microsoft Excel). Data can be converted to a CSV for a non-proprietary format of the same data without formatted text or extra visual features.

    A shapefile provides geographic information that can be used to create a map or perform geospatial analysis based on location, points/lines and other data about the shape and features of the area. It includes required files (.shp, .shx, .dbt) and might include corresponding files (e.g., .prj). How to access the data: Open with a geographic information system (GIS) software program (e.g., QGIS).

    A package of files and folders. The package can contain any number of different file types. How to access the data: Open with an unzipping software application (e.g., WinZIP, 7Zip). Note: If a ZIP file contains .shp, .shx, and .dbt file types, it is an ArcGIS ZIP: a package of shapefiles which provide information to create maps or perform geospatial analysis that can be opened with ArcGIS (a geographic information system software program).

    A file that provides information related to a geographic area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: Open using a GIS software application to create a map or do geospatial analysis. It can also be opened with a text editor to view raw information. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

    A text-based format for sharing data in a machine-readable way that can store data with more unconventional structures such as complex lists. How to access the data: Open with any text editor (e.g., Notepad) or access through a browser. Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

    A text-based format to store and organize data in a machine-readable way that can store data with more unconventional structures (not just data organized in tables). How to access the data: Open with any text editor (e.g., Notepad). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

    A file that provides information related to an area (e.g., phone number, address, average rainfall, number of owl sightings in 2011, etc.) and its geospatial location (i.e., points/lines). How to access the data: Open with a geospatial software application that supports the KML format (e.g., Google Earth). Note: This format is machine-readable, and it can be easily processed and used by a computer. Human-readable data (including visual formatting) is easy for users to read and understand.

    This format contains files with data from tables used for statistical analysis and data visualization of Statistics Canada census data. How to access the data: Open with the Beyond 20/20 application.

    A database which links and combines data from different files or applications (including HTML, XML, Excel, etc.). The database file can be converted to a CSV/TXT to make the data machine-readable, but human-readable formatting will be lost. How to access the data: Open with Microsoft Office Access (a database management system used to develop application software).

    A file that keeps the original layout and
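    Since the catalogue described above is built on CKAN, a minimal, hedged sketch of a package_search call might look like the following; the base URL https://data.ontario.ca is an assumption about where the catalogue's CKAN API is hosted, while the action path and response fields follow standard CKAN conventions.

```python
# Hedged sketch of a CKAN package_search request against the catalogue described above.
import requests

BASE = "https://data.ontario.ca/api/3/action"   # assumed CKAN endpoint
resp = requests.get(f"{BASE}/package_search",
                    params={"q": "hospital locations", "rows": 5})
resp.raise_for_status()
result = resp.json()["result"]

print(result["count"], "matching datasets")
for pkg in result["results"]:
    print("-", pkg["title"], "| licence:", pkg.get("license_title"))
```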

  15. Monthly aggregated Water Vapor MODIS MCD19A2 (1 km): Long-term data (2000-2022)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 11, 2024
    Cite
    Leandro Parente (2024). Monthly aggregated Water Vapor MODIS MCD19A2 (1 km): Long-term data (2000-2022) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8192543
    Explore at:
    Dataset updated
    Jul 11, 2024
    Dataset provided by
    Leandro Parente
    Rolf Simoes
    Tomislav Hengl
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This data is part of the Monthly aggregated Water Vapor MODIS MCD19A2 (1 km) dataset. Check the related identifiers section on the Zenodo side panel to access other parts of the dataset.

    General Description

    The monthly aggregated water vapor dataset is derived from MCD19A2 v061. The water vapor data measures the column above ground retrieved from MODIS near-IR bands at 0.94 μm. The dataset spans 2000 to 2022 and covers the entire globe. It can be used in many applications, such as water cycle modeling, vegetation mapping, and soil mapping. This dataset includes:

    Monthly time-series: Derived from MCD19A2 v061, this data provides a monthly aggregated mean and standard deviation of daily water vapor time-series data from 2000 to 2022. Only positive non-cloudy pixels were considered valid observations to derive the mean and the standard deviation. The remaining no-data values were filled using the TMWM algorithm. This dataset also includes smoothed mean and standard deviation values using the Whittaker method. The quality assessment layers and the number of valid observations for each month can provide an indication of the reliability of the monthly mean and standard deviation values.
    Yearly time-series: Derived from the monthly time-series, this data provides yearly aggregated statistics of the monthly time-series data.
    Long-term data (2000-2022): Derived from the monthly time-series, this data provides long-term aggregated statistics for the whole series of monthly observations.

    Data Details

    Time period: 2000–2022
    Type of data: Water vapor column above the ground (0.001 cm)
    How the data was collected or derived: Derived from MCD19A2 v061 using Google Earth Engine. Cloudy pixels were removed and only positive values of water vapor were considered to compute the statistics. The time-series gap-filling and time-series smoothing were computed using the Scikit-map Python package.
    Statistical methods used: Four statistics were derived: standard deviation and percentiles 25, 50, and 75.
    Limitations or exclusions in the data: The dataset does not include data for Antarctica.
    Coordinate reference system: EPSG:4326
    Bounding box (Xmin, Ymin, Xmax, Ymax): (-180.00000, -62.00081, 179.99994, 87.37000)
    Spatial resolution: 1/120 d.d. = 0.008333333 (1 km)
    Image size: 43,200 x 17,924
    File format: Cloud Optimized GeoTIFF (COG)

    Support
    If you discover a bug, artifact, or inconsistency, or if you have a question, please use one of the following channels:

    Technical issues and questions about the code: GitLab Issues
    General questions and comments: LandGIS Forum

    Name convention
    To ensure consistency and ease of use across and within the projects, we follow the standard Open-Earth-Monitor file-naming convention. The convention works with 10 fields that describe important properties of the data, so users can search files and prepare data analyses without needing to open them. The fields are:

    1. Generic variable name: wv = water vapor
    2. Variable procedure combination: mcd19a2v061.seasconv = MCD19A2 v061 with gap-filling algorithm
    3. Position in the probability distribution / variable type: m = mean | sd = standard deviation | n = number of observations | qa = quality assessment
    4. Spatial support: 1km
    5. Depth reference: s = surface
    6. Time reference begin time: 20000101 = 2000-01-01
    7. Time reference end time: 20221231 = 2022-12-31
    8. Bounding box: go = global (without Antarctica)
    9. EPSG code: epsg.4326 = EPSG:4326
    10. Version code: v20230619 = 2023-06-19 (creation date)
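    A small sketch of splitting a file name that follows the 10-field convention above; the underscore separator and the example name are assumptions pieced together from the field examples, not a file name copied from this record.

```python
# Hedged sketch: split a hypothetical file name into the 10 fields listed above.
fields = ["variable", "procedure", "statistic", "spatial_support", "depth_reference",
          "time_begin", "time_end", "bounding_box", "epsg_code", "version"]

example = "wv_mcd19a2v061.seasconv_m_1km_s_20000101_20221231_go_epsg.4326_v20230619.tif"
parts = example.removesuffix(".tif").split("_")   # assumed underscore separator
print(dict(zip(fields, parts)))
```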

  16. Global overview of cloud-, snow-, and shade-free Landsat (1982-2024) and Sentinel-2 (2015-2024) data

    • data.niaid.nih.gov
    • search.dataone.org
    • +1more
    zip
    Updated Apr 11, 2025
    Cite
    Katarzyna Ewa Lewińska; Stefan Ernst; David Frantz; Ulf Leser; Patrick Hostert (2025). Global overview of cloud-, snow-, and shade-free Landsat (1982-2024) and Sentinel-2 (2015-2024) data [Dataset]. http://doi.org/10.5061/dryad.gb5mkkwxm
    Explore at:
    zip
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Humboldt-Universität zu Berlin
    Trier University of Applied Sciences
    Authors
    Katarzyna Ewa Lewińska; Stefan Ernst; David Frantz; Ulf Leser; Patrick Hostert
    License

    CC0 1.0: https://spdx.org/licenses/CC0-1.0.html

    Description

    Landsat and Sentinel-2 acquisitions are among the most frequently used medium-resolution (i.e., 10-30 m) optical data. The data are extensively used in terrestrial vegetation applications, including but not limited to land cover and land use mapping, vegetation condition and phenology monitoring, and disturbance and change mapping. While the Landsat archives alone provide over 40 years, and counting, of continuous and consistent observations, since mid-2015 Sentinel-2 has enabled a revisit frequency of up to 2 days. Although the spatio-temporal availability of both data archives is well known at the scene level, information on the actual availability of usable (i.e., cloud-, snow-, and shade-free) observations at the pixel level needs to be explored for each study to ensure correct parametrization of the algorithms used, and thus robustness of subsequent analyses. However, a priori data exploration is time- and resource-consuming, and thus is rarely performed. As a result, the spatio-temporal heterogeneity of usable data is often inadequately accounted for in the analysis design, risking ill-advised selection of algorithms and hypotheses, and thus inferior quality of final results. Here we present a global dataset comprising precomputed daily availability of usable Landsat and Sentinel-2 data sampled at the pixel level in a regular 0.18°-point grid. We based the dataset on the complete 1982-2024 Landsat surface reflectance data (Collection 2) and 2015-2024 Sentinel-2 top-of-the-atmosphere reflectance scenes (pre-Collection-1 and Collection-1). Derivation of cloud-, snow-, and shade-free observations followed the methodology developed in our recent study on data availability over Europe (Lewińska et al., 2023; https://doi.org/10.20944/preprints202308.2174.v2). Furthermore, we expanded the dataset with growing season information derived from the 2001-2019 time series of the yearly 500 m MODIS land cover dynamics product (MCD12Q2; Collection 6). As such, our dataset presents a unique overview of the spatio-temporal availability of usable daily Landsat and Sentinel-2 data at the global scale, hence offering much-needed a priori information aiding the identification of appropriate methods and challenges for terrestrial vegetation analyses at the local to global scales. The dataset can be viewed using the dedicated GEE App (link in Related Works). As of February 2025 the dataset has been extended with the 2024 data.

    Methods

    We based our analyses on freely and openly accessible Landsat and Sentinel-2 data archives available in Google Earth Engine (Gorelick et al., 2017). We used all Landsat surface reflectance Level 2, Tier 1, Collection 2 scenes acquired with the Thematic Mapper (TM) (Earth Resources Observation And Science (EROS) Center, 1982), Enhanced Thematic Mapper (ETM+) (Earth Resources Observation And Science (EROS) Center, 1999), and Operational Land Imager (OLI) (Earth Resources Observation And Science (EROS) Center, 2013) scanners between 22nd August 1982 and 31st December 2024, and Sentinel-2 TOA reflectance Level-1C scenes (pre-Collection-1 (European Space Agency, 2015, 2021) and Collection-1 (European Space Agency, 2022)) acquired with the MultiSpectral Instrument (MSI) between 23rd June 2015 and 31st December 2024. We implemented a conservative pixel-quality screening to identify cloud-, snow-, and shade-free land pixels.
For the Landsat time series, we relied on the inherent pixel quality bands (Foga et al., 2017; Zhu & Woodcock, 2012) excluding all pixels flagged as cloud, snow, or shadow as well as pixels with the fill-in value of 20,000 (scale factor 0.0001; (Zhang et al., 2022)). Furthermore, due to the Landsat 7 orbit drift (Qiu et al., 2021) we excluded all ETM+ scenes acquired after 31st December 2020. Because Sentinel-2 Level-2A quality masks lack the desired scope and accuracy (Baetens et al., 2019; Coluzzi et al., 2018), we resorted to Level-1C scenes accompanied by the supporting Cloud Probability product. Furthermore, we employed a selection of conditions, including a threshold on Band 10 (SWIR-Cirrus), which is not available at Level‑2A. Overall, our Sentinel-2-specific cloud, shadow, and snow screening comprised:

    1. exclusion of all pixels flagged as clouds and cirrus in the inherent ‘QA60’ cloud mask band;
    2. exclusion of all pixels with cloud probability >50% as defined in the corresponding Cloud Probability product available for each scene;
    3. exclusion of cirrus clouds (B10 reflectance >0.01);
    4. exclusion of clouds based on Cloud Displacement Analysis (CDI < -0.5) (Frantz et al., 2018);
    5. exclusion of dark pixels (B8 reflectance <0.16) within cloud shadows modelled for each scene with scene-specific sun parameters for the clouds identified in the previous steps, assuming a cloud height of 2,000 m;
    6. exclusion of pixels within a 40-m buffer (two pixels at 20-m resolution) around each identified cloud and cloud shadow object;
    7. exclusion of snow pixels identified with the snow mask branch of the Sen2Cor processor (Main-Knorn et al., 2017).
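    As a hedged illustration of the first two screening steps above (QA60 cloud/cirrus flags and the >50% cloud probability threshold), an Earth Engine Python sketch might look like this; it is not the authors' implementation, and the cirrus, CDI, shadow, buffer, and snow steps are omitted.

```python
# Hedged sketch of two of the Sentinel-2 screening steps described above.
import ee

ee.Initialize()  # assumes prior authentication

s2 = ee.ImageCollection("COPERNICUS/S2")                         # Level-1C TOA scenes
s2_prob = ee.ImageCollection("COPERNICUS/S2_CLOUD_PROBABILITY")  # per-scene cloud probability

def mask_clouds(image):
    qa = image.select("QA60")
    # Bits 10 and 11 of QA60 flag opaque clouds and cirrus, respectively.
    clear = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
    prob = ee.Image(
        s2_prob.filter(ee.Filter.eq("system:index", image.get("system:index"))).first()
    ).select("probability")
    return image.updateMask(clear.And(prob.lte(50)))

masked = s2.filterDate("2020-06-01", "2020-07-01").map(mask_clouds)
```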

    Through applying the data screening, we generated a collection of daily availability records for the Landsat and Sentinel-2 data archives. We next subsampled the resulting binary time series with a regular 0.18° x 0.18°-point grid defined in the EPSG:4326 projection, obtaining 475,150 points located over land between -179.8867°W and 179.5733°E and 83.50834°N and -59.05167°S. Owing to the substantial amount of data comprised in the Landsat and Sentinel-2 archives and the computationally demanding process of cloud-, snow-, and shade-screening, we performed the subsampling in batches corresponding to a 4° x 4° regular grid and consolidated the final data in post-processing.

    We derived the pixel-specific growing season information from the 2001-2019 time series of the yearly 500-m MODIS land cover dynamics product (MCD12Q2; Collection 6) available in Google Earth Engine. We only used information on the start and the end of a growing season, excluding all pixels with quality below ‘best’. When a pixel went through more than one growing cycle per year, we approximated a growing season as the period between the beginning of the first growing cycle and the end of the last growing cycle. To fill in data gaps arising from low-quality data and insufficiently pronounced seasonality (Friedl et al., 2019), we used a 5x5 mean moving window filter to ensure better spatial continuity of our growing season datasets. Following Lewińska et al. (2023), we defined the start of the season as the pixel-specific 25th percentile of the 2001-2019 distribution of start-of-season dates, and the end of the season as the pixel-specific 75th percentile of the 2001-2019 distribution of end-of-season dates. Finally, we subsampled the start and end of the season datasets with the same regular 0.18° x 0.18°-point grid defined in the EPSG:4326 projection.

    References:

    Baetens, L., Desjardins, C., & Hagolle, O. (2019). Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sensing, 11(4), 433. https://doi.org/10.3390/rs11040433
    Coluzzi, R., Imbrenda, V., Lanfredi, M., & Simoniello, T. (2018). A first assessment of the Sentinel-2 Level 1-C cloud mask product to support informed surface analyses. Remote Sensing of Environment, 217, 426–443. https://doi.org/10.1016/j.rse.2018.08.009
    Earth Resources Observation And Science (EROS) Center. (1982). Collection-2 Landsat 4-5 Thematic Mapper (TM) Level-1 Data Products [Other]. U.S. Geological Survey. https://doi.org/10.5066/P918ROHC
    Earth Resources Observation And Science (EROS) Center. (1999). Collection-2 Landsat 7 Enhanced Thematic Mapper Plus (ETM+) Level-1 Data Products [dataset]. U.S. Geological Survey. https://doi.org/10.5066/P9TU80IG
    Earth Resources Observation And Science (EROS) Center. (2013). Collection-2 Landsat 8-9 OLI (Operational Land Imager) and TIRS (Thermal Infrared Sensor) Level-1 Data Products [Other]. U.S. Geological Survey. https://doi.org/10.5066/P975CC9B
    European Space Agency. (2015). Sentinel-2 MSI Level-1C TOA Reflectance [dataset]. European Space Agency. https://doi.org/10.5270/S2_-d8we2fl
    European Space Agency. (2021). Sentinel-2 MSI Level-1C TOA Reflectance, Collection 0 [dataset]. European Space Agency. https://doi.org/10.5270/S2_-d8we2fl
    European Space Agency. (2022). Sentinel-2 MSI Level-1C TOA Reflectance [dataset]. European Space Agency. https://doi.org/10.5270/S2_-742ikth
    Foga, S., Scaramuzza, P. L., Guo, S., Zhu, Z., Dilley, R. D., Beckmann, T., Schmidt, G. L., Dwyer, J. L., Joseph Hughes, M., & Laue, B. (2017). Cloud detection algorithm comparison and validation for operational Landsat data products. Remote Sensing of Environment, 194, 379–390. https://doi.org/10.1016/j.rse.2017.03.026
    Frantz, D., Haß, E., Uhl, A., Stoffels, J., & Hill, J. (2018). Improvement of the Fmask algorithm for Sentinel-2 images: Separating clouds from bright surfaces based on parallax effects. Remote Sensing of Environment, 215, 471–481. https://doi.org/10.1016/j.rse.2018.04.046
    Friedl, M., Josh, G., & Sulla-Menashe, D. (2019). MCD12Q2 MODIS/Terra+Aqua Land Cover Dynamics Yearly L3 Global 500m SIN Grid V006 [dataset]. NASA EOSDIS Land Processes DAAC. https://doi.org/10.5067/MODIS/MCD12Q2.006
    Gorelick, N., Hancher, M., Dixon, M., Ilyushchenko, S., Thau, D., & Moore, R. (2017). Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sensing of Environment, 202, 18–27. https://doi.org/10.1016/j.rse.2017.06.031
    Lewińska, K. E., Ernst, S., Frantz, D., Leser, U., & Hostert, P. (2024). Global Overview of Usable Landsat and Sentinel-2 Data for 1982–2023. Data in Brief, 57. https://doi.org/10.1016/j.dib.2024.111054
    Main-Knorn, M., Pflug, B., Louis, J., Debaecker, V., Müller-Wilm, U., & Gascon, F. (2017). Sen2Cor for Sentinel-2. In L. Bruzzone, F. Bovolo,

  17. M

    DNRGPS

    • gisdata.mn.gov
    • data.wu.ac.at
    windows_app
    Updated Sep 7, 2022
    Cite
    Natural Resources Department (2022). DNRGPS [Dataset]. https://gisdata.mn.gov/dataset/dnrgps
    Explore at:
    Available download formats: windows_app
    Dataset updated
    Sep 7, 2022
    Dataset provided by
    Natural Resources Department
    Description

    DNRGPS is an update to the popular DNRGarmin application. DNRGPS and its predecessor were built to transfer data between Garmin handheld GPS receivers and GIS software.

    DNRGPS was released as Open Source software with the intention that the GPS user community will become stewards of the application, initiating future modifications and enhancements.

    DNRGPS does not require installation; simply run the application's .exe file.

    See the DNRGPS application documentation for more details.

    Compatible with: Windows (XP, 7, 8, 10, and 11), ArcGIS shapefiles and file geodatabases, Google Earth, most handheld Garmin GPS units, and other NMEA-output GPS devices

    Limited compatibility: interactions with ArcMap layer files and ArcMap graphics are no longer supported; use shapefiles or file geodatabases instead.

    Prerequisite: .NET Framework 4

    DNR Data and Software License Agreement

    Subscribe to the DNRGPS announcement list to be notified of upgrades or updates.

  18. g

    Places of carpooling | gimi9.com

    • gimi9.com
    Cite
    Places of carpooling | gimi9.com [Dataset]. https://gimi9.com/dataset/eu_637f631431dd87cd017fb05d/
    Explore at:
    License

    Licence Ouverte / Open Licence 1.0 (https://www.etalab.gouv.fr/wp-content/uploads/2014/05/Open_Licence.pdf)
    License information was derived automatically

    Description

    The data are collected here to provide information on carpooling facilities to individuals and to support inter-municipal mobility planning. They contain the exact location of the carpool areas in each municipality, with longitude and latitude. These data could be used to register carpool areas on map applications, such as Google Maps, or on carpooling websites, so that users know where they are. The publication of this dataset was carried out as part of a data challenge with the students of Sciences Po Saint-Germain-en-Laye.

  19. D

    NSW Landuse 2017 v1.5

    • data.nsw.gov.au
    arcgis rest service +4
    Updated May 9, 2025
    + more versions
    Cite
    NSW Department of Climate Change, Energy, the Environment and Water (2025). NSW Landuse 2017 v1.5 [Dataset]. https://data.nsw.gov.au/data/dataset/nsw-landuse-2017-v1p5-f0ed-clone-a95d
    Explore at:
    Available download formats: zip, wms, wmts, pdf, arcgis rest service
    Dataset updated
    May 9, 2025
    Dataset provided by
    NSW Department of Climate Change, Energy, the Environment and Water
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New South Wales
    Description

    The 2017 Landuse captures how the landscape in NSW is being used for food production, forestry, nature conservation, infrastructure and urban development. It can be used to monitor changes in the landscape and identify impacts on biodiversity values and individual ecosystems.

    The NSW 2017 Landuse mapping is dated September 2017.

    This is version 1.5 of the dataset, published December 2023.

    Version 1.5 of the 2017 Landuse incorporates the following updates:

    Previous versions:
    • Version 1.4: internal update (not published)
    • Version 1.3: internal update (not published)
    • Version 1.2: published 24 June 2020; fine-scale update to the Greater Sydney Metropolitan Area
    • Version 1: published August 2019

    The 2017 Landuse is based on aerial and satellite imagery available for NSW. These include, but are not limited to: digital aerial imagery (ADS) captured by the NSW Department of Customer Service (DCS), high-resolution urban (Conurbation) digital aerial imagery captured on behalf of DCS, SPOT 5, 6 & 7 (Airbus), Planet™, Sentinel-2 (European Space Agency) and Landsat (NASA) satellite imagery. Mapping also includes commercially available imagery from Nearmap™ and Google Earth™, along with Google Street View™.

    Mapping takes into consideration ancillary datasets such as tenure (National Parks and State forests), cadastre, road parcels, land zoning, topographic information and Google Maps, in conjunction with visual interpretation and field validation of patterns and features on the ground.

    The 2017 Landuse was captured on screen using ArcGIS (geographic information system software) at a scale of 1:8,000 (or better), and features are mapped down to 2 hectares in size. Exceptions were made for targeted Landuse classes such as horticulture, intensive animal husbandry and urban environments (including the Greater Sydney Metropolitan region), which were mapped at a finer scale.

    The 2017 Landuse has complete coverage of NSW. It also includes updates to the fine-scale horticulture mapping for the east coast of NSW (Newcastle to the Queensland border) and the Murray-Riverina Region. This horticultural mapping covers operations to the commodity level, based on field work and high-resolution imagery interpretation.

    Landuse classes are assigned based on activities that have occurred in the last 5-10 years and that may be part of a rotational practice. Time-series Landsat information has been used in conjunction with more recent satellite imagery to determine whether grasslands have been disturbed or subject to ongoing land management activities over the past 30 years.

    The reliability scale of the dataset is 1:10,000.

    Mapping has been subject to a peer review and quality assurance process.

    Land use information has been captured in accordance with standards set by the Australian Collaborative Land Use Mapping Program (ACLUMP) and using the Australian Land Use and Management (ALUM) Classification Version 8. The ALUM classification is based upon the modified Baxter & Russell classification and presented according to the specifications contained in http://www.agriculture.gov.au/abares/aclump/land-use/alum-classification.

    This product will be incorporated into the national catchment-scale land use product 2020, which will be available as a 50 m raster from the Australian Bureau of Agricultural and Resource Economics and Sciences (ABARES): http://www.agriculture.gov.au/abares/aclump/land-use/data-download

    The Department of Planning, Industry and Environment (DPIE) will continue to complete land use mapping at approximately 5-year intervals.

    The 2017 Landuse product is considered a benchmark product that can be used for Landuse change reporting. Ongoing improvements will be undertaken to correct errors or otherwise refine the mapping.

  20. Data from: CROPSIGHT-US: An Object-Based Crop Type Ground Truth Dataset...

    • zenodo.org
    txt, zip
    Updated Jun 27, 2025
    Cite
    Zhijie Zhou; Yin Liu; Chunyuan Diao (2025). CROPSIGHT-US: An Object-Based Crop Type Ground Truth Dataset Using Street View and Sentinel-2 Satellite Imagery across the Contiguous United States [Dataset]. http://doi.org/10.5281/zenodo.15702415
    Explore at:
    Available download formats: txt, zip
    Dataset updated
    Jun 27, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zhijie Zhou; Yin Liu; Chunyuan Diao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Contiguous United States, United States
    Description

    CropSight-US is the first continent-scale, object-based crop type ground truth dataset for the contiguous United States (CONUS), spanning 2013 to 2023. It contains 124,419 cropland field records across 17 major crop types and 294 Agricultural Statistics Districts, each with uncertainty information. The most current version is `cropsight-us_app_dat_v1.0.0.zip`.

    Our crop type ground truth dataset includes the following 17 crop types: alfalfa, almond, canola, cereal, corn, cotton, grape, orange, peanuts, pistachio, potatoes, sorghum, soybean, sugarbeet, sugarcane, sunflower, walnut.

    Each crop type ground truth record is represented by a polygon and contains the following information (a brief loading sketch follows the list):

    • Label: the predicted crop type label
    • Lat: latitude of the original street view image location
    • Lon: longitude of the original street view image location
    • Heading: direction of view of the original street view image towards the target field
    • Month: month when the original street view image was captured
    • Year: year when the original street view image was captured
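    These attributes can be inspected with standard GIS tooling; below is a minimal, hypothetical loading sketch using geopandas. The file name is a placeholder, since the exact layer name inside the distributed archive is not specified here.

    ```python
    # Hypothetical usage sketch; the file path is a placeholder and the actual
    # file inside cropsight-us_app_dat_v1.0.0.zip may differ.
    import geopandas as gpd

    fields = gpd.read_file("cropsight_us_v1.0.0.shp")  # placeholder path

    # Per-polygon attributes described above.
    print(fields[["Label", "Lat", "Lon", "Heading", "Month", "Year"]].head())

    # Example: corn fields whose street view image was captured in July-September.
    corn = fields[(fields["Label"] == "corn") & (fields["Month"].between(7, 9))]
    print(len(corn))
    ```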

    Each crop type ground truth polygon also contains uncertainty information obtained using our CONUS-UncertainFusionNet model, which employs Bayesian classification with Monte Carlo (MC) dropout to estimate prediction uncertainty. High-confidence predictions indicate consistent results across MC samples, suggesting reliable classification. Low-confidence predictions reflect high variability, flagging uncertain outputs often due to seasonal crop changes or visual similarities between crop types. This confidence measure supports filtering unreliable classifications in crop type ground truthing from street view imagery. Each polygon also includes the following uncertainty-related attributes (a generic sketch of these computations follows the list):

    • Entropy: measures the randomness in class probabilities across MC simulations. Higher entropy indicates greater uncertainty in crop type prediction.
    • Entropy_P: the percentile rank of a polygon’s entropy relative to the entire dataset.
    • Variance: quantifies the variability of predicted class probabilities across MC simulations. Higher variance signals inconsistent predictions across runs.
    • Variance_P: the percentile rank of a polygon’s variance relative to the dataset.
    • Confidence: a binary label (1/0) indicating prediction certainty, derived using entropy and variance thresholds established from the test subset of our reference dataset. A value of 1 represents high confidence in the prediction results. A value of 0 indicates uncertainty beyond our defined threshold: these predictions are not necessarily low-quality, but fall outside the range deemed confidently predictable based on our criteria.
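    The entropy and variance measures can be illustrated with a generic Monte Carlo dropout computation. This is a sketch of the general technique, not the exact aggregation used by CONUS-UncertainFusionNet, and the threshold values below are placeholders.

    ```python
    # Generic MC-dropout uncertainty sketch (placeholder thresholds, not the
    # dataset's actual criteria).
    import numpy as np

    def mc_dropout_uncertainty(mc_probs: np.ndarray):
        """mc_probs: (n_samples, n_classes) softmax outputs from repeated
        stochastic forward passes with dropout enabled at inference time."""
        mean_p = mc_probs.mean(axis=0)                      # predictive distribution
        entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))  # predictive entropy
        variance = mc_probs.var(axis=0).mean()              # mean per-class variance across runs
        return int(mean_p.argmax()), entropy, variance

    # Example with 50 MC passes over 17 crop classes (random placeholder data).
    rng = np.random.default_rng(0)
    mc = rng.dirichlet(np.ones(17), size=50)
    label, ent, var = mc_dropout_uncertainty(mc)

    # Binary confidence from entropy/variance thresholds (placeholder values).
    ENTROPY_T, VARIANCE_T = 1.0, 0.01
    confidence = int(ent <= ENTROPY_T and var <= VARIANCE_T)
    ```

    With the geopandas frame from the earlier sketch, high-confidence polygons can then be selected with, for example, `fields[fields["Confidence"] == 1]`.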

    The code repository and more examples are available at https://github.com/rssiuiuc/CropSight. We also created an interactive application hosted on Google Earth Engine (GEE): https://ee-azzhou249.projects.earthengine.app/view/cropsight-us.

    For technical details about the methods used to create this dataset, please refer to Liu et al. (2024) and the forthcoming data description paper.

    This research was supported in part by the Illinois Computes project which is supported by the University of Illinois Urbana-Champaign and the University of Illinois System.
