48 datasets found
  1. Supporting information for: REMAP: An online remote sensing application for...

    • figshare.com
    txt
    Updated Jun 6, 2023
    Cite
    Nicholas Murray; David A. Keith; Daniel Simpson; John H. Wilshire; Richard M. Lucas (2023). Supporting information for: REMAP: An online remote sensing application for land cover classification and monitoring [Dataset]. http://doi.org/10.6084/m9.figshare.5579620.v1
    Explore at:
    txt (available download formats)
    Dataset updated
    Jun 6, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Nicholas Murray; David A. Keith; Daniel Simpson; John H. Wilshire; Richard M. Lucas
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supporting information for: REMAP: An online remote sensing application for land cover classification and monitoring. csv and json files for implementing land cover classifications using remap, the remote ecosystem assessment and monitoring pipeline (https://remap-app.org/). Nearmap aerial photograph courtesy of Nearmap Pty Ltd. For further information see: Murray, N.J., Keith, D.A., Simpson, D., Wilshire, J.H., Lucas, R.M. (accepted) REMAP: A cloud-based remote sensing application for generalized ecosystem classifications. Methods in Ecology and Evolution.

  2. Minnesota Land Cover Classification and Impervious Surface Area by Landsat...

    • gisdata.mn.gov
    • data.wu.ac.at
    html, jpeg, tif
    Updated Apr 1, 2025
    Cite
    University of Minnesota (2025). Minnesota Land Cover Classification and Impervious Surface Area by Landsat and Lidar: 2013 update - Version 2 [Dataset]. https://gisdata.mn.gov/dataset/base-landcover-minnesota
    Explore at:
    jpeg, tif, html (available download formats)
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    University of Minnesota
    Area covered
    Minnesota
    Description

    This is a 15-meter raster dataset of a land cover and impervious surface classification for 2013, level two classification. The classification was created using a combination of multitemporal Landsat 8 data and LiDAR data with object-based image analysis (OBIA). By using objects instead of pixels, we were able to utilize multispectral data along with spatial and contextual information of objects such as shape, size, texture, and LiDAR-derived metrics to distinguish different land cover types. While OBIA has become the standard procedure for classification of high-resolution imagery, we found that it works equally well with Landsat imagery. For the objects classified as urban or developed, a regression model relating the Landsat greenness variable to percent impervious was developed to estimate and map the percent impervious surface area at the pixel level.

    This dataset was funded by the Minnesota Environment and Natural Resources Trust Fund (ENRTF).
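The regression step described above, relating a Landsat greenness variable to percent impervious surface for developed pixels, can be sketched with ordinary least squares. The variable names and calibration pairs below are invented for illustration; they are not the project's fitted model.

```python
def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# Invented calibration pairs: (greenness, percent impervious).
# Greener pixels tend to be less impervious, so the slope is negative.
greenness = [0.1, 0.2, 0.3, 0.4, 0.5]
impervious = [90.0, 70.0, 50.0, 30.0, 10.0]
a, b = fit_simple_regression(greenness, impervious)

def predict_impervious(g):
    """Apply the model, clamped to the valid 0-100 percent range."""
    return max(0.0, min(100.0, a + b * g))
```

In the actual workflow such a model would be applied per pixel only within objects classified as urban or developed.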

  3. Data from: Integrating geographical information systems, remote sensing, and...

    • tandf.figshare.com
    docx
    Updated Oct 26, 2023
    Cite
    Armstrong Manuvakola Ezequias Ngolo; Teiji Watanabe (2023). Integrating geographical information systems, remote sensing, and machine learning techniques to monitor urban expansion: an application to Luanda, Angola [Dataset]. http://doi.org/10.6084/m9.figshare.20401962.v3
    Explore at:
    docx (available download formats)
    Dataset updated
    Oct 26, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Armstrong Manuvakola Ezequias Ngolo; Teiji Watanabe
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Luanda, Angola
    Description

    According to many previous studies, applying remote sensing to the complex and heterogeneous urban environments of Sub-Saharan African countries is challenging because the diversity of construction materials causes spectral confusion among features. Resorting to classification based on spectral indices, which are expected to better highlight features of interest and to lend themselves to unsupervised classification, this study aims (1) to evaluate the effectiveness of index-based classification for Land Use Land Cover (LULC) using the unsupervised machine learning algorithm Product Quantized K-means (PQk-means); and (2) to monitor the urban expansion of Luanda, the capital city of Angola, with a Logistic Regression Model (LRM). Comparison with state-of-the-art algorithms shows that unsupervised classification by means of spectral indices is effective for the study area and can be used for further studies. The built-up area of Luanda increased from 94.5 km2 in 2000 to 198.3 km2 in 2008 and to 468.4 km2 in 2018, driven mainly by proximity to already established residential areas and to the main roads, as confirmed by the logistic regression analysis. The generated probability maps show a high probability of urban growth in the areas where the government has defined housing programs.
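The logistic-regression driver analysis described above can be illustrated with a small sketch. The coefficients, distances, and function name below are invented for illustration and are not the study's fitted model; the sign convention simply encodes "closer to residential areas and roads means more likely to urbanize."

```python
import math

def urban_growth_probability(dist_residential_km, dist_road_km,
                             b0=2.0, b1=-1.5, b2=-0.8):
    """P(growth) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))).
    Negative coefficients on the distance predictors mean the
    probability falls as a cell gets farther from existing
    residential areas (x1) and main roads (x2)."""
    z = b0 + b1 * dist_residential_km + b2 * dist_road_km
    return 1.0 / (1.0 + math.exp(-z))

near = urban_growth_probability(0.2, 0.1)  # close to town and roads
far = urban_growth_probability(5.0, 3.0)   # remote cell
```

Evaluating such a model over every cell yields a probability map like the ones the study generated.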

  4. Damage Classification Deep Learning Model for Vexcel Imagery- Maui Fires

    • hub.arcgis.com
    Updated Aug 18, 2023
    Cite
    Esri Imagery Virtual Team (2023). Damage Classification Deep Learning Model for Vexcel Imagery- Maui Fires [Dataset]. https://hub.arcgis.com/content/30e3f11be84b418fa4dcb109a1eac6d6
    Explore at:
    Dataset updated
    Aug 18, 2023
    Dataset provided by
    Esri (http://esri.com/)
    Authors
    Esri Imagery Virtual Team
    Area covered
    Maui
    Description

    Licensing requirements
    ArcGIS Desktop – ArcGIS Image Analyst extension for ArcGIS Pro
    ArcGIS Enterprise – ArcGIS Image Server with raster analytics configured
    ArcGIS Online – ArcGIS Image for ArcGIS Online
    Using the model
    Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
    Input
    1. 8-bit, 3-band high-resolution (10 cm) imagery. The model was trained on 10 cm Vexcel imagery.
    2. Building footprints feature class.
    Output
    Feature class containing classified building footprints. A Classname field value of 1 indicates damaged buildings, and a value of 2 corresponds to undamaged structures.
    Applicable geographies
    The model was specifically trained and tested over Maui, Hawaii, in response to the Maui fires in August 2023.
    Accuracy metrics
    The model has an average accuracy of 0.96.
    Sample results
    Results of the model can be seen in this dashboard.

  5. Alaska Peninsula/Becharof National Wildlife Refuges earth cover...

    • datadiscoverystudio.org
    • data.amerigeoss.org
    Updated May 20, 2018
    + more versions
    Cite
    (2018). Alaska Peninsula/Becharof National Wildlife Refuges earth cover classification: Phase 2. [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/169013a16a874a878f60ff38a90f4dea/html
    Explore at:
    Dataset updated
    May 20, 2018
    Area covered
    Alaska Peninsula, Earth
    Description

    The US Fish and Wildlife Service (USFWS), the Bureau of Land Management Alaska (BLM) and Ducks Unlimited, Inc. (DU) have been cooperatively mapping wetlands and associated uplands in Alaska using remote sensing and GIS technologies since 1988. The goal of this project was to continue the mapping effort by mapping a portion of the Alaska Peninsula/Becharof National Wildlife Refuge and neighboring lands. Portions of three Landsat TM satellite scenes: Path 72, Row 21 (acquired August 28, 1999), Path 73, Row 21 (acquired June 21, 2001), and Path 74, Row 21 (acquired August 2, 2002) were used to classify the project area into 43 earth cover categories. An unsupervised clustering technique was used to determine the location of field sites, and a custom field data collection form and digital database were used to record field information. A helicopter was used to gain access to field sites throughout the project area. Global positioning system (GPS) technology was used both to navigate to pre-selected sites and to record locations of new sites selected in the field. Data were collected on 648 field sites within the study area, captured primarily on the Path 73, Row 21 Landsat TM data, during a 13-day field season from June 23, 2006 through July 6, 2006. Approximately 30% (202) of these field sites were set aside for accuracy assessment. Fifteen additional sites were added for accuracy assessment purposes in the Clear Water, Turbid Water, and Snow mapping classes. A modified supervised/unsupervised classification technique was performed to classify the satellite imagery. The classification scheme for the earth cover inventory was based on Viereck et al. (1992) and revised through a series of meetings coordinated by the BLM and DU. The overall accuracy of the mapping categories was 91.2% at the +/-5% level of variation in interpretation of the accuracy assessment reference sites.
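The holdout step described above, reserving roughly 30% of field sites for accuracy assessment, can be sketched as a simple random split. The site IDs, fraction, and seed below are illustrative only, not the project's actual selection procedure.

```python
import random

def split_sites(site_ids, holdout_fraction=0.30, seed=42):
    """Randomly reserve a fraction of sites for accuracy assessment.
    Returns (training_sites, assessment_sites); the seed makes the
    split reproducible for this illustration."""
    rng = random.Random(seed)
    ids = list(site_ids)
    rng.shuffle(ids)
    n_holdout = round(len(ids) * holdout_fraction)
    return ids[n_holdout:], ids[:n_holdout]

# 648 field sites, as in the project description.
train, assess = split_sites(range(648))
```

The project's actual count set aside was 202 sites; an exact 30% of 648 is 194, so the published figure reflects "approximately 30%" rather than a strict fraction.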

  6. Data from: Multi-temporal landslide inventory for a study area in Southern...

    • dataservices.gfz-potsdam.de
    Updated 2020
    + more versions
    Cite
    Robert Behling; Sigrid Roessner (2020). Multi-temporal landslide inventory for a study area in Southern Kyrgyzstan derived from RapidEye satellite time series data (2009 – 2013) [Dataset]. http://doi.org/10.5880/gfz.1.4.2020.001
    Explore at:
    Dataset updated
    2020
    Dataset provided by
    GFZ Data Services
    datacite
    Authors
    Robert Behling; Sigrid Roessner
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Dataset funded by
    German Aerospace Center (http://dlr.de/)
    Bundesministerium für Bildung und Forschung
    Description

    Multi-temporal landslide inventories are important information for the understanding of landslide dynamics and related predisposing and triggering factors, and thus a crucial prerequisite for probabilistic hazard and risk assessment. Despite the great importance of these inventories, they do not exist for many landslide-prone regions in the world. In this context, the recently evolving global-scale availability of high temporal and spatial resolution optical satellite imagery (RapidEye, Sentinel-2A/B, Planet) has opened up new opportunities for the creation of these multi-temporal inventories. Building on these then still-evolving opportunities, a semi-automated spatiotemporal landslide mapper was developed at the Remote Sensing Section of the GFZ Potsdam that is capable of deriving post-failure landslide objects (polygons) from optical satellite time series data (Behling et al., 2014). The developed algorithm was applied to a 7500 km² study area using RapidEye time series data acquired in the frame of the RESA project (Project ID 424) for the time period between 2009 and 2013. A multi-temporal landslide inventory from 1986 to 2013 derived from multi-sensor optical satellite time series data is available as separate publications (Behling et al., 2016; Behling and Roessner, 2020). The multi-temporal landslide inventory that is the subject of this data publication is supplementary to the article of Behling et al. (2014), which describes the developed spatiotemporal landslide mapper in detail. This landslide mapper detects landslide objects by analyzing temporal NDVI-based vegetation cover changes and relief-oriented parameters in a rule-based approach combining pixel- and object-based analysis.
    Typical landslide-related vegetation changes comprise abrupt disturbances of the vegetation cover as a result of the actual failure, as well as post-failure revegetation, which usually proceeds more slowly than vegetation growth in the surrounding undisturbed areas because the displaced landslide masses are susceptible to subsequent erosion and reactivation processes. The resulting landslide-specific temporal surface cover dynamics, in the form of temporal trajectories, are used as input information to detect freshly occurred landslides and to separate them from other temporal variations in the surrounding vegetation cover (e.g., seasonal vegetation changes or changes due to agricultural activities) and from permanently non-vegetated areas (e.g., urban non-vegetated areas, water bodies, rock outcrops). For a detailed description of the methodology of the spatiotemporal landslide mapper, please see Behling et al. (2014). The data are provided in vector format (polygons) as a standard shapefile contained in the zip file Behling_et-al_2014_landslide_inventory_SouthernKyrgyzstan_2009_2013.zip and are described in more detail in the data description file.
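The NDVI-trajectory idea described above, an abrupt vegetation loss that persists rather than rebounding like a seasonal dip, can be sketched as a simple rule. The thresholds and series values below are invented for illustration and are not the mapper's actual parameters.

```python
def is_landslide_candidate(ndvi_series, drop=0.3, recovery=0.15):
    """Flag a pixel whose NDVI falls abruptly between consecutive
    observations (by more than `drop`) and then stays low: the
    post-drop series recovers by less than `recovery`. Seasonal dips
    rebound quickly and so are rejected by the recovery check."""
    for i in range(1, len(ndvi_series)):
        if ndvi_series[i - 1] - ndvi_series[i] > drop:
            after = ndvi_series[i:]
            if max(after) - after[0] < recovery:
                return True
    return False

stable = [0.62, 0.60, 0.58, 0.61, 0.63]     # undisturbed vegetation
seasonal = [0.60, 0.25, 0.58, 0.61, 0.59]   # drops but rebounds fast
landslide = [0.61, 0.60, 0.18, 0.20, 0.25]  # abrupt loss, slow regrowth
```

The real mapper additionally uses relief-oriented parameters and object-based analysis, which this per-pixel sketch omits.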

  7. Pattern-based GIS for understanding content of very large Earth Science...

    • data.amerigeoss.org
    • data.wu.ac.at
    html
    Updated Jul 19, 2018
    Cite
    United States (2018). Pattern-based GIS for understanding content of very large Earth Science datasets [Dataset]. https://data.amerigeoss.org/pl/dataset/pattern-based-gis-for-understanding-content-of-very-large-earth-science-datasets
    Explore at:
    html (available download formats)
    Dataset updated
    Jul 19, 2018
    Dataset provided by
    United States
    License

    U.S. Government Works (https://www.usa.gov/government-works)
    License information was derived automatically

    Area covered
    Earth
    Description

    The research focus in the field of remotely sensed imagery has shifted from collection and warehousing of data, tasks for which a mature technology already exists, to auto-extraction of information and knowledge discovery from this valuable resource, tasks for which technology is still under active development. In particular, intelligent algorithms for analysis of very large rasters, either high-resolution images or medium-resolution global datasets, which are becoming more and more prevalent, are lacking. We propose to develop the Geospatial Pattern Analysis Toolbox (GeoPAT), a computationally efficient, scalable, and robust suite of algorithms that supports GIS processes such as segmentation, unsupervised/supervised classification of segments, query and retrieval, and change detection in giga-pixel and larger rasters. At the core of the technology that underpins GeoPAT is the novel concept of pattern-based image analysis. Unlike pixel-based or object-based (OBIA) image analysis, GeoPAT partitions an image into overlapping square scenes containing 1,000-100,000 pixels and performs further processing on those scenes using pattern signatures and pattern similarity, concepts first developed in the field of Content-Based Image Retrieval. This fusion of methods from two different areas of research results in an orders-of-magnitude performance boost in application to very large images without sacrificing quality of the output.

    GeoPAT v.1.0 already exists as a GRASS GIS add-on that has been developed and tested on medium-resolution continental-scale datasets including the National Land Cover Dataset and the National Elevation Dataset. The proposed project will develop GeoPAT v.2.0, a much improved and extended version of the present software. We estimate an overall entry TRL for GeoPAT v.1.0 of 3-4 and a planned exit TRL for GeoPAT v.2.0 of 5-6. Moreover, several new important functionalities will be added. Proposed improvements include conversion of GeoPAT from a GRASS add-on to stand-alone software capable of being integrated with other systems, full implementation of a web-based interface, writing new modules to extend its applicability to high-resolution images/rasters and medium-resolution climate data, extension to the spatio-temporal domain, enabling hierarchical search and segmentation, development of improved pattern signatures and their similarity measures, parallelization of the code, and implementation of a divide-and-conquer strategy to speed up selected modules.

    The proposed technology will contribute to a wide range of Earth Science investigations and missions by enabling extraction of information from diverse types of very large datasets. Analyzing the entire dataset without the need to sub-divide it due to software limitations offers the important advantage of uniformity and consistency. We propose to demonstrate the utilization of GeoPAT technology on two specific applications. The first application is a web-based, real-time, visual search engine for local physiography utilizing query-by-example on the entire, global-extent SRTM 90 m resolution dataset. The user selects a region where a process of interest is known to occur, and the search engine identifies other areas around the world with similar physiographic character and thus potential for a similar process. The second application is monitoring urban areas in their entirety at high resolution, including mapping of impervious surfaces and identifying settlements for improved disaggregation of census data.
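The scene-partitioning concept at the core of GeoPAT can be sketched as tiling a raster into overlapping square windows. The window size and overlap step below are illustrative only, not GeoPAT's actual parameters.

```python
def overlapping_scenes(rows, cols, size, step):
    """Return (row_offset, col_offset) origins of size x size windows
    placed every `step` pixels; step < size yields overlapping scenes,
    which is what distinguishes this scheme from a plain tiling."""
    origins = []
    for r in range(0, rows - size + 1, step):
        for c in range(0, cols - size + 1, step):
            origins.append((r, c))
    return origins

# A 100 x 100 raster tiled into 40 x 40 scenes every 20 pixels
# gives a 4 x 4 grid of half-overlapping windows.
scenes = overlapping_scenes(100, 100, 40, 20)
```

Each scene would then be summarized by a pattern signature and compared to others via a pattern-similarity measure; this sketch covers only the partitioning step.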

  8. Northern Plains High Resolution Land Cover

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    • +6more
    Updated Sep 2, 2025
    Cite
    U.S. Forest Service (2025). Northern Plains High Resolution Land Cover [Dataset]. https://catalog.data.gov/dataset/northern-plains-high-resolution-land-cover-image-service-2e4df
    Explore at:
    Dataset updated
    Sep 2, 2025
    Dataset provided by
    U.S. Forest Service
    Description

    This image service contains high-resolution land cover data for the states of Nebraska, South Dakota, and North Dakota. These data are a digital representation of land cover derived from 1-meter aerial imagery from the USDA National Agriculture Imagery Program (NAIP). The year of NAIP used for each state was 2014. Data are intended for use in rural areas and therefore do not include land cover in cities and towns. Land cover classes (tree cover, other land cover, or water) were mapped using an object-based image analysis approach and supervised classification. These data are designed for conducting geospatial analyses and for producing cartographic products. In particular, these data are intended to depict the location of tree cover in each county. The mapping procedures were developed specifically for agricultural landscapes that are dominated by annual crops, rangeland, and pasture and where tree cover is often found in narrow configurations, such as windbreaks and riparian corridors. Because much of the tree cover in agricultural areas of the United States occurs in windbreaks and narrow riparian corridors, many geospatial datasets derived from coarser-resolution satellite data (such as Landsat) do not capture these landscape features. This dataset is intended to address this particular data gap. These data can be downloaded by county at the Forest Service Research Data Archive:
    Nebraska: https://www.fs.usda.gov/rds/archive/catalog/RDS-2019-0038
    South Dakota: https://www.fs.usda.gov/rds/archive/catalog/RDS-2022-0068
    North Dakota: https://www.fs.usda.gov/rds/archive/catalog/RDS-2022-0067
    A Kansas dataset was also developed using the same methods:
    Kansas data download: https://www.fs.usda.gov/rds/archive/catalog/RDS-2019-0052
    Kansas map service: https://data-usfs.hub.arcgis.com/documents/high-resolution-tree-cover-of-kansas-2015-map-service/explore

  9. Geospatial data for the Vegetation Mapping Inventory Project of White Sands...

    • catalog.data.gov
    • datasets.ai
    Updated Nov 25, 2025
    + more versions
    Cite
    National Park Service (2025). Geospatial data for the Vegetation Mapping Inventory Project of White Sands National Monument [Dataset]. https://catalog.data.gov/dataset/geospatial-data-for-the-vegetation-mapping-inventory-project-of-white-sands-national-monum
    Explore at:
    Dataset updated
    Nov 25, 2025
    Dataset provided by
    National Park Service (http://www.nps.gov/)
    Description

    The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. Current format is ArcGIS file geodatabase but older formats may exist as shapefiles. The WHSA vegetation map was developed using a combined strategy of automated digital image classification and direct analog image interpretation of aerial photography and satellite imagery. Initially, the aerial photography and satellite imagery were processed and entered into a GIS along with ancillary spatial layers. A working map legend of ecologically based vegetation map units was developed using the vegetation classification described in the report as the foundation. The intent was to develop map units that targeted the plant-association level wherever possible within the constraints of image quality, information content, and resolution. With the provisional legend and ground-control points provided by the field-plot data (the same data used to develop the vegetation classification), a combination of heads-up screen digitizing of polygons based on image interpretation and supervised image classifications were conducted. The outcome was a vegetation map composed of a suite of map units defined by plant associations and represented by sets of mapped polygons with similar spectral and site characteristics.

  10. MAV Forest Cover Classification

    • gis-fws.opendata.arcgis.com
    Updated Jul 30, 2024
    Cite
    U.S. Fish & Wildlife Service (2024). MAV Forest Cover Classification [Dataset]. https://gis-fws.opendata.arcgis.com/datasets/mav-forest-cover-classification-
    Explore at:
    Dataset updated
    Jul 30, 2024
    Dataset provided by
    U.S. Fish and Wildlife Service (http://www.fws.gov/)
    Authors
    U.S. Fish & Wildlife Service
    Area covered
    Description

    This image classification of forest cover in the MAV was created using Google Dynamic World (https://www.nature.com/articles/s41597-022-01307-4 - https://dynamicworld.app/) to determine what was classified as forest. This dataset is the result of an automated land classification run on every Sentinel image that is released. The code used for this process is as follows:

        ee.ImageCollection('GOOGLE/DYNAMICWORLD/V1') \
            .filterBounds(geometry) \
            .filterDate(oldstartDate, oldendDate) \
            .select('label') \
            .mode() \
            .eq(1) \
            .updateMask(urban)

    We selected the Dynamic World dataset and filtered it to our area of interest, the extent of the Lower Mississippi Valley Joint Venture boundary (i.e., the Mississippi Alluvial Valley and West Gulf Coastal Plain ecological bird conservation regions, BCRs). We filtered the dataset to a start and end date spanning the first and last days of 2021. In this dataset, each class has a band that represents the probability of that pixel having complete coverage of that class (https://developers.google.com/earth-engine/datasets/catalog/GOOGLE_DYNAMICWORLD_V1#bands). Data accuracy was assessed at approximately 82%, and data resolution is 10 m. Each image has a 'label' band with a discrete classification of LULC, plus 9 probability bands with class-specific probability scores generated by the deep learning model on the basis of the pixel's spatial context. To generate an annual LULC composite comparable with WC and Esri, we calculated the mode of the predicted LULC class in the 'label' band of all DW images for 2020. Michael Mitchell with the Ducks Unlimited Southern Regional Office led the development of this effort, in coordination and collaboration with Lower Mississippi Valley Joint Venture staff.
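The annual-composite step in the Earth Engine snippet above reduces a year of per-pixel class labels to the most frequent class (`.mode()`) and then keeps only the tree class (`.eq(1)`). A minimal pure-Python sketch of that reduction follows; the sample label series are invented, though Dynamic World's 'label' band does use code 1 for trees.

```python
from collections import Counter

def mode_composite(label_series):
    """Most frequent class label observed at one pixel over the year."""
    return Counter(label_series).most_common(1)[0][0]

def forest_mask(pixel_time_series, tree_class=1):
    """1 where the annual mode equals the tree class, else 0
    (mirrors the .mode().eq(1) chain in the snippet above)."""
    return [1 if mode_composite(s) == tree_class else 0
            for s in pixel_time_series]

pixels = [
    [1, 1, 2, 1, 1],  # mostly trees over the year
    [6, 6, 1, 6, 6],  # mostly another class
]
mask = forest_mask(pixels)
```

Earth Engine performs this same reduction server-side across the full image collection; the sketch only shows the per-pixel arithmetic.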

  11. TCMA 1-Meter Urban Tree Canopy Classification

    • gisdata.mn.gov
    • data.wu.ac.at
    html, jpeg
    Updated Apr 1, 2025
    Cite
    University of Minnesota (2025). TCMA 1-Meter Urban Tree Canopy Classification [Dataset]. https://gisdata.mn.gov/dataset/base-treecanopy-twincities
    Explore at:
    jpeg, html (available download formats)
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    University of Minnesota
    Description

    This classification was created using high-resolution multispectral National Agriculture Imagery Program (NAIP) leaf-on imagery (2015), spring leaf-off imagery (2011-2014), multispectral-derived indices, LiDAR data, LiDAR-derived products, and other thematic ancillary data including the updated National Wetlands Inventory, LiDAR building footprints, airports, and OpenStreetMap road and railroad centerlines. These datasets were integrated using an Object-Based Image Analysis (OBIA) approach to classify 12 land cover classes: Deciduous Tree Canopy, Coniferous Tree Canopy, Buildings, Bare Soil, Other Paved Surface, Extraction, Row Crop, Grass/Shrub, Lakes, Rivers, Emergent Wetland, and Forest and Shrub Wetland.

    We mapped the 12 classes using an OBIA approach through the creation of customized rule sets for each area. We used the Cognition Network Language (CNL) within the software eCognition Developer to develop the customized rule sets. The eCognition Server was used to execute batch and parallel processing, which greatly reduced the amount of time needed to produce the classification. The classification results were evaluated for each area using independent, stratified, randomly generated points. Accuracy assessment estimators included overall accuracy, producer's accuracy, user's accuracy, and the kappa coefficient. The combination of spectral data and LiDAR through an OBIA method helped to improve the overall accuracy results, providing more aesthetically pleasing maps of land cover classes with highly accurate results.

  12. Land Cover Classification (Sentinel-2)

    • uneca.africageoportal.com
    • caribbeangeoportal.com
    • +7more
    Updated Feb 17, 2021
    + more versions
    Cite
    Esri (2021). Land Cover Classification (Sentinel-2) [Dataset]. https://uneca.africageoportal.com/content/afd124844ba84da69c2c533d4af10a58
    Explore at:
    Dataset updated
    Feb 17, 2021
    Dataset authored and provided by
    Esri (http://esri.com/)
    Area covered
    Description

    Land cover describes the surface of the earth. Land cover maps are useful in urban planning, resource management, change detection, agriculture, and a variety of other applications in which information related to the earth's surface is required. Land cover classification is a complex exercise and is hard to capture using traditional means. Deep learning models are highly capable of learning these complex semantics, giving superior results.
    Using the model
    Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Fine-tuning the model
    This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.
    Input
    Raster, mosaic dataset, or image service. (Preferred cell size is 10 meters.) Note: This model is trained to work on Sentinel-2 imagery datasets in the WGS 1984 Web Mercator (auxiliary sphere) coordinate system (WKID 3857).
    Output
    Classified raster with the same classes as in Corine Land Cover (CLC) 2018.
    Applicable geographies
    This model is expected to work well in Europe and the United States.
    Model architecture
    This model uses the UNet model architecture implemented in ArcGIS API for Python.
    Accuracy metrics
    This model has an overall accuracy of 82.41% with Level-1C imagery and 84.0% with Level-2A imagery, for CLC class level 2 classification (15 classes). The table below summarizes the precision, recall and F1-score of the model on the validation dataset.

    Class | L2A Precision | L2A Recall | L2A F1 | L1C Precision | L1C Recall | L1C F1
    Urban fabric | 0.81 | 0.83 | 0.82 | 0.82 | 0.84 | 0.83
    Industrial, commercial and transport units | 0.74 | 0.65 | 0.69 | 0.73 | 0.66 | 0.70
    Mine, dump and construction sites | 0.63 | 0.52 | 0.57 | 0.69 | 0.55 | 0.61
    Artificial, non-agricultural vegetated areas | 0.70 | 0.46 | 0.55 | 0.67 | 0.47 | 0.55
    Arable land | 0.86 | 0.90 | 0.88 | 0.86 | 0.89 | 0.87
    Permanent crops | 0.76 | 0.73 | 0.74 | 0.75 | 0.71 | 0.73
    Pastures | 0.75 | 0.71 | 0.73 | 0.74 | 0.71 | 0.73
    Heterogeneous agricultural areas | 0.61 | 0.56 | 0.58 | 0.62 | 0.51 | 0.56
    Forests | 0.88 | 0.93 | 0.90 | 0.88 | 0.92 | 0.90
    Scrub and/or herbaceous vegetation associations | 0.74 | 0.69 | 0.72 | 0.73 | 0.67 | 0.70
    Open spaces with little or no vegetation | 0.87 | 0.84 | 0.85 | 0.85 | 0.82 | 0.84
    Inland wetlands | 0.81 | 0.78 | 0.80 | 0.82 | 0.77 | 0.79
    Maritime wetlands | 0.74 | 0.76 | 0.75 | 0.87 | 0.89 | 0.88
    Inland waters | 0.94 | 0.92 | 0.93 | 0.94 | 0.91 | 0.92
    Marine waters | 0.98 | 0.99 | 0.98 | 0.97 | 0.98 | 0.98

    This model has an overall accuracy of 90.79% with Level-2A imagery for CLC class level 1 classification (5 classes). The table below summarizes the precision, recall and F1-score of the model on the validation dataset.

    Class | Precision | Recall | F1 Score
    Artificial surfaces | 0.85 | 0.81 | 0.83
    Agricultural areas | 0.90 | 0.91 | 0.91
    Forest and semi natural areas | 0.91 | 0.92 | 0.92
    Wetlands | 0.77 | 0.70 | 0.73
    Water bodies | 0.96 | 0.97 | 0.96

    Training data
    This model has been trained on Corine Land Cover (CLC) 2018 with the same Sentinel-2 scenes that were used to produce the database. Scene IDs for the imagery were available in the metadata of the dataset.
    Sample results
    Here are a few results from the model. To view more, see this story.
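The per-class precision, recall, and F1 figures reported for this model follow the standard definitions, which can be sketched from a toy set of true and predicted labels (the labels below are invented, not CLC classes):

```python
def class_metrics(true, pred, cls):
    """Precision, recall, and F1 for one class, computed from
    true-positive, false-positive, and false-negative counts."""
    tp = sum(1 for t, p in zip(true, pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(true, pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(true, pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

true = ["water", "water", "urban", "forest", "water", "urban"]
pred = ["water", "urban", "urban", "forest", "water", "urban"]
p, r, f1 = class_metrics(true, pred, "water")
```

Overall accuracy, by contrast, is simply the fraction of all validation pixels whose predicted class matches the reference class.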

  13. Vegetation - Auburn State Recreation Area [ds2956]

    • gis.data.ca.gov
    • data.ca.gov
    • +4more
    Updated Dec 14, 2021
    + more versions
    Cite
    California Department of Fish and Wildlife (2021). Vegetation - Auburn State Recreation Area [ds2956] [Dataset]. https://gis.data.ca.gov/datasets/CDFW::vegetation-auburn-state-recreation-area-ds2956
    Explore at:
    Dataset updated
    Dec 14, 2021
    Dataset authored and provided by
    California Department of Fish and Wildlifehttps://wildlife.ca.gov/
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    Tukman Geospatial LLC made this map under a subcontract from Space Imaging. Gary Walter was the contracting officer for the project at the Department of Parks and Recreation. The vegetation map was created using a combination of automated and manual techniques. Automated techniques included image segmentation, image classification, and GIS modeling; manual techniques included manual editing and field work. Image segments were derived from 1-meter spatial resolution pansharpened IKONOS imagery collected in the spring of 2004. The classification is based on A Manual of California Vegetation (1995), whose series were translated to NVC Alliance concepts by Tukman Geospatial.

  14. Canopy Cover, 1998

    • rlisdiscovery.oregonmetro.gov
    • rlis-discovery-drcmetro.hub.arcgis.com
    • +1more
    Updated Aug 27, 1998
    Cite
    Metro (1998). Canopy Cover, 1998 [Dataset]. https://rlisdiscovery.oregonmetro.gov/maps/9cb565df406c4fc3a43cbd9ecfdba933
    Explore at:
    Dataset updated
    Aug 27, 1998
    Dataset authored and provided by
    Metro
    Area covered
    Description

    The classification and mapping of urban forest canopy cover was accomplished using Landsat TM digital satellite imagery. Landsat TM data includes spectral reflectance information from the visible, near-infrared, and middle-infrared portions of the spectrum and has a resampled spatial resolution of 25 meters. The wide range of spectral information contained in Landsat TM data makes it ideal for classifying vegetation such as forest canopy cover. Moreover, Landsat TM data was used to produce the original 1991-based land cover map for the Portland area. These forest canopy cover classes were selected because of the proven ability of Landsat TM imagery to discriminate this level of forest cover detail, and because other vegetation cover mapping projects in and around the Willamette Valley have used the same or similar cover classes.

    Unsupervised classification techniques were used to discriminate and map forest canopy cover. The goal in any image classification project is to determine when the image is a good predictor of the vegetative characteristic of interest, such as forest canopy cover. ERDAS Imagine digital image processing software was used to stratify the Landsat TM imagery into approximately 150 spectral classes based solely on the spectral information in the image data set. Spectral responses in the imagery, aerial photo and digital orthophotography interpretation, and ancillary GIS data were used to determine which spectral classes represent forested areas and subsequently to categorize each forested spectral class into the appropriate canopy cover class. For instance, 51 to 75% tree canopy cover in a residential area is categorized into the same class as 51 to 75% forest canopy cover in a natural area.

    Date of last data update: 1998-08-27. This is official RLIS data.
    Contact Person: Joe Gordon, joe.gordon@oregonmetro.gov, 503-797-1587
    RLIS Metadata Viewer: https://gis.oregonmetro.gov/rlis-metadata/#/details/842
    RLIS Terms of Use: https://rlisdiscovery.oregonmetro.gov/pages/terms-of-use
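
    The unsupervised step described above, clustering pixels into spectral classes that an analyst later assigns to cover classes, can be sketched in pure NumPy in place of ERDAS Imagine. The band values, cluster count, and deterministic initialization below are illustrative assumptions, not the project's actual parameters:

```python
import numpy as np

def kmeans_spectral(pixels, k, iters=20):
    """Tiny k-means: cluster pixel spectra into k spectral classes.

    Deterministic init: seed the centers with pixels spread evenly
    along total brightness, so runs are reproducible.
    """
    order = pixels.sum(axis=1).argsort()
    seed_idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[order[seed_idx]].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest spectral center.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Illustrative 6-band "Landsat TM" pixels with two clear spectral groups.
rng = np.random.default_rng(1)
bright = rng.normal(200.0, 5.0, size=(100, 6))   # e.g. open ground
dark = rng.normal(50.0, 5.0, size=(100, 6))      # e.g. dense canopy
pixels = np.vstack([bright, dark])

labels, centers = kmeans_spectral(pixels, k=2)
# An analyst would then assign each spectral class to a canopy-cover class.
```

    In the real workflow k would be around 150 and the assignment of spectral classes to canopy-cover classes is the manual, photo-interpreted step.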

  15. TreeSatAI Benchmark Archive for Deep Learning in Forest Applications

    • zenodo.org
    • data.niaid.nih.gov
    bin, pdf, zip
    Updated Jul 16, 2024
    Cite
    Christian Schulz; Christian Schulz; Steve Ahlswede; Steve Ahlswede; Christiano Gava; Patrick Helber; Patrick Helber; Benjamin Bischke; Benjamin Bischke; Florencia Arias; Michael Förster; Michael Förster; Jörn Hees; Jörn Hees; Begüm Demir; Begüm Demir; Birgit Kleinschmit; Birgit Kleinschmit; Christiano Gava; Florencia Arias (2024). TreeSatAI Benchmark Archive for Deep Learning in Forest Applications [Dataset]. http://doi.org/10.5281/zenodo.6598391
    Explore at:
    pdf, zip, binAvailable download formats
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Christian Schulz; Christian Schulz; Steve Ahlswede; Steve Ahlswede; Christiano Gava; Patrick Helber; Patrick Helber; Benjamin Bischke; Benjamin Bischke; Florencia Arias; Michael Förster; Michael Förster; Jörn Hees; Jörn Hees; Begüm Demir; Begüm Demir; Birgit Kleinschmit; Birgit Kleinschmit; Christiano Gava; Florencia Arias
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context and Aim

    Deep learning in Earth Observation requires large image archives with highly reliable labels for model training and testing. However, a preferable quality standard for forest applications in Europe has not yet been determined. The TreeSatAI consortium investigated numerous sources for annotated datasets as an alternative to manually labeled training datasets.

    We found the federal forest inventory of Lower Saxony, Germany represents an unseen treasure of annotated samples for training data generation. The respective 20-cm Color-infrared (CIR) imagery, which is used for forestry management through visual interpretation, constitutes an excellent baseline for deep learning tasks such as image segmentation and classification.

    Description

    The data archive is highly suitable for benchmarking, as it represents the real-world data situation of many German forest management services. On the one hand, it offers a large number of samples supported by high-resolution aerial imagery. On the other hand, it presents challenges, including class-label imbalances between the different forest stand types.

    The TreeSatAI Benchmark Archive contains:

    • 50,381 image triplets (aerial, Sentinel-1, Sentinel-2)

    • synchronized time steps and locations

    • all original spectral bands/polarizations from the sensors

    • 20 species classes (single labels)

    • 12 age classes (single labels)

    • 15 genus classes (multi labels)

    • 60 m and 200 m patches

    • fixed split for train (90%) and test (10%) data

    • additional single labels such as English species name, genus, forest stand type, foliage type, land cover

    The geoTIFF and GeoJSON files are readable in any GIS software, such as QGIS. For further information, we refer to the PDF document in the archive and publications in the reference section.

    Version history

    v1.0.0 - First release

    Citation

    Ahlswede et al. (in prep.)

    GitHub

    Full code examples and pre-trained models from the dataset article (Ahlswede et al. 2022) using the TreeSatAI Benchmark Archive are published on the GitHub repositories of the Remote Sensing Image Analysis (RSiM) Group (https://git.tu-berlin.de/rsim/treesat_benchmark). Code examples for the sampling strategy can be made available by Christian Schulz via email request.

    Folder structure

    We refer to the proposed folder structure in the PDF file.

    • Folder “aerial” contains the aerial imagery patches derived from summertime orthophotos of the years 2011 to 2020. Patches are available in 60 x 60 m (304 x 304 pixels). Band order is near-infrared, red, green, and blue. Spatial resolution is 20 cm.

    • Folder “s1” contains the Sentinel-1 imagery patches derived from summertime mosaics of the years 2015 to 2020. Patches are available in 60 x 60 m (6 x 6 pixels) and 200 x 200 m (20 x 20 pixels). Band order is VV, VH, and VV/VH ratio. Spatial resolution is 10 m.

    • Folder “s2” contains the Sentinel-2 imagery patches derived from summertime mosaics of the years 2015 to 2020. Patches are available in 60 x 60 m (6 x 6 pixels) and 200 x 200 m (20 x 20 pixels). Band order is B02, B03, B04, B08, B05, B06, B07, B8A, B11, B12, B01, and B09. Spatial resolution is 10 m.

    • The folder “labels” contains a JSON string which was used for multi-labeling of the training patches. An example entry for an image sample with proportions of about 94% Abies and 6% Larix is: "Abies_alba_3_834_WEFL_NLF.tif": [["Abies", 0.93771], ["Larix", 0.06229]]

    • The two files “test_filenames.lst” and “train_filenames.lst” define the filenames used for the train (90%) and test (10%) split. We refer to this fixed split for better reproducibility and comparability.

    • The folder “geojson” contains geoJSON files with all the samples chosen for the derivation of training patch generation (point, 60 m bounding box, 200 m bounding box).

    CAUTION: As we could not upload the aerial patches as a single zip file on Zenodo, you need to download the 20 single-species files (aerial_60m_…zip) separately. Then, unzip them into a folder named “aerial” with a subfolder named “60m”. This structure is recommended for better reproducibility and comparability to the experimental results of Ahlswede et al. (2022).
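
    The label JSON described above can be turned into multi-labels with a few lines of Python. The snippet uses the example entry quoted above; the 7% coverage threshold is an assumption for illustration, not the archive's rule:

```python
import json

# Stand-in for the content of the labels JSON: each patch filename
# maps to [genus, area fraction] pairs.
labels_json = '''{
  "Abies_alba_3_834_WEFL_NLF.tif": [["Abies", 0.93771], ["Larix", 0.06229]]
}'''
labels = json.loads(labels_json)

# Multi-label target: keep genera covering at least 7% of the patch
# (illustrative threshold, not the archive's actual rule).
multilabels = {
    fname: [genus for genus, frac in pairs if frac >= 0.07]
    for fname, pairs in labels.items()
}
# → {'Abies_alba_3_834_WEFL_NLF.tif': ['Abies']}
```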

    Join the archive

    Model training, benchmarking, algorithm development… many applications are possible! Feel free to add samples from other regions in Europe or even worldwide. Additional remote sensing data from Lidar, UAVs, or aerial imagery from different time steps are very welcome. This helps the research community develop better deep learning and machine learning models for forest applications. Do you have questions, or want to share code, results, or publications using the archive? Feel free to contact the authors.

    Project description

    This work was part of the project TreeSatAI (Artificial Intelligence with Satellite data and Multi-Source Geodata for Monitoring of Trees at Infrastructures, Nature Conservation Sites and Forests). Its overall aim is the development of AI methods for the monitoring of forests and woody features on a local, regional and global scale. Based on freely available geodata from different sources (e.g., remote sensing, administration maps, and social media), prototypes will be developed for the deep learning-based extraction and classification of tree- and tree stand features. These prototypes deal with real cases from the monitoring of managed forests, nature conservation and infrastructures. The development of the resulting services by three enterprises (liveEO, Vision Impulse and LUP Potsdam) will be supported by three research institutes (German Research Center for Artificial Intelligence, TU Remote Sensing Image Analysis Group, TUB Geoinformation in Environmental Planning Lab).

    Publications

    Ahlswede et al. (2022, in prep.): TreeSatAI Dataset Publication

    Ahlswede S., Nimisha, T.M., and Demir, B. (2022, in revision): Embedded Self-Enhancement Maps for Weakly Supervised Tree Species Mapping in Remote Sensing Images. IEEE Trans Geosci Remote Sens

    Schulz et al. (2022, in prep.): Phenoprofiling

    Conference contributions

    S. Ahlswede, N. T. Madam, C. Schulz, B. Kleinschmit and B. Demir, "Weakly Supervised Semantic Segmentation of Remote Sensing Images for Tree Species Classification Based on Explanation Methods", IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022.

    C. Schulz, M. Förster, S. Vulova, T. Gränzig and B. Kleinschmit, “Exploring the temporal fingerprints of mid-European forest types from Sentinel-1 RVI and Sentinel-2 NDVI time series”, IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 2022.

    C. Schulz, M. Förster, S. Vulova and B. Kleinschmit, “The temporal fingerprints of common European forest types from SAR and optical remote sensing data”, AGU Fall Meeting, New Orleans, USA, 2021.

    B. Kleinschmit, M. Förster, C. Schulz, F. Arias, B. Demir, S. Ahlswede, A. K. Aksoy, T. Ha Minh, J. Hees, C. Gava, P. Helber, B. Bischke, P. Habelitz, A. Frick, R. Klinke, S. Gey, D. Seidel, S. Przywarra, R. Zondag and B. Odermatt, “Artificial Intelligence with Satellite data and Multi-Source Geodata for Monitoring of Trees and Forests”, Living Planet Symposium, Bonn, Germany, 2022.

    C. Schulz, M. Förster, S. Vulova, T. Gränzig and B. Kleinschmit, (2022, submitted): “Exploring the temporal fingerprints of sixteen mid-European forest types from Sentinel-1 and Sentinel-2 time series”, ForestSAT, Berlin, Germany, 2022.

  16. Landcover Raster Data (2010) – 6in Resolution

    • catalog.data.gov
    • data.cityofnewyork.us
    • +2more
    Updated Sep 2, 2023
    + more versions
    Cite
    data.cityofnewyork.us (2023). Landcover Raster Data (2010) – 6in Resolution [Dataset]. https://catalog.data.gov/dataset/landcover-raster-data-2010-6in-resolution
    Explore at:
    Dataset updated
    Sep 2, 2023
    Dataset provided by
    data.cityofnewyork.us
    Description

    6-inch resolution raster image of New York City, classified by land cover type. This is the 6-inch version of the high-resolution land cover dataset for New York City. Seven land cover classes were mapped: (1) tree canopy, (2) grass/shrub, (3) bare earth, (4) water, (5) buildings, (6) roads, and (7) other paved surfaces. The minimum mapping unit for the delineation of features was set at 3 square feet. The primary sources used to derive this land cover layer were the 2010 LiDAR and the 2008 4-band orthoimagery. Ancillary data sources included GIS data (city boundary, building footprints, water, parking lots, roads, railroads, railroad structures, ballfields) provided by New York City (all ancillary datasets except railroads); the UVM Spatial Analysis Laboratory created railroad polygons through manual interpretation of the 2008 4-band orthoimagery. The tree canopy class was considered current as of 2010; the remaining land cover classes were considered current as of 2008.

    Object-Based Image Analysis (OBIA) techniques were employed to extract land cover information using the best available remotely sensed and vector GIS datasets. OBIA systems work by grouping pixels into meaningful objects based on their spectral and spatial properties, while taking into account boundaries imposed by existing vector datasets. Within the OBIA environment, a rule-based expert system was designed to mimic the process of manual image analysis by incorporating the elements of image interpretation (color/tone, texture, pattern, location, size, and shape) into the classification process. A series of morphological procedures were employed to ensure that the end product is both accurate and cartographically pleasing. More than 35,000 corrections were made to the classification. Overall accuracy was 96%.

    This dataset was developed as part of the Urban Tree Canopy (UTC) Assessment for New York City. As such, it represents a 'top-down' mapping perspective in which tree canopy overhanging other features is assigned to the tree canopy class. At the time of its creation this dataset represented the most detailed and accurate land cover dataset for the area. This project was funded by the National Urban and Community Forestry Advisory Council (NUCFAC) and the National Science Foundation (NSF), although it is not specifically endorsed by either agency. The methods used were developed by the University of Vermont Spatial Analysis Laboratory, in collaboration with the New York City Urban Field Station, with funding from the USDA Forest Service.
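
    The object-based steps described above, grouping pixels into objects and enforcing a minimum mapping unit, can be sketched with NumPy and SciPy. The vegetation-index threshold and the 3-pixel minimum size are illustrative stand-ins for the project's actual rule set:

```python
import numpy as np
from scipy import ndimage

# Illustrative binary "tree canopy" mask from thresholding a vegetation index.
veg_index = np.array([
    [0.9, 0.8, 0.1, 0.1],
    [0.7, 0.1, 0.1, 0.9],
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.1, 0.1],
])
canopy = veg_index > 0.5

# Group adjacent canopy pixels into objects (4-connectivity by default).
objects, n = ndimage.label(canopy)

# Enforce a minimum mapping unit: drop objects smaller than 3 pixels
# (stand-in for the project's 3-square-foot rule at 6-inch pixels).
sizes = ndimage.sum(canopy, objects, index=range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 3]
cleaned = np.isin(objects, keep)
# Only the 3-pixel object in the top-left corner survives.
```

    A production OBIA system would additionally use spectral statistics, texture, and vector boundaries per object; this sketch shows only the segmentation and size-filter skeleton.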

  17. Broadscale habitat (EUNIS level 3) for South East of Falmouth recommended Marine Conservation Zone (rMCZ)

    • gis.ices.dk
    • data.europa.eu
    • +1more
    Updated Feb 25, 2012
    + more versions
    Cite
    (2012). Broadscale habitat (EUNIS level 3) for South East of Falmouth recommended Marine Conservation Zone (rMCZ) [Dataset]. https://gis.ices.dk/geonetwork/srv/resources/registries/vocabularies/title/concepts/Habitats
    Explore at:
    Dataset updated
    Feb 25, 2012
    Description

    Updated habitat map resulting from an integrated analysis of the dedicated 2012 survey data (CEND3/12b) for South East of Falmouth rMCZ. A new habitat map for the site was produced by analysing and interpreting the available acoustic data and the groundtruth data collected by the dedicated survey of this site. The process is a combination of two approaches, auto-classification (image analysis) and expert interpretation, as described below. The routine for auto-classification is flexible and dependent on site-specific data, allowing for application of a bespoke routine to maximise the acoustic data available. ArcGIS was used to perform an initial unsupervised classification on the supplied backscatter image. The single band backscatter mosaic was filtered and smoothed prior to the application of an Iso cluster/maximum likelihood classification routine. For further information, refer to the South-East Falmouth rMCZ Post-survey Site Report vs. 8 (Green, S. & Cooper, R., 2015).
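
    The maximum-likelihood step described above assigns each pixel to the class whose Gaussian model gives it the highest likelihood. A minimal sketch with per-class means and diagonal variances (the backscatter training values below are illustrative, not the survey's statistics):

```python
import numpy as np

def max_likelihood_classify(pixels, class_means, class_vars):
    """Assign each pixel to the class with the highest Gaussian
    log-likelihood, assuming independent (diagonal-covariance) bands."""
    lls = []
    for mu, var in zip(class_means, class_vars):
        ll = -0.5 * (np.log(2 * np.pi * var) + (pixels - mu) ** 2 / var).sum(axis=1)
        lls.append(ll)
    return np.argmax(np.stack(lls), axis=0)

# Illustrative single-band backscatter statistics for two seabed classes,
# e.g. derived from groundtruth samples over the filtered mosaic.
means = [np.array([20.0]), np.array([60.0])]
variances = [np.array([25.0]), np.array([25.0])]

pixels = np.array([[18.0], [22.0], [58.0], [65.0]])
classes = max_likelihood_classify(pixels, means, variances)
# → array([0, 0, 1, 1])
```

    With equal variances this reduces to nearest-mean assignment; unequal per-class variances are where maximum likelihood differs from simple minimum-distance classification.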

  18. Wetlands GIS of the Murray-Darling Basin Series 2.0

    • data.wu.ac.at
    • data.gov.au
    geojson, kmz, pdf +4
    Updated Oct 2, 2018
    + more versions
    Cite
    Murray-Darling Basin Authority (2018). Wetlands GIS of the Murray-Darling Basin Series 2.0 [Dataset]. https://data.wu.ac.at/odso/data_gov_au/YmJlNWMxY2EtMmZmOC00YTNiLTljMzItMDhkYjIxMGE2ZmQz
    Explore at:
    pdf, wfs, kmz, zip, geojson, wms, rtfAvailable download formats
    Dataset updated
    Oct 2, 2018
    Dataset provided by
    Murray-Darling Basin Authority
    License

    Attribution 3.0 (CC BY 3.0)https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Area covered
    Murray–Darling basin
    Description

    The purpose of the Wetlands GIS of the Murray-Darling Basin project was to map the maximum extent of wetlands within a ten year period (1983-1993) based on the presence of water.

    Wetlands greater than 5ha were identified using a combination of an unsupervised classification of spectral classes of Landsat MSS imagery and ancillary wetland information to create information classes of broad wetland groups (Floodplain wetlands, freshwater lakes, saline lakes, and reservoirs). This data layer is the result of an external review of the Murray-Darling Basin Wetlands Verification Series 1.0. by relevant state agencies in NSW, Vic, Qld, SA and ACT.

  19. Wetlands GIS of the Murray-Darling Basin Series 2.0

    • researchdata.edu.au
    Updated Jul 8, 2015
    Cite
    Murray-Darling Basin Authority (2015). Wetlands GIS of the Murray-Darling Basin Series 2.0 [Dataset]. https://researchdata.edu.au/wetlands-gis-murray-series-20/2996680
    Explore at:
    Dataset updated
    Jul 8, 2015
    Dataset provided by
    Data.govhttps://data.gov/
    Authors
    Murray-Darling Basin Authority
    License

    Attribution 3.0 (CC BY 3.0)https://creativecommons.org/licenses/by/3.0/
    License information was derived automatically

    Area covered
    Description

    The purpose of the Wetlands GIS of the Murray-Darling Basin project was to map the maximum extent of wetlands within a ten year period (1983-1993) based on the presence of water.

    Wetlands greater than 5ha were identified using a combination of an unsupervised classification of spectral classes of Landsat MSS imagery and ancillary wetland information to create information classes of broad wetland groups (Floodplain wetlands, freshwater lakes, saline lakes, and reservoirs). This data layer is the result of an external review of the Murray-Darling Basin Wetlands Verification Series 1.0 by relevant state agencies in NSW, Vic, Qld, SA and ACT.

  20. Summary of dates and properties of the satellite imagery used in the study.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Antonio Sanchez; Dania Abdul Malak; Anis Guelmami; Christian Perennou (2023). Summary of dates and properties of the satellite imagery used in the study. [Dataset]. http://doi.org/10.1371/journal.pone.0122694.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOShttp://plos.org/
    Authors
    Antonio Sanchez; Dania Abdul Malak; Anis Guelmami; Christian Perennou
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    According to the department, dates may change due to the presence of clouds in the images.
    Summary of dates and properties of the satellite imagery used in the study.
