This deep learning model is used to detect and segment trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, and urban planning. High-resolution aerial and drone imagery is well suited to tree detection because of its high spatio-temporal coverage. This model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON). It also uses the Segment Anything Model (SAM) by Meta.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model cannot be fine-tuned using ArcGIS tools.

Input
8-bit, 3-band high-resolution (10-25 cm) imagery.

Output
Feature class containing a separate mask for each tree.

Applicable geographies
The model is expected to work well in the United States.

Model architecture
This model is based on the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and on the open-source Segment Anything Model (SAM) by Meta. A brief DeepForest sketch follows this entry.

Accuracy metrics
This model has a precision score of 0.66 and a recall of 0.79.

Training data
This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. It also uses the Segment Anything Model (SAM) by Meta, which is trained on the Segment Anything 1-Billion mask dataset (SA-1B), comprising a diverse set of 11 million images and over 1 billion masks.

Sample results
Here are a few results from the model.

Citations
Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071.
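Because the detector is built on the open-source DeepForest package, here is a minimal sketch of what the underlying tree-crown prediction step looks like outside ArcGIS (assuming the deepforest package is installed; the image path is a placeholder, and the supported ArcGIS route remains the packaged model with the standard deep learning geoprocessing tools):

    # Minimal DeepForest prediction sketch (assumes: pip install deepforest).
    from deepforest import main

    model = main.deepforest()   # build the RetinaNet-based model
    model.use_release()         # download the prebuilt NEON release weights

    # Predict tree bounding boxes for one RGB image; returns a pandas
    # DataFrame with xmin, ymin, xmax, ymax, label, and score columns.
    boxes = model.predict_image(path="drone_tile.png")
    print(boxes.head())

In the packaged ArcGIS workflow, the per-tree masks are then refined with SAM; the sketch above covers only the detection stage.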
Image Mask is a configurable app template for identifying areas of an image that have changed over time or that meet user-set thresholds for calculated spectral indexes. The template also includes tools for measurement, recording locations, and more. App users can zoom to bookmarked areas of interest (or search for their own), select any of the imagery layers from the associated web map to analyze, use a time slider or dropdown menu to select images, then choose between the Change Detection or Mask tools to produce results.

Image Mask users can do the following:
- Zoom to bookmarked areas of interest (or bookmark their own)
- Select specific images from a layer to visualize (search by date or another attribute)
- Use the Change Detection tool to compare two images in a layer (see options below)
- Use the Mask tool to highlight areas that meet a user-set threshold for common spectral indexes (NDVI, SAVI, a burn index, and a water index). For example, highlight all the areas in an image with NDVI values above 0.25 to find vegetation.
- Annotate imagery using editable feature layers
- Perform image measurement on imagery layers that have mensuration capabilities
- Export an imagery layer to the user's local machine, or as a layer in the user's ArcGIS account

Use Cases
- A student investigating urban expansion over time using Esri's Multispectral Landsat image service
- A farmer using NAIP imagery to examine changes in crop health
- An image analyst recording burn scar extents using satellite imagery
- An aid worker identifying regions with extreme drought to focus assistance

Change detection methods
For each imagery layer, give app users one or more of the following change detection options:
- Image Brightness (calculates the change in overall brightness)
- Vegetation Index (NDVI) (requires red and infrared bands)
- Soil-Adjusted Vegetation Index (SAVI) (requires red and infrared bands)
- Water Index (requires green and short-wave infrared bands)
- Burn Index (requires infrared and short-wave infrared bands)

For each of the indexes, users also have a choice between three modes (a short sketch follows this entry):
- Difference Image: calculates increases and decreases for the full extent
- Difference Mask: users can focus on significant change by setting the minimum increase or decrease to be masked. For example, a user could mask only areas where NDVI increased by at least 0.2.
- Threshold Mask: the user sets a threshold and magnitude for what is masked as change. The app only identifies change that is above the user-set lower threshold and larger than the user-set minimum magnitude.

Supported Devices
This application is responsively designed to support use in browsers on desktops, mobile phones, and tablets.

Data Requirements
Creating an app with this template requires a web map with at least one imagery layer.

Get Started
This application can be created in the following ways:
- Click the Create a Web App button on this page
- Share a map and choose to Create a Web App
- On the Content page, click Create - App - From Template

Click the Download button to access the source code. Do this if you want to host the app on your own server and optionally customize it to add features or change styling.
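To make the index-masking logic concrete, here is a minimal NumPy sketch of the kind of computation the Mask and Difference Mask modes perform. This is an illustration of the math, not the app's actual implementation; the band arrays and the 0.25/0.2 thresholds are placeholders taken from the examples above.

    import numpy as np

    def ndvi(red, nir):
        # NDVI = (NIR - Red) / (NIR + Red); small epsilon avoids divide-by-zero
        return (nir - red) / (nir + red + 1e-9)

    # Two dates of red/near-infrared bands as float arrays (placeholders)
    red_t1, nir_t1 = np.random.rand(256, 256), np.random.rand(256, 256)
    red_t2, nir_t2 = np.random.rand(256, 256), np.random.rand(256, 256)

    # Mask tool: highlight pixels whose NDVI exceeds a user-set threshold
    vegetation_mask = ndvi(red_t2, nir_t2) > 0.25

    # Difference Mask: highlight pixels where NDVI increased by at least 0.2
    ndvi_change = ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)
    growth_mask = ndvi_change >= 0.2

The Difference Image mode corresponds to displaying ndvi_change itself; the two mask modes reduce it to a boolean layer using the user-set thresholds.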
Satellite imagery has several applications, including land use and land cover classification, change detection, and object detection. Satellite-based remote sensing sensors often encounter cloud cover, which prevents them from collecting clear imagery of the Earth. Before the imagery can be used for analysis, the clouded regions must be excluded or cloud removal algorithms applied, and most of these preprocessing steps require a cloud mask. For single-scene imagery it is relatively easy, though tedious, to create a cloud mask manually. For a larger number of images, however, an automated approach for identifying clouds is necessary. This model can be used to automatically generate a cloud mask from Sentinel-2 imagery.

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model (a sketch follows this entry).

Input
Sentinel-2 L2A imagery in the form of a raster, mosaic dataset, or image service.

Output
Classified raster containing three classes: Low density, Medium density, and High density.

Applicable geographies
This model is expected to work well in Europe and the United States. It works well for land areas; large water bodies such as oceans, seas, and lakes should be avoided.

Model architecture
This model uses the UNet model architecture implemented in ArcGIS API for Python.

Accuracy metrics
This model has an overall accuracy of 94 percent with L2A imagery. The table below summarizes the precision, recall, and F1 score of the model on the validation dataset. The comparatively low precision, recall, and F1 score for Low density clouds may cause false detections of such clouds in certain urban areas. Also, some extremely bright pixels of certain seasonal clouds may be missed.

    Class            Precision   Recall   F1 score
    High density     0.960       0.975    0.968
    Medium density   0.905       0.897    0.901
    Low density      0.774       0.571    0.657

Sample results
Here are a few results from the model.
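As a rough illustration of fine-tuning a UNet-based model with the ArcGIS API for Python, here is a hedged sketch. The supported route in ArcGIS Pro is the Train Deep Learning Model tool; the Python equivalent below assumes training chips were already exported (for example, with Export Training Data For Deep Learning), and the paths, batch size, and hyperparameters are placeholders.

    from arcgis.learn import prepare_data, UnetClassifier

    # Exported training chips (placeholder folder path).
    data = prepare_data(r"C:\data\cloud_chips", batch_size=8)

    # Resume from the pretrained model definition and fine-tune
    # (placeholder .emd path for the downloaded model package).
    model = UnetClassifier.from_model(r"C:\models\CloudMask\CloudMask.emd", data)
    model.fit(epochs=10, lr=0.0001)

    # Save as a deep learning package usable by ArcGIS tools.
    model.save("cloud_mask_finetuned")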
A feature layer covering the regions surrounding Oakland County, Michigan, used for cartographic purposes. Fill and line symbology can be configured. BY USING THIS WEBSITE OR THE CONTENT THEREIN, YOU AGREE TO THE TERMS OF USE.
Segmentation models perform a pixel-wise classification by classifying the pixels into different classes. The classified pixels correspond to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, and crop fields. Generally, every segmentation model needs to be trained from scratch using a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can be used to segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust in identifying object boundaries and differentiating between various objects across domains, even if it has never seen them before. Use this model to extract masks of various objects in any image (a usage sketch follows this entry).

Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model
This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.

Input
8-bit, 3-band imagery.

Output
Feature class containing masks of various objects in the image.

Applicable geographies
The model is expected to work globally.

Model architecture
This model is based on the open-source Segment Anything Model (SAM) by Meta.

Training data
This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.

Sample results
Here are a few results from the model.
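For readers who want to see what zero-shot mask extraction looks like with the underlying open-source SAM package, here is a minimal sketch. It assumes the segment-anything package and a downloaded ViT-H checkpoint; the file paths are placeholders, and within ArcGIS the packaged model is instead run through the usual deep learning geoprocessing tools.

    import cv2
    from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

    # Load the pretrained SAM weights (checkpoint path is a placeholder).
    sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
    mask_generator = SamAutomaticMaskGenerator(sam)

    # SAM expects an 8-bit RGB array, matching the model's 3-band input.
    image = cv2.cvtColor(cv2.imread("aerial_tile.png"), cv2.COLOR_BGR2RGB)

    # Each result is a dict with a boolean 'segmentation' mask, plus
    # 'area', 'bbox', and stability/confidence scores.
    masks = mask_generator.generate(image)
    print(len(masks), "masks found")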
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This digital geographic dataset contains polygon features representing a mask for park locations within the City of Johns Creek, Georgia.
A rectangle with the Sandy Springs city limits extracted.
This data layer has been modified from its original version. It contains state boundaries for Delaware, the District of Columbia, Maryland, New Jersey, New York, North Carolina, Ohio, Virginia, and West Virginia. Population information associated with the original data has been removed. The mask is edited to match the MD state detailed political boundary. This is an MD iMAP hosted service. Find more information at https://imap.maryland.gov. Feature Service Link: https://geodata.md.gov/imap/rest/services/Boundaries/MD_StateMask/FeatureServer/0
A black, semi-transparent vector tile cache layer that masks out the entire world except Oakland County, Michigan. Used for cartographic purposes. Visible at all scales.
This layer was created using the Feature Outline Masks tool in ArcGIS 10.2.2. The tool was applied to the State of WA boundary layer with a tolerance of 5 points. A polygon covering the extent of the world was then created, and the mask was used to cut a hole in that world-extent polygon. The result is what you see in this layer (a geoprocessing sketch follows this entry).
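A rough arcpy sketch of this kind of workflow, as an assumed reconstruction rather than the original author's script (workspace, layer names, and the reference scale are placeholders; the 5-point margin mirrors the description, and Erase requires an Advanced license):

    import arcpy

    arcpy.env.workspace = r"C:\data\masks.gdb"  # placeholder workspace

    # Generate outline masks around the WA boundary features.
    arcpy.cartography.FeatureOutlineMasks(
        "WA_boundary", "WA_outline_masks",
        reference_scale="500000",
        spatial_reference=arcpy.SpatialReference(4326),
        margin="5 Points",
        method="CONVEX_HULL")

    # Cut a hole in the world-extent polygon using the mask.
    arcpy.analysis.Erase("world_extent", "WA_outline_masks", "world_with_hole")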
U.S. States and Canada Provinces represents the states of the United States and the provinces of Canada. Metadata https://oe.oregonexplorer.info/metadata/bnd_us_states.htm Download https://oregonexplorer.info/ExternalContent/SpatialDataForDownload/bnd_us_states.zip
The Idaho boundary, taken from the TIGER/Line files, is used here to create a mask showing only data within the state of Idaho. This supports the prioritization of mesic habitat within Idaho. TIGER/Line geodatabases are spatial extracts from the Census Bureau's Master Address File/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) System for use with geographic information systems (GIS) software. The geodatabases contain national coverage (for geographic boundaries or features) or state coverage (boundaries within state). https://www.census.gov/geographies/mapping-files/time-series/geo/tiger-geodatabase-file.html
This basemap was designed with the Vizzuality team for use in the Half-Earth Project globe. The saturated palette and rich landcover tones are meant to engage an audience and to provide the sense that the earth is a charming and beautiful place worthy of thoughtful stewardship. As you zoom in, the saturated basemap is slowly replaced by imagery. This basemap is the major component of the Vibrant Map. The Vibrant Map is configured to use these basemap tiles from global to regional extents, then transition to Esri's World Imagery basemap tiles for a seamless transition from small to large scale. Find more information about this basemap, and its contributing data, here: https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/creating-the-half-earth-vibrant-basemap/ Learn more about the Half-Earth Project here and explore highlighted areas of biodiversity here. Happy Mapping! John
[Metadata] Ocean polygon layer developed by the Hawaii Statewide GIS Program for cartographic purposes, to mask out ocean areas. It provides a large polygon around the eight main Hawaiian Islands for use as a mask or background when making maps. June 2024: Hawaii Statewide GIS Program staff removed extraneous fields that had been added as part of a 2016 GIS database conversion and were no longer needed. For additional information, please refer to the complete metadata at https://files.hawaii.gov/dbedt/op/gis/data/ocean_mask.pdf or contact the Hawaii Statewide GIS Program, Office of Planning, State of Hawaii; PO Box 2359, Honolulu, HI 96804; (808) 587-2846; email: gis@hawaii.gov; website: https://planning.hawaii.gov/gis.
This personal geodatabase contains land and water masks (as rasters and polygons) for the remotely sensed data. It also contains a polygon feature class named Spatial_Extent_Remote_Sensing_Data, which denotes the outer boundaries of all of the remote sensing data. All of these masks were derived directly from the remotely sensed imagery using geoprocessing functionality in ArcGIS 9.1.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The USDA Forest Service (USFS) builds multiple versions of its percent tree canopy cover (TCC) data to serve the needs of multiple user communities. These datasets encompass CONUS, Coastal Alaska, Hawaii, the U.S. Virgin Islands, and Puerto Rico. The 2016 TCC Product Suite includes three versions of the data: the initial model outputs, referred to as the Analytical data; a masked version of the initial outputs, referred to as the Cartographic data; and a modified version built for the National Land Cover Database, referred to as the NLCD data, which includes a canopy cover change dataset derived by subtracting the datasets for the nominal years 2011 and 2016. The Analytical data are the initial model outputs generated in the production workflow. These data are best suited for users who will carry out their own detailed statistical and uncertainty analyses on the dataset and place lower priority on its visual appearance for cartographic purposes. Datasets for the nominal years of 2011 and 2016 are available.
The Cartographic products mask the initial model outputs to improve the visual appearance of the datasets. These data are best suited for users who prioritize visual appearance of the data for cartographic and illustrative purposes. Datasets for the nominal years of 2011 and 2016 are available.
The NLCD data are the result of further processing of the masked data, with the goal of generating three coordinated components: (1) a dataset for the nominal year 2011, (2) a dataset for the nominal year 2016, and (3) a dataset that captures the change in canopy cover between the two nominal years. For the NLCD data, the three components meet the criterion 2011 TCC + change in TCC = 2016 TCC. These data are best suited for users who require a coordinated three-component data stack in which each pixel's values meet that criterion. Datasets for the nominal years 2011 and 2016 are available, as well as a dataset capturing the change (loss or gain) in canopy cover between those two years, in areas where change was identified.
These tree canopy cover data are accessible to multiple user communities through multiple channels and platforms, as listed below:
- Analytical: USFS Tree Canopy Cover Datasets (Download); USFS Enterprise Data Warehouse (Image Service)
- Cartographic: USFS Tree Canopy Cover Datasets (Download); USFS Enterprise Data Warehouse (Map Service)
- NLCD: Multi-Resolution Land Characteristics (MRLC) Consortium (Download); USFS Enterprise Data Warehouse (Image Service)

The Puerto Rico and U.S. Virgin Islands TCC NLCD change dataset comprises a single layer. The pixel values range from -97 to 98 percent, where negative values represent canopy loss and positive values represent canopy gain. Because this is a signed 8-bit image, the background is represented by the value 127 and data gaps by the value 110 (a worked decoding example follows this entry). This record was taken from the USDA Enterprise Data Inventory that feeds into the https://data.gov catalog. Data for this record includes the following resources: ISO-19139 metadata, ArcGIS Hub Dataset, ArcGIS GeoService. For complete information, please visit https://data.gov.
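To illustrate how the special pixel values work, here is a small NumPy sketch that separates real change values from the background (127) and data-gap (110) codes and applies the 2011 TCC + change = 2016 TCC criterion. The array contents are made up for illustration; only the value coding comes from the description above.

    import numpy as np

    # Toy signed 8-bit change raster: values in [-97, 98] are percent
    # change; 127 marks background, 110 marks data gaps.
    change = np.array([[ 12, -30, 127],
                       [110,   0,  45]], dtype=np.int8)

    valid = ~np.isin(change, (127, 110))   # pixels with a real change value
    loss = valid & (change < 0)            # canopy loss
    gain = valid & (change > 0)            # canopy gain

    # NLCD criterion applied to a toy 2011 TCC layer: 2011 + change = 2016.
    tcc_2011 = np.array([[40, 80, 0],
                         [ 0, 55, 10]], dtype=np.int16)
    tcc_2016 = np.where(valid, tcc_2011 + change, tcc_2011)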
Attribution 3.0 (CC BY 3.0) https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
Abstract
This dataset was created within the Bioregional Assessment Programme for cartographic purposes. Data has not been derived from any source datasets. Metadata has been compiled by the Bioregional Assessment Programme. Cartographic masks for map products COO 116, used for clear annotation and masking unwanted features from report maps.

Dataset History
Masks created using the 'Feature Outline Masks (Cartography)' tool on annotation layers (labels) within ArcMap.

Dataset Citation
Bioregional Assessment Programme (2015) Cartographic masks for map products COO 116. Bioregional Assessment Source Dataset. Viewed 05 July 2017, http://data.bioregionalassessments.gov.au/dataset/0b52e0d0-9a5c-413e-a60f-715ac23e03a4.
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
This is an MD iMAP hosted service. Find more information at http://imap.maryland.gov. This data layer has been modified from its original version. It contains state boundaries for Delaware, the District of Columbia, Maryland, New Jersey, New York, North Carolina, Ohio, Virginia, and West Virginia. Population information associated with the original data has been removed. The mask is edited to match the MD state detailed political boundary. Feature Service Link: http://geodata.md.gov/imap/rest/services/Boundaries/MD_StateMask/FeatureServer/0 ADDITIONAL LICENSE TERMS: The Spatial Data and the information therein (collectively "the Data") is provided "as is" without warranty of any kind, either expressed, implied, or statutory. The user assumes the entire risk as to quality and performance of the Data. No guarantee of accuracy is granted, nor is any responsibility for reliance thereon assumed. In no event shall the State of Maryland be liable for direct, indirect, incidental, consequential, or special damages of any kind. The State of Maryland does not accept liability for any damages or misrepresentation caused by inaccuracies in the Data or as a result of changes to the Data, nor is there responsibility assumed to maintain the Data in any manner or form. The Data can be freely distributed as long as the metadata entry is not modified or deleted. Any data derived from the Data must acknowledge the State of Maryland in the metadata.
Landsat layer with an NBR (Normalized Burn Ratio) mask applied to it.
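For context, the Normalized Burn Ratio behind such a mask is computed from the near-infrared and shortwave-infrared bands. A minimal NumPy sketch follows; the band arrays and the burn threshold are illustrative placeholders, not values from this layer.

    import numpy as np

    def nbr(nir, swir2):
        # NBR = (NIR - SWIR2) / (NIR + SWIR2); low values suggest burned areas
        return (nir - swir2) / (nir + swir2 + 1e-9)

    # Placeholder Landsat band arrays (e.g. Landsat 8 bands 5 and 7)
    nir = np.random.rand(128, 128)
    swir2 = np.random.rand(128, 128)

    # Example mask: flag likely burned pixels below an assumed NBR cutoff
    burn_mask = nbr(nir, swir2) < 0.1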
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract
The dataset was derived by the Bioregional Assessment Programme from Geoscience Australia GEODATA TOPO series - 1:1 Million to 1:10 Million scale data (GUID: 310c5d07-5a56-4cf7-a5c8-63bdb001cd1a). You can find a link to the parent datasets in the Lineage field in this metadata statement. The History field in this metadata statement describes how this dataset was derived. This dataset contains a landmass mask of Australia.

Purpose
This mask has been developed to readily differentiate onshore and offshore areas, thus allowing clear representation of data in the map by masking the offshore areas. It differs from the dataset Geoscience Australia Topographic 250K series 3 data (GUID: a0650f18-518a-4b99-a553-44f82f28bb5f) through the inclusion of the Gippsland Lakes. This mask has been used in Gippsland Bioregional Assessment cartographic products to tidy overlapping layers or help display labels.

Dataset History
The 2.5 Million scale Mainlands and Islands layers from the Geoscience Australia GEODATA TOPO series - 1:1 Million to 1:10 Million scale data (GUID: 310c5d07-5a56-4cf7-a5c8-63bdb001cd1a) were clipped from a large rectangular extent layer of Australia within ESRI ArcMap 10.2. The mask area is essentially the non-land surface.

Dataset Citation
Bioregional Assessment Programme (2015) Topo 2.5M landmass mask of Australia. Bioregional Assessment Derived Dataset. Viewed 29 September 2017, http://data.bioregionalassessments.gov.au/dataset/4ffd342b-50b9-4dd2-b793-8bf5c4161428.

Dataset Ancestors
Derived from Geoscience Australia GEODATA TOPO series - 1:1 Million to 1:10 Million scale