Coconuts and coconut products are an important commodity in the Tongan economy. Plantations, such as the one in the town of Kolovai, have thousands of trees. Inventorying each of these trees by hand would require considerable time and labor. Alternatively, tree health and location can be surveyed using remote sensing and deep learning. In this lesson, you'll use the Deep Learning tools in ArcGIS Pro to create training samples and run a deep learning model to identify the trees on the plantation. Then, you'll estimate tree health using a Visible Atmospherically Resistant Index (VARI) calculation to determine which trees may need inspection or maintenance.
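The health estimate rests on a single formula: VARI = (Green − Red) / (Green + Red − Blue). Below is a minimal sketch of that calculation in NumPy, assuming the three visible bands have already been read into float arrays; the function name and the zero-denominator handling are illustrative, and the lesson itself performs this step with ArcGIS raster functions.

```python
# Minimal VARI sketch; red/green/blue are assumed to be float NumPy
# arrays read from a 3-band raster (band order is an assumption).
import numpy as np

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    denom = green + red - blue
    with np.errstate(divide="ignore", invalid="ignore"):
        out = (green - red) / denom
    # Where the denominator vanishes, return 0 instead of inf/NaN.
    return np.where(denom == 0, 0.0, out)
```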
To detect palm trees and calculate vegetation health, you only need ArcGIS Pro with the Image Analyst extension. To publish the palm tree health data as a feature service, you need ArcGIS Online and the Spatial Analyst extension.
In this lesson you will build skills in these areas:
Learn ArcGIS is a hands-on, problem-based learning website using real-world scenarios. Our mission is to encourage critical thinking, and to develop resources that support STEM education.
Manually digitizing the track of an object can be a slow process. This model automates the object tracking process significantly, and hence speeds up motion imagery analysis workflows. It can be used with the Motion Imagery Toolset found in the Image Analyst extension to track objects. The detailed workflow and description of the object tracking capability in ArcGIS Pro can be found here. This model can be used for applications such as object following and surveillance of stationary objects. It does not perform well when there are sudden camera shakes or abrupt scale changes.

Using the model: Follow the guide to use the model. The model can be used with the Motion Imagery tools in ArcGIS Pro 2.8 and onwards. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.

Input: Object to track, marked as a bounding box in 8-bit, 3-band high-resolution full motion video / motion imagery. Recommended object size is greater than 15x15 pixels.

Output: Bounding box depicting the object location in successive frames.

Applicable geographies: This model is expected to work well in all regions globally for generic objects of interest. However, results can vary for motion imagery that is statistically dissimilar to the training data.

Model architecture: This model uses the SiamMask model architecture implemented in ArcGIS API for Python.

Accuracy metrics: The model has an average precision score of 0.853.

Training data: The model was trained using image sequences from the DAVIS dataset, licensed under the CC BY 4.0 license, and further fine-tuned on aerial motion imagery.

Sample results: Here are a few results from the model.
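The init-then-update loop that single-object trackers such as SiamMask follow can be illustrated with a short script. The sketch below uses OpenCV's CSRT tracker as a stand-in rather than SiamMask, and the video path and initial box are illustrative assumptions, but the frame-by-frame pattern is the same one the Motion Imagery tools automate.

```python
# Illustrative single-object tracking loop using OpenCV's CSRT tracker
# (a stand-in for SiamMask; the video path and initial box are assumptions).
import cv2

cap = cv2.VideoCapture("motion_imagery.mp4")  # hypothetical input clip
ok, frame = cap.read()

# Initial bounding box (x, y, width, height) around the object to track;
# the model card recommends objects larger than 15x15 pixels.
bbox = (300, 200, 48, 48)

tracker = cv2.TrackerCSRT_create()
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)  # box for the object in this frame
    if not found:
        break  # e.g., an abrupt scale change or camera shake lost the target
cap.release()
```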
The Virginia Geographic Information Network (VGIN) has coordinated the development and maintenance of a statewide Building Footprint data layer in conjunction with local governments across the Commonwealth. The Virginia Building Footprint dataset is aggregated as part of the VGIN Local Government Data Call update cycle. Localities are encouraged to submit data bi-annually and are included in the Building Footprint dataset with their most recent geography.

Building footprints are polygon outlines of structures remotely rendered by digitizing the Virginia Base Mapping Program's (VBMP) digital ortho-photogrammetry imagery or by digitizing local government subdivision plats. VBMP building footprints are a collection of locally submitted data and, as published by the Virginia Geographic Information Network, carry no addressing, ownership, resident information, or construction specifications. VBMP building footprints are not assumed to be of survey quality and carry no guarantee as to accuracy. Even with these restrictions and limitations, building outlines are a valuable resource for geospatial analysis and derivative data development. Data input from localities is processed and published quarterly. To date, the majority of Virginia's localities' building footprints have been captured, but not all.

GDB Version: ArcGIS Pro 3.3
Additional Resources: Shapefile Download; REST Endpoint
Description: The baseline features in this dataset were created by Applied Geographics for CDOT during the ROW/real property modernization project, which began in 2015. In March 2020, CDOT took over maintenance of the data. The source files used to create the baseline dataset were primarily right-of-way project files. The features were either digitized in ArcGIS Pro from georeferenced PDF source files, converted from the source MicroStation DGN files using a custom CAD-to-GIS conversion tool, or created in ArcGIS Pro using the Traverse tool from a legal description.

Last Update: Ongoing
Update Frequency: As Needed
Data Owner: Division of Transportation Development
Data Contact: GIS Support Unit
Collection Method:
Projection: NAD 1983 / UTM Zone 13N
Coverage Area: Statewide
Temporal:
Disclaimer/Limitations: This real property and/or right-of-way geographic information system (GIS) data and any related documents are for reference only and may not be suitable for legal, engineering, or surveying purposes. The Colorado Department of Transportation, and its employees and agents, make no warranty, express or implied, as to the accuracy, completeness, or usefulness of any information and assume no liability for errors or consequences from use. Verifying the accuracy, completeness, or usefulness of this data is the responsibility of the user.
U.S. Government Workshttps://www.usa.gov/government-works
The state of Tennessee is divided into 805 individual 7.5-minute topographic quadrangle maps. The Tennessee Department of Environment and Conservation (TDEC) maintains an archive of paper maps that were utilized for estimating groundwater well locations. Each well location was plotted by hand and marked with corresponding water well data. These hand-plotted locations represent the most accurate spatial information for each well but exist solely in paper format. To create the shapefile of the well location data for this data release, individual paper maps were scanned and georeferenced. From these georeferenced map images (GRI), the hand-plotted well locations were digitized into a shapefile of point data using ArcGIS Pro. The shapefile is contained in "TN_waterwell.zip," which contains locations for 8,826 points from the first 200 7.5-minute quadrangles in Tennessee (sorted alphabetically) from Adair 438NW through Harriman 123NE. While some spring locations are included in this da ...
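Digitizing plotted wells into point data of this kind typically reduces to creating a point feature class and inserting coordinate pairs. A minimal arcpy sketch is below; the file names, spatial reference, and sample coordinates are illustrative assumptions, not values from this data release.

```python
# Hypothetical sketch: build a point shapefile of digitized well
# locations with arcpy (paths, WKID, and coordinates are assumptions).
import arcpy

out_dir = r"C:\data"
out_name = "TN_waterwell.shp"
sr = arcpy.SpatialReference(4269)  # NAD 1983, a common choice for US data

arcpy.management.CreateFeatureclass(out_dir, out_name, "POINT",
                                    spatial_reference=sr)

# Each tuple is (longitude, latitude) read off a georeferenced quad image.
wells = [(-86.7816, 36.1627), (-84.5555, 35.9606)]
with arcpy.da.InsertCursor(f"{out_dir}\\{out_name}", ["SHAPE@XY"]) as cur:
    for xy in wells:
        cur.insertRow([xy])
```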
California Department of Transportation (Caltrans), Division of Transportation Planning, Aeronautics Program provided airport layout drawings with estimated digitized airport property or fence lines over a Google Earth Pro imagery background. The Caltrans Division of Research, Innovation and System Information (DRISI) GIS office digitized the airport boundary lines against a Bing Maps Aerial background and built the boundary lines into a GIS polygon feature class.

Generally, Airport Layout Plans do not show complete, connected property or fence lines. In many cases, the boundary lines were interpreted from the property and fence lines using our best judgment. The airport general information was derived from FAA Airport Master Record and Reports, whose URLs are included in the attribute table. Airport boundary data is intended for general reference and does not represent official airport property boundary determinations.
This polygon data represents the final interpretation of side scan sonar imagery, including the delineation of sand/mud bottom, artificial reef, boulder field, cobble, and reef ledge. These features were delineated using side scan sonar signatures acquired with a side scan towfish. The sonar data were then processed in SonarWiz 7.2 to produce the initial imagery for interpretation, and then brought into ArcGIS Pro for feature digitization.
This data represents a 2017 land use survey of San Joaquin County conducted by the California Department of Water Resources (DWR), North Central Region Office staff. Land use field boundaries were digitized with ArcGIS 10.5.1 using 2016 NAIP imagery as the base, with Google Earth and the Sentinel-2 imagery website used as additional references. Agricultural fields were delineated by following actual field boundaries rather than using road centerlines to represent field borders. Field boundaries were not drawn to represent legal parcel (ownership) boundaries and are not meant to be used as parcel boundaries.

The field work for this survey was conducted from July 2017 through August 2017. Images, land use boundaries, and Esri ArcMap software were loaded onto Surface Pro tablet PCs that were used as the field data collection tools. Staff took these Surface Pro tablets into the field, and virtually all agricultural fields were visited to identify the land use. Global Positioning System (GPS) units connected to the laptops were used to confirm the surveyor's location with respect to the fields. Land use codes were digitized in the field using dropdown selections from defined domains. Agricultural fields the staff were unable to access were designated 'E' in the Class field, for Entry Denied, in accordance with the 2016 Land Use Legend. The areas designated 'E' were also interpreted using a combination of Google Earth, the Sentinel-2 imagery website, the Land IQ (LIQ) 2017 Delta Survey, and the County of San Joaquin 2017 Agriculture GIS feature class.

Upon completion of the survey, a Python script was used to convert the data table into the standard land use format. ArcGIS geoprocessing tools and topology rules were used to locate errors for quality control.

The primary focus of this land use survey is mapping agricultural fields. Urban residences and other urban areas were delineated using aerial photo interpretation; some urban areas may have been missed. Rural residential land use was delineated by drawing polygons surrounding houses and other buildings along with some of the surrounding land. These footprint areas do not represent the entire footprint of urban land. Water source information was not collected for this land use survey; therefore, the water source has been designated as Unknown.

Before final processing, standard quality control procedures were performed jointly by staff at DWR's North Central Region Office and at DWR's headquarters office under the leadership of Muffet Wilkerson, Senior Land and Water Use Supervisor. After quality control procedures were completed, the data was finalized. The positional accuracy of the digital line work, which is based upon the orthorectified NAIP imagery, is approximately 6 meters. The land use attribute accuracy for agricultural fields is high, because almost every delineated field was visited by a surveyor; the accuracy is estimated at 95 percent because some errors may have occurred. Possible sources of attribute errors are: a) human error in the identification of crop types, and b) data entry errors.
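The entry above mentions a Python script that converts the survey table into the standard land use format. A hedged sketch of that kind of step is below; the column names, code mappings, and file paths are hypothetical, since the actual script and schema are not part of this release.

```python
# Hypothetical sketch of converting a raw survey table to a standard
# land use format with pandas (columns and mappings are assumptions).
import pandas as pd

raw = pd.read_csv("survey_export.csv")  # hypothetical field-collected table

# Map field-collected dropdown codes to the standard legend classes.
class_map = {"V": "Vineyard", "T": "Truck Crops", "E": "Entry Denied"}

standard = pd.DataFrame({
    "FIELD_ID": raw["field_id"],
    "CLASS": raw["class_code"].map(class_map).fillna("Unknown"),
    "WATER_SOURCE": "Unknown",  # water source was not collected in 2017
})
standard.to_csv("landuse_standard.csv", index=False)
```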
The delineation of agricultural field boundaries has a wide range of applications, such as crop management, precision agriculture, land use planning, and crop insurance. Manually digitizing agricultural fields from imagery is labor-intensive and time-consuming. This deep learning model automates the process of extracting agricultural field boundaries from satellite imagery, significantly reducing the time and effort required. Its ability to adapt to varying crop types, geographical regions, and imaging conditions makes it suitable for large-scale operations.

Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model: This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

Input: Sentinel-2 L2A 12-band multispectral imagery using the Bottom of Atmosphere (BOA) reflectance product, in the form of a raster, mosaic, or image service.

Output: Feature class containing delineated agricultural fields.

Applicable geographies: The model is expected to work well in agricultural regions of the USA.

Model architecture: This model uses the Mask R-CNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics: This model has an average precision score of 0.64 for fields.

Training data: This model has been trained on an Esri proprietary agricultural field delineation dataset.

Limitations: This model works well only in areas with farmland and may not give satisfactory results near water bodies or in hilly regions. The results of this pretrained model cannot be guaranteed against any other variation of the Sentinel-2 data.

Sample results: Here are a few results from the model.
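Running a pretrained model package like this one from a script generally goes through the Image Analyst geoprocessing tools. The sketch below is a minimal, hedged example using arcpy's Detect Objects Using Deep Learning tool; the raster path, model package path, and output name are assumptions rather than values shipped with this model.

```python
# Hypothetical sketch: run a pretrained .dlpk against imagery with the
# Image Analyst tool (all paths and names below are assumptions).
import arcpy

arcpy.CheckOutExtension("ImageAnalyst")
arcpy.env.workspace = r"C:\data\fields.gdb"

arcpy.ia.DetectObjectsUsingDeepLearning(
    in_raster=r"C:\data\sentinel2_boa.tif",
    out_detected_objects="ag_field_boundaries",
    in_model_definition=r"C:\models\FieldDelineation.dlpk",
)
```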
This wetland mapping project was funded by the King County Water and Land Services, Ecological Restoration and Engineering Services Unit, as part of a Best Available Science update. Wetlands within the King County boundary were mapped, classified, and reviewed by King County team members and National Wetlands Inventory staff. Wetlands were mapped and classified using the National Wetlands Inventory (NWI) classification system (Cowardin et al., 1979) and the Landscape Position, Landform, Water Flow Path, and Water Body Type (LLWW) classification developed for the Western U.S. (Lemly et al., 2018).
The main objective of this project was to improve knowledge of wetland extent and value within King County. In all, approximately 6,600 square miles of land comprise the county. King County contracted with Geospatial Services (GSS) at Saint Mary's University of Minnesota to create high-quality National Wetlands Inventory Plus (NWIPlus) level mapping for the county; program staff will conduct some ground-truthing of the data. NWIPlus is an enhanced NWI product with hydrogeomorphic-type descriptors that can facilitate predicting wetland functions. The enhanced attributes describe wetland landform, water flow path, and water body type. The updated mapping will be utilized by developers and landowners to avoid wetland impacts, and may be incorporated into other GIS models that identify potential wetland restoration projects and conservation priorities. Finalized mapping was made available through the county's online map applications and submitted to the US Fish and Wildlife Service for addition to the National Wetlands Inventory.
King County completed this work as part of a Landscape Level 1 wetlands assessment. This work fits into the county's Wetland Program Plan ("The Plan") and its goal of providing greater protection of wetlands and aquatic resources statewide. This work is overseen and supported by the King County Wetland Program, within the Water and Land Services Department.

The project, entitled "King County Wetland Inventory Update, King County, WA," used geospatial techniques and image interpretation processes to remotely map and classify wetlands (including deepwater habitats) and riparian areas in King County, WA. Wetlands for the project area were mapped and classified using on-screen digitizing methods in a Geographic Information System (GIS). This process was supported by development of a selective image interpretation key that resulted from field verification of image signatures and wetland classifications. Wetland image interpretation employed a variety of input image and collateral data sources, as well as field verification techniques. All mapping was completed at an on-screen scale of 1:5,000 or larger in compliance with national wetland mapping standards.

The primary source imagery for mapping consisted of EagleView 2021 one-quarter-foot, true-color Pictometry 8-bit tiled orthophotography in TIFF format, published by King County and mosaicked by GSS. Collateral data used in the mapping process included Light Detection and Ranging (LiDAR) Digital Elevation Model (DEM) data at 1.5 ft resolution and LiDAR-derived products such as hillshade, contours, depth grids, and synthetic flow networks; King County Digital Surface Model vegetation height; the King County Coho intrinsic potential stream layer; Beaver Intrinsic Potential (BIP); historic National Wetland Inventory (NWI); National Hydrography Dataset (NHD) springs and watershed boundaries; Esri basemap imagery; Google Earth Time Slider true-color imagery (GE); King County wetland layers; King County stormwater features; King County wetland mitigation sites; King County habitat restoration sites; and Wetland Intrinsic Potential (WIP). All feature creation and attribution were completed with on-screen digitization procedures using Esri ArcGIS Pro 3.2.0 with advanced editing tools.

For wetland mapping and classification projects at the landscape level, a desktop computer heads-up digitizing process is performed referencing the Federal Geographic Data Committee (FGDC) Wetlands Mapping Standard (FGDC-STD-015-2009, FGDC 2009) and the FGDC Classification of Wetlands and Deepwater Habitats of the United States Standard (FGDC-STD-004-2013, FGDC 2013). Field reviews are used to address questions regarding image interpretation, land use practices, classification of wetland type, and verification of preliminary mapping. The King County inventory of wetlands used source imagery and collateral data to identify and classify features within the FGDC standards (FGDC-STD-015-2009, FGDC 2009; FGDC-STD-004-2013, FGDC 2013). The project's Target Mapping Unit (TMU) was 0.25 acres; however, features were mapped beyond this TMU at the request of King County and at the interpreter's discretion. Following this process, the King County inventory went through a standardized Quality Assurance and Quality Control (QA/QC) process with the United States Fish and Wildlife Service (USFWS) NWI program, King County, and GSS's internal QA/QC review.
Microsoft Buildings Footprints with Heights from service: https://services.arcgis.com/P3ePLMYs2RVChkJx/arcgis/rest/services/MS_Buildings_Training_Data_with_Heights/FeatureServer (restrictions, do not use)
Source: Approx. 9.8 million building footprints for portions of metro areas in 44 US States in Shapefile format.
Open Database License (ODbL) v1.0https://www.opendatacommons.org/licenses/odbl/1.0/
Microsoft recently released a free set of deep learning generated building footprints covering the United States of America. As part of that project, Microsoft shared 8 million digitized building footprints with height information used for training the deep learning algorithm. This map layer includes all buildings with height information from the original training set and can be used in Scene Viewer and ArcGIS Pro to create simple 3D representations of buildings. Learn more about the Microsoft project at the Announcement Blog, or find the raw data on GitHub. Click to see Microsoft Building Layers in ArcGIS Online.

Digitized building footprints by state and city:
Alabama: Greater Phenix City, Mobile, and Montgomery
Arizona: Tucson
Arkansas: Little Rock, with 5 buildings just across the river from Memphis
California: Bakersfield, Fresno, Modesto, Santa Barbara, Sacramento, Stockton, Calaveras County, and San Francisco and the Bay Area south to San Jose and north to Cloverdale
Colorado: Interior of Denver
Connecticut: Enfield and Windsor Locks
Delaware: Dover
Florida: Tampa, Clearwater, St. Petersburg, Orlando, Daytona Beach, Jacksonville, and Gainesville
Georgia: Columbus, Atlanta, and Augusta
Illinois: East St. Louis downtown area, Springfield, Champaign, and Urbana
Indiana: Indianapolis downtown and Jeffersonville downtown
Iowa: Des Moines
Kansas: Topeka
Kentucky: Louisville downtown, Covington, and Newport
Louisiana: Shreveport, Baton Rouge, and center of New Orleans
Maine: Augusta and Portland
Maryland: Baltimore
Massachusetts: Boston, South Attleboro, commercial area in Seekonk, and Springfield
Michigan: Downtown Detroit
Minnesota: Downtown Minneapolis
Mississippi: Biloxi and Gulfport
Missouri: Downtown St. Louis, Jefferson City, and Springfield
Nebraska: Lincoln
Nevada: Carson City, Reno, and Las Vegas
New Hampshire: Concord
New Jersey: Camden and downtown Jersey City
New Mexico: Albuquerque and Santa Fe
New York: Syracuse and Manhattan
North Carolina: Greensboro, Durham, and Raleigh
North Dakota: Bismarck
Ohio: Downtown Cleveland, downtown Cincinnati, and downtown Columbus
Oklahoma: Downtown Tulsa and downtown Oklahoma City
Oregon: Portland
Pennsylvania: Downtown Pittsburgh, Harrisburg, and Philadelphia
Rhode Island: The greater Providence area
South Carolina: Greenville, downtown Augusta, the greater Columbia area, and the greater Charleston area
South Dakota: Greater Pierre area
Tennessee: Memphis and Nashville
Texas: Lubbock, Longview, part of Fort Worth, Austin, downtown Houston, and Corpus Christi
Utah: Salt Lake City downtown
Virginia: Richmond
Washington: Greater Seattle area, to Tacoma in the south and Marysville in the north
Wisconsin: Green Bay, downtown Milwaukee, and Madison
Wyoming: Cheyenne
Building footprint layers are useful in preparing base maps and analysis workflows for urban planning and development. They also have use in insurance, taxation, change detection, infrastructure planning, and a variety of other applications.
Digitizing building footprints from imagery is a time-consuming task that is commonly done manually. Deep learning models are highly capable of learning these complex semantics and can produce superior results. Use this deep learning model to automate the tedious manual process of extracting building footprints, significantly reducing the time and effort required.

Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Fine-tuning the model: This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.

Input: 8-bit, 3-band high-resolution (10–40 cm) imagery.

Output: Feature class containing building footprints.

Applicable geographies: The model is expected to work well in the United States.

Model architecture: The model uses the Mask R-CNN model architecture implemented in ArcGIS API for Python.

Accuracy metrics: The model has an average precision score of 0.718.

Sample results: Here are a few results from the model. To view more, see this story.
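Fine-tuning with the Train Deep Learning Model tool is typically preceded by exporting image chips from labeled footprints. The sketch below is a hedged outline of that two-step workflow in arcpy; every path, the chip size, and the epoch count are assumptions rather than values from this model card.

```python
# Hypothetical fine-tuning outline with Image Analyst tools
# (paths, chip sizes, and epochs below are all assumptions).
import arcpy

arcpy.CheckOutExtension("ImageAnalyst")

# 1) Export training chips from imagery and labeled footprint polygons.
arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster=r"C:\data\ortho_30cm.tif",
    out_folder=r"C:\data\chips",
    in_class_data=r"C:\data\labels.gdb\building_footprints",
    image_chip_format="TIFF",
    tile_size_x=256,
    tile_size_y=256,
    metadata_format="RCNN_Masks",  # suits a Mask R-CNN style model
)

# 2) Fine-tune, starting from the pretrained model package.
arcpy.ia.TrainDeepLearningModel(
    in_folder=r"C:\data\chips",
    out_folder=r"C:\models\footprints_finetuned",
    max_epochs=20,
    model_type="MASKRCNN",
    pretrained_model=r"C:\models\BuildingFootprintExtraction_USA.dlpk",
)
```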