Swimming pools are important for property tax assessment because they impact the value of the property. Tax assessors at local government agencies often rely on expensive and infrequent surveys, leading to assessment inaccuracies. Finding the area of pools that are not on the assessment roll (such as those recently constructed) is valuable to assessors and will ultimately mean additional revenue for the community. This deep learning model helps automate the task of finding the area of pools from high-resolution satellite imagery. The model can also benefit swimming pool maintenance companies by helping redirect their marketing efforts, and public health and mosquito control agencies can use it to detect pools and drive field activity and mitigation efforts.
Using the model
Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model
This model cannot be fine-tuned using ArcGIS tools.
Input
8-bit, 3-band high-resolution (5-30 centimeters) imagery.
Output
Feature class containing masks depicting pools.
Applicable geographies
The model is expected to work well in the United States.
Model architecture
The model uses the Faster R-CNN architecture implemented using the ArcGIS API for Python and the open-source Segment Anything Model (SAM) by Meta.
Accuracy metrics
The model has an average precision score of 0.59.
Sample results
Here are a few results from the model.
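The model's output masks lend themselves to simple downstream area calculations. As a minimal illustration (not part of the model itself; the array and ground-sample-distance values below are hypothetical), pool area in square meters can be derived from a binary mask and the imagery's pixel size:

```python
import numpy as np

def mask_area_m2(mask: np.ndarray, gsd_m: float) -> float:
    """Area covered by a binary mask, given the ground sample distance in meters/pixel."""
    return float(mask.sum()) * gsd_m ** 2

# Hypothetical 10x10 chip at 0.1 m/pixel containing a 4x6-pixel "pool"
chip = np.zeros((10, 10), dtype=np.uint8)
chip[2:6, 1:7] = 1
area = mask_area_m2(chip, gsd_m=0.1)  # 24 pixels * 0.01 m^2 = 0.24 m^2
```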
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data involved in this paper is from https://www.planet.com/explorer/. The resolution is 3 m, and there are 3 main bands (RGB). Because an account registered for education on the platform can only download a limited amount of data, and the data is retained for only one month, we chose 8 major cities for the study, with 2 images per city. We also provide detailed information on the data visualization and classification results of our tests in a PPT file called paper-result, which can be easily reviewed by reviewers. Reviewers can also download the data to verify the applicability of the results based on the coordinates of the data sources provided in this paper. The algorithms consist of three main types. The first is based on traditional algorithms, both object-based and pixel-based, for which we tested the generalization ability of four classifiers: Random Forest, Support Vector Machine, Maximum Likelihood, and K-means. In addition, we tested two of the more mainstream deep learning classification algorithms, U-Net and DeepLabV3, both of which can be found and applied in the ArcGIS Pro software. The running process for the traditional algorithms is documented at https://pro.arcgis.com/en/pro-app/latest/help/analysis/image-analyst/the-image-classification-wizard.htm, and the related parameter settings and sample selection rules can be found in detail in the article. The deep learning algorithms are documented at https://pro.arcgis.com/en/pro-app/latest/help/analysis/deep-learning/deep-learning-in-arcgis-pro.htm, and the related parameter settings and sample selection rules can likewise be found in detail in the article.
Finally, the large-model approach is based on SAM; the running process for SAM is from https://github.com/facebookresearch/segment-anything, and Meta also provides an official web-based segmentation platform for testing at https://segment-anything.com/. However, the official website has restrictions on the format of the data and the scope of processing.
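Of the four traditional classifiers compared above, K-means is the simplest to sketch outside ArcGIS Pro. The following is a minimal pure-NumPy pixel-based K-means for RGB imagery (a sketch only; the deterministic initialization and fixed iteration count are arbitrary choices here, and ArcGIS Pro's implementation may differ):

```python
import numpy as np

def kmeans_pixels(img: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Cluster an (H, W, 3) image into k spectral classes; returns an (H, W) label array."""
    pixels = img.reshape(-1, 3).astype(float)
    # Deterministic init: k pixels evenly spaced through the flattened image
    centers = pixels[np.linspace(0, len(pixels) - 1, k, dtype=int)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center (Euclidean distance in RGB space)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update centers, keeping the old center if a cluster is empty
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels.reshape(img.shape[:2])

# Toy image: left half dark, right half bright, so k=2 should separate the halves
img = np.zeros((4, 8, 3), dtype=np.uint8)
img[:, 4:, :] = 255
labels = kmeans_pixels(img, 2)
```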
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.
The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.
After completing this seminar you will be able to:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshades, Band 6 = slopeshade).
rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python code to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
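As a rough illustration of what makeChips.R does, breaking a raster into fixed-size chips can be sketched in Python/NumPy (names and edge handling here are assumptions; the R function's exact options may differ):

```python
import numpy as np

def make_chips(raster: np.ndarray, chip_size: int) -> list:
    """Split an (H, W, bands) array into non-overlapping chip_size x chip_size chips,
    discarding partial chips at the right and bottom edges."""
    h, w = raster.shape[:2]
    chips = []
    for r in range(0, h - chip_size + 1, chip_size):
        for c in range(0, w - chip_size + 1, chip_size):
            chips.append(raster[r:r + chip_size, c:c + chip_size])
    return chips

# Hypothetical 6x8 3-band raster split into 4x4 chips: yields two full chips
raster = np.arange(6 * 8 * 3).reshape(6, 8, 3)
chips = make_chips(raster, 4)
```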
terraceDL.zip
dems: LiDAR DTM data partitioned into training, testing, and validation datasets based on HUC8 watershed boundaries. Original DTM data were provided by the Iowa BMP mapping project: https://www.gis.iastate.edu/BMPs. extents: extents of the training, testing, and validation areas as defined by HUC 8 watershed boundaries. vectors: vector features representing agricultural terraces and partitioned into separate training, testing, and validation datasets. Original digitized features were provided by the Iowa BMP Mapping Project: https://www.gis.iastate.edu/BMPs.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshades, Band 6 = slopeshade).
rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python code to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
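For intuition about the terrainDerivatives bands described above, Band 2 (square root of slope) can be sketched with a simple finite-difference gradient (a NumPy approximation only; the exact slope algorithm used by the ArcGIS Pro tool and makeTerrainDerivatives.R may differ):

```python
import numpy as np

def sqrt_slope(dtm: np.ndarray, cellsize: float) -> np.ndarray:
    """Square root of slope (in degrees) from a digital terrain model,
    using central finite differences for the surface gradient."""
    dz_dy, dz_dx = np.gradient(dtm, cellsize)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return np.sqrt(slope_deg)

# A plane rising 1 m per 1 m cell has a 45-degree slope everywhere
plane = np.tile(np.arange(5.0), (5, 1))
band2 = sqrt_slope(plane, cellsize=1.0)
```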
vfillDL.zip
dems: LiDAR DTM data partitioned into one training, three testing, and two validation datasets. Original DTM data were obtained from 3DEP (https://www.usgs.gov/3d-elevation-program) and the WV GIS Technical Center (https://wvgis.wvu.edu/). extents: extents of the training, testing, and validation areas. These extents were defined by the researchers. vectors: vector features representing valley fills and partitioned into separate training, testing, and validation datasets. Extents were created by the researchers.
Source Data
The National Agriculture Imagery Program (NAIP) Color Infrared Imagery, captured in 2018.
Processing Methods
Downloaded NAIP imagery tiles for all Southern Appalachian sky islands with the spruce forest type present. Mosaicked individual imagery tiles by sky island. This step resulted in a single, seamless imagery raster dataset for each sky island.
Changed the raster band combination of the mosaicked sky island imagery to visually enhance the spruce forest type relative to the other forest types. Typically, the band combination was Band 2 for Red, Band 3 for Green, and Band 1 for Blue.
Utilizing the ArcGIS Pro Image Analyst extension, performed an image segmentation of the mosaicked sky island imagery. Segmentation is a process in which adjacent pixels with similar multispectral or spatial characteristics are grouped together. These objects represent partial or complete features on the landscape. In this case, it simplified the imagery to be more uniform by forest type present in the imagery, especially for the spruce forest type.
Utilizing the segmented mosaicked sky island imagery, training samples were digitized. Training samples are areas in the imagery that contain representative sites of a classification type and are used to train the imagery classification. Adequate training samples were digitized for every classification type required for the imagery classification. The spruce forest type was included for every sky island.
Classified the segmented mosaicked sky island imagery utilizing a Support Vector Machine (SVM) classifier. SVM provides a powerful, supervised classification method that is less susceptible to noise, correlated bands, and an unbalanced number or size of training sites within each class, and is widely used among researchers. This step took the segmented mosaicked sky island imagery and created a classified raster dataset based on the training sample classification scheme.
Reclassified the classified dataset, retaining only the spruce forest type and shadows class.
Converted the spruce and shadows raster dataset to polygon.
This layer depicts roads within Washington State Parks and information about their physical characteristics. Data is maintained by State Parks staff. Public roads are included where they pass through a park, to show where public roads lead to the entrances of parks. Development of this dataset is ongoing.
Attribute Definitions:
Type - Defines the type of roadway. Types include Camp Loop, Park Road, Private Road, Public Road, and Service Road.
Park - The name of the WA State Park that contains the road.
RoadName - Name of the park road, where available.
Comments - General comments.
Miles - The length of a road segment, in miles, as calculated in GIS (ArcGIS Pro).
Surface - Road surface type, where known (gravel, paved).
To download this and other data from Washington State Parks, go to geo.wa.gov and search for "wsprc" (Washington State Parks and Recreation Commission).
https://spdx.org/licenses/CC0-1.0.html
Natural rivers are inherently dynamic. Spatial and temporal variations in water, sediment, and wood fluxes both cause and respond to an increase in geomorphic heterogeneity within the river corridor. We analyze 16 two-kilometer river corridor segments of the Swan River in Montana, USA to examine relationships between wood accumulations (wood accumulation distribution density, count, and persistence), channel dynamism (total sinuosity and average channel migration), and geomorphic heterogeneity (density, aggregation, interspersion, and evenness of patches in the river corridor). We hypothesize that i) more dynamic river segments correlate with a greater presence, persistence, and distribution of wood accumulations; ii) years with higher peak discharge correspond with greater channel dynamism and wood accumulations; and iii) all river corridor variables analyzed play a role in explaining river corridor spatial heterogeneity. Our results suggest that decadal-scale channel dynamism, as reflected in total sinuosity, corresponds to greater numbers of wood accumulations per surface area and greater persistence of these wood accumulations through time. Second, higher peak discharges correspond to greater values of wood distribution density, but not to greater channel dynamism. Third, persistent values of geomorphic heterogeneity, as reflected in the heterogeneity metrics of aggregation, interspersion, patch density, and evenness, are explained by potential predictor variables analyzed here. Our results reflect the complex interactions of water, sediment, and large wood in river corridors; the difficulties of interpreting causal relationships among these variables through time; and the importance of spatial and temporal analyses of past and present river processes to understand future river conditions.
Methods
This data was collected using field and remote sensing methods.
To provide spatial context for the measurements of wood distributions, geomorphic heterogeneity, and channel dynamism along our 32-km study reach, we segmented the study reach at uniform 2-km intervals prior to data collection. The downstream-most 8 segments were selected based on the naturalness of the river corridor and the presence of abundant large wood accumulations in the active channel(s). We focused on these segments for ground-based measurements. We subsequently expanded analyses to include an additional eight upstream segments. These segments were included because of anecdotal evidence of at least localized timber harvest in the river corridor, bank stabilization, and large wood removal from the active channel. We included these sites to provide a greater range of values within some of the variables analyzed and thus potentially increase the power of our statistical analyses.
Wood accumulations and beaver modifications
We conducted aerial wood accumulation surveys using available Google Earth imagery between 2013 and 2022 (four years of available imagery: 2013, 2016, 2020, 2022). We mapped all logjams that could be detected via the aerial imagery. Wood accumulations that were under canopy, too small for the spatial resolution of imagery, not interacting with base flows, or containing less than three visible wood pieces were not included. We recorded the number of wood accumulations per 2-km segment for each available imagery year as a minimum wood-accumulations count and divided the wood count by floodplain area for each segment to get the wood distribution density. We also noted the occurrence of persistent wood accumulations that were continually present in the Google Earth imagery, in what we refer to as "sticky sites". GPS coordinates of wood accumulations were collected in the field during August 2022 to verify imagery identification. We also manually identified active and remnant beaver meadows using Google Earth.
Similar to large wood, American beavers (Castor canadensis) both respond to spatial heterogeneity in the river corridor (e.g., preferentially damming secondary channels) and create spatial heterogeneity through their ecosystem modifications. Beaver-modified portions of the river corridor (beaver meadows) were identified based on presence of standing water in ponds with a visible berm (beaver dam); different vegetation (wetland vegetation including rushes, sedges, and willow carrs that appear as a lighter green color in imagery) than adjacent floodplain areas; and detectable active or relict beaver dams (linear berms with different vegetation than adjacent areas). Several of the sites identified in imagery were also visited in the field to verify identification.
Channel dynamism and annual peak discharge
Channel dynamism was quantified using metrics of active channel migration and total sinuosity over time. To measure active channel migration, we developed a semi-automated approach to map surface water extent and planimetric centerline movement, which are commonly used to understand morphological evolution in rivers. We followed existing methodologies using base flow conditions as a conservative delineation of planimetric change given our goal of looking at relative channel change over time to understand which segments of our study area were the most dynamic. Surface water extent was delineated for 2013, 2016, 2020, and 2022 to keep the timestep consistent with our wood surveys. Imagery collected for the National Agriculture Imagery Program (NAIP) was used when available (2013 and 2016). For 2020 and 2022, cloud-free multispectral composite images were created in Google Earth Engine (GEE) from Sentinel-2 imagery from average baseflow months (August-October). Surface water was classified using the normalized difference water index (NDWI) (Gao, 1996) for NAIP imagery, and the modified normalized difference water index (MNDWI) for Sentinel-2 imagery.
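Both water indices follow the standard normalized-difference form. A NumPy sketch of that computation (the band arrays and the threshold below are hypothetical; the study tuned a unique threshold per year, and MNDWI substitutes a SWIR band for NIR):

```python
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generic normalized-difference index (a - b) / (a + b), NaN where a + b == 0."""
    a = a.astype(float)
    b = b.astype(float)
    s = a + b
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(s == 0, np.nan, (a - b) / s)

# Hypothetical green and NIR reflectance values: NDWI > 0 suggests water
green = np.array([[0.3, 0.1]])
nir = np.array([[0.1, 0.3]])
ndwi = normalized_difference(green, nir)
water_mask = ndwi > 0.0  # threshold is illustrative only
```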
A unique threshold was empirically determined for each year to optimize the identification of the river surface while minimizing false-positive water identification, resulting in binary water and non-water masks for each year. Gaps and voids in the Sentinel-2 derived water masks (from shadow-covered areas, thin river segments, or mixed pixels along the river edge) were filled by sequentially buffering the water areas outwards by 30 meters (three pixels) and then inwards by 15 m. Similarly, gaps and voids in NAIP-derived water masks were filled using a sequential 20 m outwards then inwards buffer. The resulting binary water masks were imported into ArcGIS Pro and vectorized. Manual adjustments were made to remove any remaining misclassified areas and join disconnected segments. We delineated centerlines of our channel masks using the ArcGIS Pro Polygon to Centerline tool. When multiple channels were present, the dominant channel branch was chosen for the channel centerline. Consequently, our analysis represents a minimum value of channel migration during each time step because it does not include secondary channel movements. The Feature to Polygon tool was used to extract area differences between two centerlines at each segment. Areas between the centerlines for each segment were divided by centerline length to get a horizontal change distance. We measured total sinuosity in each 2-km segment for 2013, 2016, 2020, and 2022 using Google Earth imagery and the built-in Measure tool in Google Earth. We measured total sinuosity as the ratio of total channel length of all active channels/valley length. We obtained annual peak discharge from the nearest US Geological Survey gauge (12370000, Swan River near Bigfork, MT). This site is below Swan Lake, a natural lake, into which the Swan River in our study area flows. Consequently, the gauge records reflect relative inter-annual fluctuations in peak discharge, but not actual discharge at the study site. 
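Total sinuosity as defined above (summed length of all active channels divided by valley length) is straightforward to compute; a minimal sketch with hypothetical channel lengths:

```python
def total_sinuosity(channel_lengths_km: list, valley_length_km: float) -> float:
    """Total sinuosity: summed length of all active channels divided by valley length."""
    if valley_length_km <= 0:
        raise ValueError("valley length must be positive")
    return sum(channel_lengths_km) / valley_length_km

# Hypothetical 2-km segment with a main channel and two secondary channels
s = total_sinuosity([2.6, 0.8, 0.4], valley_length_km=2.0)  # 3.8 / 2.0 = 1.9
```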
We used annual peak discharge for the same time intervals used for analyzing channel position.
Geomorphic heterogeneity
We performed an unsupervised remote sensing classification on a stack of data containing a 2022 Sentinel-2 imagery mosaic prepared in GEE, and normalized difference vegetation index (NDVI) and normalized difference moisture index (NDMI) rasters calculated from the Sentinel-2 mosaic in ArcGIS Pro. The Sentinel mosaic was prepared for the approximate growing season in Montana, USA (June 1 to October 31) based on annual phenology activity curves (2018-2022) of the existence of leaves or needles on flowering plants. The unsupervised classification was completed on the floodplain extent of the Swan, delineated manually in ArcGIS Pro using the 10-m 3DEP DEM, a hillshade prepared from the DEM, Sentinel-2 imagery, and the ArcGIS Pro Imagery basemap as visual references. Although the classification is unsupervised, the classes were intended to represent distinct types of habitats within the river corridor that blend geomorphic features and vegetation communities as observed in the field, including, but not limited to: active channels, secondary channels, accretionary bars, backswamps, natural levees, old-growth forest, wetlands, and beaver meadows. The ISO Cluster Unsupervised Classification ArcGIS Pro tool was used to perform the classification. Inputs to the tool were a maximum of 10 classes, a minimum class size of 20 pixels (tool default), and a sample interval of 10 pixels (tool default). The entire reach was classified once, and then clipped into individual 2-km segments. The classified Swan raster was brought into R for statistical analysis of heterogeneity metrics. Data were visualized using the tidyverse and terra packages. All heterogeneity metrics were calculated using the landscapemetrics package using the Queen's case.
Statistical analyses
Statistical analyses were conducted in R.
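As a simplified illustration of one of the heterogeneity metrics mentioned above, Shannon evenness can be computed directly from a classified raster's class proportions (a NumPy sketch; the landscapemetrics package used in the study also computes patch-based metrics such as aggregation and interspersion that this sketch does not cover):

```python
import numpy as np

def shannon_evenness(classified: np.ndarray) -> float:
    """Shannon evenness index: H / ln(number of classes); 1.0 = perfectly even,
    0.0 when only one class is present."""
    _, counts = np.unique(classified, return_counts=True)
    if len(counts) < 2:
        return 0.0
    p = counts / counts.sum()
    h = -(p * np.log(p)).sum()  # Shannon diversity
    return float(h / np.log(len(counts)))

# Hypothetical 2x2 classified raster with four equally abundant classes
even = shannon_evenness(np.array([[1, 2], [3, 4]]))  # perfectly even -> 1.0
```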
The data we collected span different time intervals, and we conduct our statistical analyses to match the temporal and spatial scales of data we have for each of our hypotheses. We used an alpha (probability of rejecting the null hypothesis when
The dataset was assembled using the following NHGS 2022 National Hydrography Dataset (NHD) shapefiles:
NHDFlowline (single-line representation)
NHDArea
NHDWaterbody
This composite layer displays only river segments officially designated under RSA 483: New Hampshire Rivers Management and Protection Program. These rivers are classified into four types according to RSA 483:7-a, and include supporting attributes to aid in visualization, management, and regulatory review.
Fields Included:
FID: Unique numeric identifier auto-generated by ArcGIS (read-only).
Shape: Geometry type, set to Polyline ZM (read-only).
Shape_Length: Geometric length of the feature (units depend on projection; for reference only).
CLASS: Official river class as defined in RSA 483:7-a. Values include: Natural, Rural, Rural-Community, and Community.
STRMORDR: Stream order derived from NHGS' 2013 NHDStreamOrder layer. Included only for the Lamprey and Oyster Rivers, which require stream order classification to determine Shoreland Water Quality Protection Act coverage under RSA 483-B:4(XV). Coded as 3- (Third Order or less) or 4+ (Fourth Order or greater).
MILES: Length of the river segment in miles, calculated manually using ArcGIS Pro.
RIVERSECT: A sequential identifier indicating each designated class segment's position along the river, beginning from the upstream end. Branches and tributaries are numbered independently.
RIVERNAME: Name of the designated river, branch, or tributary, as specified in RSA 483:15.
LAC: Name of the Local Advisory Committee (LAC) formed for the designated river per RSA 483:8-a.
RSAREF: Citation of the relevant RSA 483:15 statutory reference describing the designated river segment.
Usage Notes:
This layer is intended for visualization purposes only. It does not represent the full legal extent of designated river corridors.
Best viewed at scales of 1:24,000 or smaller.
Not suitable for site-specific regulatory or parcel-scale analysis without cross-referencing authoritative datasets or legal descriptions.
This is a collection of maps, layers, apps and dashboards that show population access to essential retail locations, such as grocery stores.
Data sources
Population data is from the 2010 U.S. Census blocks. Each census block has a count of stores within a 10-minute walk, and a count of stores within a 10-minute drive. Census blocks known to be unpopulated are given a score of 0. The layer is available as a hosted feature layer. Grocery store locations are from SafeGraph, reflecting what was in the data as of October 2020. Access to the layer was obtained from the SafeGraph offering in ArcGIS Marketplace. For this project, ArcGIS StreetMap Premium was used for the street network in the origin-destination analysis work, because it already has the necessary attributes on each street segment to identify which streets are considered walkable, and supports a wide variety of driving parameters. The walkable access layer and drivable access layers are rasters, whose colors were chosen to allow the drivable access layer to serve as a backdrop to the walkable access layer.
Data Preparation
ArcGIS Network Analyst was used to set up a network street layer for analysis. ArcGIS StreetMap Premium was installed to a local hard drive and selected in the Origin-Destination workflow as the network data source. This allows the origins (Census block centroids) and destinations (SafeGraph grocery stores) to be connected to that network, to allow origin-destination analysis. The Census blocks layer contains the centroid of each Census block. The data allows a simple popup to be created. This layer's block figures can be summarized further, to tract, county and state levels. The SafeGraph grocery store locations were created by querying the SafeGraph source layer based on primary NAICS code. After connecting to the layer in ArcGIS Pro, a definition query was set to only show records with NAICS code 445110 as an initial screening.
The layer was exported to a local disk drive for further definition query refinement, to eliminate any records that were obviously not grocery stores. The final layer used in the analysis had approximately 53,600 records. In this map, this layer is included as a vector tile layer.
Methodology
Every census block in the U.S. was assigned two access scores, whose numbers are simply how many grocery stores are within a 10-minute walk and a 10-minute drive of that census block. Every census block has a score of 0 (no stores), 1, 2 or more stores. The count of accessible stores was determined using Origin-Destination Analysis in ArcGIS Network Analyst, in ArcGIS Pro. A set of Tools in this ArcGIS Pro package allows a similar analysis to be conducted for any city or other area. The Tools step through the data prep and analysis steps. Download the Pro package, open it and substitute your own layers for Origins and Destinations. Parcel centroids are a suggested option for Origins, for example. Origin-Destination analysis was configured, using ArcGIS StreetMap Premium as the network data source. Census block centroids with population greater than zero were used as the Origins, and grocery store locations were used as the Destinations. A cutoff of 10 minutes was used with the Walk Time option. Only one restriction was applied to the street network: Walkable, which means Interstates and other non-walkable street segments were treated appropriately. You see the results in the map: wherever freeway overpasses and underpasses are present near a grocery store, the walkable area extends across/through that pass, but not along the freeway. A cutoff of 10 minutes was used with the Drive Time option. The default restrictions were applied to the street network, which means a typical vehicle's access to all types of roads was factored in.
The results for each analysis were captured in the Lines layer, which shows which origins are within the cutoff of each destination over the street network, given the assumptions about that network (walking, or driving a vehicle). The Lines layer was then summarized by census block ID to capture the maximum value of the Destination_Rank field. A census block within 10 minutes of 3 stores would have 3 records in the Lines layer, but only one value in the summarized table, with a MAX_Destination_Rank field value of 3. This is the number of stores accessible to that census block in the 10 minutes measured, for walking and driving. These data were joined to the block centroids layer and given unique names. At this point, all blocks with zero population or null values in the MAX_Destination_Rank fields were given a store count of 0, to help the next step. Walkable and drivable areas are calculated into a raster layer, using the Nearest Neighbor geoprocessing tool on the count of stores within a 10-minute walk, and the count of stores within a 10-minute drive, respectively. This tool uses a 200-meter grid and interpolates the values between each census block. A census tracts layer with all water polygons "erased" from the census tract boundaries was used as an environment setting, to help constrain interpolation into/across bodies of water. The same layer was used to "shoreline" the Nearest Neighbor results, to eliminate any interpolation into the ocean or Great Lakes. This helped but was not perfect.
Notes and Limitations
The map provides a baseline for discussing access to grocery stores in a city. It does not presume local population has the desire or means to walk or drive to obtain groceries. It does not take elevation gain or loss into account. It does not factor time of day nor weather, seasons, or other variables that affect a person's commute choices. Walking and driving are just two ways people get to a grocery store.
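The Lines-layer summary described above (the maximum Destination_Rank per census block) is a plain group-by maximum; a pure-Python sketch with hypothetical field names:

```python
from collections import defaultdict

def max_rank_per_block(lines: list) -> dict:
    """Summarize OD lines to one record per origin block: the maximum
    Destination_Rank, i.e. the count of stores reached within the cutoff."""
    out = defaultdict(int)
    for rec in lines:
        bid = rec["block_id"]
        out[bid] = max(out[bid], rec["Destination_Rank"])
    return dict(out)

# Hypothetical Lines records: block A reaches 3 stores, block B reaches 1
lines = [
    {"block_id": "A", "Destination_Rank": 1},
    {"block_id": "A", "Destination_Rank": 3},
    {"block_id": "A", "Destination_Rank": 2},
    {"block_id": "B", "Destination_Rank": 1},
]
summary = max_rank_per_block(lines)
```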
Some people ride a bike, others take public transit, have groceries delivered, or rely on a friend with a vehicle. Thank you to Melinda Morang on the Network Analyst team for guidance and suggestions at key moments along the way; to Emily Meriam for reviewing the previous version of this map and creating new color palettes and marker symbols specific to this project.
Additional Reading
The methods by which access to food is measured and reported have improved in the past decade or so, as have the uses of such measurements. Some relevant papers and articles are provided below as a starting point.
Measuring Food Insecurity Using the Food Abundance Index: Implications for Economic, Health and Social Well-Being
How to Identify Food Deserts: Measuring Physical and Economic Access to Supermarkets in King County, Washington
Access to Affordable and Nutritious Food: Measuring and Understanding Food Deserts and Their Consequences
Different Measures of Food Access Inform Different Solutions
The time cost of access to food – Distance to the grocery store as measured in minutes