Created to honor the impressionistic, atmospheric quality of the work of Swiss topographic painter and cartographer Eduard Imhof, these symbols and palettes apply an homage aesthetic to layered hillshades and digital elevation models. An accompanying how-to resource is forthcoming. In the meantime, the Hillshade color scheme is intended to be applied to a traditional hillshade layer and a multidirectional hillshade layer, while the Mist color scheme is intended to be applied to a DEM layer. When viewed in concert with an imagery basemap, the hues and opacities combine to create a distinctive quality, and a map can also make use of the Area of Interest, Mask, and Locator layers. Alternatively, you can download an ArcGIS Pro project with the data and styles already implemented, and you can just start cranking away at Imhofs. Happy Topographic Painting! John Nelson
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
- terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshade, Band 6 = slopeshade). A sketch of these derivatives appears after this list.
- rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python notebook to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
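The band definitions above are standard terrain-visualization surfaces. As a rough orientation only (the project's actual implementations are makeTerrainDerivatives.R and the .atbx toolbox), here is a minimal numpy sketch of how Bands 2, 4, and 6 can be computed from a DEM array, assuming a square cell size:

```python
import numpy as np

def slope_rad(dem, cellsize):
    """Slope in radians from a DEM array (rows = y, cols = x)."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    return np.arctan(np.hypot(dz_dx, dz_dy))

def sqrt_slope(dem, cellsize):
    """Band 2: square root of slope (degrees), compressing steep values."""
    return np.sqrt(np.degrees(slope_rad(dem, cellsize)))

def hillshade(dem, cellsize, azimuth=315.0, altitude=45.0):
    """Band 4: one common analytical hillshade formulation."""
    dz_dy, dz_dx = np.gradient(dem, cellsize)
    slope = slope_rad(dem, cellsize)
    aspect = np.arctan2(dz_dy, -dz_dx)
    az = np.deg2rad(360.0 - azimuth + 90.0)
    zen = np.deg2rad(90.0 - altitude)
    return np.clip(np.cos(zen) * np.cos(slope) +
                   np.sin(zen) * np.sin(slope) * np.cos(az - aspect), 0, 1)

def slopeshade(dem, cellsize):
    """Band 6: slope rendered as shading (flat = light, steep = dark)."""
    return 1.0 - np.degrees(slope_rad(dem, cellsize)) / 90.0
```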
terraceDL.zip
dems: LiDAR DTM data partitioned into training, testing, and validation datasets based on HUC8 watershed boundaries. Original DTM data were provided by the Iowa BMP Mapping Project: https://www.gis.iastate.edu/BMPs. extents: extents of the training, testing, and validation areas as defined by HUC8 watershed boundaries. vectors: vector features representing agricultural terraces, partitioned into separate training, testing, and validation datasets. Original digitized features were provided by the Iowa BMP Mapping Project: https://www.gis.iastate.edu/BMPs.
This basemap was designed with the Vizzuality team for use in the Half-Earth Project globe. The saturated palette and rich landcover tones are meant to engage an audience and to provide the sense that the earth is a charming and beautiful place worthy of thoughtful stewardship. As you zoom in, the saturated basemap is slowly replaced by imagery. This basemap is the major component of the Vibrant Map, which is configured to use these basemap tiles from global to regional extents, then switch to Esri's World Imagery basemap tiles for a seamless transition from small to large scales. Find more information about this basemap, and its contributing data, here: https://www.esri.com/arcgis-blog/products/arcgis-pro/mapping/creating-the-half-earth-vibrant-basemap/. Learn more about the Half-Earth Project here and explore highlighted areas of biodiversity here. Happy Mapping! John
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification. The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively. After completing this seminar you will be able to:
- explain how ANNs work, including weights, bias, activation, and optimization.
- describe and explain different loss and assessment metrics and determine appropriate use cases.
- use the tensor data model to represent data as input for deep learning.
- explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
- use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and infer to new data.
- explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
- use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and infer to new data.
Segmentation models perform pixel-wise classification by assigning each pixel to a class, with the classified pixels corresponding to different objects or regions in the image. These models have a wide variety of use cases across multiple domains. When used with satellite and aerial imagery, they can help identify features such as building footprints, roads, water bodies, crop fields, etc. Generally, every segmentation model needs to be trained from scratch using a dataset labeled with the objects of interest, which can be an arduous and time-consuming task. Meta's Segment Anything Model (SAM) aims to be a foundation model that can segment (as the name suggests) anything using zero-shot learning and generalize across domains without additional training. SAM is trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks. This makes the model highly robust in identifying object boundaries and differentiating between various objects across domains, even if it has never seen them before. Use this model to extract masks of various objects in any image.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model: This model can be fine-tuned using the SamLoRA architecture in ArcGIS. Follow the guide and refer to this sample notebook to fine-tune this model.
Input: 8-bit, 3-band imagery.
Output: Feature class containing masks of various objects in the image.
Applicable geographies: The model is expected to work globally.
Model architecture: This model is based on the open-source Segment Anything Model (SAM) by Meta.
Training data: This model has been trained on the Segment Anything 1-Billion mask dataset (SA-1B), which comprises a diverse set of 11 million images and over 1 billion masks.
Sample results: Here are a few results from the model.
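Outside the ArcGIS workflow described above, the same underlying model can be exercised directly through Meta's open-source segment-anything package. This is a minimal sketch under stated assumptions (a locally downloaded ViT-H checkpoint and an arbitrary test image), not the packaged ArcGIS model itself:

```python
# pip install segment-anything  (plus torch and opencv-python)
import cv2
from segment_anything import SamAutomaticMaskGenerator, sam_model_registry

# Assumed local checkpoint path; download from Meta's SAM release page.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# Load any RGB image (path is an assumption) and generate masks zero-shot.
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', ...
print(len(masks), "masks; largest area:", max(m["area"] for m in masks))
```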
This deep learning model is used to detect and segment trees in high-resolution drone or aerial imagery. Tree detection can be used for applications such as vegetation management, forestry, urban planning, etc. High-resolution aerial and drone imagery is well suited to tree detection due to its high spatio-temporal coverage. This deep learning model is based on DeepForest and has been trained on data from the National Ecological Observatory Network (NEON). The model also uses the Segment Anything Model (SAM) by Meta.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.
Input: 8-bit, 3-band high-resolution (10-25 cm) imagery.
Output: Feature class containing a separate mask for each tree.
Applicable geographies: The model is expected to work well in the United States.
Model architecture: This model is based upon the DeepForest Python package, which uses the RetinaNet model architecture implemented in torchvision, and the open-source Segment Anything Model (SAM) by Meta.
Accuracy metrics: This model has a precision score of 0.66 and a recall of 0.79.
Training data: This model has been trained on the NEON Tree Benchmark dataset, provided by the Weecology Lab at the University of Florida. The model also uses SAM, which is trained on the Segment Anything 1-Billion mask dataset (SA-1B) comprising a diverse set of 11 million images and over 1 billion masks.
Sample results: Here are a few results from the model.
Citations:
Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks. Remote Sens. 2019, 11, 1309.
Weinstein, B.; Marconi, S.; Bohlman, S.; Zare, A.; White, E.P. Geographic Generalization in Airborne RGB Deep Learning Tree Detection. bioRxiv 790071; doi: https://doi.org/10.1101/790071
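For orientation, the open-source DeepForest library this model builds on can be run directly in Python. A minimal sketch, assuming a recent deepforest release and a local image tile (the packaged ArcGIS model adds SAM-based mask refinement on top of boxes like these):

```python
# pip install deepforest
from deepforest import main

model = main.deepforest()
model.use_release()  # load the prebuilt NEON-trained release weights

# Predict tree-crown bounding boxes for an RGB tile (path is assumed).
boxes = model.predict_image(path="ortho_tile.png", return_plot=False)
print(boxes[["xmin", "ymin", "xmax", "ymax", "score"]].head())
```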
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains building locations in Poland in the 1970s-80s. The source information was Polish archival 1:10,000 topographic maps. Buildings were extracted from the maps using a Mask R-CNN model implemented in Esri ArcGIS Pro software. In post-processing we removed most of the false positives. The dataset of building locations covers the entire country and contains approximately 11 million buildings. The accuracy of the dataset was assessed manually on randomly selected map sheets. The overall accuracy is 95% (F1 0.98).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
- terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshade, Band 6 = slopeshade).
- rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python notebook to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
vfillDL.zip
dems: LiDAR DTM data partitioned into training, three testing, and two validation datasets. Original DTM data were obtained from 3DEP (https://www.usgs.gov/3d-elevation-program) and the WV GIS Technical Center (https://wvgis.wvu.edu/). extents: extents of the training, testing, and validation areas, as defined by the researchers. vectors: vector features representing valley fills, partitioned into separate training, testing, and validation datasets.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.
The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.
After completing this seminar you will be able to:
- explain how ANNs work, including weights, bias, activation, and optimization.
- describe and explain different loss and assessment metrics and determine appropriate use cases.
- use the tensor data model to represent data as input for deep learning.
- explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
- use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and infer to new data.
- explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
- use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and infer to new data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains both large (A0) printable maps of the Torres Strait, broken into six overlapping regions and based on clear-sky, clear-water composite Sentinel 2 imagery, and the imagery used to create these maps. The maps show satellite imagery of the region, overlaid with reef and island boundaries and names. Not all features are named, just the more prominent ones. The dataset also includes a vector map of Ashmore Reef and Boot Reef in the Coral Sea, as these were used in the same discussions for which these maps were developed. The map of Ashmore Reef includes the atoll platform, reef boundaries, and depth polygons for 5 m and 10 m.
This dataset contains all working files used in the development of these maps, including a copy of all the source datasets, all derived satellite image tiles, and the QGIS files used to create the maps. It also includes cloud-free Sentinel 2 composite imagery of the Torres Strait region with alpha-blended edges to allow the creation of a smooth high-resolution basemap of the region.
The base imagery is similar to the older base imagery dataset: Torres Strait clear sky, clear water Landsat 5 satellite composite (NERP TE 13.1 eAtlas, AIMS, source: NASA).
Most of the imagery in the composite is from 2017-2021.
Method:
The Sentinel 2 basemap was produced by processing imagery from the World_AIMS_Marine-satellite-imagery dataset (01-data/World_AIMS_Marine-satellite-imagery in the data download) for the Torres Strait region. The TrueColour imagery for the scenes covering the mapped area was downloaded. Both the reference 1 imagery (R1) and reference 2 imagery (R2) were copied for processing. R1 imagery contains the lowest-noise, most cloud-free imagery, while R2 contains the next best set of imagery. Both R1 and R2 are typically composite images from multiple dates.
The R2 images were selectively blended with the R1 images using manually created masks. This was done to get the best combination of both images and typically reduced some of the cloud artefacts in the R1 images. The mask creation and previewing of the blending were performed in Photoshop. The created masks were saved in 01-data/R2-R1-masks. To help with the blending of neighbouring images, a feathered alpha channel was added to the imagery. The merging (using the masks) and the creation of the feathered borders on the images were performed by a Python script (src/local/03-merge-R2-R1-images.py) using the Pillow library and GDAL. The neighbouring-image blending mask was created by blurring the original hard image mask, which allowed neighbouring image tiles to merge together.
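As a hedged illustration of this kind of masked merge (the actual logic lives in src/local/03-merge-R2-R1-images.py and may differ), Pillow can composite R2 over R1 through a blurred mask; the file names and blur radius here are assumptions:

```python
# Illustrative only; file names and blur radius are assumptions.
from PIL import Image, ImageFilter

r1 = Image.open("r1_tile.png").convert("RGBA")
r2 = Image.open("r2_tile.png").convert("RGBA")
mask = Image.open("r2_r1_mask.png").convert("L")  # white = take R2, black = keep R1

# Blur the hard mask so the blend feathers rather than cutting sharply.
feathered = mask.filter(ImageFilter.GaussianBlur(radius=20))
merged = Image.composite(r2, r1, feathered)
merged.save("merged_tile.png")
```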
The imagery and reference datasets (reef boundaries, EEZ) were loaded into QGIS for the creation of the printable maps.
To optimise the matching of the resulting map, slight brightness adjustments were applied to each scene tile to match its neighbours. This was done in the setup of each image in QGIS. This adjustment was imperfect, as each tile was made from a different combination of days (to remove clouds), resulting in each scene having a different tonal gradient than its neighbours. Additionally, Sentinel 2 imagery has slight stripes (at 13 degrees off vertical) because the detectors across each swath have slightly different sensitivities. This effect was not corrected in this imagery.
Single merged composite GeoTiff:
The image tiles with alpha-blended edges work well in QGIS, but not in ArcGIS Pro. To allow this imagery to be used across tools that don't support the alpha blending, we merged and flattened the tiles into a single large GeoTiff with no alpha channel. This was done by rendering the map created in QGIS into a single large image, in multiple steps to keep the process manageable.
The rendered map was cut into twenty 1 x 1 degree georeferenced PNG images using the Atlas feature of QGIS. This process baked in the alpha blending across neighbouring Sentinel 2 scenes. The PNG images were then merged back into a large GeoTiff image using GDAL (via QGIS), removing the alpha channel. The brightness of the image was adjusted so that the darkest pixels in the image were 1, reserving the value 0 for nodata masking, and the image was clipped to a polygon boundary to trim off the outer feathering. The image was then optimised for performance using internal tiling and overviews. A full breakdown of these steps is provided in the README.md in the 'Browse and download all data files' link.
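A hedged sketch of the flatten-and-optimise step using GDAL's Python bindings; the paths, tile list, compression, and overview levels are assumptions, and the dataset's README records the exact commands used:

```python
from osgeo import gdal

# Mosaic the georeferenced PNG tiles into a virtual raster, then write a
# single RGB GeoTiff without the alpha channel, with internal tiling.
vrt = gdal.BuildVRT("mosaic.vrt", ["tile_01.png", "tile_02.png"])  # tile list assumed
gdal.Translate(
    "composite.tif", vrt,
    bandList=[1, 2, 3],                      # drop band 4 (alpha)
    creationOptions=["TILED=YES", "COMPRESS=DEFLATE"],
    noData=0,                                # 0 reserved for nodata masking
)

# Add overview pyramids so the large image displays quickly.
ds = gdal.Open("composite.tif", gdal.GA_Update)
ds.BuildOverviews("AVERAGE", [2, 4, 8, 16])
```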
The merged final image is available in export\TS_AIMS_Torres Strait-Sentinel-2_Composite.tif.
Source datasets:
Complete Great Barrier Reef (GBR) Island and Reef Feature boundaries including Torres Strait Version 1b (NESP TWQ 3.13, AIMS, TSRA, GBRMPA), https://eatlas.org.au/data/uuid/d2396b2c-68d4-4f4b-aab0-52f7bc4a81f5
Geoscience Australia (2014b), Seas and Submerged Lands Act 1973 - Australian Maritime Boundaries 2014a - Geodatabase [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, https://dx.doi.org/10.4225/25/5539DFE87D895
Basemap/AU_GA_AMB_2014a/Exclusive_Economic_Zone_AMB2014a_Limit.shp
The original data was obtained from GA (Geoscience Australia, 2014b). The Geodatabase was loaded in ArcMap. The Exclusive_Economic_Zone_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small, no clipping was applied to the data.
Geoscience Australia (2014a), Treaties - Australian Maritime Boundaries (AMB) 2014a [Dataset]. Canberra, Australia: Author. https://creativecommons.org/licenses/by/4.0/ [license]. Sourced on 12 July 2017, http://dx.doi.org/10.4225/25/5539E01878302
Basemap/AU_GA_Treaties-AMB_2014a/Papua_New_Guinea_TSPZ_AMB2014a_Limit.shp
The original data was obtained from GA (Geoscience Australia, 2014a). The Geodatabase was loaded in ArcMap. The Papua_New_Guinea_TSPZ_AMB2014a_Limit layer was loaded and exported as a shapefile. Since this file was small, no clipping was applied to the data.
AIMS Coral Sea Features (2022) - DRAFT
This is a draft version of this dataset. The region for Ashmore and Boot reef was checked. The attributes in these datasets haven't been cleaned up. Note these files should not be considered finalised and are only suitable for maps around Ashmore Reef. Please source an updated version of this dataset for any other purpose.
CS_AIMS_Coral-Sea-Features/CS_Names/Names.shp
CS_AIMS_Coral-Sea-Features/CS_Platform_adj/CS_Platform.shp
CS_AIMS_Coral-Sea-Features/CS_Reef_Boundaries_adj/CS_Reef_Boundaries.shp
CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth5m_Coral-Sea.shp
CS_AIMS_Coral-Sea-Features/CS_Depth/CS_AIMS_Coral-Sea-Features_Img_S2_R1_Depth10m_Coral-Sea.shp
Murray Island 20 Sept 2011 15cm SISP aerial imagery, Queensland Spatial Imagery Services Program, Department of Resources, Queensland
This is the high resolution imagery used to create the map of Mer.
World_AIMS_Marine-satellite-imagery
The base image composites used in this dataset were based on an early version of Lawrey, E., Hammerton, M. (2024). Marine satellite imagery test collections (AIMS) [Data set]. eAtlas. https://doi.org/10.26274/zq26-a956. A snapshot of the code at the time this dataset was developed is made available in the 01-data/World_AIMS_Marine-satellite-imagery folder of the download of this dataset.
Data Location:
This dataset is filed in the eAtlas enduring data repository at: data\custodian\2020-2029-AIMS\TS_AIMS_Torres-Strait-Sentinel-2-regional-maps. On the eAtlas server it is stored at eAtlas GeoServer\data\2020-2029-AIMS.
Change Log:
2025-05-12: Eric Lawrey
Added Torres-Strait-Region-Map-Masig-Ugar-Erub-45k-A0 and Torres-Strait-Eastern-Region-Map-Landscape-A0. These maps use brightened satellite imagery to make the writing on the maps easier to read. They also include markers for georeferencing the maps for digitisation.
2025-02-04: Eric Lawrey
Fixed up the reference to the World_AIMS_Marine-satellite-imagery dataset, clarifying which source was used in this dataset. Added ORCID and ROR identifiers to the record.
2023-11-22: Eric Lawrey
Added the data and maps for close up of Mer.
- 01-data/TS_DNRM_Mer-aerial-imagery/
- preview/Torres-Strait-Mer-Map-Landscape-A0.jpeg
- exports/Torres-Strait-Mer-Map-Landscape-A0.pdf
Updated 02-Torres-Strait-regional-maps.qgz to include the layout for the new map.
2023-03-02: Eric Lawrey
Created a merged version of the satellite imagery with no alpha blending so that it can be used in ArcGIS Pro; it is now a single large GeoTiff image. The Google Earth Engine source code for World_AIMS_Marine-satellite-imagery was included to improve the reproducibility and provenance of the dataset, along with a calculation of the distribution of image dates that went into the final composite image. A WMS service for the imagery was also set up and linked from the metadata. A cross-reference to the older Torres Strait clear sky, clear water Landsat composite imagery was also added to the record.
https://spdx.org/licenses/CC0-1.0.html
Natural rivers are inherently dynamic. Spatial and temporal variations in water, sediment, and wood fluxes both cause and respond to an increase in geomorphic heterogeneity within the river corridor. We analyze 16 two-kilometer river corridor segments of the Swan River in Montana, USA to examine relationships between wood accumulations (wood accumulation distribution density, count, and persistence), channel dynamism (total sinuosity and average channel migration), and geomorphic heterogeneity (density, aggregation, interspersion, and evenness of patches in the river corridor). We hypothesize that i) more dynamic river segments correlate with a greater presence, persistence, and distribution of wood accumulations; ii) years with higher peak discharge correspond with greater channel dynamism and wood accumulations; and iii) all river corridor variables analyzed play a role in explaining river corridor spatial heterogeneity. Our results suggest that decadal-scale channel dynamism, as reflected in total sinuosity, corresponds to greater numbers of wood accumulations per surface area and greater persistence of these wood accumulations through time. Second, higher peak discharges correspond to greater values of wood distribution density, but not to greater channel dynamism. Third, persistent values of geomorphic heterogeneity, as reflected in the heterogeneity metrics of aggregation, interspersion, patch density, and evenness, are explained by potential predictor variables analyzed here. Our results reflect the complex interactions of water, sediment, and large wood in river corridors; the difficulties of interpreting causal relationships among these variables through time; and the importance of spatial and temporal analyses of past and present river processes to understand future river conditions.

Methods
This data was collected using field and remote sensing methods. To provide spatial context for the measurements of wood distributions, geomorphic heterogeneity, and channel dynamism along our 32-km study reach, we segmented the study reach at uniform 2-km intervals prior to data collection. The downstream-most 8 segments were selected based on the naturalness of the river corridor and the presence of abundant large wood accumulations in the active channel(s). We focused on these segments for ground-based measurements. We subsequently expanded analyses to include an additional eight upstream segments. These segments were included because of anecdotal evidence of at least localized timber harvest in the river corridor, bank stabilization, and large wood removal from the active channel. We included these sites to provide a greater range of values within some of the variables analyzed and thus potentially increase the power of our statistical analyses.

Wood accumulations and beaver modifications
We conducted aerial wood accumulation surveys using available Google Earth imagery between 2013 and 2022 (four years of available imagery: 2013, 2016, 2020, 2022). We mapped all logjams that could be detected via the aerial imagery. Wood accumulations that were under canopy, too small for the spatial resolution of imagery, not interacting with base flows, or containing fewer than three visible wood pieces were not included. We recorded the number of wood accumulations per 2-km segment for each available imagery year as a minimum wood-accumulations count and divided the wood count by floodplain area for each segment to get the wood distribution density.
We also noted the occurrence of persistent wood accumulations that were continually present in the Google Earth imagery, in what we refer to as "sticky sites". GPS coordinates of wood accumulations were collected in the field during August 2022 to verify imagery identification. We also manually identified active and remnant beaver meadows using Google Earth. Similar to large wood, American beaver (Castor canadensis) both respond to spatial heterogeneity in the river corridor (e.g., preferentially damming secondary channels) and create spatial heterogeneity through their ecosystem modifications. Beaver-modified portions of the river corridor (beaver meadows) were identified based on the presence of standing water in ponds with a visible berm (beaver dam); different vegetation (wetland vegetation including rushes, sedges, and willow carrs that appear as a lighter green color in imagery) than adjacent floodplain areas; and detectable active or relict beaver dams (linear berms with different vegetation than adjacent areas). Several of the sites identified in imagery were also visited in the field to verify identification.

Channel dynamism and annual peak discharge
Channel dynamism was quantified using metrics of active channel migration and total sinuosity over time. To measure active channel migration, we developed a semi-automated approach to map surface water extent and planimetric centerline movement, which are commonly used to understand morphological evolution in rivers. We followed existing methodologies using base flow conditions as a conservative delineation of planimetric change, given our goal of looking at relative channel change over time to understand which segments of our study area were the most dynamic. Surface water extent was delineated for 2013, 2016, 2020, and 2022 to keep the timestep consistent with our wood surveys. Imagery collected for the National Agriculture Imagery Program (NAIP) was used when available (2013 and 2016). For 2020 and 2022, cloud-free multispectral composite images were created in Google Earth Engine (GEE) from Sentinel-2 imagery from average baseflow months (August-October). Surface water was classified using the normalized difference water index (NDWI) (Gao, 1996) for NAIP imagery, and the modified normalized difference water index (MNDWI) for Sentinel-2 imagery. A unique threshold was empirically determined for each year to optimize the identification of the river surface while minimizing false-positive water identification, resulting in binary water and non-water masks for each year. Gaps and voids in the Sentinel-2-derived water masks (from shadow-covered areas, thin river segments, or mixed pixels along the river edge) were filled by sequentially buffering the water areas outwards by 30 meters (three pixels) and then inwards by 15 m. Similarly, gaps and voids in NAIP-derived water masks were filled using a sequential 20 m outwards then inwards buffer. The resulting binary water masks were imported into ArcGIS Pro and vectorized. Manual adjustments were made to remove any remaining misclassified areas and join disconnected segments. We delineated centerlines of our channel masks using the ArcGIS Pro Polygon to Centerline tool. When multiple channels were present, the dominant channel branch was chosen for the channel centerline. Consequently, our analysis represents a minimum value of channel migration during each time step because it does not include secondary channel movements.
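A hedged sketch of the index-threshold-buffer sequence under stated assumptions (band arrays supplied as numpy inputs, per-year threshold chosen by the analyst); the study performed these steps in GEE and ArcGIS Pro rather than with this code:

```python
import numpy as np
from scipy import ndimage

def water_mask(green, nir, threshold, pixel_size_m=10, out_m=30, in_m=15):
    """Binary water mask from an NDWI-style index, gaps filled by dilate/erode."""
    index = (green - nir) / (green + nir + 1e-9)  # NDWI from green and NIR bands
    mask = index > threshold                       # threshold tuned per year
    # Buffer outwards then inwards to close gaps (shadows, thin segments).
    mask = ndimage.binary_dilation(mask, iterations=max(1, out_m // pixel_size_m))
    mask = ndimage.binary_erosion(mask, iterations=max(1, in_m // pixel_size_m))
    return mask
```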
The Feature to Polygon tool was used to extract area differences between two centerlines at each segment. Areas between the centerlines for each segment were divided by centerline length to get a horizontal change distance. We measured total sinuosity in each 2-km segment for 2013, 2016, 2020, and 2022 using Google Earth imagery and the built-in Measure tool in Google Earth. We measured total sinuosity as the ratio of the total channel length of all active channels to valley length. We obtained annual peak discharge from the nearest US Geological Survey gauge (12370000, Swan River near Bigfork, MT). This site is below Swan Lake, a natural lake into which the Swan River in our study area flows. Consequently, the gauge records reflect relative inter-annual fluctuations in peak discharge, but not actual discharge at the study site. We used annual peak discharge for the same time intervals used for analyzing channel position.

Geomorphic heterogeneity
We performed an unsupervised remote sensing classification on a stack of data containing a 2022 Sentinel-2 imagery mosaic prepared in GEE, and normalized difference vegetation index (NDVI) and normalized difference moisture index (NDMI) rasters calculated from the Sentinel-2 mosaic in ArcGIS Pro. The Sentinel mosaic was prepared for the approximate growing season in Montana, USA (June 1 to October 31), based on annual phenology activity curves (2018-2022) of the existence of leaves or needles on flowering plants. The unsupervised classification was completed on the floodplain extent of the Swan, delineated manually in ArcGIS Pro using the 10-m 3DEP DEM, a hillshade prepared from the DEM, Sentinel-2 imagery, and the ArcGIS Pro Imagery basemap as visual references. Although the classification is unsupervised, the classes were intended to represent distinct types of habitats within the river corridor that blend geomorphic features and vegetation communities as observed in the field, including, but not limited to: active channels, secondary channels, accretionary bars, backswamps, natural levees, old-growth forest, wetlands, and beaver meadows. The ISO Cluster Unsupervised Classification ArcGIS Pro tool was used to perform the classification. Inputs to the tool were a maximum of 10 classes, a minimum class size of 20 pixels (tool default), and a sample interval of 10 pixels (tool default). The entire reach was classified once, and then clipped into individual 2-km segments. The classified Swan raster was brought into R for statistical analysis of heterogeneity metrics. Data were visualized using the tidyverse and terra packages. All heterogeneity metrics were calculated using the landscapemetrics package using the Queen's case.

Statistical analyses
Statistical analyses were conducted in R. The data we collected span different time intervals, and we conducted our statistical analyses to match the temporal and spatial scales of the data we have for each of our hypotheses. We used an alpha (probability of rejecting the null hypothesis when
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
surficialDL: A geomorphology deep learning dataset of alluvium and thick glacial till derived from 1:24,000-scale surficial geology data for the western portion of Massachusetts, USA
scripts.zip
arcgisTools.atbx:
- terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshade, Band 6 = slopeshade).
- rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python notebook to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
surficialDL
The digital terrain model associated with these data/project is available here: https://s3.us-east-1.amazonaws.com/download.massgis.digital.mass.gov/lidar/LIDAR_DEM_32BIT_FP.gdb.zip.
alluvDL: polygons (vectors folder) and extents (extents folder) for alluvium features separated into training, validation, and testing partitions. These data were derived from the 1:24,000 scale Massachusetts Surficial Geology dataset: https://www.mass.gov/info-details/massgis-data-usgs-124000-surficial-geology.
tillDL: polygons (vector folder) and extents (extents folder) for thick till features separated into training, validation, and testing partitions. These data were derived from the 1:24,000 scale Massachusetts Surficial Geology dataset: https://www.mass.gov/info-details/massgis-data-usgs-124000-surficial-geology.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This directory contains files related to the scientific research project of Luc van Dijk at the Department of Earth, Energy, and Environment, University of Calgary. The project title is "High-resolution, Decadal to Weekly Geomorphic Change Analysis of the Elbow River in Calgary, using Multi-temporal Lidar and Repeat Terrestrial Laser Scanning". This project in the field of geomorphology was a collaboration between the University of Calgary and Utrecht University in the Netherlands. The project was completed on October 27, 2023. Below is a description of the files in this directory.
DisplacementVolumeDistributions_TLS.xlsx Excel file containing tabular data of the normalized sediment displacement volumes that were obtained using TLS. Each tab in the Excel file represents a period of interest in 2023. The data in this file were used to generate the 'histogram-like' figures in the report.
DoD_rasters.zip Folder containing the aerial lidar DEMs of Difference (DoDs) for each period of interest. The DoDs are 'waterless', i.e. the water surface is masked. The suffix of the file name before the file extension (e.g., ..._10cm.tif) indicates the maximum REM value that was used for the automated masking of the water surface extent (see report section 3.1.2). If the file name contains "large", it refers to the upstream greater area (see report section 3.1.3). Within this folder is another folder called 'Clipped2AOIs'. This folder contains the same DoDs, but covering only the extents of the sites of interest ('AOIs' = Areas Of Interest).
FilteredPointClouds_TLS.zip Folder containing the processed and filtered point clouds that were acquired throughout the summer of 2023 using TLS. These point clouds have been pre-processed and filtered to remove vegetation (see report section 3.2). They are grouped in sub-folders per acquisition date. The filenames are numbered to location, i.e. 'elbow1', 'elbow2', 'elbow3' and 'elbow4'. These correspond to the sites of interest: Glenmore Dam, golf club, Sandy Beach and Riverdale, respectively.
PythonScripts_Discharge_Rainfall.zip Folder containing the Python scripts that were made to process the discharge and rainfall data that were sourced from Environment Canada and The City of Calgary (see report section 3.3). The scripts themselves contain descriptions of their purpose.
PythonScripts_DisplacementVolumeAnalysis.zip Folder containing the Python scripts that were made to process and analyze the aerial lidar DoDs and the TLS rasterized difference point clouds (M3C2 output). The 'convert2pickle' scripts converted the sizable rasters to smaller pickle files, which were easier and faster to work with. The 'chart' scripts load the data from the pickle files, analyze them and produce the 'histogram-like' figures in the report. The scripts themselves contain descriptions of their purpose.
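As a hedged illustration of what a 'convert2pickle' step typically looks like (rasterio is an assumption here; the project's scripts may read the rasters differently):

```python
import pickle
import rasterio

# Read a DoD raster's first band as a numpy array (file name is assumed).
with rasterio.open("DoD_2022_10cm.tif") as src:
    dod = src.read(1, masked=True)  # masked array honours the nodata value

# Store the smaller, faster-to-load pickle for the charting scripts.
with open("DoD_2022_10cm.pkl", "wb") as f:
    pickle.dump(dod, f)
```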
RainfallDischargeData.xlsx Excel file containing the discharge and rainfall data from Environment Canada and The City of Calgary. The data came from different sources in different formats and were combined into this single table.
RasterizedDifferencedPointClouds_M3C2.zip Folder containing the rasterized results of the differenced TLS point clouds (M3C2 output) (see report section 3.2.4). The filenames are numbered to location, i.e. 'Elbow1', 'Elbow2', 'Elbow3' and 'Elbow4'. These correspond to the sites of interest: Glenmore Dam, golf club, Sandy Beach and Riverdale, respectively. The numeric sequence in the file name indicates the start and end date of the change analysis in a 'mm-dd' format. The suffixes '_dist', '_unc' and '_sig' refer to the three output layers of the M3C2 algorithm: distance, uncertainty and significance of change. The main files of interest are the '.tif' files. Files sharing the same name, but with different extensions (.tfw, .tif.aux.xml, .tif.xml) are supplementary/auxiliary files for the '.tif' file, generated by ArcGIS Pro.
ScarpsOfInterest_shapefile.zip Folder containing a polygon shapefile describing the extents and locations of the sites of interest. The main file of interest is the '.shp' file. The other files with the same name, but different extensions (.cpg, .dbf, .prj, .sbn, .sbx, .shp.xml, .shx) are supplementary/auxiliary files for the '.shp' file, generated by ArcGIS Pro.
USA Cropland is a time-enabled imagery layer of the USDA Cropland Data Layer dataset from the National Agricultural Statistics Service (NASS). The time series shows the crop grown during every growing season in the conterminous US since 2008. Use the time slider to select only one year to view or analyze. Press play to see each growing season displayed sequentially in an animated map. The USDA is now serving the Cropland Data Layer in their own application called CroplandCROS, which allows selection and display of a single product or growing season. This application will eventually replace their popular CropScape application.

Dataset Summary
Variable mapped: Crop grown in each pixel since 2008.
Data Projection: Albers
Mosaic Projection: Albers
Extent: Conterminous USA
Cell Size: 30 m in 2008-2023, 10 m in 2024
Source Type: Thematic
Visible Scale: All scales are visible
Source: USDA NASS
Publication Date: 2/26/2025

Why the USA Cropland living atlas layer masks out NLCD land cover in its default template
The USDA Cropland Data Layer, by default as downloaded from USDA, fills in the non-cultivated areas of the conterminous USA with land cover classes from the MRLC National Land Cover Dataset (NLCD). The default behavior of Esri's USA Cropland layer is a little different: it uses the analytic renderer, which masks out this NLCD data. Why mask out the NLCD land cover classes by default? While crops are updated every year by USDA NASS, the NLCD data changes only every several years, and it can be quite a bit older than the crop data beside it. If analysis is conducted to quantify landscape change, the NLCD-derived pixels will skew the results because NLCD land cover in a yearly time series may appear to remain the same class for several years in a row. This can be problematic because conclusions drawn from this dataset may underrepresent the amount of change happening to the landscape. To display the most current land cover available from both sources, add both the USA NLCD Land Cover service and the USA Cropland time series to your map. Use the analytical template with the USA Cropland service and draw it on top of the USA NLCD Land Cover service. When a time slider is used with these datasets together, the map user will see the most current land cover from both services in any given year. This layer and the data making up the layer are in the Albers map projection. Albers is an equal-area projection, which allows users of this layer to accurately calculate acreage without additional data preparation steps. It also means the layer takes a tiny bit longer to project on the fly into Web Mercator, if that is the destination projection of the map.

Processing templates available with this layer
To help filter out and display just the crops and land use categories you are interested in showing, choose one of the thirteen processing templates that will help you tailor the symbols in the time series to suit your map application. The following processing templates are available with this layer:

Analytic Renderer / USDA Analytic Renderer
The analytic renderer is the default template. NLCD codes are masked when using analytic renderer processing templates. There is a default Esri analytic renderer, and also an analytic renderer that uses the original USDA color scheme developed for the CropScape layers. The latter is useful if you have already built maps with the USDA color scheme or otherwise prefer it.
Cartographic Renderer / USDA Cartographic Renderer
These templates fill in with NLCD land cover types where crops are not cultivated, thereby filling the map with color from coast to coast. There is also a template using the USDA color scheme, which is identical to the datasets as downloaded from USDA NASS.

In addition to different ways to display the whole dataset, some processing templates are included which help display the top agricultural products in the United States. If these templates seem to include too many crops in their category (for example, tomatoes are included in both the fruit and vegetables templates), this is because it's easier for a map user to remove a symbol from a template than it is to add one.

Corn - Corn, sweet corn, popcorn or ornamental corn, plus double crops with corn and another crop.
Cotton - Cotton and double crops with cotton and another crop.
Fruit - Symbolized fruit crops include not only things like melons, apricots, and strawberries, but also olives, avocados, and tomatoes.
Nuts - Peanuts, tree nuts, sunflower, etc.
Oil Crops - Rapeseed and canola, soybeans, avocado, peanut, corn, safflower, sunflower, also cotton and grapes.
Permanent Crops - Crops that do not need to be replanted after harvest, including fruit and nut trees, caneberries, and grapes.
Rice - Rice crops.
Sugar - Crops grown to make sugars: sugar beets and cane of course, but also corn and grapes.
Soybeans - Soybean crops, including double crops where soybeans are grown at some time during the growing season.
Vegetables - Vegetable crops (and yes, this includes tomatoes).
Wheat - Winter and spring wheat, durum wheat, triticale, spelt, and wheat double crops.

In many places, two crops were grown in one growing season. Keep in mind that a double crop of corn and soybeans will display in both the corn and soybeans processing templates.

What can you do with this layer?
This layer is suitable for both visualization and analysis across the ArcGIS system. It can be combined with your data and other layers from the ArcGIS Living Atlas of the World in ArcGIS Online and ArcGIS Pro to create powerful web maps that can be used alone or in a story map or other application. Because this layer is part of the ArcGIS Living Atlas of the World, it is easy to add to your map:

In ArcGIS Online, you can add this layer to a map by selecting Add then Browse Living Atlas Layers. A window will open. Type "USA Cropland" in the search box and browse to the layer. Select the layer, then click Add to Map.
In ArcGIS Pro, open a map and select Add Data from the Map tab. Select Data at the top of the drop-down menu. The Add Data dialog box will open; on the left side of the box, expand Portal if necessary, then select Living Atlas. Type "USA Cropland" in the search box, browse to the layer, then click OK.

In ArcGIS Pro you can use the built-in raster functions, or create your own, to make custom extracts of the data. Imagery layers provide fast, powerful inputs to geoprocessing tools, models, or Python scripts in Pro. Online, you can filter the layer to show subsets of the data using the filter button and the layer's built-in raster functions. The ArcGIS Living Atlas of the World provides an easy way to explore many other beautiful and authoritative maps on hundreds of topics like this one.
Index to raster values in USA Cropland:
Value, Crop
0, Background (not a cultivated crop or no data)
1, Corn
2, Cotton
3, Rice
4, Sorghum
5, Soybeans
6, Sunflower
10, Peanuts
11, Tobacco
12, Sweet Corn
13, Popcorn or Ornamental Corn
14, Mint
21, Barley
22, Durum Wheat
23, Spring Wheat
24, Winter Wheat
25, Other Small Grains
26, Double Crop Winter Wheat/Soybeans
27, Rye
28, Oats
29, Millet
30, Speltz
31, Canola
32, Flaxseed
33, Safflower
34, Rape Seed
35, Mustard
36, Alfalfa
37, Other Hay/Non Alfalfa
38, Camelina
39, Buckwheat
41, Sugarbeets
42, Dry Beans
43, Potatoes
44, Other Crops
45, Sugarcane
46, Sweet Potatoes
47, Miscellaneous Vegetables and Fruits
48, Watermelons
49, Onions
50, Cucumbers
51, Chick Peas
52, Lentils
53, Peas
54, Tomatoes
55, Caneberries
56, Hops
57, Herbs
58, Clover/Wildflowers
59, Sod/Grass Seed
60, Switchgrass
61, Fallow/Idle Cropland
62, Pasture/Grass
63, Forest
64, Shrubland
65, Barren
66, Cherries
67, Peaches
68, Apples
69, Grapes
70, Christmas Trees
71, Other Tree Crops
72, Citrus
74, Pecans
75, Almonds
76, Walnuts
77, Pears
81, Clouds/No Data
82, Developed
83, Water
87, Wetlands
88, Nonagricultural/Undefined
92, Aquaculture
111, Open Water
112, Perennial Ice/Snow
121, Developed/Open Space
122, Developed/Low Intensity
123, Developed/Med Intensity
124, Developed/High Intensity
131, Barren
141, Deciduous Forest
142, Evergreen Forest
143, Mixed Forest
152, Shrubland
176, Grassland/Pasture
190, Woody Wetlands
195, Herbaceous Wetlands
204, Pistachios
205, Triticale
206, Carrots
207, Asparagus
208, Garlic
209, Cantaloupes
210, Prunes
211, Olives
212, Oranges
213, Honeydew Melons
214, Broccoli
215, Avocados
216, Peppers
217, Pomegranates
218, Nectarines
219, Greens
220, Plums
221, Strawberries
222, Squash
223, Apricots
224, Vetch
225, Double Crop Winter Wheat/Corn
226, Double Crop Oats/Corn
227, Lettuce
228, Double Crop Triticale/Corn
229, Pumpkins
230, Double Crop Lettuce/Durum Wheat
231, Double Crop Lettuce/Cantaloupe
232, Double Crop Lettuce/Cotton
233, Double Crop Lettuce/Barley
234, Double Crop Durum Wheat/Sorghum
235, Double Crop Barley/Sorghum
236, Double Crop Winter Wheat/Sorghum
237, Double Crop Barley/Corn
238, Double Crop Winter Wheat/Cotton
239, Double Crop Soybeans/Cotton
240, Double Crop Soybeans/Oats
241, Double Crop Corn/Soybeans
242, Blueberries
243, Cabbage
244, Cauliflower
245, Celery
246, Radishes
247, Turnips
248, Eggplants
249, Gourds
250, Cranberries
254, Double Crop Barley/Soybeans

Questions?
Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
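As a worked illustration of the index above, a subset of the value-to-crop lookup can drive simple masking of an exported raster; this is a hedged sketch, with the exported array and file name assumed:

```python
import numpy as np

# Subset of the value-to-crop index above (corn-related classes).
CORN_VALUES = {1: "Corn", 12: "Sweet Corn", 13: "Popcorn or Ornamental Corn",
               225: "Double Crop Winter Wheat/Corn", 226: "Double Crop Oats/Corn",
               228: "Double Crop Triticale/Corn", 237: "Double Crop Barley/Corn",
               241: "Double Crop Corn/Soybeans"}

cdl = np.load("cdl_export.npy")               # exported CDL pixels (assumed file)
corn_mask = np.isin(cdl, list(CORN_VALUES))   # True where a corn class is grown
acres = corn_mask.sum() * 30 * 30 / 4046.86   # 30 m pixels to acres (2008-2023 era)
print(f"Corn-family area: {acres:,.0f} acres")
```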
This data is shown with a mask around Dallas to help you zero in on data targeted to Dallas, TX. The underlying map can be found here: https://dallasgis.maps.arcgis.com/home/item.html?id=1a2f5820471a4bb6aa9ae7a6fcc3c991

This layer shows poverty status by age group, by tract, county, and state boundaries. This service is updated annually to contain the most currently released American Community Survey (ACS) 5-year data, and contains estimates and margins of error. There are also additional calculated attributes related to this topic, which can be mapped or used within analysis. Poverty status is based on income in the past 12 months of the survey. This layer is symbolized to show the percentage of the population whose income falls below the Federal poverty line. To see the full list of attributes available in this service, go to the "Data" tab and choose "Fields" at the top right.

Current Vintage: 2015-2019
ACS Table(s): B17020
Data downloaded from: Census Bureau's API for American Community Survey
Date of API call: December 10, 2020
National Figures: data.census.gov

The United States Census Bureau's American Community Survey (ACS): About the Survey; Geography & ACS; Technical Documentation; News & Updates

This ready-to-use layer can be used within ArcGIS Pro, ArcGIS Online, its configurable apps, dashboards, Story Maps, custom apps, and mobile apps. Data can also be exported for offline workflows. For more information about ACS layers, visit the FAQ. Please cite the Census and ACS when using this data.

Data Note from the Census:
Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is represented through the use of a margin of error. The value shown here is the 90 percent margin of error. The margin of error can be interpreted as providing a 90 percent probability that the interval defined by the estimate minus the margin of error and the estimate plus the margin of error (the lower and upper confidence bounds) contains the true value. In addition to sampling variability, the ACS estimates are subject to nonsampling error (for a discussion of nonsampling variability, see Accuracy of the Data). The effect of nonsampling error is not represented in these tables.

Data Processing Notes:
This layer is updated automatically when the most current vintage of ACS data is released each year, usually in December. The layer always contains the latest available ACS 5-year estimates. It is updated annually within days of the Census Bureau's release schedule. Click here to learn more about ACS data releases. Boundaries come from the US Census TIGER geodatabases. Boundaries are updated at the same time as the data updates (annually), and the boundary vintage appropriately matches the data vintage as specified by the Census. These are Census boundaries with water and/or coastlines clipped for cartographic purposes. For census tracts, the water cutouts are derived from a subset of the 2010 AWATER (Area Water) boundaries offered by TIGER. For state and county boundaries, the water and coastlines are derived from the coastlines of the 500k TIGER Cartographic Boundary Shapefiles. The original AWATER and ALAND fields are still available as attributes within the data table (units are square meters).
The States layer contains 52 records: all US states, Washington D.C., and Puerto Rico. Census tracts with no population that occur in areas of water, such as oceans, are removed from this data service (census tracts beginning with 99). Percentages and derived counts, and associated margins of error, are calculated values (identifiable by the "_calc_" stub in the field name) and abide by the specifications defined by the American Community Survey. Field alias names were created based on the Table Shells file available from the American Community Survey Summary File Documentation page. Negative values (e.g., -4444...) have been set to null, with the exception of -5555..., which has been set to zero. These negative values exist in the raw API data to indicate the following situations:
- The margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error, and thus the margin of error. A statistical test is not appropriate.
- Either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest or upper interval of an open-ended distribution.
- The median falls in the lowest interval of an open-ended distribution, or in the upper interval of an open-ended distribution. A statistical test is not appropriate.
- The estimate is controlled. A statistical test for sampling variability is not appropriate.
- The data for this geographic area cannot be displayed because the number of sample cases is too small.
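A hedged sketch of the sentinel handling described above, applied to a hypothetical export of this table with pandas; the full sentinel codes are elided in the source, so -4444 and -5555 stand in for those families of values, and the column filter is an assumption:

```python
import pandas as pd

df = pd.read_csv("acs_b17020_export.csv")  # hypothetical export of this layer

# Emulate the service's processing: -5555-style codes become zero,
# remaining negative sentinel codes (e.g., -4444-style) become null.
value_cols = [c for c in df.columns if c.startswith("B17020")]  # assumed naming
for col in value_cols:
    df[col] = df[col].mask(df[col] == -5555, 0)  # controlled estimate -> 0
    df[col] = df[col].mask(df[col] < 0)          # other sentinels -> NaN
```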
Swimming pools are important for property tax assessment because they impact the value of a property. Tax assessors at local government agencies often rely on expensive and infrequent surveys, leading to assessment inaccuracies. Finding the area of pools that are not on the assessment roll (such as those recently constructed) is valuable to assessors and will ultimately mean additional revenue for the community. This deep learning model helps automate the task of finding the area of pools from high-resolution satellite imagery. It can also benefit swimming pool maintenance companies and help redirect their marketing efforts. Public health and mosquito control agencies can also use this model to detect pools and drive field activity and mitigation efforts.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model: This model cannot be fine-tuned using ArcGIS tools.
Input: 8-bit, 3-band high-resolution (5-30 cm) imagery.
Output: Feature class containing masks depicting pools.
Applicable geographies: The model is expected to work well in the United States.
Model architecture: The model uses the FasterRCNN model architecture implemented using the ArcGIS API for Python and the open-source Segment Anything Model (SAM) by Meta.
Accuracy metrics: The model has an average precision score of 0.59.
Sample results: Here are a few results from the model.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of true-ortho high-resolution aerial images captured in 2023 by the local government of Emmen, Netherlands for the purpose of solar panel segmentation. The images are taken under similar conditions using a small aircraft, providing a significantly higher spatial resolution (7.5 cm per pixel) compared to satellite imagery (30 cm per pixel). This results in sharper and more detailed images suitable for solar panel detection and related spatial analyses.
The aerial images cover an area of 346.26 km², with four selected regions comprising diverse building types and vegetation, totaling 18.55 km². The selected areas are divided into 30x30 meter grid cells, resulting in 20,618 squares of 900 m² each. Within these areas, solar panels were manually annotated with polygons, identifying 4,389 unique solar panel objects.
Given that the proportion of solar panel surface relative to the total area is small, the dataset includes only 224x224 pixel RGB images from grid cells that either contain solar panels or are in close proximity to a cell with solar panels. This selection avoids a significant class imbalance. The final dataset consists of 5,327 annotated images, of which 1,743 contain solar panels.
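Since "close proximity" is not defined precisely in the description, the following sketch illustrates one plausible reading of the selection rule, treating proximity as 8-neighborhood adjacency on the 30 m grid; this is an assumption for illustration, not the authors' actual criterion.

```python
# Illustrative chip selection: keep grid cells that contain solar panels or
# border a cell that does. 8-neighborhood adjacency is an assumption here;
# the dataset's actual "close proximity" rule may differ.
import numpy as np
from scipy.ndimage import binary_dilation

has_panels = np.zeros((100, 100), dtype=bool)  # toy 30 m grid
has_panels[40:42, 60] = True                   # cells with annotated panels

keep = binary_dilation(has_panels, structure=np.ones((3, 3), dtype=bool))
print(f"selected {keep.sum()} of {has_panels.size} cells")
```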
For our research purposes, the dataset is enriched with elevation and slope data from the (also publicly available) Actueel Hoogtebestand Nederland (AHN) dataset:
Elevation Data (AHN4 DEMs and LiDAR-derived Point Cloud Data): AHN4 provides precise elevation measurements with a minimum of 10 measurements per square meter. The digital terrain model (DTM) was generated using a Squared Inverse Distance Weighting (IDW) method.
Slope Calculation: The tilt of surfaces was computed from the AHN4 dataset using the Planar Method.
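Planar slope in ArcGIS is conventionally computed from a 3 × 3 neighborhood using Horn's (1981) finite differences. The NumPy sketch below illustrates that computation under those assumptions; it is not the dataset's actual processing code.

```python
# Illustrative planar slope (Horn 1981) over a 3x3 window; assumes square cells.
import numpy as np

def slope_degrees(dem: np.ndarray, cellsize: float) -> np.ndarray:
    """Slope in degrees from a DTM via Horn's 3x3 finite differences."""
    z = np.pad(dem, 1, mode="edge")
    # 3x3 neighborhood labels:  a b c / d e f / g h i
    a, b, c = z[:-2, :-2], z[:-2, 1:-1], z[:-2, 2:]
    d, f = z[1:-1, :-2], z[1:-1, 2:]
    g, h, i = z[2:, :-2], z[2:, 1:-1], z[2:, 2:]
    dz_dx = ((c + 2 * f + i) - (a + 2 * d + g)) / (8 * cellsize)
    dz_dy = ((g + 2 * h + i) - (a + 2 * b + c)) / (8 * cellsize)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
```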
The dataset is organized into the following directories:
/input -- stores the raw aerial images
- 000000.png
- 000001.png
/mask -- pixel-wise ground truth masks indicating the presence of solar panels
- 000000.png (corresponds to input 000000.png)
- 000001.png
/height -- elevation data derived from the AHN4 dataset for each pixel
- 000000.tiff
- 000001.tiff
/slope -- slope values computed using a 3 × 3 sliding window for each pixel
- 000000.tiff
- 000001.tiff
/annotations -- meta information about the annotations
- annotations.shp -- the annotations in polygon form
- grids.shp -- the grid cells covering the entire annotated area, with a flag for each cell indicating whether it was included
fold_info.pkl -- Python dictionary containing indices for stratified 5-fold cross-validation
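To show how this layout fits together, here is a minimal PyTorch Dataset sketch that pairs each image with its mask, height, and slope rasters. File naming follows the description above; transforms and normalization are omitted, and the internal structure of fold_info.pkl (a dict mapping fold id to chip indices) is an assumption.

```python
# Minimal PyTorch Dataset sketch for the directory layout above; the
# fold_info.pkl structure (fold id -> chip indices) is assumed, not documented.
import pickle
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

def load_fold_indices(path, fold):
    with open(path, "rb") as f:
        folds = pickle.load(f)
    return folds[fold]  # assumed dictionary layout

class SolarPanelDataset(Dataset):
    def __init__(self, root, indices):
        self.root = Path(root)
        self.indices = list(indices)

    def __len__(self):
        return len(self.indices)

    def __getitem__(self, i):
        stem = f"{self.indices[i]:06d}"
        image = np.array(Image.open(self.root / "input" / f"{stem}.png"))
        mask = np.array(Image.open(self.root / "mask" / f"{stem}.png"))
        height = np.array(Image.open(self.root / "height" / f"{stem}.tiff"))
        slope = np.array(Image.open(self.root / "slope" / f"{stem}.tiff"))
        x = torch.from_numpy(image).permute(2, 0, 1).float() / 255.0  # RGB -> CHW
        extra = torch.from_numpy(np.stack([height, slope])).float()   # 2 x H x W
        return torch.cat([x, extra]), torch.from_numpy(mask).long()
```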
The code for producing baseline deep learning models on this dataset can be found in the accompanying GitLab repository.
If you want to cite this work, please cite our underlying paper.
Reason for Selection
Protected natural areas in urban environments provide urban residents a nearby place to connect with nature and offer refugia for some species. They help foster a conservation ethic by providing opportunities for people to connect with nature, and also support ecosystem services like offsetting heat island effects (Greene and Millward 2017, Simpson 1998), water filtration, stormwater retention, and more (Hoover and Hopton 2019). In addition, parks, greenspace, and greenways can help improve physical and psychological health in communities (Gies 2006). Urban park size complements the equitable access to potential parks indicator by capturing the value of existing parks.
Input Data
- Southeast Blueprint 2024 extent
- FWS National Realty Tracts, accessed 12-13-2023
- Protected Areas Database of the United States (PAD-US): PAD-US 3.0 national geodatabase - Combined Proclamation Marine Fee Designation Easement, accessed 12-6-2023
- 2020 Census Urban Areas from the Census Bureau's urban-rural classification; download the data and read more about how urban areas were redefined following the 2020 census
- OpenStreetMap data "multipolygons" layer, accessed 12-5-2023. A polygon from this dataset is considered a beach if the value in the "natural" tag attribute is "beach". Data for coastal states (VA, NC, SC, GA, FL, AL, MS, LA, TX) were downloaded in .pbf format and translated to an ESRI shapefile using R code. OpenStreetMap® is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF). Additional credit to OSM contributors. Read more on the OSM copyright page.
- 2021 National Land Cover Database (NLCD): percent developed imperviousness
- 2023 NOAA coastal relief model: volumes 2 (Southeast Atlantic), 3 (Florida and East Gulf of America), 4 (Central Gulf of America), and 5 (Western Gulf of America), accessed 3-27-2024
Mapping Steps
1. Create a seamless vector layer to constrain the extent of the urban park size indicator to inland and nearshore marine areas <10 m in depth. The deep offshore areas of marine parks do not meet the intent of this indicator to capture nearby opportunities for urban residents to connect with nature. Shallow areas are more accessible for recreational activities like snorkeling, which typically has a maximum recommended depth of 12-15 meters. This step mirrors the approach taken in the Caribbean version of this indicator.
- Merge all coastal relief model rasters (.nc format) together using the QGIS "create virtual raster" tool.
- Save the merged raster to .tif and import into ArcGIS Pro.
- Reclassify the NOAA coastal relief model data to assign areas with an elevation of land to -10 m a value of 1. Assign all other areas (deep marine) a value of 0.
- Convert the raster produced above to vector using the "RasterToPolygon" tool.
- Clip to 2024 subregions using the "Pairwise Clip" tool.
- Break apart multipart polygons using the "Multipart to single parts" tool.
- Hand-edit to remove the deep marine polygon.
- Dissolve the resulting data layer. This produces a seamless polygon defining land and shallow marine areas.
2. Clip the Census urban area layer to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
3. Clip PAD-US 3.0 to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
4. Remove the following areas from PAD-US 3.0, which are outside the scope of this indicator to represent parks:
- All School Trust Lands in Oklahoma and Mississippi (Loc Des = "School Lands" or "School Trust Lands"). These extensive lands are leased out and are not open to the public.
- All tribal and military lands ("Des_Tp" = "TRIBL" or "Des_Tp" = "MIL"). Generally, these lands are not intended for public recreational use.
- All BOEM marine lease blocks ("Own_Name" = "BOEM"). These Outer Continental Shelf lease blocks do not represent actively protected marine parks, but serve as the "legal definition for BOEM offshore boundary coordinates...for leasing and administrative purposes" (BOEM).
- All lands designated as "proclamation" ("Des_Tp" = "PROC"). These typically represent the approved boundary of public lands, within which land protection is authorized to occur, but not all lands within the proclamation boundary are necessarily currently in a conserved status.
5. Retain only selected attribute fields from PAD-US to get rid of irrelevant attributes.
6. Merge the filtered PAD-US layer produced above with the OSM beaches and FWS National Realty Tracts to produce a combined protected areas dataset.
7. The resulting merged data layer contains overlapping polygons. To remove them, use the Dissolve function.
8. Clip the resulting data layer to the inland and nearshore extent.
9. Process all multipart polygons (e.g., separate parcels within a National Wildlife Refuge) to single parts (referred to in Arc software as an "explode").
10. Select all polygons that intersect the Census urban extent within 0.5 miles. We chose 0.5 miles to represent a reasonable walking distance based on input and feedback from park access experts. Assuming a moderate-intensity walking pace of 3 miles per hour, as defined by the U.S. Department of Health and Human Services' physical activity guidelines, the 0.5 mi distance also corresponds to the 10-minute walk threshold used in the equitable access to potential parks indicator.
11. Dissolve all the park polygons selected in the previous step.
12. Process all multipart polygons to single parts ("explode") again.
13. Add a unique ID to the selected parks. This value will be used in a later step to join the parks to their buffers.
14. Create a 0.5 mi (805 m) buffer ring around each park using the multiring plugin in QGIS. Ensure that "dissolve buffers" is disabled so that a single 0.5 mi buffer is created for each park.
15. Assess the amount of overlap between each buffered park and the Census urban area using "overlap analysis". This step is necessary to identify parks that do not intersect the urban area but lie within an urban matrix (e.g., Umstead Park in Raleigh, NC and Davidson-Arabia Mountain Nature Preserve in Atlanta, GA). This step creates a table that is joined back to the park polygons using the unique ID.
16. Remove parks that had ≤10% overlap with the urban areas when buffered. This excludes mostly non-urban parks that do not meet the intent of this indicator to capture parks that provide nearby access for urban residents. Note: the 10% threshold is a judgment call based on testing which known urban parks and urban National Wildlife Refuges are captured at different overlap cutoffs, and is intended to be as inclusive as possible.
17. Calculate the GIS acres of each remaining park unit using the Add Geometry Attributes function.
18. Buffer the selected parks by 15 m. Buffering prevents very small and narrow parks from being left out of the indicator when the polygons are converted to raster.
19. Reclassify the parks based on their area into the 7 classes seen in the final indicator values below. These thresholds were informed by park classification guidelines from the National Recreation and Park Association, which classify neighborhood parks as 5-10 acres, community parks as 30-50 acres, and large urban parks as optimally 75+ acres (Mertes and Hall 1995). (A minimal sketch of this reclassification appears after this list.)
20. Assess the impervious surface composition of each park using the NLCD 2021 impervious layer and the Zonal Statistics "MEAN" function. Retain only the mean percent impervious value for each park.
21. Extract only parks with a mean impervious pixel value <80%. This step excludes parks that do not meet the intent of the indicator to capture opportunities to connect with nature and offer refugia for species (e.g., the Superdome in New Orleans, LA, the Astrodome in Houston, TX, and City Plaza in Raleigh, NC).
22. Extract again to the inland and nearshore extent.
23. Export the final vector file to a shapefile and import into ArcGIS Pro.
24. Convert the resulting polygons to raster using the ArcPy Feature to Raster function and the area class field.
25. Assign a value of 0 to all other pixels in the Southeast Blueprint 2024 extent not already identified as an urban park in the mapping steps above. Zero values are intended to help users better understand the extent of this indicator and make it perform better in online tools.
26. Use the land and shallow marine layer and the "extract by mask" tool to save the final version of this indicator.
27. Add color and legend to the raster attribute table.
28. As a final step, clip to the spatial extent of Southeast Blueprint 2024.
Note: For more details on the mapping steps, the code used to create this layer is available in the Southeast Blueprint Data Download under 6_Code.
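The reclassification in step 19 maps each park's acreage to the indicator classes listed under Final indicator values below. Here is a minimal Python sketch of that mapping; the function name is illustrative, not from the Blueprint's published code.

```python
# Minimal sketch of the step-19 area reclassification (function name illustrative).
def park_size_class(acres: float) -> int:
    """Return the urban park size indicator class for a park's GIS acres."""
    if acres >= 75:
        return 6   # 75+ acre urban park
    if acres >= 50:
        return 5   # 50 to <75 acres
    if acres >= 30:
        return 4   # 30 to <50 acres
    if acres >= 10:
        return 3   # 10 to <30 acres
    if acres >= 5:
        return 2   # 5 to <10 acres
    return 1       # <5 acre urban park; non-park pixels are assigned 0 separately
```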
Final indicator values
Indicator values are assigned as follows:
6 = 75+ acre urban park
5 = 50 to <75 acre urban park
4 = 30 to <50 acre urban park
3 = 10 to <30 acre urban park
2 = 5 to <10 acre urban park
1 = <5 acre urban park
0 = Not identified as an urban park
Known Issues
This indicator does not include park amenities that influence how well a park serves people, and it should not be the only tool used for parks and recreation planning. Park standards should be determined at a local level to account for various community issues, values, needs, and available resources.
This indicator includes some protected areas that are not open to the public and not typically thought of as "parks", like mitigation lands, private easements, and private golf courses. While we experimented with excluding them using the public access attribute in PAD-US, due to numerous inaccuracies, this inadvertently removed protected lands that are known to be publicly accessible. As a result, we erred on the side of including the non-publicly accessible lands.
The NLCD percent impervious layer contains classification inaccuracies. As a result, this indicator may exclude parks that are mostly natural because they are misclassified as mostly impervious. Conversely, it may include parks that are mostly impervious because they are misclassified as mostly natural.