Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This resource was created by Esri Canada Education and Research. To browse our full collection of higher-education learning resources, please visit https://hed.esri.ca/resourcefinder/.

This tutorial introduces you to using Python code in a Jupyter Notebook, an open-source web application that enables you to create and share documents that contain rich text, equations, and multimedia alongside executable code and visualizations of analysis outputs. The tutorial begins by stepping through the basics of setting up and being productive with Python notebooks. You will then be introduced to ArcGIS Notebooks, which are Python notebooks that are well integrated within the ArcGIS platform. Finally, you will be guided through a series of ArcGIS Notebooks that illustrate how to create compelling notebooks for data science that integrate your own Python scripts using the ArcGIS API for Python and ArcPy, in combination with thousands of open-source Python libraries, to enhance your analysis and visualization.

To download the dataset Labs, click the Open button at the top right. This will automatically download a ZIP file containing all files and data required. You can also clone the tutorial documents and datasets from this GitHub repo: https://github.com/highered-esricanada/arcgis-notebooks-tutorial.git.

Software & Solutions Used:
Required: This tutorial was last tested on August 27th, 2024, using ArcGIS Pro 3.3. If you're using a different version of ArcGIS Pro, you may encounter different functionality and results.
Recommended: ArcGIS Online subscription account with permissions to use advanced Notebooks and GeoEnrichment
Optional: Notebook Server for ArcGIS Enterprise 11.3+

Time to Complete: 2 h (excludes processing time)
File Size: 196 MB
Date Created: January 2022
Last Updated: August 27, 2024
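To give a flavour of what the notebooks cover, here is a minimal sketch of an ArcGIS Notebook cell using the ArcGIS API for Python. The search term, item type, and map location are illustrative assumptions, not steps taken from the tutorial itself.

    # A minimal sketch of a notebook cell (assumes the arcgis package is installed).
    from arcgis.gis import GIS

    # Anonymous connection to ArcGIS Online; inside ArcGIS Pro or ArcGIS Online,
    # GIS("home") connects to the portal you are already signed in to.
    gis = GIS()

    # Search public content (the query string is a made-up example).
    items = gis.content.search("land cover", item_type="Feature Layer", max_items=5)
    for item in items:
        print(item.title, item.id)

    # In a notebook, a map widget renders inline when it is the last expression in a cell.
    m = gis.map("Toronto, Canada")
    m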
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.

The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked in the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.

After completing this seminar you will be able to:
explain how ANNs work, including weights, bias, activation, and optimization.
describe and explain different loss and assessment metrics and determine appropriate use cases.
use the tensor data model to represent data as input for deep learning.
explain how CNNs work, including convolutional operations/layers, kernel size, stride, padding, max pooling, activation, and batch normalization.
use PyTorch, Python, and R to prepare data, produce and assess scene classification models, and perform inference on new data.
explain common semantic segmentation architectures, how these methods allow for pixel-level classification, and how they differ from traditional CNNs.
use PyTorch, Python, and R (or ArcGIS Pro) to prepare data, produce and assess semantic segmentation models, and perform inference on new data.
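As a concrete anchor for the seminar's early topics, here is a minimal PyTorch sketch of the tensor data model and a single convolutional block. The chip size and channel counts are arbitrary illustrations, not values from the seminar's examples.

    import torch
    import torch.nn as nn

    # The tensor data model: a mini-batch of 4 three-band 128 x 128 image chips
    # is represented as a 4D tensor with shape (batch, channels, height, width).
    chips = torch.rand(4, 3, 128, 128)

    # A minimal convolutional block of the kind the seminar conceptualizes:
    # convolution -> batch normalization -> ReLU activation -> max pooling.
    block = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm2d(16),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),  # halves the spatial dimensions
    )

    features = block(chips)
    print(features.shape)  # torch.Size([4, 16, 64, 64])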
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Python notebooks have become a vital tool in the Python and data science communities for enhancing workflows for GIS data management, analysis, and visualization. This workshop will introduce how to use Python notebooks within ArcGIS Pro. The learning outcome is to gain an understanding of the basics of working with Python notebooks to describe and document workflows, execute Python code, and visualize data and analysis outputs. There will be a focus on integrating with the more advanced geospatial capabilities of ArcGIS Pro and ArcGIS Online via Python modules, including ArcPy and the ArcGIS API for Python.
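For orientation, a notebook cell inside ArcGIS Pro might inventory a workspace like this. The geodatabase path is a placeholder, not part of the workshop materials.

    import arcpy

    # Point ArcPy at a workspace (placeholder path; substitute your own geodatabase).
    arcpy.env.workspace = r"C:\data\demo.gdb"

    # List feature classes and report a few basic properties of each.
    for fc in arcpy.ListFeatureClasses():
        desc = arcpy.Describe(fc)
        count = int(arcpy.management.GetCount(fc)[0])
        print(f"{fc}: {desc.shapeType}, {count} features")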
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification deep learning package classifies point clouds into tree and background classes. The model is optimized to work with New Zealand aerial LiDAR data. Classifying point cloud datasets to identify trees is useful in applications such as high-quality 3D basemap creation, urban planning, forestry workflows, and planning climate change response. Trees can have complex, irregular geometric structures that are hard to capture by traditional means; deep learning models are highly capable of learning these complex structures and give superior results. This model is designed to extract trees in both urban and rural areas of New Zealand. The training, testing, and validation datasets were taken within New Zealand, giving high reliability in recognizing patterns common across the country.

Licensing requirements: ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

Using the model: The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning framework libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended for processing large datasets.

Input: The model is trained with classified LiDAR that follows the LINZ base specification, and the input data should be similar to this specification. Note: The model depends on additional attributes such as Intensity, Number of Returns, etc., similar to the LINZ base specification. This model is trained to work on classified and unclassified point clouds that are in a projected coordinate system, in which the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points; therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest from background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block, and extra attributes, should match those of the data originally used for training this model (see the Training data section below).

Output: The model classifies the point cloud into the following classes, with meanings as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):
0 Background
5 Trees / High vegetation

Applicable geographies: The model is expected to work well in New Zealand. It has produced favorable results in many regions; however, results can vary for datasets that are statistically dissimilar to the training data.

Training dataset: Wellington City
Testing dataset: Tawa City
Validation/Evaluation dataset: Christchurch City

Model architecture: This model uses the PointCNN model architecture implemented in the ArcGIS API for Python.

Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class: Precision / Recall / F1-score
Never Classified: 0.991200 / 0.975404 / 0.983239
High Vegetation: 0.933569 / 0.975559 / 0.954102

Training data: This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split: 80% train, 20% test. This ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement.

The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -121.69 m to 26.84 m
Number of returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -15 to +15
Maximum points per block: 8192
Block size: 20 meters
Class structure: [0, 5]

Sample results: Model used to classify a dataset with 5 pts/m density from the Christchurch City dataset. The model's performance is directly proportional to the dataset's point density and to the exclusion of noise from the point cloud. To learn how to use this model, see this story.
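A minimal sketch of invoking the tool from Python is shown below. The LAS dataset and package paths are placeholders, and any parameters beyond the first two should be verified against the Classify Point Cloud Using Trained Model documentation for your ArcGIS Pro release.

    import arcpy

    arcpy.CheckOutExtension("3D")  # the tool requires the 3D Analyst extension

    # Placeholder paths; point these at your own LAS dataset and the
    # downloaded .dlpk deep learning package.
    las_dataset = r"C:\data\nz_lidar.lasd"
    model_package = r"C:\models\NZTreeClassification.dlpk"

    # Classify the point cloud in place using the pre-trained model.
    # Optional arguments (target classes, class preservation, etc.) are
    # omitted here; consult the tool help before relying on their names.
    arcpy.ddd.ClassifyPointCloudUsingTrainedModel(las_dataset, model_package)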
Dataset for the textbook Computational Methods and GIS Applications in Social Science (3rd Edition), 2023, Fahui Wang, Lingbo Liu

Main Book Citation: Wang, F., & Liu, L. (2023). Computational Methods and GIS Applications in Social Science (3rd ed.). CRC Press. https://doi.org/10.1201/9781003292302
KNIME Lab Manual Citation: Liu, L., & Wang, F. (2023). Computational Methods and GIS Applications in Social Science - Lab Manual. CRC Press. https://doi.org/10.1201/9781003304357
KNIME Hub: Dataset and Workflow for Computational Methods and GIS Applications in Social Science - Lab Manual

Update Log
If a Python package is not found in Package Management, use ArcGIS Pro's Python Command Prompt to install it, e.g., conda install -c conda-forge python-igraph leidenalg
NetworkCommDetPro in CMGIS-V3-Tools was updated on July 10, 2024
Added spatial adjacency table into Florida on June 29, 2024
The dataset and tool for ABM Crime Simulation were updated on August 3, 2023
The toolkits in CMGIS-V3-Tools were updated on August 3, 2023

Report issues on GitHub: https://github.com/UrbanGISer/Computational-Methods-and-GIS-Applications-in-Social-Science
Follow the website of Fahui Wang: http://faculty.lsu.edu/fahui

Contents
Chapter 1. Getting Started with ArcGIS: Data Management and Basic Spatial Analysis Tools
  Case Study 1: Mapping and Analyzing Population Density Pattern in Baton Rouge, Louisiana
Chapter 2. Measuring Distance and Travel Time and Analyzing Distance Decay Behavior
  Case Study 2A: Estimating Drive Time and Transit Time in Baton Rouge, Louisiana
  Case Study 2B: Analyzing Distance Decay Behavior for Hospitalization in Florida
Chapter 3. Spatial Smoothing and Spatial Interpolation
  Case Study 3A: Mapping Place Names in Guangxi, China
  Case Study 3B: Area-Based Interpolations of Population in Baton Rouge, Louisiana
  Case Study 3C: Detecting Spatiotemporal Crime Hotspots in Baton Rouge, Louisiana
Chapter 4. Delineating Functional Regions and Applications in Health Geography
  Case Study 4A: Defining Service Areas of Acute Hospitals in Baton Rouge, Louisiana
  Case Study 4B: Automated Delineation of Hospital Service Areas in Florida
Chapter 5. GIS-Based Measures of Spatial Accessibility and Application in Examining Healthcare Disparity
  Case Study 5: Measuring Accessibility of Primary Care Physicians in Baton Rouge
Chapter 6. Function Fittings by Regressions and Application in Analyzing Urban Density Patterns
  Case Study 6: Analyzing Population Density Patterns in Chicago Urban Area
Chapter 7. Principal Components, Factor and Cluster Analyses and Application in Social Area Analysis
  Case Study 7: Social Area Analysis in Beijing
Chapter 8. Spatial Statistics and Applications in Cultural and Crime Geography
  Case Study 8A: Spatial Distribution and Clusters of Place Names in Yunnan, China
  Case Study 8B: Detecting Colocation Between Crime Incidents and Facilities
  Case Study 8C: Spatial Cluster and Regression Analyses of Homicide Patterns in Chicago
Chapter 9. Regionalization Methods and Application in Analysis of Cancer Data
  Case Study 9: Constructing Geographical Areas for Mapping Cancer Rates in Louisiana
Chapter 10. System of Linear Equations and Application of the Garin-Lowry Model in Simulating Urban Population and Employment Patterns
  Case Study 10: Simulating Population and Service Employment Distributions in a Hypothetical City
Chapter 11. Linear and Quadratic Programming and Applications in Examining Wasteful Commuting and Allocating Healthcare Providers
  Case Study 11A: Measuring Wasteful Commuting in Columbus, Ohio
  Case Study 11B: Location-Allocation Analysis of Hospitals in Rural China
Chapter 12. Monte Carlo Method and Applications in Urban Population and Traffic Simulations
  Case Study 12A: Examining Zonal Effect on Urban Population Density Functions in Chicago by Monte Carlo Simulation
  Case Study 12B: Monte Carlo-Based Traffic Simulation in Baton Rouge, Louisiana
Chapter 13. Agent-Based Model and Application in Crime Simulation
  Case Study 13: Agent-Based Crime Simulation in Baton Rouge, Louisiana
Chapter 14. Spatiotemporal Big Data Analytics and Application in Urban Studies
  Case Study 14A: Exploring Taxi Trajectory in ArcGIS
  Case Study 14B: Identifying High Traffic Corridors and Destinations in Shanghai

Dataset File Structure
1 BatonRouge: Census.gdb, BR.gdb
2A BatonRouge: BR_Road.gdb, Hosp_Address.csv, TransitNetworkTemplate.xml, BR_GTFS, Google API Pro.tbx
2B Florida: FL_HSA.gdb, R_ArcGIS_Tools.tbx (RegressionR)
3A China_GX: GX.gdb
3B BatonRouge: BR.gdb
3C BatonRouge: BRcrime, R_ArcGIS_Tools.tbx (STKDE)
4A BatonRouge: BRRoad.gdb
4B Florida: FL_HSA.gdb, HSA Delineation Pro.tbx, Huff Model Pro.tbx, FLplgnAdjAppend.csv
5 BRMSA: BRMSA.gdb, Accessibility Pro.tbx
6 Chicago: ChiUrArea.gdb, R_ArcGIS_Tools.tbx (RegressionR)
7 Beijing: BJSA.gdb, bjattr.csv, R_ArcGIS_Tools.tbx (PCAandFA, BasicClustering)
8A Yunnan: YN.gdb, R_ArcGIS_Tools.tbx (SaTScanR)
8B Jiangsu: JS.gdb
8C Chicago: ChiCity.gdb, cityattr.csv
...
There is a suite of powerful open-source Python libraries that can be used to work with spatial data. Learn how to use geopandas, rasterio, and matplotlib to plot and manipulate spatial data in Python.
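A compact example of the three libraries working together is sketched below; the file names are placeholders for your own vector and raster data.

    import geopandas as gpd
    import matplotlib.pyplot as plt
    import rasterio
    from rasterio.plot import show

    # Placeholder inputs; substitute your own data.
    gdf = gpd.read_file("parcels.shp")
    src = rasterio.open("elevation.tif")

    # Reproject the vector layer to the raster's CRS so the two align.
    gdf = gdf.to_crs(src.crs)

    # Plot the raster, then draw the vector boundaries on the same axes.
    fig, ax = plt.subplots(figsize=(8, 8))
    show(src, ax=ax, cmap="terrain")
    gdf.boundary.plot(ax=ax, color="red", linewidth=0.5)
    plt.show()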
You will learn to work with ArcPy, the Esri-developed site package that integrates Python scripts into ArcGIS Desktop.

Goals:
Create Python scripts to perform geoprocessing tasks.
Access lists of datasets and loop through lists to test for a condition (see the sketch after this list).
Create dynamic scripts that allow users to interactively specify their own parameter values.
Create tools to share your Python scripts.
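A minimal sketch of the pattern these goals describe: reading user-supplied parameters, looping through a list of datasets, and testing a condition. The parameter indices and the polygon-only condition are illustrative assumptions.

    import arcpy

    # In a script tool, parameters arrive as strings in the order they are
    # defined on the tool's dialog (the indices here are illustrative).
    workspace = arcpy.GetParameterAsText(0)   # e.g., a geodatabase path
    out_folder = arcpy.GetParameterAsText(1)  # e.g., an output folder

    arcpy.env.workspace = workspace

    # Loop through feature classes and test a condition: copy only polygon layers.
    for fc in arcpy.ListFeatureClasses():
        if arcpy.Describe(fc).shapeType == "Polygon":
            arcpy.conversion.FeatureClassToShapefile(fc, out_folder)
            arcpy.AddMessage(f"Copied {fc}")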
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Second of four zipfiles providing all data and Python code necessary to replicate any of the 13 development potential indexes (DPIs) described in Oakleaf et al. (2019), "Mapping global development potential for renewable energy, fossil fuels, mining and agriculture sectors". A README.pdf guides users through setting up the environment necessary to use the data and run the Python code.
Running the Python code with the accompanying spatial data requires 64 GB of disk space. Additionally, ArcPy, a Python module associated with Esri's ArcGIS Desktop, and an accompanying Spatial Analyst extension license are required to run the code. All code was created by J.R. Oakleaf during 2018 and is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License: http://creativecommons.org/licenses/by-nc/4.0/.
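The DPI code itself ships in the zipfiles; purely to illustrate the stated requirements (ArcPy plus a Spatial Analyst license), here is a hedged sketch of the extension check-out pattern such scripts rely on. The paths and the slope step are placeholder assumptions.

    import arcpy
    from arcpy.sa import Raster, Slope

    # Spatial Analyst is licensed separately; check it out before any map algebra.
    arcpy.CheckOutExtension("Spatial")

    # Placeholder inputs; the real DPI inputs are described in the README.pdf.
    dem = Raster(r"C:\dpi_data\dem.tif")

    # A representative map-algebra step: derive slope as one model input.
    slope = Slope(dem, "DEGREE")
    slope.save(r"C:\dpi_data\slope.tif")

    arcpy.CheckInExtension("Spatial")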
Over 40,000 road crossings in Maine are maintained by Maine Department of Transportation (MaineDOT) managers, emergency managers, natural resource planners, and municipalities. Resource managers need a way to quickly and comprehensively assess, during the planning stages of potential transportation-related projects, how ecological, hydrologic, and structural characteristics of bridges and culverts and their watersheds could adversely affect project schedules and budgets. Factors that are critical to evaluate and incorporate into overall assessments of project risk include basin, land-use, and climatic characteristics; vulnerability to specific events, such as floods; and complicating factors in the watershed, such as endangered species, evacuation routes, and historical sites. A Python script tool has been built for ArcGIS Pro as an automated screening tool that draws on existing geographic information system (GIS) data layers to identify potential risk factors and quantify risk scores for bridges and culverts. This tool can help resource managers quickly evaluate projects, during early planning, in terms of variables that may adversely affect schedules or budgets.
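The description does not include the tool's code; the sketch below only illustrates the general screening pattern it implies (intersect crossings with risk layers, then sum flags into a score), using hypothetical layer names, a hypothetical proximity threshold, and unweighted flags.

    import geopandas as gpd

    # Hypothetical layer names standing in for the GIS data the tool draws on.
    crossings = gpd.read_file("crossings.shp")
    flood = gpd.read_file("flood_zones.shp").to_crs(crossings.crs)
    historic = gpd.read_file("historic_sites.shp").to_crs(crossings.crs)

    # Flag risk factors: 1 if the crossing intersects (or lies near) a layer.
    crossings["flood_flag"] = crossings.intersects(flood.unary_union).astype(int)
    crossings["historic_flag"] = (
        crossings.buffer(500).intersects(historic.unary_union).astype(int)
    )  # 500 map units is an illustrative proximity threshold

    # A simple additive score; the tool's real factors and weights differ.
    crossings["risk_score"] = crossings["flood_flag"] + crossings["historic_flag"]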
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
  terrainDerivatives: makes terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshades, Band 6 = slopeshade).
  rasterizeFeatures: converts vector polygons to raster masks (1 = feature, 0 = background).
makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size (the Python sketch after this list illustrates the idea).
makeTerrainDerivatives.R: R function to generate the 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
merge_logs.R: R script to merge training logs into a single file.
predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
assessmentExperiments.ipynb: Python code to generate assessment metrics using PyTorch and the torchmetrics library.
graphs_results.R: R code to make graphs with ggplot2 to summarize results.
makeChipsList.R: R code to generate lists of chips in a directory.
makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
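The repo implements chipping in R (makeChips.R); purely to illustrate the operation, here is a minimal NumPy sketch. The chip size, the 6-band input shape, and the non-overlapping layout are assumptions.

    import numpy as np

    def make_chips(image, chip_size=256):
        """Split a (bands, H, W) array into non-overlapping (bands, chip, chip)
        tiles, discarding partial tiles at the right/bottom edges."""
        bands, h, w = image.shape
        chips = []
        for top in range(0, h - chip_size + 1, chip_size):
            for left in range(0, w - chip_size + 1, chip_size):
                chips.append(image[:, top:top + chip_size, left:left + chip_size])
        return np.stack(chips)

    # Example: a fake 6-band terrain-derivative raster, 1024 x 1024 cells.
    raster = np.random.rand(6, 1024, 1024).astype(np.float32)
    print(make_chips(raster).shape)  # (16, 6, 256, 256)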
terraceDL.zip
dems: LiDAR DTM data partitioned into training, testing, and validation datasets based on HUC8 watershed boundaries. Original DTM data were provided by the Iowa BMP mapping project: https://www.gis.iastate.edu/BMPs.
extents: extents of the training, testing, and validation areas as defined by HUC8 watershed boundaries.
vectors: vector features representing agricultural terraces, partitioned into separate training, testing, and validation datasets. Original digitized features were provided by the Iowa BMP mapping project: https://www.gis.iastate.edu/BMPs.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For the complete collection of data and models, see https://doi.org/10.21942/uva.c.5290546. The original model was developed in 2016-17 in ArcGIS by Henk Pieter Sterk (www.rfase.org), with minor updates in 2021 by Stacy Shinneman and Henk Pieter Sterk. Model used to generate publication results: "Hierarchical geomorphological mapping in mountainous areas" by Matheus G.G. De Jong, Henk Pieter Sterk, Stacy Shinneman & Arie C. Seijmonsbergen, submitted to Journal of Maps in 2020, with revisions made in 2021.

This model creates tiers (columns) of geomorphological features (Tier 1, Tier 2, and Tier 3) in the landscape of Vorarlberg, Austria, each with an increasing level of detail. The input dataset needed to create this 'three-tier legend' is a geomorphological map of Vorarlberg with a Tier 3 category (e.g., 1111, for glacially eroded bedrock). The model then automatically adds Tier 1, Tier 2, and Tier 3 categories based on the Tier 3 code in the 'Geomorph' field. The model replaces the input file with an updated shapefile of the geomorphology of Vorarlberg, now including three tiers of geomorphological features. Python script files and .lyr symbology files are also provided here.
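The provided Python scripts are the authoritative implementation. Purely as an illustration of deriving tier columns from a hierarchical code, here is a hedged ArcPy sketch; it assumes (not stated in the description) that Tier 1 and Tier 2 are the leading one and two digits of the four-digit Tier 3 code, and the field names other than 'Geomorph' are hypothetical.

    import arcpy

    # Hypothetical input path; 'Geomorph' holds the Tier 3 code (e.g., 1111).
    fc = r"C:\data\vorarlberg_geomorphology.shp"

    for name in ("Tier1", "Tier2", "Tier3"):
        arcpy.management.AddField(fc, name, "TEXT")

    # Assumption: tiers are prefixes of the code (1111 -> 1, 11, 1111).
    with arcpy.da.UpdateCursor(fc, ["Geomorph", "Tier1", "Tier2", "Tier3"]) as cur:
        for row in cur:
            code = str(row[0])
            cur.updateRow([code, code[:1], code[:2], code])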
A 40-minute tutorial on using OGC web services offered by the Mission Atlantic GeoNode in your data analysis. The workshop makes use of Python notebooks and common GIS software (ArcGIS and QGIS); basic knowledge of Python and/or GIS software is recommended.
• Introduction to OGC services
• Search through metadata using the OGC Catalogue Service (CSW)
• Visualize data using the OGC Web Map Service (WMS)
• Subset and download data using the OGC Web Feature and Coverage Services (WFS/WCS)
• Use OGC services with QGIS and/or ArcGIS
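In Python, OGC services of this kind are commonly consumed with the OWSLib library; the sketch below shows a CSW metadata search and a WMS layer listing. The endpoint URLs are placeholders, not the Mission Atlantic GeoNode's actual addresses.

    from owslib.csw import CatalogueServiceWeb
    from owslib.wms import WebMapService

    # Placeholder endpoints; substitute the GeoNode's CSW and OWS URLs.
    csw = CatalogueServiceWeb("https://example-geonode.org/catalogue/csw")
    csw.getrecords2(maxrecords=10)
    for rec in csw.records.values():
        print(rec.title)  # titles of the first matching metadata records

    wms = WebMapService("https://example-geonode.org/geoserver/ows", version="1.3.0")
    print(list(wms.contents))  # available layer names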
This dataset combines the parcels and Computer-Assisted Mass Appraisal (CAMA) data for 2023 into a single dataset. It is designed to make it easier for stakeholders and the GIS community to use and access the information as a geospatial dataset. Included in this dataset are geometries for all 169 municipalities and attribution from the CAMA data for all but one municipality. Pursuant to Section 7-100l of the Connecticut General Statutes, each municipality is required to transmit a digital parcel file and an accompanying assessor's database file (known as a CAMA report) to its respective regional council of governments (COG) by May 1 annually.
These data were gathered from the CT municipalities by the COGs and then submitted to CT OPM. This dataset was created on 12/08/2023 from data collected in 2022-2023. Data was processed using Python scripts and ArcGIS Pro, ensuring standardization and integration of the data.
CAMA Notes:
The CAMA underwent several steps to standardize and consolidate the information. Python scripts were used to concatenate fields and create a unique identifier for each entry. The resulting dataset contains 1,353,595 entries and information on property assessments and other relevant attributes.
CAMA was provided by the towns.
Canaan parcels are viewable, but no additional information is available since no CAMA data was submitted.
Spatial Data Notes:
Data processing involved merging the parcels from different municipalities using ArcGIS Pro and Python. The resulting dataset contains 1,247,506 parcels.
No alteration has been made to the spatial geometry of the data.
The data fields containing CAMA information were sourced from the towns' CAMA data.
If the town did not provide a field for linking the parcels back to the CAMA, a field from the original data was selected as the link if it joined back to the CAMA with a match rate above 50% (see the sketch after these notes).
Linking fields were renamed to "Link".
All linking fields had a census town code added to the beginning of the value to create a unique identifier per town.
Only the fields for town name, Location, Editor, Edit Date, and the link fields associated with the towns' CAMA were used in the creation of this dataset; any other field provided in the original data was deleted or not used.
Field names for town (Muni, Municipality) were renamed to "Town Name".
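The notes above describe concatenating a census town code onto each link value to make identifiers unique statewide; a minimal pandas illustration follows. The column names and code values are hypothetical.

    import pandas as pd

    # Hypothetical CAMA extract; real field names differ by town.
    cama = pd.DataFrame({
        "town_code": ["08070", "08070", "52980"],      # census town code
        "parcel_link": ["001-002-003", "001-002-004", "A17-22"],
    })

    # Prefix the town code to the link value to build the unique "Link" field.
    cama["Link"] = cama["town_code"] + cama["parcel_link"]
    print(cama["Link"].tolist())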
The attributes included in the data:
Town Name
Owner
Co-Owner
Link
Editor
Edit Date
Collection year – year the parcels were submitted
Location
Mailing Address
Mailing City
Mailing State
Assessed Total
Assessed Land
Assessed Building
Pre-Year Assessed Total
Appraised Land
Appraised Building
Appraised Outbuilding
Condition
Model
Valuation
Zone
State Use
State Use Description
Living Area
Effective Area
Total rooms
Number of bedrooms
Number of Baths
Number of Half-Baths
Sale Price
Sale Date
Qualified
Occupancy
Prior Sale Price
Prior Sale Date
Prior Book and Page
Planning Region
*Please note that not all parcels have a link to a CAMA entry.
*If any discrepancies are discovered within the data, whether pertaining to geographical inaccuracies or attribute inaccuracies, please directly contact the respective municipalities to request any necessary amendments.
As of 2/15/2023, Occupancy, State Use, State Use Description, and Mailing State were added to the dataset.
Additional information about the specifics of data availability and compliance will be coming soon.
Calibration of hydraulic models requires careful selection of input parameters to provide the best possible modeling outcome. Currently, the selection of hydraulic resistance or 'n' values for these models is a subjective process, potentially exposing models to critical review. A process is needed to objectively estimate n-values so that everyone responsible for model calibration arrives at the same answer. Use of standard elevation products can support this effort. This dataset is presented as supplemental information for a journal article describing the process of using the root mean square average of elevation standard deviation from lidar-derived 1 m rasters to objectively estimate the boundary roughness of a watershed's bare-earth surface, as part of the total hydraulic roughness needed for modeling overland flows in forested drainages. A GIS tool has been developed to support investigators who need to objectively estimate boundary roughness for their hydraulic modeling applications.
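The article and GIS tool define the authoritative method; the sketch below only illustrates the stated statistic, a root mean square average of focal elevation standard deviation, on a 1 m DEM. The file path and the 3 x 3 window size are assumptions.

    import numpy as np
    import rasterio
    from scipy.ndimage import uniform_filter

    # Placeholder path to a lidar-derived 1 m bare-earth DEM.
    with rasterio.open("bare_earth_1m.tif") as src:
        z = src.read(1).astype(np.float64)

    # Focal standard deviation via the identity var = E[z^2] - E[z]^2,
    # computed over a moving window (window size is an assumption).
    size = 3
    mean = uniform_filter(z, size=size)
    mean_sq = uniform_filter(z * z, size=size)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

    # Root mean square average of the elevation standard deviation raster.
    rms_std = np.sqrt(np.mean(local_std ** 2))
    print(f"RMS of local elevation std: {rms_std:.3f} m")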
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Pursuant to Section 7-100l of the Connecticut General Statutes, each municipality is required to transmit a digital parcel file and an accompanying assessor's database file (known as a CAMA report) to its respective regional council of governments (COG) by May 1 annually. This dataset combines the parcels and Computer-Assisted Mass Appraisal (CAMA) data for 2025 into a single dataset. It is designed to make it easier for stakeholders and the GIS community to use and access the information as a geospatial dataset. Included in this dataset are geometries for all 169 municipalities and attribution from the CAMA data for all but one municipality. These data were gathered from the CT municipalities by the COGs and then submitted to CT OPM. This dataset was created in September 2025 from data collected in 2024-2025. Data were processed using Python scripts and ArcGIS Pro for standardization and integration. To learn more about parcels and CAMA in CT, visit our Parcels Page in the Geodata Portal.

Coordinate system: This dataset is provided in the NAD 83 Connecticut State Plane (2011) (EPSG 2234) projection, as it was for 2024. Prior versions were provided in WGS 1984 Web Mercator Auxiliary Sphere (EPSG 3857).

Ownership suppression: The updated dataset includes parcel data for all towns across the state, with some towns featuring fully suppressed ownership information. In these instances, the owner's name is replaced with the label "Current Owner", the co-owner's name is listed as "Current Co-Owner", and the mailing address appears as the property address itself. For towns with fully suppressed ownership data, please note that no "Suppression" field was included in the submission to confirm these details; this labeling approach was implemented as the solution.

New data fields: The new dataset introduces the "Property Zip" and "Mailing Zip" fields, which display the zip codes for the property and the owner's mailing address.

Service URL: In 2024, we implemented a stable URL to maintain public access to the most up-to-date data layer. Users are strongly encouraged to transition to the new service as soon as possible to ensure uninterrupted workflows. This URL will remain persistent, providing long-term stability for your applications and integrations. Once you've transitioned to the new service, no further URL changes will be necessary.

CAMA Notes:
The CAMA underwent several steps to standardize and consolidate the information. Python scripts were used to concatenate fields and create a unique identifier for each entry. The resulting dataset contains 1,354,720 entries and information on property assessments and other relevant attributes.
CAMA was provided by the towns.

Spatial Data Notes:
Data processing involved merging the parcels from different municipalities using ArcGIS Pro and Python. The resulting dataset contains 1,282,833 parcels.
No alteration has been made to the spatial geometry of the data.
The data fields containing CAMA information were sourced from the towns' CAMA data.
If the town did not provide a field for linking the parcels back to the CAMA, a field from the original data was selected as the link if it joined back to the CAMA with a match rate above 50%.
Linking fields were renamed to "Link".
All linking fields had a census town code added to the beginning of the value to create a unique identifier per town.
Only the fields for town name, Location, Editor, Edit Date, and the link fields associated with the towns' CAMA were used in the creation of this dataset; any other field provided in the original data was deleted or not used.
Field names for town (Muni, Municipality) were renamed to "Town Name".

Attributes included in the data:
Town Name
Owner
Co-Owner
Link
Editor
Edit Date
Collection year – year the parcels were submitted
Location
Property Zip
Mailing Address
Mailing City
Mailing State
Mailing Zip
Assessed Total
Assessed Land
Assessed Building
Pre-Year Assessed Total
Appraised Land
Appraised Building
Appraised Outbuilding
Condition
Model
Valuation
Zone
State Use
State Use Description
Land Acre
Living Area
Effective Area
Total rooms
Number of bedrooms
Number of Baths
Number of Half-Baths
Sale Price
Sale Date
Qualified
Occupancy
Prior Sale Price
Prior Sale Date
Prior Book and Page
Planning Region
FIPS Code

*Please note that not all parcels have a link to a CAMA entry.
*If any discrepancies are discovered within the data, whether pertaining to geographical inaccuracies or attribute inaccuracies, please directly contact the respective municipalities to request any necessary amendments.

Additional information about the specifics of data availability and compliance will be coming soon. If you need a WFS service for use in specific applications, please click here.
Contact: opm.giso@ct.gov
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will explore a variety of open-source technologies for working with geospatial data, performing spatial analysis, and undertaking general data science. The first component of the class focuses on the use of QGIS and associated technologies (GDAL, PROJ, GRASS, SAGA, and Orfeo Toolbox). The second component introduces Python and associated open-source libraries and modules (NumPy, Pandas, Matplotlib, Seaborn, GeoPandas, Rasterio, WhiteboxTools, and Scikit-Learn) used by geospatial scientists and data scientists. We also provide an introduction to Structured Query Language (SQL) for performing table and spatial queries. This course is designed for individuals who have a background in GIS, such as working in the ArcGIS environment, but no prior experience using open-source software and/or coding. You will be asked to work through a series of lecture modules and videos broken into several topic areas, as outlined below. Fourteen assignments and the required data have been provided as hands-on opportunities to work with data and the discussed technologies and methods. If you have any questions or suggestions, feel free to contact us. We hope to continue to update and improve this course.

This course was produced by West Virginia View (http://www.wvview.org/) with support from AmericaView (https://americaview.org/). This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G18AP00077. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey.

After completing this course you will be able to:
apply QGIS to visualize, query, and analyze vector and raster spatial data.
use available resources to further expand your knowledge of open-source technologies.
describe and use a variety of open data formats.
code in Python at an intermediate level.
read, summarize, visualize, and analyze data using open Python libraries.
create spatial predictive models using Python and associated libraries (see the sketch below).
use SQL to perform table and spatial queries at an intermediate level.
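As a taste of the predictive-modeling outcome, here is a minimal Scikit-Learn sketch. The predictor table is synthetic and the label rule is fabricated purely so the example runs; the course's own assignments use real data.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for predictors sampled at labeled points
    # (e.g., terrain derivatives extracted at field-verified samples).
    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "elevation": rng.normal(300, 50, 500),
        "slope": rng.uniform(0, 45, 500),
        "ndvi": rng.uniform(-1, 1, 500),
    })
    df["landcover"] = (df["ndvi"] > 0.3).astype(int)  # fabricated label for the demo

    X_train, X_test, y_train, y_test = train_test_split(
        df[["elevation", "slope", "ndvi"]], df["landcover"], test_size=0.3)

    model = RandomForestClassifier(n_estimators=200)
    model.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))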
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/
This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.
File Formats
Results are presented in three file formats:
tif: can be imported into GIS software (such as ArcGIS)
csv: human-readable text format, which can also be opened in Excel
png: image files that can be viewed in standard desktop software and give a spatial view of results
Input Data
All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.
Hourly Data from 2000 to 2019
Wind: Copernicus ERA5 dataset, 17 by 27.5 km grid, 10 m wind speed
Wave: Copernicus Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis dataset, 3 by 5 km grid
Accessibility
The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site. The Accessibility layer shows the percentage of time the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5) are below these limits for the month. Input data are 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking whether the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month. Environmental data are from the Copernicus data store (https://cds.climate.copernicus.eu/). Hourly wave data are from the 'Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis' dataset; hourly wind data are from the ERA5 dataset.
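A minimal xarray sketch of this accessibility calculation follows. The file names and variable names (u10, v10, swh) are assumptions, and the sketch assumes both datasets share a common grid and 'time' axis; in the real workflow, the nearest wind value is matched to each wave point (see Weather Window below).

    import xarray as xr

    # Placeholder files for the hourly inputs (ERA5 wind; wave reanalysis).
    wind = xr.open_dataset("era5_wind.nc")        # assumed variables: u10, v10
    wave = xr.open_dataset("wave_reanalysis.nc")  # assumed variable: swh (Hs)

    # 10 m wind speed magnitude from its components.
    speed = (wind["u10"] ** 2 + wind["v10"] ** 2) ** 0.5

    # Hourly boolean accessibility: Hs below 2 m and wind speed below 15 m/s
    # (the limits quoted in the Availability section).
    accessible = (wave["swh"] < 2.0) & (speed < 15.0)

    # Accessibility layer: percentage of in-limit hours per calendar month.
    accessibility_pct = accessible.groupby("time.month").mean("time") * 100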
Availability
A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather windows and other logistical factors (for example, the availability of repair vessels and personnel). A 2013 study by O'Connor et al. determined the relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between availability and accessibility at an Hs of 2 m and a wind speed of 15.0 m/s. This graph is used to calculate the availability layer from the accessibility layer. The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage of time the environmental conditions, i.e. the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits. Input data are 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking whether the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship between the two. A mature technology reliability was assumed.
Weather Window
The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the given duration for the month. The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset (0.25° × 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.
The first step in calculating the weather window for a particular set of inputs (Hs, wind speed, and duration) is to calculate the accessibility at each timestep. The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep? Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather windows. Here, all possible operating periods with a duration matching the required weather-window value are assessed to see whether the weather conditions remain suitable for the entire period. The percentage availability of the weather window is calculated as the percentage of x-duration windows with suitable weather conditions for their entire duration. The weather window availability can be considered the probability of having the required weather window available at any given point in the month.
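A minimal pandas sketch of this window search follows; the random accessibility series and the 12-hour duration are placeholders for the real per-month boolean series and duration inputs.

    import numpy as np
    import pandas as pd

    # Hypothetical hourly accessibility flags for one month (True = within limits).
    idx = pd.date_range("2000-01-01", periods=24 * 31, freq="h")
    accessible = pd.Series(np.random.rand(len(idx)) < 0.7, index=idx)

    # An x-hour weather window starts at hour t only if all x consecutive hours
    # from t are accessible; a rolling minimum over the window flags exactly that.
    duration = 12  # example duration; the real layers are computed per duration
    ok = accessible.astype(float).rolling(duration).min().shift(-(duration - 1))

    # Weather window availability: percentage of possible start times with a
    # suitable window (NaNs at the series tail are ignored by mean()).
    print(f"{100 * ok.mean():.1f}% of {duration}-hour windows are workable")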
Extreme Wind and Wave
The Extreme wave layers show the highest significant wave height expected to occur during the given return period. The Extreme wind layers show the highest wind speed expected to occur during the given return period.
To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model that fits this reduced portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values for the selected return period. All calculations use the Python pyextremes library. Two methods are used: Block Maxima and peaks over threshold. The Block Maxima method selects the annual maxima and fits a GEVD probability distribution. The peaks_over_threshold method has two variable calculation parameters. The first is the percentile above which values must lie to be selected as extreme (0.9 or 0.998). The second is the time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected extremes and used to calculate the extreme value for the selected return period.
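A minimal pyextremes sketch of the Block Maxima stage follows; the input CSV is a placeholder for an hourly Hs series with a datetime index, and you should verify the call details against your pyextremes version.

    import pandas as pd
    from pyextremes import EVA

    # Placeholder CSV of an hourly Hs time series indexed by datetime.
    series = pd.read_csv(
        "hs_hourly.csv", index_col=0, parse_dates=True
    ).squeeze("columns")

    model = EVA(series)

    # Block Maxima: select annual maxima and fit a GEVD, as described above.
    model.get_extremes(method="BM", block_size="365.2425D")
    model.fit_model()

    # Extreme value and confidence interval for a 100-year return period.
    value, ci_lower, ci_upper = model.get_return_value(return_period=100, alpha=0.95)
    print(f"100-year Hs: {value:.2f} m (95% CI {ci_lower:.2f} to {ci_upper:.2f})")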
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Do you spend a lot of time repeating workflows, such as copying data, editing files, and setting up map documents? Did you know that you can use Python to automate data reproduction, data management, map document display, and many of your other daily tasks in ArcGIS? This course provides the building blocks needed to use Python. You will create and run scripts using these building blocks and can apply them directly inside of ArcGIS and to your own workflows.

After completing this course, you will be able to:
Determine where to write and run a Python script.
Differentiate Python language elements and determine where to apply them.
Follow a script workflow.
Develop a Python script to run statements and functions.
Solve common syntax errors.
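A minimal first script of the kind this course builds toward is sketched below; the workspace path, layer names, and buffer distance are placeholders.

    # A first geoprocessing script: buffer a feature class (placeholder paths).
    import arcpy

    arcpy.env.workspace = r"C:\data\demo.gdb"

    def buffer_layer(in_fc, out_fc, distance="100 Meters"):
        """Run the Buffer tool and report the messages it returns."""
        arcpy.analysis.Buffer(in_fc, out_fc, distance)
        print(arcpy.GetMessages())

    buffer_layer("roads", "roads_buffer")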
The classification of point cloud datasets to identify distribution wires is useful for identifying vegetation encroachment around power lines. Such workflows are important for preventing fires and power outages and are typically manual, recurring, and labor-intensive. This model is designed to extract distribution wires at the street level. Its predictions for high-tension transmission wires are less consistent with changes in geography as compared to street-level distribution wires; in the case of high-tension transmission wires, a lower recall value is observed as compared to the value observed for low-lying street wires and poles.

Using the model: Follow the guide to use the model. The model can be used with ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

Input: The model accepts unclassified point clouds with point geometry (X, Y, and Z values). Note: The model is not dependent on any additional attributes such as Intensity, Number of Returns, etc. This model is trained to work on unclassified point clouds that are in a projected coordinate system, in which the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points; therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest from background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block, and extra attributes, should match those of the data originally used for training this model (see the Training data section below).

Output: The model classifies the point cloud into the following classes, with meanings as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):
0 Background
14 Distribution Wires
15 Distribution Towers/Poles

Applicable geographies: The model is expected to work within any geography. It has produced favorable results in many regions; however, results can vary for datasets that are statistically dissimilar to the training data.

Model architecture: This model uses the RandLANet model architecture implemented in the ArcGIS API for Python.

Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class: Precision / Recall / F1-score
Background (0): 0.999679 / 0.999876 / 0.999778
Distribution Wires (14): 0.955085 / 0.936825 / 0.945867
Distribution Poles (15): 0.707983 / 0.553888 / 0.621527

Training data: This model is trained on a manually classified training dataset provided to Esri by the AAM group. The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -240.34 m to 731.17 m
Number of returns: 1 to 5
Intensity: 1 to 4095
Point spacing: 0.2 ± 0.1
Scan angle: -42 to +35
Maximum points per block: 20000
Extra attributes: None
Class structure: [0, 14, 15]

Sample results: Here are a few results from the model.
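The card notes that fine-tuning is possible when your data differ from the training data; below is a hedged sketch of that workflow with arcgis.learn. The paths, batch size, and hyperparameters are placeholders, and the exact prepare_data/from_model behaviour should be verified against your arcgis version.

    # Hedged fine-tuning sketch (verify API details for your arcgis release).
    from arcgis.learn import prepare_data, RandLANet

    # Training chunks exported with Pro's "Prepare Point Cloud Training Data" tool;
    # class structure and block settings must match the original model (see above).
    data = prepare_data(r"C:\data\wires_training", dataset_type="PointCloud",
                        batch_size=2)

    # Load the downloaded pre-trained package and continue training on local data.
    model = RandLANet.from_model(r"C:\models\DistributionWires.dlpk", data)
    model.fit(epochs=10, lr=0.0001)
    model.save("wires_finetuned")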
This submission includes the final project report of the Snake River Plain Play Fairway Analysis project, as well as a separate appendix for the final report. The final report outlines the application of Play Fairway Analysis (PFA) to geothermal exploration, specifically within the Snake River Plain volcanic province. The goals of the report are to use PFA to lower the risk and cost of geothermal exploration and to stimulate development of geothermal power resources in Idaho. Further use of this report could include the application of PFA for geothermal exploration throughout the geothermal industry. The report utilizes ArcGIS and Python for data analysis, which were used to develop a systematic workflow to automate data analysis. The appendix for the report includes ArcGIS maps and data compilation information regarding the report.