Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ArcGIS has many analysis and geoprocessing tools that can help you solve real-world problems with your data. In some cases, you are able to run individual tools to complete an analysis. But sometimes you may require a more comprehensive way to create, share, and document your analysis workflow. In these situations, you can use a built-in application called ModelBuilder to create a workflow that you can reuse, modify, save, and share with others.
In this course, you will learn the basics of working with ModelBuilder and creating models. Models contain many different elements, many of which you will learn about. You will also learn how to work with models that others create and share with you. Sharing models is one of the major advantages of working with ModelBuilder and models in general. You will learn how to prepare a model for sharing by setting various model parameters.
After completing this course, you will be able to:
- Identify model elements and states.
- Describe a prebuilt model's processes and outputs.
- Create and document models for site selection and network analysis.
- Define model parameters and prepare a model for sharing.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will explore a variety of open-source technologies for working with geospatial data, performing spatial analysis, and undertaking general data science. The first component of the class focuses on the use of QGIS and associated technologies (GDAL, PROJ, GRASS, SAGA, and Orfeo Toolbox). The second component of the class introduces Python and associated open-source libraries and modules (NumPy, Pandas, Matplotlib, Seaborn, GeoPandas, Rasterio, WhiteboxTools, and Scikit-Learn) used by geospatial scientists and data scientists. We also provide an introduction to Structured Query Language (SQL) for performing table and spatial queries. This course is designed for individuals who have a background in GIS, such as working in the ArcGIS environment, but no prior experience using open-source software or coding. You will be asked to work through a series of lecture modules and videos broken into several topic areas, as outlined below. Fourteen assignments and the required data have been provided as hands-on opportunities to work with data and the discussed technologies and methods. If you have any questions or suggestions, feel free to contact us. We hope to continue to update and improve this course. This course was produced by West Virginia View (http://www.wvview.org/) with support from AmericaView (https://americaview.org/). This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G18AP00077. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey.
After completing this course you will be able to:
- Apply QGIS to visualize, query, and analyze vector and raster spatial data.
- Use available resources to further expand your knowledge of open-source technologies.
- Describe and use a variety of open data formats.
- Code in Python at an intermediate level.
- Read, summarize, visualize, and analyze data using open Python libraries.
- Create spatial predictive models using Python and associated libraries.
- Use SQL to perform table and spatial queries at an intermediate level.
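A minimal sketch of the kind of open-source workflow the course outcomes describe: reading vector data with GeoPandas, filtering by attribute, and running a spatial query. The file names and the area column are hypothetical placeholders, not course data.

```python
import geopandas as gpd

parcels = gpd.read_file("parcels.shp")          # hypothetical input
floodzones = gpd.read_file("floodzones.shp")    # hypothetical input

# Attribute (table) query: parcels larger than 1 hectare
big_parcels = parcels[parcels["area_ha"] > 1]

# Spatial query: parcels that intersect a flood zone
at_risk = gpd.sjoin(big_parcels, floodzones, how="inner", predicate="intersects")

print(len(at_risk), "parcels intersect a flood zone")
```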
U.S. Government Works https://www.usa.gov/government-works
License information was derived automatically
GIS project files and imagery data required to complete the Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro tutorial. These data cover the area in and around Jezero crater, Mars.
Needing to answer the question of "where" sat at the forefront of everyone's mind, and using a geographic information system (GIS) for real-time surveillance transformed possibly overwhelming data into location intelligence that provided agencies and civic leaders with valuable insights. This book highlights best practices, key GIS capabilities, and lessons learned during the COVID-19 response that can help communities prepare for the next crisis.
GIS has empowered:
- Organizations to use human mobility data to estimate the adherence to social distancing guidelines
- Communities to monitor their health care systems' capacity through spatially enabled surge tools
- Governments to use location-allocation methods to site new resources (i.e., testing sites and augmented care sites) in ways that account for at-risk and vulnerable populations
- Communities to use maps and spatial analysis to review case trends at local levels to support reopening of economies
- Organizations to think spatially as they consider "back-to-the-workplace" plans that account for physical distancing and employee safety needs
Learning from COVID-19 also includes a "next steps" section that provides ideas, strategies, tools, and actions to help jump-start your own use of GIS, either as a citizen scientist or a health professional. A collection of online resources, including additional stories, videos, new ideas and concepts, and downloadable tools and content, complements this book. Now is the time to use science and data to make informed decisions for our future, and this book shows us how we can do it.
Dr. Este Geraghty is the chief medical officer and health solutions director at Esri, where she leads business development for the Health and Human Services sector. Matt Artz is a content strategist for Esri Press. He brings a wide breadth of experience in environmental science, technology, and marketing.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This seminar is an applied study of deep learning methods for extracting information from geospatial data, such as aerial imagery, multispectral imagery, digital terrain data, and other digital cartographic representations. We first provide an introduction and conceptualization of artificial neural networks (ANNs). Next, we explore appropriate loss and assessment metrics for different use cases, followed by the tensor data model, which is central to applying deep learning methods. Convolutional neural networks (CNNs) are then conceptualized with scene classification use cases. Lastly, we explore semantic segmentation, object detection, and instance segmentation. The primary focus of this course is semantic segmentation for pixel-level classification.
The associated GitHub repo provides a series of applied examples. We hope to continue to add examples as methods and technologies further develop. These examples make use of a variety of datasets (e.g., SAT-6, topoDL, Inria, LandCover.ai, vfillDL, and wvlcDL). Please see the repo for links to the data and associated papers. All examples have associated videos that walk through the process, which are also linked to the repo. A variety of deep learning architectures are explored, including UNet, UNet++, DeepLabv3+, and Mask R-CNN. Currently, two examples use ArcGIS Pro and require no coding. The remaining five examples require coding and make use of PyTorch, Python, and R within the RStudio IDE. It is assumed that you have prior knowledge of coding in the Python and R environments. If you do not have experience coding, please take a look at our Open-Source GIScience and Open-Source Spatial Analytics (R) courses, which explore coding in Python and R, respectively.
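As an illustration of the kind of code used in the coding examples (not the repo's exact scripts), here is a minimal PyTorch sketch that builds a UNet with the Segmentation Models (segmentation_models_pytorch) package and runs one training step on random tensors; the encoder choice, band count, and chip size are assumptions.

```python
import torch
import segmentation_models_pytorch as smp

# Two-class semantic segmentation on 3-band imagery (assumed settings)
model = smp.Unet(
    encoder_name="resnet34",   # backbone choice is an assumption
    in_channels=3,
    classes=2,
)

loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a random batch of image chips
x = torch.rand(4, 3, 256, 256)             # batch of chips
y = torch.randint(0, 2, (4, 256, 256))     # per-pixel labels
optimizer.zero_grad()
logits = model(x)                          # shape: (4, 2, 256, 256)
loss = loss_fn(logits, y)
loss.backward()
optimizer.step()
```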
After completing this seminar you will be able to:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Video-based training seminar.
The Minnesota DNR Toolbox and Hydro Tools provide a number of convenience geoprocessing tools used regularly by MNDNR staff. Many of these may be useful to the wider public. However, some tools may rely on data that is not available outside of the DNR. All tools require ArcGIS 10 or later.
If you create a GDRS using GDRS Manager and include this toolbox resource and MNDNR Quick Layers, the DNR toolboxes will automatically be added to the ArcToolbox window whenever the Quick Layers GDRS Location is set to the GDRS location that contains the toolboxes.
Toolsets included in MNDNR Tools V10:
- Analysis Tools
- Conversion Tools
- Division Tools
- General Tools
- Hydrology Tools
- LiDAR and DEM Tools
- Raster Tools
- Sampling Tools
These toolboxes are provided free of charge and are not warrantied for any specific use. We do not provide support or assistance in downloading or using these tools. We do, however, strive to produce high-quality tools and appreciate comments you have about them.
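If you prefer to work from Python rather than the ArcToolbox window, a downloaded toolbox can also be loaded with arcpy. A minimal sketch, with a hypothetical local path and toolbox file name:

```python
import arcpy

# Load the downloaded DNR toolbox (path and file name are hypothetical)
arcpy.ImportToolbox(r"C:\GDRS\MNDNR_Tools_V10.tbx")

# List the geoprocessing tools that are now available
for tool in arcpy.ListTools():
    print(tool)
```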
Prior experience with GIS varies, but a number of PGCE students and in-service teachers reported negative prior experiences with geospatial technology. Common complaints include courses focused on data that students found irrelevant, with learning exercises written as list-like instructions. The complexity of desktop GIS software is also frequently mentioned as off-putting.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
You have been assigned a new project, which you have researched, and you have identified the data that you need. The next step is to gather, organize, and potentially create the data that you need for your project analysis. In this course, you will learn how to gather and organize data using ArcGIS Pro. You will also create a file geodatabase where you will store the data that you import and create.
After completing this course, you will be able to perform the following tasks:
- Create a geodatabase in ArcGIS Pro.
- Create feature classes in ArcGIS Pro by exporting and importing data.
- Create a new, empty feature class in ArcGIS Pro.
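A short arcpy sketch of the three tasks listed above (the course itself works through the ArcGIS Pro interface; the paths and names here are hypothetical placeholders):

```python
import arcpy
import os

folder = r"C:\Projects\Demo"                      # hypothetical project folder
gdb = os.path.join(folder, "project_data.gdb")

# Create a file geodatabase
arcpy.management.CreateFileGDB(folder, "project_data.gdb")

# Create a feature class by importing an existing shapefile
arcpy.conversion.FeatureClassToGeodatabase(r"C:\Projects\Demo\parks.shp", gdb)

# Create a new, empty point feature class
arcpy.management.CreateFeatureclass(gdb, "survey_points", "POINT",
                                    spatial_reference=arcpy.SpatialReference(4326))
```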
Land cover describes the surface of the earth. Land-cover maps are useful in urban planning, resource management, change detection, agriculture, and a variety of other applications in which information related to the earth's surface is required. Land-cover classification is a complex exercise and is difficult to capture using traditional means. Deep learning models are highly capable of learning these complex semantics and can produce superior results. There are a few public datasets for land cover, but the spatial and temporal coverage of these public datasets may not always meet the user's requirements. It is also difficult to create datasets for a specific time, as it requires expertise and time. Use this deep learning model to automate the manual process and reduce the required time and effort significantly.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Fine-tuning the model: This model can be fine-tuned using the Train Deep Learning Model tool. Follow the guide to fine-tune this model.
Input: 8-bit, 3-band very high-resolution (10 cm) imagery.
Output: Classified raster with the same 8 classes as the LA County land-cover dataset.
Applicable geographies: The model is expected to work well in the United States and will produce the best results in the urban areas of California.
Model architecture: This model uses the UNet model architecture implemented in ArcGIS API for Python.
Accuracy metrics: This model has an overall accuracy of 84.8%. The table below summarizes the precision, recall, and F1 score of the model on the validation dataset:
Class             Precision   Recall     F1 Score
Tree Canopy       0.804389    0.846152   0.824742
Grass/Shrubs      0.719993    0.627278   0.670445
Bare Soil         0.8927      0.909958   0.901246
Water             0.980885    0.987499   0.984181
Buildings         0.922202    0.945032   0.933478
Roads/Railroads   0.869637    0.862921   0.866266
Other Paved       0.811465    0.811961   0.811713
Tall Shrubs       0.707674    0.638274   0.671185
Training data: This model has been trained on the very high-resolution land-cover dataset produced by LA County.
Limitations: Since the model is trained on imagery of urban areas of LA County, it will work best in urban areas of California or similar geography. The model is trained on a limited set of classes and may misclassify other types of LULC classes.
Sample results: Here are a few results from the model.
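For the fine-tuning step, the model card points to the Train Deep Learning Model geoprocessing tool; a rough script-level equivalent with the ArcGIS API for Python (arcgis.learn) might look like the sketch below, where the chip folder, package path, and hyperparameters are all hypothetical.

```python
from arcgis.learn import prepare_data, UnetClassifier

# Exported training chips (e.g., from Export Training Data For Deep Learning)
data = prepare_data(r"C:\data\landcover_chips", batch_size=8)

# Load the pretrained package and continue training on the new chips
model = UnetClassifier.from_model(r"C:\models\LandCoverClassification.dlpk", data)
model.fit(10, lr=0.0001)           # a few fine-tuning epochs
model.save("landcover_finetuned")  # writes the fine-tuned model to disk
```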
The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. The current format is an ArcGIS file geodatabase, but older formats may exist as shapefiles. We converted the photointerpreted data into a format usable in a geographic information system (GIS) by employing three fundamental processes: (1) orthorectify, (2) digitize, and (3) develop the geodatabase. All digital map automation was projected in Universal Transverse Mercator (UTM), Zone 16, using the North American Datum of 1983 (NAD83).
Orthorectify: We orthorectified the interpreted overlays by using OrthoMapper, a softcopy photogrammetric software for GIS. One function of OrthoMapper is to create orthorectified imagery from scanned and unrectified imagery (Image Processing Software, Inc., 2002). The software features a method of visual orientation involving a point-and-click operation that uses existing orthorectified horizontal and vertical base maps. Of primary importance to us, OrthoMapper also has the capability to orthorectify the photointerpreted overlays of each photograph based on the reference information provided.
Digitize: To produce a polygon vector layer for use in ArcGIS (Environmental Systems Research Institute [ESRI], Redlands, California), we converted each raster-based image mosaic of orthorectified overlays containing the photointerpreted data into a grid format by using ArcGIS. In ArcGIS, we used the ArcScan extension to trace the raster data and produce ESRI shapefiles. We digitally assigned map-attribute codes (both map-class codes and physiognomic modifier codes) to the polygons and checked the digital data against the photointerpreted overlays for line and attribute consistency. Ultimately, we merged the individual layers into a seamless layer.
Geodatabase: At this stage, the map layer has only map-attribute codes assigned to each polygon. To assign meaningful information to each polygon (e.g., map-class names, physiognomic definitions, links to NVCS types), we produced a feature-class table, along with other supportive tables, and subsequently related them together via an ArcGIS geodatabase. This geodatabase also links the map to other feature-class layers produced from this project, including vegetation sample plots, accuracy assessment (AA) sites, aerial photo locations, and project boundary extent. A geodatabase provides access to a variety of interlocking data sets, is expandable, and equips resource managers and researchers with a powerful GIS tool.
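The geodatabase step described above, relating a map-class lookup table to the digitized polygons, can be illustrated with a short arcpy sketch; the geodatabase, layer, and field names below are hypothetical.

```python
import arcpy

gdb = r"C:\veg_map\veg_inventory.gdb"     # hypothetical geodatabase
polygons = gdb + r"\veg_polygons"         # digitized vegetation polygons
lookup = gdb + r"\map_class_lookup"       # table of map-class names, NVCS links, etc.

# Join lookup attributes onto the polygons using the shared map-class code
arcpy.management.JoinField(polygons, "MAP_CODE", lookup, "MAP_CODE")
```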
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
scripts.zip
arcgisTools.atbx:
- terrainDerivatives: make terrain derivatives from a digital terrain model (Band 1 = TPI (50 m radius circle), Band 2 = square root of slope, Band 3 = TPI (annulus), Band 4 = hillshade, Band 5 = multidirectional hillshades, Band 6 = slopeshade).
- rasterizeFeatures: convert vector polygons to raster masks (1 = feature, 0 = background).
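For readers without ArcGIS Pro, the gist of two of these derivatives can be reproduced in Python. The sketch below computes a simple TPI (elevation minus a focal mean, here over a square window rather than the tool's circular and annulus neighborhoods) and the square root of slope from a DTM held in a NumPy array; the window size and cell size are assumptions.

```python
import numpy as np
from scipy import ndimage

def tpi(dtm, size=11):
    """TPI = elevation minus the mean elevation in a size x size window."""
    focal_mean = ndimage.uniform_filter(dtm, size=size)
    return dtm - focal_mean

def sqrt_slope(dtm, cell_size=1.0):
    """Square root of slope (rise/run) from finite differences."""
    dz_dy, dz_dx = np.gradient(dtm, cell_size)
    slope = np.sqrt(dz_dx**2 + dz_dy**2)
    return np.sqrt(slope)
```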
- makeChips.R: R function to break terrain derivatives and masks into image chips of a defined size.
- makeTerrainDerivatives.R: R function to generate 6-band terrain derivatives from digital terrain data (same as the ArcGIS Pro tool).
- merge_logs.R: R script to merge training logs into a single file.
- predictToExtents.ipynb: Python notebook to use a trained model to predict to new data.
- trainExperiments.ipynb: Python notebook used to train semantic segmentation models using PyTorch and the Segmentation Models package.
- assessmentExperiments.ipynb: Python code to generate assessment metrics using PyTorch and the torchmetrics library.
- graphs_results.R: R code to make graphs with ggplot2 to summarize results.
- makeChipsList.R: R code to generate lists of chips in a directory.
- makeMasks.R: R function to make raster masks from vector data (same as the rasterizeFeatures ArcGIS Pro tool).
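As a rough Python analogue of the chip-making idea in makeChips.R (the original is an R function; the array inputs and chip size here are assumptions for illustration):

```python
import numpy as np

def make_chips(image, mask, chip_size=256):
    """Yield (image_chip, mask_chip) tiles; image is (bands, rows, cols)."""
    _, rows, cols = image.shape
    for r in range(0, rows - chip_size + 1, chip_size):
        for c in range(0, cols - chip_size + 1, chip_size):
            yield (image[:, r:r + chip_size, c:c + chip_size],
                   mask[r:r + chip_size, c:c + chip_size])
```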
terraceDL.zip
- dems: LiDAR DTM data partitioned into training, testing, and validation datasets based on HUC8 watershed boundaries. Original DTM data were provided by the Iowa BMP mapping project: https://www.gis.iastate.edu/BMPs.
- extents: extents of the training, testing, and validation areas as defined by HUC8 watershed boundaries.
- vectors: vector features representing agricultural terraces, partitioned into separate training, testing, and validation datasets. Original digitized features were provided by the Iowa BMP mapping project: https://www.gis.iastate.edu/BMPs.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. The model is optimized to work with New Zealand aerial LiDAR data. Classifying point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and planning climate change response. Buildings can have complex, irregular geometric structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training, testing, and validation datasets were taken within New Zealand, so the model reliably recognizes common NZ building architecture.
Licensing requirements: ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro.
Using the model: The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
Input: The model is trained with classified LiDAR that follows the LINZ base specification; the input data should be similar to this specification. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is geometrically similar but distinct (for example, tram wires or railway wires versus electricity wires). When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block, and extra attributes, should match those of the data originally used for training this model (see the Training data section below).
Output: The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS): 0 Background, 6 Building.
Applicable geographies: The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions; however, results can vary for datasets that are statistically dissimilar to the training data.
Training dataset: Auckland, Christchurch, Kapiti, Wellington
Testing dataset: Auckland, Wellington
Validation/Evaluation dataset: Hutt City
Model architecture: This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.
Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class              Precision   Recall     F1-score
Never Classified   0.984921    0.975853   0.979762
Building           0.951285    0.967563   0.9584
Training data: This model is trained on a classified dataset originally provided by OpenTopography, with less than 1% manual labelling and correction. Train/test split: 75% training, 25% testing; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:
X, Y, and Z linear unit      Meter
Z range                      -137.74 m to 410.50 m
Number of Returns            1 to 5
Intensity                    16 to 65520
Point spacing                0.2 ± 0.1
Scan angle                   -17 to +17
Maximum points per block     8192
Block Size                   50 Meters
Class structure              [0, 6]
Sample results: Model used to classify the Wellington city dataset with 23 pts/m density. The model's performance is directly proportional to the dataset point density and to the use of noise-excluded point clouds. To learn how to use this model, see this story.
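Before running the package, it can help to confirm that your LiDAR roughly matches the training characteristics listed above. A minimal sketch using the open-source laspy library (not part of the package), with a hypothetical tile path:

```python
import laspy
import numpy as np

las = laspy.read(r"C:\lidar\tile_001.las")   # hypothetical LAS tile

z = np.asarray(las.z)
print("Z range:          ", z.min(), "to", z.max())
print("Intensity range:  ", int(np.min(las.intensity)), "to", int(np.max(las.intensity)))
print("Number of returns:", int(np.min(las.number_of_returns)), "to", int(np.max(las.number_of_returns)))
```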
Coconuts and coconut products are an important commodity in the Tongan economy. Plantations, such as the one in the town of Kolovai, have thousands of trees. Inventorying each of these trees by hand would require lots of time and manpower. Alternatively, tree health and location can be surveyed using remote sensing and deep learning. In this lesson, you'll use the Deep Learning tools in ArcGIS Pro to create training samples and run a deep learning model to identify the trees on the plantation. Then, you'll estimate tree health using a Visible Atmospherically Resistant Index (VARI) calculation to determine which trees may need inspection or maintenance.
To detect palm trees and calculate vegetation health, you only need ArcGIS Pro with the Image Analyst extension. To publish the palm tree health data as a feature service, you need ArcGIS Online and the Spatial Analyst extension.
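The VARI calculation itself is simple; a sketch with NumPy band arrays is shown below (in the lesson it is computed with ArcGIS Pro raster tools, and the band handling here is an assumption for illustration).

```python
import numpy as np

def vari(red, green, blue):
    """Visible Atmospherically Resistant Index: (G - R) / (G + R - B)."""
    red, green, blue = (np.asarray(b, dtype="float64") for b in (red, green, blue))
    return (green - red) / (green + red - blue)

# Example on a tiny 2 x 2 patch of pixel values
r = [[0.30, 0.25], [0.40, 0.35]]
g = [[0.45, 0.40], [0.42, 0.50]]
b = [[0.20, 0.22], [0.30, 0.28]]
print(vari(r, g, b))   # higher values suggest greener, likely healthier canopy
```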
In this lesson you will build skills in these areas:
Learn ArcGIS is a hands-on, problem-based learning website using real-world scenarios. Our mission is to encourage critical thinking, and to develop resources that support STEM education.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into tree and background classes. The model is optimized to work with New Zealand aerial LiDAR data. Classifying point cloud datasets to identify trees is useful in applications such as high-quality 3D basemap creation, urban planning, forestry workflows, and planning climate change response. Trees can have complex, irregular geometric structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract trees in both urban and rural areas in New Zealand. The training, testing, and validation datasets were taken within New Zealand, so the model reliably recognizes patterns common to NZ landscapes.
Licensing requirements: ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro.
Using the model: The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
Input: The model is trained with classified LiDAR that follows the LINZ base specification. The input data should be similar to this specification. Note: The model is dependent on additional attributes such as Intensity, Number of Returns, etc., similar to the LINZ base specification. This model is trained to work on classified and unclassified point clouds that are in a projected coordinate system in which the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is geometrically similar but distinct (for example, tram wires or railway wires versus electricity wires). When fine-tuning this model, the target training data characteristics, such as class structure, maximum number of points per block, and extra attributes, should match those of the data originally used for training this model (see the Training data section below).
Output: The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS): 0 Background, 5 Trees / High-vegetation.
Applicable geographies: The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions.
However, results can vary for datasets that are statistically dissimilar to the training data.
Training dataset: Wellington City
Testing dataset: Tawa City
Validation/Evaluation dataset: Christchurch City
Model architecture: This model uses the PointCNN model architecture implemented in ArcGIS API for Python.
Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class              Precision   Recall     F1-score
Never Classified   0.991200    0.975404   0.983239
High Vegetation    0.933569    0.975559   0.954102
Training data: This model is trained on a classified dataset originally provided by OpenTopography, with less than 1% manual labelling and correction. Train/test split: 80% training, 20% testing; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:
X, Y, and Z linear unit      Meter
Z range                      -121.69 m to 26.84 m
Number of Returns            1 to 5
Intensity                    16 to 65520
Point spacing                0.2 ± 0.1
Scan angle                   -15 to +15
Maximum points per block     8192
Block Size                   20 Meters
Class structure              [0, 5]
Sample results: Model used to classify the Christchurch city dataset with 5 pts/m density. The model's performance is directly proportional to the dataset point density and to the use of noise-excluded point clouds. To learn how to use this model, see this story.
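Since the model card names the PointCNN architecture in the ArcGIS API for Python, fine-tuning the package from Python might look roughly like the sketch below; the exported training data folder, package path, batch size, and epoch count are hypothetical.

```python
from arcgis.learn import prepare_data, PointCNN

# Point cloud training data exported with the Prepare Point Cloud Training Data tool
data = prepare_data(r"C:\data\nz_trees_training", dataset_type="PointCloud", batch_size=2)

# Load the pretrained package and continue training on local data
model = PointCNN.from_model(r"C:\models\NZTreeClassification.dlpk", data)
model.fit(10)                      # a few fine-tuning epochs
model.save("nz_trees_finetuned")   # writes the fine-tuned model to disk
```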
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Class Other: Indicates the transport or storage of freight belonging to multiple hazard classes.
Summary: This feature class documents the fire history on CMR from 1964 to present. It is one of two feature classes, a polygon and a point feature class, and these two files should be kept together. The data have a variety of different origins, which leads to differing data quality. The polygon feature class contains perimeters that were mapped using a GPS, hand digitized, on-screen digitized, or buffered as circles to the estimated acreage. The point feature class contains fires with only a location (latitude/longitude, UTM coordinate, or TRS) and no estimated acreage, which were mapped using a point location. GPS started being used in 1992 when the technology became available. Records from FMIS (Fire Management Information System) were reviewed and compared to refuge records. Polygon data in FMIS only occur from 2012 to current, and many acreage estimates did not match. This dataset includes ALL fires no matter the size.
Data origins include: 1) GPS polygon data (best), 2) GPS lat/long or UTM, 3) TRS QS, 4) TRS point, 6) hand digitized from topo map, 7) circle buffer, 8) screen digitized, 9) FMIS lat/long.
Compiling the fire history of CMR started in 2007 and has been a 10-year process. FMIS doesn't include fire polygons that are less than 10 acres. This dataset has been sent to FMIS so that FMIS records can be updated with correct information. The spreadsheet contains 10-15 records without spatial information that weren't included in either feature class. Fire information from 1964-1980 came from records Larry Eichhorn, BLM, provided to CMR staff. Mike Granger, CMR Fire Management Officer, tracked fires on an 11x17 legal pad, and all this information was brought into Excel and ArcGIS. Frequently, other information about the fires was missing, which made it difficult to back-track and fill in missing data. Time was spent verifying locations that were occasionally recorded incorrectly (DMS vs. DD) and converting TRS into lat/long and/or UTM. CMR is divided into two different UTM zones, zone 12 and zone 13, which occasionally caused errors in projecting. Naming conventions caused confusion: fires are frequently named by location, and there are several "Soda Creek", "Rock Creek", etc. fires. Fire numbers were occasionally missing or incorrect. Fires on BLM land were included if they were "assists". Fires on satellite refuges and the district were also included. Acreages from GIS were compared to FMIS acres. Please see documentation in ServCat (URL) to see how these were handled.
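The DMS-versus-decimal-degrees confusion mentioned above comes down to the standard conversion sketched below (the example coordinates are hypothetical, not from the dataset):

```python
def dms_to_dd(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if hemisphere in ("S", "W") else dd

# Example: 47 deg 37' 30" N, 108 deg 45' 00" W
lat = dms_to_dd(47, 37, 30, "N")    # 47.625
lon = dms_to_dd(108, 45, 0, "W")    # -108.75
```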
In this course, you will learn about some common types of data used for GIS mapping and analysis, and practice adding data to a file geodatabase to support a planned project.
Goals:
- Create a file geodatabase.
- Add data to a file geodatabase.
- Create an empty geodatabase feature class.
OVERVIEW: This site is dedicated to raising the level of spatial and data literacy used in public policy. We invite you to explore curated content, training, best practices, and datasets that can provide a baseline for your research, analysis, and policy recommendations. Learn about emerging policy questions and how GIS can be used to help come up with solutions to those questions.
EXPLORE: Go to your area of interest and explore hundreds of maps about various topics such as social equity, economic opportunity, public safety, and more. Browse and view the maps, or collect them and share via a simple URL. Sharing a collection of maps is an easy way to use maps as a tool for understanding. Help policymakers and stakeholders use data as a driving factor for policy decisions in your area.
ISSUES: Browse different categories to find data layers, maps, and tools. Use this set of content as a driving force for your GIS workflows related to policy.
RESOURCES: To maximize your experience with the Policy Maps, we've assembled education, training, best practices, and industry perspectives that help raise your data literacy, provide you with models, and connect you with the work of your peers.