Deprecation notice: This tool is deprecated because this functionality is now available with out-of-the-box tools in ArcGIS Pro. The tool author will no longer be making further enhancements or fixing major bugs.

Use Add GTFS to a Network Dataset to incorporate transit data into a network dataset so you can perform schedule-aware analyses using the Network Analyst tools in ArcMap. After creating your network dataset, you can use the ArcGIS Network Analyst tools, like Service Area and OD Cost Matrix, to perform transit/pedestrian accessibility analyses, make decisions about where to locate new facilities, find populations underserved by transit or particular types of facilities, or visualize the areas reachable from your business at different times of day. You can also publish services in ArcGIS Server that use your network dataset.

The Add GTFS to a Network Dataset tool suite consists of a toolbox to pre-process the GTFS data to prepare it for use in the network dataset and a custom GTFS transit evaluator you must install that helps the network dataset read the GTFS schedules. A user's guide is included to help you set up your network dataset and run analyses.

Instructions:
1. Download the tool. It will be a zip file.
2. Unzip the file and put it in a permanent location on your machine where you won't lose it. Do not save the unzipped tool folder on a network drive, the Desktop, or any other special reserved Windows folder (like C:\Program Files) because this could cause problems later.
3. The unzipped file contains an installer, AddGTFStoaNetworkDataset_Installer.exe. Double-click this to run it. The installation should proceed quickly, and it should say "Completed" when finished.
4. Read the User's Guide for instructions on creating and using your network dataset.

System requirements:
- ArcMap 10.1 or higher with a Desktop Standard (ArcEditor) license. (You can still use the tool with a Desktop Basic license, but you will have to find an alternate method for one of the pre-processing tools.) ArcMap 10.6 or higher is recommended because you will be able to construct your network dataset much more easily using a template rather than building it manually step by step. This tool does not work in ArcGIS Pro. See the User's Guide for more information.
- Network Analyst extension.
- The necessary permissions to install software on your computer.

Data requirements:
- Street data for the area covered by your transit system, preferably data including pedestrian attributes. If you need help preparing high-quality street data for your network, please review this tutorial.
- A valid GTFS dataset. If your GTFS dataset has blank values for arrival_time and departure_time in stop_times.txt, you will not be able to run this tool. If you still want to use it, you can download the Interpolate Blank Stop Times tool to estimate the blank arrival_time and departure_time values for your dataset.

Help forum
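For datasets with blank stop times, the gist of what an interpolation step must do can be sketched in a few lines of pandas. This is a minimal illustration of linear interpolation between timed stops, not the actual implementation of the Interpolate Blank Stop Times tool; the function names and the even-spacing assumption are mine:

```python
import pandas as pd

def to_seconds(t):
    """Convert an 'HH:MM:SS' GTFS time (hours may exceed 24) to seconds.
    Blank or missing values pass through as NaN."""
    if pd.isna(t) or t == "":
        return float("nan")
    h, m, s = map(int, t.split(":"))
    return h * 3600 + m * 60 + s

def interpolate_stop_times(stop_times: pd.DataFrame) -> pd.DataFrame:
    """Linearly interpolate blank arrival/departure times within each trip,
    assuming evenly spaced stops between timed ones (a simplification)."""
    df = stop_times.sort_values(["trip_id", "stop_sequence"]).copy()
    for col in ("arrival_time", "departure_time"):
        secs = df[col].map(to_seconds)
        # Only fill gaps between two known times, never extrapolate.
        df[col + "_sec"] = secs.groupby(df["trip_id"]).transform(
            lambda s: s.interpolate(limit_area="inside")
        )
    return df
```

A real tool would also convert the interpolated seconds back to GTFS time strings and handle trips whose first or last stop is untimed.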
The risk of natural disasters, many of which are amplified by climate change, requires the protection of emergency evacuation routes to permit evacuees safe passage. California has recognized this need through the AB 747 Planning and Zoning Law, which requires each county and city in California to update its general plan to include safety elements addressing unreasonable risks associated with various hazards, specifically evacuation routes and their capacity, safety, and viability under a range of emergency scenarios. These routes must be identified in advance and maintained so they can support evacuations. Today, there is no centralized database of the identified routes or any general assessment of them. Consequently, this proposal responds to Caltrans' research priority for "GIS Mapping of Emergency Evacuation Routes." Specifically, the project objectives are: 1) create a centralized GIS database, by collecting and compiling available evacuation route GIS layers, and the safety eleme...

The project used the following public datasets:
• OpenStreetMap: The team collected the road network arcs and nodes of the selected localities and will make public the graph used for each locality.
• National Risk Index (NRI): The team used the NRI, obtained publicly from FEMA, at the census tract level.
• American Community Survey (ACS): The team used ACS data to estimate the Social Vulnerability Index at the census block level.

The author then developed a measurement to estimate road network performance risk at the node level, by estimating the Hansen accessibility index, betweenness centrality, and the NRI. The team created a set of CSV files with the risk for more than 450 localities in California, covering around 18 natural hazards.
The dataset also includes graphs of the RNP risk at the regional level showing the directionality of the risk.

# Data from: Improving public safety through spatial synthesis, mapping, modeling, and performance analysis of emergency evacuation routes in California localities
https://doi.org/10.5061/dryad.w9ghx3g0j
For this project’s analysis, the team obtained data from FEMA's National Risk Index, including the Social Vulnerability Index (SOVI).
To estimate SOVI, the team used data from the American Community Survey (ACS) to calculate SOVI at the census block level.
Using the graphs obtained from OpenStreetMap (OSM), the authors estimated the Hansen Accessibility Index (Ai) and the normalized betweenness centrality (BC) for each node in the graph.
The authors estimated the Road Network Performance (RNP) risk at the node level by combining NRI, Ai, and BC. They then grouped the RNP to determine the RNP risk at the regional level and generated the radial histogram. Finally, the authors calculated each ana...
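The node-level measures described above can be sketched with networkx. The Hansen accessibility functional form, the decay parameter, and the way NRI, Ai, and BC are combined below are illustrative assumptions for the sketch, not the project's actual formulas:

```python
import math
import networkx as nx

def hansen_accessibility(G, opportunities, beta=0.1):
    """Hansen accessibility A_i = sum_j O_j * exp(-beta * d_ij), using
    shortest-path distances on the road graph. The negative-exponential
    impedance and beta value are illustrative choices."""
    acc = {}
    for i in G.nodes:
        dists = nx.single_source_dijkstra_path_length(G, i, weight="length")
        acc[i] = sum(opportunities.get(j, 0.0) * math.exp(-beta * d)
                     for j, d in dists.items() if j != i)
    return acc

def rnp_risk(G, opportunities, nri_by_node, beta=0.1):
    """Illustrative node-level Road Network Performance risk: hazard
    exposure (NRI) weighted by how critical a node is (betweenness
    centrality) and discounted by how accessible it is (A_i).
    The actual combination used by the project is not specified here."""
    ai = hansen_accessibility(G, opportunities, beta)
    bc = nx.betweenness_centrality(G, weight="length", normalized=True)
    max_ai = max(ai.values()) or 1.0
    return {n: nri_by_node.get(n, 0.0) * bc[n] * (1.0 - ai[n] / max_ai)
            for n in G.nodes}
```

On an OSM-derived graph, `length` would be the edge length attribute and `opportunities` a per-node count of destinations (e.g., population or shelters).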
This dataset combines the work of several different projects to create a seamless data set for the contiguous United States. Data from four regional Gap Analysis Projects and the LANDFIRE project were combined to make this dataset. In the northwestern United States (Idaho, Oregon, Montana, Washington, and Wyoming) data in this map came from the Northwest Gap Analysis Project. In the southwestern United States (Colorado, Arizona, Nevada, New Mexico, and Utah) data used in this map came from the Southwest Gap Analysis Project. The data for Alabama, Florida, Georgia, Kentucky, North Carolina, South Carolina, Mississippi, Tennessee, and Virginia came from the Southeast Gap Analysis Project, and the California data was generated by the updated California Gap land cover project. The Hawaii Gap Analysis Project provided the data for Hawaii. In areas of the country (central U.S., Northeast, Alaska) that had not yet been covered by a regional Gap Analysis Project, data from the LANDFIRE project was used. Similarities in the methods used by these projects made it possible to combine the data they derived into one seamless coverage. They all used multi-season satellite imagery (Landsat ETM+) from 1999-2001 in conjunction with digital elevation model (DEM) derived datasets (e.g., elevation, landform) to model natural and semi-natural vegetation. Vegetation classes were drawn from NatureServe's Ecological System Classification (Comer et al. 2003) or classes developed by the Hawaii Gap project. Additionally, all of the projects included land use classes that were employed to describe areas where natural vegetation has been altered. In many areas of the country these classes were derived from the National Land Cover Dataset (NLCD). For the majority of classes, and in most areas of the country, a decision tree classifier was used to discriminate ecological system types. In some areas of the country, more manual techniques were used to discriminate small patch systems and systems not distinguishable through topography.

The data contains multiple levels of thematic detail. At the most detailed level, natural vegetation is represented by NatureServe's Ecological System classification (or, in Hawaii, the Hawaii GAP classification). These most detailed classifications have been crosswalked to the five highest levels of the National Vegetation Classification (NVC): Class, Subclass, Formation, Division, and Macrogroup. This crosswalk allows users to display and analyze the data at different levels of thematic resolution. Developed areas, or areas dominated by introduced species, timber harvest, or water, are represented by other classes, collectively referred to as land use classes; these land use classes occur at each of the thematic levels. Raster data in both ArcGIS Grid and ERDAS Imagine format is available for download at http://gis1.usgs.gov/csas/gap/viewer/land_cover/Map.aspx and six layer files are included in the download packages to assist the user in displaying the data at each of the thematic levels in ArcGIS. In addition to the raster datasets, the data is available in Web Mapping Service (WMS) format for each of the six NVC classification levels (Class, Subclass, Formation, Division, Macrogroup, Ecological System) at the following links:
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Class_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Subclass_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Formation_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Division_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Macrogroup_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_Ecological_Systems_Landuse/MapServer
This dataset represents point locations of cities and towns in Arizona. The data contains point locations for incorporated cities, Census Designated Places, and populated places. Several datasets were used as inputs to construct this dataset. A subset of the Geographic Names Information System (GNIS) national dataset for the state of Arizona was used for the base location of most of the points. Polygon files of the Census Designated Places (CDPs) from the U.S. Census Bureau and an incorporated city boundary database developed and maintained by the Arizona State Land Department were also used for reference during development. Every incorporated city is represented by a point, originally derived from GNIS. Some of these points were moved based on the local knowledge of the GIS analyst constructing the dataset. Some of the CDP points were also moved, and while most Census Bureau CDPs have one point location in this dataset, some inconsistencies were allowed in order to facilitate the use of the data for mapping purposes. Population estimates were derived from data collected during the 2010 Census. During development, an additional attribute field was added to provide additional functionality to users of this data. This field, named 'DEF_CAT' (short for definition category), allows users to easily view and create custom layers or datasets from this file. For example, new layers may be created to include only incorporated cities (DEF_CAT = 'Incorporated'), incorporated cities and Census Designated Places (DEF_CAT = 'Incorporated' OR DEF_CAT = 'CDP'), or all places that are neither CDPs nor incorporated (DEF_CAT = 'Other'). This data is current as of February 2012. At this time, there is no planned maintenance or update process for this dataset. This data was created to serve as base information for use in GIS systems for a variety of planning, reference, and analysis purposes. It does not represent a legal record.
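With the DEF_CAT field, each of those custom layers is a one-line attribute query. A minimal pandas sketch of the attribute table (the place names and their category assignments in this toy table are hypothetical):

```python
import pandas as pd

# Hypothetical rows standing in for the Arizona places attribute table;
# only the DEF_CAT domain values come from the dataset description.
places = pd.DataFrame({
    "NAME": ["Phoenix", "Tuba City", "Oak Creek"],
    "DEF_CAT": ["Incorporated", "CDP", "Other"],
})

# Incorporated cities only.
incorporated = places[places["DEF_CAT"] == "Incorporated"]
# Incorporated cities and Census Designated Places together.
cities_and_cdps = places[places["DEF_CAT"].isin(["Incorporated", "CDP"])]
# Everything that is neither a CDP nor incorporated.
other = places[~places["DEF_CAT"].isin(["Incorporated", "CDP"])]
```

The same expressions translate directly to definition queries in a desktop GIS, e.g. `DEF_CAT = 'Incorporated' OR DEF_CAT = 'CDP'`.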
The establishment of a BES Multi-User Geodatabase (BES-MUG) allows for the storage, management, and distribution of geospatial data associated with the Baltimore Ecosystem Study. At present, BES data is distributed over the internet via the BES website. While having geospatial data available for download is a vast improvement over having the data housed at individual research institutions, it still suffers from some limitations. BES-MUG overcomes these limitations, improving the quality of the geospatial data available to BES researchers and thereby leading to more informed decision-making.
BES-MUG builds on Environmental Systems Research Institute's (ESRI) ArcGIS and ArcSDE technology. ESRI was selected because its geospatial software offers robust capabilities. ArcGIS is implemented agency-wide within the USDA and is the predominant geospatial software package used by collaborating institutions.
Commercially available enterprise database packages (DB2, Oracle, SQL) provide an efficient means to store, manage, and share large datasets. However, standard database capabilities are limited with respect to geographic datasets because they lack the ability to deal with complex spatial relationships. By using ESRI's ArcSDE (Spatial Database Engine) in conjunction with database software, geospatial data can be handled much more effectively through the implementation of the Geodatabase model. Through ArcSDE and the Geodatabase model the database's capabilities are expanded, allowing for multiuser editing, intelligent feature types, and the establishment of rules and relationships. ArcSDE also allows users to connect to the database using ArcGIS software without being burdened by the intricacies of the database itself.
For an example of how BES-MUG will help improve the quality and timeliness of BES geospatial data, consider a census block group layer that is in need of updating. Rather than the researcher downloading the dataset, editing it, and resubmitting it through ORS, access rules will allow the authorized user to edit the dataset over the network. Established rules will ensure that attribute and topological integrity is maintained, so that key fields are not left blank and block group boundaries stay within tract boundaries. Metadata will automatically be updated to show who edited the dataset and when, in the event any questions arise.
Currently, a functioning prototype Multi-User Database has been developed for BES at the University of Vermont Spatial Analysis Lab, using Arc SDE and IBM's DB2 Enterprise Database as a back end architecture. This database, which is currently only accessible to those on the UVM campus network, will shortly be migrated to a Linux server where it will be accessible for database connections over the Internet. Passwords can then be handed out to all interested researchers on the project, who will be able to make a database connection through the Geographic Information Systems software interface on their desktop computer.
This database will include a very large number of thematic layers. Those layers are currently divided into biophysical, socio-economic and imagery categories. Biophysical includes data on topography, soils, forest cover, habitat areas, hydrology and toxics. Socio-economics includes political and administrative boundaries, transportation and infrastructure networks, property data, census data, household survey data, parks, protected areas, land use/land cover, zoning, public health and historic land use change. Imagery includes a variety of aerial and satellite imagery.
See the readme: http://96.56.36.108/geodatabase_SAL/readme.txt
See the file listing: http://96.56.36.108/geodatabase_SAL/diroutput.txt
This dataset contains information on archaeological remains of the prehistoric settlement of the Letolo valley on Savaii in Samoa. It is built in ArcMap from ESRI and is based on previously unpublished surveys made by the Peace Corps Volunteer Gregory Jackmond in 1976-78 and, to a lesser degree, on excavations made by Helene Martinsson Wallin and Paul Wallin. The settlement was in use from at least 1000 AD to about 1700-1800. Since abandonment it has been covered by thick jungle. However, by the time of Jackmond's survey (1976-78) the area was grazed by cattle and the remains were visible. The survey is on file at Auckland War Memorial Museum and has hitherto been unpublished. A copy of the survey was accessed by Olof Håkansson through Martinsson Wallin and Wallin, and it has been digitised as part of a Master's Thesis in Archaeology at Uppsala University.
Olof Håkansson built the database structure in the ESRI software and digitised the data from 2015 to 2017. One of the aims of the Master's Thesis was to discuss hierarchies. To do this, subsets of the data have been displayed in various ways on maps. Another aim was to discuss archaeological methodology when working with spatial data, but the data in itself can be used without regard to the questions asked in the Master's Thesis. All data that was unclear has been removed in an effort to avoid introducing errors. Even so, any mistakes in the dataset are the responsibility of the researcher, Olof Håkansson. A more comprehensive account of the aims, questions, purpose, and method, as well as the results of the research, is to be found in the Master's Thesis itself. Direct link: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1149265&dswid=9472
Purpose:
The purpose is to examine hierarchies in prehistoric Samoa. The purpose is further to make the produced data sets available for study.
Prehistoric remains of the settlement of Letolo on the Island of Savaii in Samoa in Polynesia
CAP’s Analyst Shopping Center dataset is the most comprehensive resource available for analyzing the Canadian shopping center landscape. Covering over 3,500 shopping centers across the country, this dataset provides a full horizontal and vertical view, enabling analysts, data scientists, solution providers, and application developers to gain unparalleled insights into market trends, tenant distribution, and operational efficiencies.
Comprehensive Data Coverage The Analyst Shopping Center dataset contains everything included in the Premium dataset, expanding to a total of 39 attributes. These attributes enable a deep dive into deriving key metrics and extracting valuable information about the shopping center ecosystem.
Advanced Geospatial Insights A key feature of this dataset is its multi-stage geocoding process, developed exclusively by CAP. This process ensures the most precise map points available, allowing for highly accurate spatial analysis. Whether for market assessments, location planning, or competitive analysis, this dataset provides geospatial precision that is unmatched.
Rich Developer & Ownership Details Understanding ownership and development trends is critical for investment and planning. This dataset includes detailed developer and owner information, covering aspects such as: Center Type (Operational, Proposed, or Redeveloped) Year Built & Remodeled Owner/Developer Profiles Operational Status & Redevelopment Plans
Geographic & Classification Variables The dataset also includes various geographic classification variables, offering deeper context for segmentation and regional analysis. These variables help professionals in: Identifying prime locations for expansion Analyzing the distribution of shopping centers across different regions Benchmarking against national and local trends
Enhanced Data for Decision-Making Other insightful elements of the dataset include Placekey integration, which ensures consistency in location-based analytics, and additional attributes that allow consultants, data scientists, and business strategists to make more informed decisions. With the CAP Analyst Shopping Center dataset, users gain a data-driven competitive edge, optimizing their ability to assess market opportunities, streamline operations, and drive strategic growth in the retail and commercial real estate sectors.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Tool and dataset of road networks for 80 of the most populated urban areas in the world. The data consist of a graph edge list for each city and two corresponding GIS shapefiles (i.e., links and nodes). Make your own data with our ArcGIS, QGIS, and Python tools available at: http://csun.uic.edu/codes/GISF2E.html
Please cite: Karduni, A., Kermanshah, A., and Derrible, S., 2016, "A protocol to convert spatial polyline data to network formats and applications to world urban road networks", Scientific Data, 3:160046. Available at http://www.nature.com/articles/sdata201646
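An edge list in this form can be read into any graph library or a plain adjacency map with a few lines of code. A minimal pure-Python sketch, assuming columns named START_NODE, END_NODE, and LENGTH (the actual column names in the GISF2E output may differ):

```python
import csv
import io

def load_edge_list(fileobj):
    """Build an undirected adjacency map {node: {neighbor: length}} from a
    GISF2E-style edge list. Column names here are assumptions; adjust them
    to match the actual files."""
    adj = {}
    for row in csv.DictReader(fileobj):
        u, v, w = row["START_NODE"], row["END_NODE"], float(row["LENGTH"])
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    return adj

# Tiny in-memory sample standing in for one city's edge list file.
sample = io.StringIO(
    "START_NODE,END_NODE,LENGTH\n"
    "1,2,120.5\n"
    "2,3,80.0\n"
)
graph = load_edge_list(sample)
```

From here the adjacency map can be handed to a shortest-path routine or converted to a networkx graph for centrality analysis.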
Geographic Information System (GIS) analyses are an essential part of natural resource management and research. Calculating and summarizing data within intersecting GIS layers is common practice for analysts and researchers. However, the various tools and steps required to complete this process are slow and tedious, requiring many tools iterating over hundreds, or even thousands of datasets. USGS scientists will combine a series of ArcGIS geoprocessing capabilities with custom scripts to create tools that will calculate, summarize, and organize large amounts of data that can span many temporal and spatial scales with minimal user input. The tools work with polygons, lines, points, and rasters to calculate relevant summary data and combine them into a single output table that can be easily incorporated into statistical analyses. These tools are useful for anyone interested in using an automated script to quickly compile summary information within all areas of interest in a GIS dataset.
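The core pattern the toolbox automates (intersect, then summarize per area of interest, then combine into one output table) can be illustrated with pandas alone. In this sketch a toy table stands in for the output of the geoprocessing intersection step, and all values are hypothetical:

```python
import pandas as pd

# Stand-in for the output of an Intersect: each row is a piece of a
# land-cover polygon clipped to an area of interest (AOI).
intersected = pd.DataFrame({
    "aoi_id":    [1, 1, 2, 2, 2],
    "landcover": ["forest", "shrub", "forest", "forest", "grass"],
    "area_ha":   [10.0, 5.0, 2.0, 3.0, 7.0],
})

# Summarize area by AOI and land-cover class, then pivot into a single
# wide table with one row per AOI, the shape the toolbox's combined
# output table takes.
summary = (intersected
           .groupby(["aoi_id", "landcover"])["area_ha"].sum()
           .unstack(fill_value=0.0))
```

Repeating this for each input layer and joining the results on `aoi_id` yields one table ready for statistical analysis.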
Toolbox Use
License
Creative Commons-PDDC
Recommended Citation
Welty JL, Jeffries MI, Arkle RS, Pilliod DS, Kemp SK. 2021. GIS Clipping and Summarization Toolbox: U.S. Geological Survey Software Release. https://doi.org/10.5066/P99X8558
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and planning climate change response. Buildings can have complex, irregular geometric structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common NZ building architecture.

Licensing requirements
ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

Using the model
The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block, and extra attributes should match those of the data originally used for training this model (see the Training data section below).

Output
The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):
0 Background
6 Building

Applicable geographies
The model is expected to work well in New Zealand. It has produced favorable results in many regions; however, results can vary for datasets that are statistically dissimilar to the training data.
Training dataset - Auckland, Christchurch, Kapiti, Wellington
Testing dataset - Auckland, Wellington
Validation/Evaluation dataset - Hutt City

Model architecture
This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.

Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
Class | Precision | Recall | F1-score
Never Classified | 0.984921 | 0.975853 | 0.979762
Building | 0.951285 | 0.967563 | 0.958400

Training data
This model was trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split: 75% train, 25% test; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement.
The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -137.74 m to 410.50 m
Number of returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -17 to +17
Maximum points per block: 8192
Block size: 50 meters
Class structure: [0, 6]

Sample results
The model was used to classify a Wellington city dataset with 23 pts/m density. The model's performance is directly related to the dataset's point density and the exclusion of noise points. To learn how to use this model, see this story.
Crime data assembled by census block group for the MSA from the Applied Geographic Solutions' (AGS) 1999 and 2005 'CrimeRisk' databases distributed by Tetrad Computer Applications Inc. CrimeRisk is the result of an extensive analysis of FBI crime statistics. Based on detailed modeling of the relationships between crime and demographics, CrimeRisk provides an accurate view of the relative risk of specific crime types at the block group level. Data from 1990-1996, 1999, and 2004-2005 were used to compute the attributes; please refer to the 'Supplemental Information' section of the metadata for more details. Attributes are available for two categories of crimes, personal crimes and property crimes, along with total and personal crime indices. Attributes for personal crimes include murder, rape, robbery, and assault. Attributes for property crimes include burglary, larceny, and motor vehicle theft. 12 block groups have no attribute information.

CrimeRisk is a block group and higher level geographic database consisting of a series of standardized indexes for a range of serious crimes against both persons and property. It is derived from an extensive analysis of several years of crime reports from the vast majority of law enforcement jurisdictions nationwide. The crimes included in the database are the "Part I" crimes: murder, rape, robbery, assault, burglary, theft, and motor vehicle theft. These categories are the primary reporting categories used by the FBI in its Uniform Crime Report (UCR), with the exception of arson, for which data is very inconsistently reported at the jurisdictional level. Part II crimes are not reported in the detail databases and are generally available only for selected areas or at high levels of geography. In accordance with the reporting procedures used in the UCR reports, aggregate indexes have been prepared for personal and property crimes separately, as well as a total index. While this provides a useful measure of the relative "overall" crime rate in an area, it must be recognized that these are unweighted indexes, in that a murder is weighted no more heavily than a purse snatching in the computation. For this reason, caution is advised when using any of the aggregate index values. The block group boundaries used in the dataset come from TeleAtlas's (formerly GDT) Dynamap data and are consistent with all other block group boundaries in the BES geodatabase.
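The difference between an unweighted and a severity-weighted aggregate can be made concrete with a toy calculation. The index values and severity weights below are purely illustrative, not CrimeRisk figures:

```python
# Per-crime index values for one hypothetical block group, indexed to a
# national average of 100 (illustrative numbers only).
indexes = {"murder": 80, "rape": 95, "robbery": 150, "assault": 120,
           "burglary": 200, "larceny": 180, "motor_vehicle_theft": 160}

# Unweighted aggregate: every crime counts equally, so high-volume,
# low-severity crimes dominate.
unweighted = sum(indexes.values()) / len(indexes)

# A severity-weighted alternative (weights are purely illustrative).
weights = {"murder": 10, "rape": 8, "robbery": 5, "assault": 4,
           "burglary": 2, "larceny": 1, "motor_vehicle_theft": 2}
weighted = (sum(indexes[c] * weights[c] for c in indexes)
            / sum(weights.values()))
```

Here the weighted aggregate comes out lower than the unweighted one because the severe crimes in this toy block group sit below the average; the two measures can easily rank areas differently.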
This is part of a collection of 221 Baltimore Ecosystem Study metadata records that point to a geodatabase.
The geodatabase is available online and is considerably large. Upon request, and under certain arrangements, it can be shipped on media, such as a usb hard drive.
The geodatabase is roughly 51.4 Gb in size, consisting of 4,914 files in 160 folders.
Although this metadata record and the others like it are not rich with attributes, it is nonetheless made available because the data that it represents could be indeed useful.
Abstract:
The National Hydrography Dataset (NHD) is a feature-based database that interconnects and uniquely identifies the stream segments or reaches that make up the nation's surface water drainage system. NHD data was originally developed at 1:100,000-scale and exists at that scale for the whole country. This high-resolution NHD, generally developed at 1:24,000/1:12,000 scale, adds detail to the original 1:100,000-scale NHD. (Data for Alaska, Puerto Rico and the Virgin Islands was developed at high-resolution, not 1:100,000 scale.) Local resolution NHD is being developed where partners and data exist. The NHD contains reach codes for networked features, flow direction, names, and centerline representations for areal water bodies. Reaches are also defined on waterbodies and the approximate shorelines of the Great Lakes, the Atlantic and Pacific Oceans and the Gulf of Mexico. The NHD also incorporates the National Spatial Data Infrastructure framework criteria established by the Federal Geographic Data Committee.
Purpose:
The NHD is a national framework for assigning reach addresses to water-related entities, such as industrial discharges, drinking water supplies, fish habitat areas, wild and scenic rivers. Reach addresses establish the locations of these entities relative to one another within the NHD surface water drainage network, much like addresses on streets. Once linked to the NHD by their reach addresses, the upstream/downstream relationships of these water-related entities--and any associated information about them--can be analyzed using software tools ranging from spreadsheets to geographic information systems (GIS). GIS can also be used to combine NHD-based network analysis with other data layers, such as soils, land use and population, to help understand and display their respective effects upon one another. Furthermore, because the NHD provides a nationally consistent framework for addressing and analysis, water-related information linked to reach addresses by one organization (national, state, local) can be shared with other organizations and easily integrated into many different types of applications to the benefit of all.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This resource contains raster datasets created using ArcGIS to analyse groundwater levels in the Namoi subregion.
This is an update to some of the data that is registered here: http://data.bioregionalassessments.gov.au/dataset/7604087e-859c-4a92-8548-0aa274e8a226
These data layers were created in ArcGIS as part of the analysis to investigate surface water - groundwater connectivity in the Namoi subregion. The data layers provide several of the figures presented in the Namoi 2.1.5 Surface water - groundwater interactions report.
Extracted points inside the Namoi subregion boundary. Converted bore and pipe values to Hydrocode format, changed the heading of the 'Value' column to 'Waterlevel', and removed unnecessary columns, then joined to Updated_NSW_GroundWaterLevel_data_analysis_v01\NGIS_NSW_Bore_Join_Hydmeas_unique_bores.shp, clipped to include only those bores within the Namoi subregion.
Selected only those bores with sample dates >= 26/4/2012 and < 31/7/2012, then removed 4 gauges due to anomalous ref_pt_height values or WaterElev values higher than Land_Elev values.
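The date-window selection above can be sketched in plain Python (bore IDs, dates, and readings are invented for illustration; the actual selection was done on the shapefile's attribute table):

```python
from datetime import date

# Hypothetical bore measurements: (bore_id, sample_date, depth_to_water_m)
records = [
    ("GW001", date(2012, 4, 30), 12.4),
    ("GW002", date(2012, 8, 2), 9.1),   # outside the window
    ("GW003", date(2012, 6, 15), 15.0),
]

# Keep samples with 26/4/2012 <= date < 31/7/2012, as in the analysis.
start, end = date(2012, 4, 26), date(2012, 7, 31)
june_2012 = [r for r in records if start <= r[1] < end]
```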
Then added new columns of calculations:
WaterElev = TsRefElev - Water_Leve
DepthWater = WaterElev - Ref_pt_height
Ref_pt_height = TsRefElev - LandElev
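The three column calculations can be written out directly. Field names mirror the truncated 10-character shapefile attribute names quoted above (e.g. Water_Leve); the example values are invented:

```python
def derive_columns(ts_ref_elev, water_leve, land_elev):
    """Compute the three derived columns exactly as defined above.

    `water_leve` keeps the truncated shapefile field name; inputs are
    elevations/readings in metres, invented for illustration.
    """
    ref_pt_height = ts_ref_elev - land_elev   # Ref_pt_height = TsRefElev - LandElev
    water_elev = ts_ref_elev - water_leve     # WaterElev = TsRefElev - Water_Leve
    depth_water = water_elev - ref_pt_height  # DepthWater = WaterElev - Ref_pt_height
    return water_elev, depth_water, ref_pt_height

# e.g. reference elevation 250.0 m, reading 12.5 m, land surface 249.2 m
water_elev, depth_water, ref_pt_height = derive_columns(250.0, 12.5, 249.2)
```

Note that, by substitution, DepthWater works out to LandElev - Water_Leve, which is a handy consistency check on the field values.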
Alternatively, selected only those bores with sample dates >= 1/5/2006 and < 1/7/2006.
2012_Wat_Elev - This raster was created by interpolating Water_Elev field points from HydmeasJune2012_only.shp using the Spatial Analyst Topo to Raster tool, with the alluvium boundary (NAM_113_Aquifer1_NamoiAlluviums.shp) as a boundary input source.
12_dw_olp_enf - Selected only those bores present in both the 2012 and 2006 source files, then ran Topo to Raster on the DepthWater field, with the alluvium boundary as input and the ENFORCE field option chosen.
2012dw1km_alu - Clipped the WatercourseLines layer to the Namoi subregion, then selected 'Major' watercourses only. Used the Buffer geoprocessing tool to create a polygon delineating the area within 1 km of all major streams in the Namoi subregion.
Selected the points from HydmeasJune2012_only.shp that were within 1 km of the WatercourseLines features, then used the selected points and the 1 km buffer with the Spatial Analyst Topo to Raster tool to create the raster.
Then used the alluvium boundary to truncate the raster, to limit it to the area of interest.
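The within-1 km selection amounts to a point-to-line distance test. A minimal sketch with invented projected coordinates (the actual work used the Buffer tool and a spatial selection in ArcGIS):

```python
import math

# Hypothetical bore locations in projected metre coordinates
bores = {"GW001": (1000.0, 1000.0), "GW002": (5000.0, 5000.0)}
# A watercourse approximated as a polyline of vertices
stream = [(0.0, 0.0), (2000.0, 0.0)]

def dist_point_segment(p, a, b):
    """Distance from point p to line segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        t = 0.0  # degenerate segment
    else:
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def within(points, line, radius):
    """IDs of points within `radius` of any segment of `line`."""
    return {k for k, p in points.items()
            if any(dist_point_segment(p, a, b) <= radius
                   for a, b in zip(line, line[1:]))}

near = within(bores, stream, 1000.0)  # bores within 1 km of the stream
```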
12_minus_06 - Selected the bores from the 2006 dataset that are also in the 2012 dataset, then created a raster from the depth_water field using Topo to Raster, with the ENFORCE field option chosen to remove sinks and the alluvium boundary as input. Then, using the Raster Calculator (Map Algebra), subtracted the raster just created from 12_dw_olp_enf.
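The final Raster Calculator step is an element-wise subtraction. A minimal sketch with nested lists standing in for the two rasters (cell values invented; NoData handling omitted):

```python
# Hypothetical 2x2 depth-to-water grids for the two dates:
# dw_2012 stands in for 12_dw_olp_enf, dw_2006 for the 2006-based raster.
dw_2012 = [[10.0, 12.0], [11.5, 9.0]]
dw_2006 = [[9.0, 12.5], [10.0, 9.0]]

# Subtract the 2006 raster from the 2012 raster, cell by cell,
# as Raster Calculator does.
diff = [[a - b for a, b in zip(row12, row06)]
        for row12, row06 in zip(dw_2012, dw_2006)]
```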
Bioregional Assessment Programme (2017) Namoi bore analysis rasters - updated. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/effa0039-ba15-459e-9211-232640609d44.
Derived From Bioregional Assessment areas v02
Derived From Gippsland Project boundary
Derived From Bioregional Assessment areas v04
Derived From Upper Namoi groundwater management zones
Derived From Natural Resource Management (NRM) Regions 2010
Derived From Bioregional Assessment areas v03
Derived From Victoria - Seamless Geology 2014
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From Bioregional Assessment areas v01
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From GEODATA TOPO 250K Series 3
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From Geological Provinces - Full Extent
Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will learn to work within the free and open-source R environment, with a specific focus on working with and analyzing geospatial data. We will cover a wide variety of data and spatial data analytics topics, and you will learn how to code in R along the way. The Introduction module provides more background about the course and its setup.
This course is designed for someone with some prior GIS knowledge. For example, you should know the basics of working with maps, map projections, and vector and raster data, and you should be able to perform common spatial analysis tasks and make map layouts. If you do not have a GIS background, we recommend checking out the West Virginia View GIScience class. We do not assume that you have any prior experience with R or with coding, so don't worry if you haven't developed these skill sets yet; that is a major goal of this course. Background material will be provided using code examples, videos, and presentations, and assignments offer hands-on learning opportunities. Data links for the lecture modules are provided within each module, while data for the assignments are linked to the assignment buttons below. Please see the sequencing document for our suggested order in which to work through the material.
After completing this course you will be able to:
- prepare, manipulate, query, and generally work with data in R
- perform data summarization, comparisons, and statistical tests
- create quality graphs, map layouts, and interactive web maps to visualize data and findings
- present your research, methods, results, and code as web pages to foster reproducible research
- work with spatial data in R
- analyze vector and raster geospatial data to answer a question with a spatial component
- make spatial models and predictions using regression and machine learning
- code in the R language at an intermediate level
RTB Maps is a cloud-based electronic Atlas. We used ArcGIS 10 for Desktop with the Spatial Analyst extension, ArcGIS 10 for Server on-premise, the ArcGIS API for JavaScript, IIS web services based on .NET, and ArcGIS Online, combining data on the cloud with data and applications on our local server to develop an Atlas that brings together many of the map themes related to development of roots, tubers and banana crops. The Atlas is structured to allow our participating scientists to understand the distribution of the crops and observe the spatial distribution of many of the obstacles to production of these crops. The Atlas also includes an application that allows our partners to evaluate the importance of different factors when setting priorities for research and development. The application uses weighted overlay analysis within a multi-criteria decision analysis framework to rate the importance of factors when establishing geographic priorities for research and development. Datasets of crop distribution maps, agroecology maps, biotic and abiotic constraints to crop production, poverty maps, and other demographic indicators are used as key inputs to the multi-objective criteria analysis. Further metadata/references can be found here: http://gisweb.ciat.cgiar.org/RTBmaps/DataAvailability_RTBMaps.html
DISCLAIMER, ACKNOWLEDGMENTS AND PERMISSIONS:
This service is provided by the Roots, Tubers and Bananas CGIAR Research Program as a public service. Use of this service to retrieve information constitutes your awareness of and agreement to the following conditions of use. This online resource displays GIS data and query tools subject to continuous updates and adjustments.
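Weighted overlay within a multi-criteria decision framework amounts to a per-cell weighted sum of reclassified factor rasters. A minimal sketch with invented factor names, scores, and weights (not the Atlas's actual criteria):

```python
# Hypothetical factor scores (1-9 suitability scale) for three grid cells
factors = {
    "poverty":     [7, 3, 5],
    "crop_area":   [9, 4, 2],
    "constraints": [2, 8, 6],
}
# Analyst-assigned importance weights; they must sum to 1.0
weights = {"poverty": 0.5, "crop_area": 0.3, "constraints": 0.2}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Weighted overlay: per-cell weighted sum of the factor scores
n_cells = 3
priority = [sum(weights[f] * factors[f][i] for f in factors)
            for i in range(n_cells)]
```

Changing the weights lets different partners express their own view of which factors matter most, which is exactly what the priority-setting application exposes.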
The GIS data has been taken from various, mostly public, sources and is supplied in good faith.
RTBMaps GIS Data Disclaimer:
• The data used to show the base maps is supplied by ESRI.
• The data used to show the photos over the map is supplied by Flickr.
• The data used to show the videos over the map is supplied by YouTube.
• The population map is supplied by CIESIN, Columbia University and CIAT.
• The accessibility map is provided by the Global Environment Monitoring Unit - Joint Research Centre of the European Commission. Accessibility maps are made for a specific purpose and cannot be used as a generic dataset to represent "the accessibility" of a given study area.
• Harvested area and yield for banana, cassava, potato, sweet potato and yam for the year 2000 are provided by EarthStat (University of Minnesota's Institute on the Environment-Global Landscapes initiative and McGill University's Land Use and the Global Environment lab). Dataset from Monfreda C., Ramankutty N., and Foley J.A. 2008.
• Agroecology dataset: global edapho-climatic zones for cassava based on mean growing-season temperature, number of dry-season months, daily temperature range, and seasonality. Dataset from CIAT (Carter et al. 1992).
• Demography indicators: total and rural population from the Center for International Earth Science Information Network (CIESIN) and CIAT, 2004.
• The FGGD prevalence-of-stunting map is a global raster data layer with a resolution of 5 arc-minutes. The percentage of stunted children under five years old is reported according to the lowest available sub-national administrative units: all pixels within the unit boundaries will have the same value. Data have been compiled by FAO from different sources: Demographic and Health Surveys (DHS), UNICEF MICS, the WHO Global Database on Child Growth and Malnutrition, and national surveys. Data provided by FAO - GIS Unit, 2007.
• Poverty dataset: global poverty headcount and absolute number of poor.
Number of people living on less than $1.25 or $2.00 per day. Dataset from IFPRI and CIAT.
THE RTBMAPS GROUP MAKES NO WARRANTIES OR GUARANTEES, EITHER EXPRESSED OR IMPLIED, AS TO THE COMPLETENESS, ACCURACY, OR CORRECTNESS OF THE DATA PORTRAYED IN THIS PRODUCT, NOR ACCEPTS ANY LIABILITY ARISING FROM ANY INCORRECT, INCOMPLETE OR MISLEADING INFORMATION CONTAINED THEREIN. ALL INFORMATION, DATA AND DATABASES ARE PROVIDED "AS IS" WITH NO WARRANTY, EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO FITNESS FOR A PARTICULAR PURPOSE. By accessing this website and/or data contained within the databases, you hereby release the RTB group and CG Centers, their employees, agents, contractors, sponsors and suppliers from any and all responsibility and liability associated with its use. In no event shall the RTB Group or its officers or employees be liable for any damages arising in any way out of the use of the website, or use of the information contained in the databases herein, including, but not limited to, the RTBMaps online Atlas product.
APPLICATION DEVELOPMENT:
• Desktop and web development - Ernesto Giron E. (GeoSpatial Consultant) e.giron.e@gmail.com
• GIS Analyst - Elizabeth Barona (Independent Consultant) barona.elizabeth@gmail.com
Collaborators: Glenn Hyman, Bernardo Creamer, Jesus David Hoyos, Diana Carolina Giraldo, Soroush Parsa, Jagath Shanthalal, Herlin Rodolfo Espinosa, Carlos Navarro, Jorge Cardona and Beatriz Vanessa Herrera at CIAT; Tunrayo Alabi and Joseph Rusike from IITA; Guy Hareau, Reinhard Simon, Henry Juarez, Ulrich Kleinwechter, Greg Forbes and Adam Sparks from CIP; and David Brown and Charles Staver from Bioversity International.
Please note these services may be unavailable at times due to maintenance work. Please feel free to contact us with any questions or problems you may be having with RTBMaps.
Australia's Land Borders is a product within the Foundation Spatial Data Framework (FSDF) suite of datasets. It is endorsed by ANZLIC - the Spatial Information Council - and the Intergovernmental Committee on Surveying and Mapping (ICSM) as a nationally consistent and topologically correct representation of the land borders published by the Australian states and territories.
The purpose of this product is to provide: (i) a building block which enables development of other national datasets; (ii) integration with other geospatial frameworks in support of data analysis; and (iii) visualisation of these borders as a cartographic depiction on a map. Although this dataset depicts land borders, it is not, nor does it purport to be, a legal definition of these borders. Therefore it cannot and must not be used for use-cases pertaining to a legal context.
This product is constructed by Geoscience Australia (GA), on behalf of the ICSM, from authoritative open data published by the land mapping agencies in their respective Australian state and territory jurisdictions. Construction of a nationally consistent dataset required harmonisation and mediation of data issues at abutting land borders. In order to make informed and consistent determinations, other datasets were used as visual aid in determining which elements of published jurisdictional data to promote into the national product. These datasets include, but are not restricted to: (i) PSMA Australia's commercial products such as the cadastral (property) boundaries (CadLite) and Geocoded National Address File (GNAF); (ii) Esri's World Imagery and Imagery with Labels base maps; and (iii) Geoscience Australia's GEODATA TOPO 250K Series 3. Where practical, Land Borders do not cross cadastral boundaries and are logically consistent with addressing data in GNAF.
It is important to reaffirm that although third-party commercial datasets are used for validation, which is within remit of the licence agreement between PSMA and GA, no commercially licenced data has been promoted into the product. Australian Land Borders are constructed exclusively from published open data originating from state, territory and federal agencies.
This foundation dataset consists of edges (polylines) representing mediated segments of state and/or territory borders, connected at nodes and terminated at the coastline, defined as the Mean High Water Mark (MHWM) tidal boundary. These polylines are attributed to convey information about the provenance of the source. It is envisaged that the land borders will be topologically interoperable with the future national coastline dataset(s) currently being built through the ICSM coastline capture collaboration program. Topological interoperability will enable closure of the landmass polygon, permitting spatial analysis operations such as vector overlay, intersect, or raster map algebra. In addition to polylines, the product incorporates a number of well-known survey-monumented corners which have historical and cultural significance associated with the place name.
This foundation dataset is constructed from the best available data, as published by the relevant custodians in each state and territory jurisdiction. It should be noted that some custodians - in particular the Northern Territory and New South Wales - have opted out or chosen to rely on data from the abutting jurisdiction as an agreed portrayal of their border. Accuracy and precision of land borders as depicted by spatial objects (features) may vary according to custodian specifications, although there is topological coherence across all the objects within this integrated product. The guaranteed minimum nominal scale for all use-cases, applying to the complete spatial coverage of this product, is 1:25 000. In some areas the accuracy is much better and may approach cadastral survey specification; however, this is an artefact of data assembly from disparate sources rather than the product design. As a principle, no data was generalised or spatially degraded in the process of constructing this product.
Some use-cases for this product are: general digital and web map-making applications; a reference dataset for cartographic generalisation in smaller-scale map applications; constraining geometric objects for revision and updates of the Mesh Blocks, the building blocks for the larger regions of the Australian Statistical Geography Standard (ASGS) framework; and rapid resolution of cross-border data issues to enable construction and visual display of a common operating picture.
This foundation dataset will be maintained at irregular intervals, for example when a state or territory jurisdiction publishes or republishes its land borders. If there is a new version of this dataset, the past version will be archived and information about the changes will be made available in the change log.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
ForestOwn_v1 is a 250-meter spatial resolution raster geospatial dataset of forest ownership of the conterminous United States (CONUS). The dataset was prepared by the Forest Inventory and Analysis (FIA) program, Northern Research Station, Forest Service, United States Department of Agriculture (USDA), and differentiates forest from non-forest land and water, public and private ownership, and the percent of private forest land in corporate ownership. The forest/non-forest land/water classification is derived from the USDA Forest Service's CONUS Forest/Nonforest dataset. Public and private land ownership class is derived from the Protected Areas Database of the United States, Version 1.1 (CBI Edition). Corporate ownership of private forest land is derived from the Forest Service's 2007 Resources Planning Act (RPA) dataset, summarized over the Environmental Protection Agency's Original Environmental Monitoring & Assessment Program (EMAP) grid 648 square kilometer hexagon dataset. The ForestOwn_v1 dataset is designed for conducting geospatial analyses and for producing cartographic products over regional to national geographic extents. A corresponding Research Map (RMAP) has been produced to cartographically portray this dataset.
Original metadata date was 02/09/2011. Minor metadata updates were made on 05/10/2013, 04/16/2014, 12/21/2016, and 02/06/2017. Additional minor metadata updates were made on 04/20/2023.
On 07/23/2020 a newer version of these data became available (Sass et al. 2020).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data publication contains 2014 high-resolution land cover data for each of the 66 counties within South Dakota. These data are a digital representation of land cover derived from 1-meter aerial imagery from the National Agriculture Imagery Program (NAIP). There is a separate file for each county. Data are intended for use in rural areas and therefore do not include land cover in cities and towns. Land cover classes (tree cover, other land cover, water, or city/town) were mapped using an object-based image analysis approach and supervised classification. These data are designed for conducting geospatial analyses and for producing cartographic products. In particular, these data are intended to depict the location of tree cover in each county. The mapping procedures were developed specifically for agricultural landscapes that are dominated by annual crops, rangeland, and pasture, and where tree cover is often found in narrow configurations, such as windbreaks and riparian corridors. Because much of the tree cover in agricultural areas of the United States occurs in windbreaks and narrow riparian corridors, many geospatial datasets derived from coarser-resolution satellite data (such as Landsat) do not capture these landscape features. This dataset and others in this series are intended to address this particular data gap. This metadata file contains documentation for the entire set of land cover county files. Individual metadata documents containing detailed information specific (e.g., spatial) to each county are included with the data files.
The Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) dataset provides a 7.5 arc-second (approximately 250-meter resolution) digital elevation model with worldwide coverage at a resolution suitable for regional to continental scale analyses. This layer provides access to a 250 m cell-sized raster created from the GMTED2010 7.5 arc-second mean elevation product. The dataset represents a compilation and synthesis of 11 different existing raster data sources. The data were published in 2011 by the USGS and the National Geospatial-Intelligence Agency. The dataset is documented in the publication: Danielson and Gesch. 2011. Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). U.S. Geological Survey Open-File Report 2011–1073, 26 p.
Dataset Summary
Analysis: Restricted single-source analysis. Maximum size of analysis is 16,000 x 16,000 pixels.
What can you do with this layer?
This layer is suitable for both visualization and analysis. It can be used in ArcGIS Online in web maps and applications and can be used in ArcGIS Desktop. Restricted single-source analysis means this layer has size constraints for analysis and is not recommended for use with other layers in multisource analysis. This layer has query, identify, and export image services available. This layer is restricted to a maximum area of 16,000 x 16,000 pixels - an area 4,000 kilometers on a side, or approximately the size of Europe. The source data for this layer are available here.
This layer is part of a larger collection of landscape layers that you can use to perform a wide variety of mapping and analysis tasks. The Living Atlas of the World provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics. Geonet is a good resource for learning more about landscape layers and the Living Atlas of the World. To get started, see the Living Atlas Discussion Group.
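The stated size limit is self-consistent: at a 250 m cell size, 16,000 pixels span 4,000 km on a side, a quick check worth doing before submitting a large analysis request:

```python
cell_size_m = 250   # approximate cell size of the GMTED2010 mean product
max_pixels = 16_000 # per-request analysis limit stated for this layer

side_km = cell_size_m * max_pixels / 1000
print(side_km)  # 4000.0 km on a side, roughly the width of Europe
```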
The Esri Insider Blog provides an introduction to the Ecophysiographic Mapping project.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement. This resource contains raster datasets created using ArcGIS to analyse groundwater levels in the Namoi subregion.
Purpose
These data layers were created in ArcGIS as part of the analysis to investigate surface water - groundwater connectivity in the Namoi subregion. The data layers provide several of the figures presented in the Namoi 2.1.5 Surface water - groundwater interactions report.
Dataset History
Extracted points inside the Namoi subregion boundary. Converted bore and pipe values to Hydrocode format, changed the heading of the 'Value' column to 'Waterlevel', and removed unnecessary columns, then joined to Updated_NSW_GroundWaterLevel_data_analysis_v01\NGIS_NSW_Bore_Join_Hydmeas_unique_bores.shp, clipped to include only those bores within the Namoi subregion. Selected only those bores with sample dates >= 26/4/2012 and < 31/7/2012, then removed 4 gauges due to anomalous ref_pt_height values or WaterElev values higher than Land_Elev values.
Then added new columns of calculations:
WaterElev = TsRefElev - Water_Leve
DepthWater = WaterElev - Ref_pt_height
Ref_pt_height = TsRefElev - LandElev
Alternatively, selected only those bores with sample dates >= 1/5/2006 and < 1/7/2006.
2012_Wat_Elev - This raster was created by interpolating Water_Elev field points from HydmeasJune2012_only.shp using the Spatial Analyst Topo to Raster tool, with the alluvium boundary (NAM_113_Aquifer1_NamoiAlluviums.shp) as a boundary input source.
12_dw_olp_enf - Selected only those bores present in both the 2012 and 2006 source files, then ran Topo to Raster on the DepthWater field, with the alluvium boundary as input and the ENFORCE field option chosen.
2012dw1km_alu - Clipped the WatercourseLines layer to the Namoi subregion, then selected 'Major' watercourses only. Used the Buffer geoprocessing tool to create a polygon delineating the area within 1 km of all major streams in the Namoi subregion. Selected the points from HydmeasJune2012_only.shp that were within 1 km of the WatercourseLines features, then used the selected points and the 1 km buffer with the Spatial Analyst Topo to Raster tool to create the raster. Then used the alluvium boundary to truncate the raster, to limit it to the area of interest.
12_minus_06 - Selected the bores from the 2006 dataset that are also in the 2012 dataset, then created a raster from the depth_water field using Topo to Raster, with the ENFORCE field option chosen to remove sinks and the alluvium boundary as input. Then, using the Raster Calculator (Map Algebra), subtracted the raster just created from 12_dw_olp_enf.
Dataset Citation
Bioregional Assessment Programme (2017) Namoi bore analysis rasters. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/7604087e-859c-4a92-8548-0aa274e8a226.
Dataset Ancestors
Derived From Bioregional Assessment areas v02
Derived From Gippsland Project boundary
Derived From Bioregional Assessment areas v04
Derived From Upper Namoi groundwater management zones
Derived From Natural Resource Management (NRM) Regions 2010
Derived From Bioregional Assessment areas v03
Derived From Victoria - Seamless Geology 2014
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From Bioregional Assessment areas v01
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From GEODATA TOPO 250K Series 3
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From Geological Provinces - Full Extent
Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013