This data release contains the analytical results and evaluated source data files of geospatial analyses for identifying areas in Alaska that may be prospective for different types of lode gold deposits, including orogenic, reduced-intrusion-related, epithermal, and gold-bearing porphyry. The spatial analysis is based on queries of statewide source datasets of aeromagnetic surveys, the Alaska Geochemical Database (AGDB3), the Alaska Resource Data File (ARDF), and the Alaska Geologic Map (SIM3340) within areas defined by 12-digit HUCs (subwatersheds) from the National Watershed Boundary dataset. The packages of files available for download are:
1. LodeGold_Results_gdb.zip - The analytical results in geodatabase polygon feature classes, which contain the scores for each source dataset layer query, the cumulative score, and a designation of high, medium, or low potential and high, medium, or low certainty for a deposit type within the HUC. The data is described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the results in ArcMap. An included README file describes the complete contents of the zip file.
2. LodeGold_Results_shape.zip - Copies of the results from the geodatabase, provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
3. LodeGold_SourceData_gdb.zip - The source datasets in geodatabase and GeoTIFF format. Data layers include aeromagnetic surveys, AGDB3, ARDF, lithology from SIM3340, and HUC subwatersheds. The data is described by FGDC metadata. An mxd file and cartographic feature classes are provided for display of the source data in ArcMap. Also included are the Python scripts used to perform the analyses; users may modify the scripts to design their own analyses. The included README files describe the complete contents of the zip file and explain the usage of the scripts.
4. LodeGold_SourceData_shape.zip - Copies of the geodatabase source dataset derivatives from ARDF and lithology from SIM3340 created for this analysis, provided in shapefile and CSV formats. The included README file describes the complete contents of the zip file.
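For orientation, a minimal pandas sketch of filtering the CSV copies of the results by the potential/certainty designations described above follows. The file name and the column names (POTENTIAL, CERTAINTY, HUC12) are assumptions; the actual schema is documented in the included README and FGDC metadata.

```python
import pandas as pd

# Hypothetical file and column names; see the README and FGDC metadata
# in LodeGold_Results_shape.zip for the actual schema.
results = pd.read_csv("LodeGold_Results_orogenic.csv")

# Subwatersheds rated high potential with high or medium certainty.
high = results[(results["POTENTIAL"] == "High")
               & (results["CERTAINTY"].isin(["High", "Medium"]))]
print(high["HUC12"].head())
```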
Dataset for the textbook Computational Methods and GIS Applications in Social Science (3rd Edition), 2023, Fahui Wang and Lingbo Liu.

Main Book Citation: Wang, F., & Liu, L. (2023). Computational Methods and GIS Applications in Social Science (3rd ed.). CRC Press. https://doi.org/10.1201/9781003292302

KNIME Lab Manual Citation: Liu, L., & Wang, F. (2023). Computational Methods and GIS Applications in Social Science - Lab Manual. CRC Press. https://doi.org/10.1201/9781003304357

KNIME Hub: Dataset and Workflow for Computational Methods and GIS Applications in Social Science - Lab Manual

Update Log
- If a Python package is not found in Package Management, use ArcGIS Pro's Python Command Prompt to install it, e.g., conda install -c conda-forge python-igraph leidenalg
- NetworkCommDetPro in CMGIS-V3-Tools was updated on July 10, 2024
- A spatial adjacency table was added to Florida on June 29, 2024
- The dataset and tool for ABM Crime Simulation were updated on August 3, 2023
- The toolkits in CMGIS-V3-Tools were updated on August 3, 2023

Report issues on GitHub: https://github.com/UrbanGISer/Computational-Methods-and-GIS-Applications-in-Social-Science

Follow the website of Fahui Wang: http://faculty.lsu.edu/fahui

Contents
Chapter 1. Getting Started with ArcGIS: Data Management and Basic Spatial Analysis Tools
- Case Study 1: Mapping and Analyzing Population Density Pattern in Baton Rouge, Louisiana
Chapter 2. Measuring Distance and Travel Time and Analyzing Distance Decay Behavior
- Case Study 2A: Estimating Drive Time and Transit Time in Baton Rouge, Louisiana
- Case Study 2B: Analyzing Distance Decay Behavior for Hospitalization in Florida
Chapter 3. Spatial Smoothing and Spatial Interpolation
- Case Study 3A: Mapping Place Names in Guangxi, China
- Case Study 3B: Area-Based Interpolations of Population in Baton Rouge, Louisiana
- Case Study 3C: Detecting Spatiotemporal Crime Hotspots in Baton Rouge, Louisiana
Chapter 4. Delineating Functional Regions and Applications in Health Geography
- Case Study 4A: Defining Service Areas of Acute Hospitals in Baton Rouge, Louisiana
- Case Study 4B: Automated Delineation of Hospital Service Areas in Florida
Chapter 5. GIS-Based Measures of Spatial Accessibility and Application in Examining Healthcare Disparity
- Case Study 5: Measuring Accessibility of Primary Care Physicians in Baton Rouge
Chapter 6. Function Fittings by Regressions and Application in Analyzing Urban Density Patterns
- Case Study 6: Analyzing Population Density Patterns in Chicago Urban Area
Chapter 7. Principal Components, Factor and Cluster Analyses and Application in Social Area Analysis
- Case Study 7: Social Area Analysis in Beijing
Chapter 8. Spatial Statistics and Applications in Cultural and Crime Geography
- Case Study 8A: Spatial Distribution and Clusters of Place Names in Yunnan, China
- Case Study 8B: Detecting Colocation Between Crime Incidents and Facilities
- Case Study 8C: Spatial Cluster and Regression Analyses of Homicide Patterns in Chicago
Chapter 9. Regionalization Methods and Application in Analysis of Cancer Data
- Case Study 9: Constructing Geographical Areas for Mapping Cancer Rates in Louisiana
Chapter 10. System of Linear Equations and Application of Garin-Lowry in Simulating Urban Population and Employment Patterns
- Case Study 10: Simulating Population and Service Employment Distributions in a Hypothetical City
Chapter 11. Linear and Quadratic Programming and Applications in Examining Wasteful Commuting and Allocating Healthcare Providers
- Case Study 11A: Measuring Wasteful Commuting in Columbus, Ohio
- Case Study 11B: Location-Allocation Analysis of Hospitals in Rural China
Chapter 12. Monte Carlo Method and Applications in Urban Population and Traffic Simulations
- Case Study 12A: Examining Zonal Effect on Urban Population Density Functions in Chicago by Monte Carlo Simulation
- Case Study 12B: Monte Carlo-Based Traffic Simulation in Baton Rouge, Louisiana
Chapter 13. Agent-Based Model and Application in Crime Simulation
- Case Study 13: Agent-Based Crime Simulation in Baton Rouge, Louisiana
Chapter 14. Spatiotemporal Big Data Analytics and Application in Urban Studies
- Case Study 14A: Exploring Taxi Trajectory in ArcGIS
- Case Study 14B: Identifying High Traffic Corridors and Destinations in Shanghai

Dataset File Structure
- 1 BatonRouge: Census.gdb, BR.gdb
- 2A BatonRouge: BR_Road.gdb, Hosp_Address.csv, TransitNetworkTemplate.xml, BR_GTFS, Google API Pro.tbx
- 2B Florida: FL_HSA.gdb, R_ArcGIS_Tools.tbx (RegressionR)
- 3A China_GX: GX.gdb
- 3B BatonRouge: BR.gdb
- 3C BatonRouge: BRcrime, R_ArcGIS_Tools.tbx (STKDE)
- 4A BatonRouge: BRRoad.gdb
- 4B Florida: FL_HSA.gdb, HSA Delineation Pro.tbx, Huff Model Pro.tbx, FLplgnAdjAppend.csv
- 5 BRMSA: BRMSA.gdb, Accessibility Pro.tbx
- 6 Chicago: ChiUrArea.gdb, R_ArcGIS_Tools.tbx (RegressionR)
- 7 Beijing: BJSA.gdb, bjattr.csv, R_ArcGIS_Tools.tbx (PCAandFA, BasicClustering)
- 8A Yunnan: YN.gdb, R_ArcGIS_Tools.tbx (SaTScanR)
- 8B Jiangsu: JS.gdb
- 8C Chicago: ChiCity.gdb, cityattr.csv
- ...
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Optimized for Geospatial and Big Data Analysis
This dataset is a refined and enhanced version of the original DataCo SMART SUPPLY CHAIN FOR BIG DATA ANALYSIS dataset, specifically designed for advanced geospatial and big data analysis. It incorporates geocoded information, language translations, and cleaned data to enable applications in logistics optimization, supply chain visualization, and performance analytics.
Files included:
- src_points.geojson: Source point geometries.
- dest_points.geojson: Destination point geometries.
- routes.geojson: Line geometries representing source-destination routes.
- DataCoSupplyChainDatasetRefined.csv: The cleaned tabular supply chain data.
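For a first look at these files, a minimal loading sketch with geopandas and pandas follows; the route-length step and the Web Mercator projection choice are illustrative only, so pick a projected CRS suited to your study area for accurate distances.

```python
import geopandas as gpd
import pandas as pd

# Load the three GeoJSON layers and the refined CSV listed above.
src = gpd.read_file("src_points.geojson")
dest = gpd.read_file("dest_points.geojson")
routes = gpd.read_file("routes.geojson")
orders = pd.read_csv("DataCoSupplyChainDatasetRefined.csv")

# Approximate route lengths in km (Web Mercator distorts distances;
# choose a locally appropriate projected CRS for real analysis).
routes["length_km"] = routes.to_crs(epsg=3857).geometry.length / 1000
print(routes["length_km"].describe())
```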
This dataset is based on the original dataset published by Fabian Constante, Fernando Silva, and António Pereira:
Constante, Fabian; Silva, Fernando; Pereira, António (2019), “DataCo SMART SUPPLY CHAIN FOR BIG DATA ANALYSIS”, Mendeley Data, V5, doi: 10.17632/8gx2fvg2k6.5.
Refinements include geospatial processing, translation, and additional cleaning by the uploader to enhance usability and analytical potential.
This dataset is designed to empower data scientists, researchers, and business professionals to explore the intersection of geospatial intelligence and supply chain optimization.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset used for the research presented in the following paper: Takayuki Hiraoka, Takashi Kirimura, Naoya Fujiwara (2024) "Geospatial analysis of toponyms in geo-tagged social media posts".
We collected georeferenced Twitter posts tagged to coordinates inside the bounding box of Japan between 2012 and 2018. The present dataset represents the spatial distributions of all geotagged posts, as well as of posts whose text contains each of 24 domestic toponyms, 12 common nouns, and 6 foreign toponyms. The code used to analyze the data is available on GitHub.
- selected_geotagged_tweet_data/: Number of geotagged Twitter posts in each grid cell. Each CSV file under this directory associates each grid cell (spanning 30 seconds of latitude and 45 seconds of longitude, approximately a 1 km x 1 km square, specified by an 8-digit code, m3code) with the number of geotagged tweets tagged to coordinates inside that cell (tweetcount). file_names.json relates each of the toponyms studied in this work to the corresponding data file (all denotes the full data).
- population/population_center_2020.xlsx: Center of population of each municipality based on the 2020 census. Derived from data published by the Statistics Bureau of Japan on their website (Japanese).
- population/census2015mesh3_totalpop_setai.csv: Resident population in each grid cell based on the 2015 census. Derived from data published by the Statistics Bureau of Japan on e-stat (Japanese).
- population/economiccensus2016mesh3_jigyosyo_jugyosya.csv: Employed population in each grid cell based on the 2016 Economic Census. Derived from data published by the Statistics Bureau of Japan on e-stat (Japanese).
- japan_MetropolitanEmploymentArea2015map/: Shapefile for the boundaries of Metropolitan Employment Areas (MEA) in Japan. See this website for details of MEA.
- ward_shapefiles/: Shapefiles for the boundaries of wards in large cities, published by the Statistics Bureau of Japan on e-stat.
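A short sketch of reading the per-toponym counts described above; that file_names.json maps the key "all" to the full-data CSV is an assumption based on the description, so inspect the JSON for the actual keys.

```python
import json
import pandas as pd

# Map each toponym to its CSV file, then load the full data ("all").
with open("selected_geotagged_tweet_data/file_names.json") as f:
    file_names = json.load(f)

counts = pd.read_csv("selected_geotagged_tweet_data/" + file_names["all"])
# Each row pairs a grid cell (m3code) with its tweet count (tweetcount).
print(counts["tweetcount"].sum(), "geotagged posts in total")
```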
Xverum’s Point of Interest (POI) Data is a comprehensive dataset containing 230M+ verified locations across 5000 business categories. Our dataset delivers structured geographic data, business attributes, location intelligence, and mapping insights, making it an essential tool for GIS applications, market research, urban planning, and competitive analysis.
With regular updates and continuous POI discovery, Xverum ensures accurate, up-to-date information on businesses, landmarks, retail stores, and more. Delivered in bulk to S3 Bucket and cloud storage, our dataset integrates seamlessly into mapping, geographic information systems, and analytics platforms.
🔥 Key Features:
Extensive POI Coverage: ✅ 230M+ Points of Interest worldwide, covering 5000 business categories. ✅ Includes retail stores, restaurants, corporate offices, landmarks, and service providers.
Geographic & Location Intelligence Data: ✅ Latitude & longitude coordinates for mapping and navigation applications. ✅ Geographic classification, including country, state, city, and postal code. ✅ Business status tracking – Open, temporarily closed, or permanently closed.
Continuous Discovery & Regular Updates: ✅ New POIs continuously added through discovery processes. ✅ Regular updates ensure data accuracy, reflecting new openings and closures.
Rich Business Insights: ✅ Detailed business attributes, including company name, category, and subcategories. ✅ Contact details, including phone number and website (if available). ✅ Consumer review insights, including rating distribution and total number of reviews (additional feature). ✅ Operating hours where available.
Ideal for Mapping & Location Analytics: ✅ Supports geospatial analysis & GIS applications. ✅ Enhances mapping & navigation solutions with structured POI data. ✅ Provides location intelligence for site selection & business expansion strategies.
Bulk Data Delivery (NO API): ✅ Delivered in bulk via S3 Bucket or cloud storage. ✅ Available in structured format (.json) for seamless integration.
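Since the delivery is bulk JSON rather than an API, a typical first step is a streaming scan of the file. A minimal sketch follows; the file name and every field name in it are assumptions for illustration, not Xverum's documented schema, and the one-object-per-line layout should be confirmed against the delivery documentation.

```python
import json

# Assumed layout: one JSON object per line with name/category/status/
# latitude/longitude keys; adjust if the delivery is a single JSON array.
open_restaurants = []
with open("xverum_poi_dump.json") as f:
    for line in f:
        poi = json.loads(line)
        if poi.get("category") == "restaurant" and poi.get("status") == "open":
            open_restaurants.append((poi["name"], poi["latitude"], poi["longitude"]))

print(len(open_restaurants), "open restaurants")
```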
🏆 Primary Use Cases:
Mapping & Geographic Analysis: 🔹 Power GIS platforms & navigation systems with precise POI data. 🔹 Enhance digital maps with accurate business locations & categories.
Retail Expansion & Market Research: 🔹 Identify key business locations & competitors for market analysis. 🔹 Assess brand presence across different industries & geographies.
Business Intelligence & Competitive Analysis: 🔹 Benchmark competitor locations & regional business density. 🔹 Analyze market trends through POI growth & closure tracking.
Smart City & Urban Planning: 🔹 Support public infrastructure projects with accurate POI data. 🔹 Improve accessibility & zoning decisions for government & businesses.
💡 Why Choose Xverum’s POI Data?
Access Xverum’s 230M+ POI dataset for mapping, geographic analysis, and location intelligence. Request a free sample or contact us to customize your dataset today!
This dataset represents point locations of cities and towns in Arizona. The data contains point locations for incorporated cities, Census Designated Places, and populated places. Several datasets were used as inputs to construct this dataset. A subset of the Geographic Names Information System (GNIS) national dataset for the state of Arizona was used for the base location of most of the points. Polygon files of the Census Designated Places (CDP) from the U.S. Census Bureau and an incorporated city boundary database developed and maintained by the Arizona State Land Department were also used for reference during development. Every incorporated city is represented by a point, originally derived from GNIS. Some of these points were moved based on the local knowledge of the GIS Analyst constructing the dataset. Some of the CDP points were also moved, and while most CDPs of the Census Bureau have one point location in this dataset, some inconsistencies were allowed in order to facilitate the use of the data for mapping purposes. Population estimates were derived from data collected during the 2010 Census. During development, an additional attribute field, 'DEF_CAT' (definition category), was added to provide additional functionality, allowing users to easily view and create custom layers or datasets from this file. For example, new layers may be created to include only incorporated cities (DEF_CAT = Incorporated), Census Designated Places (DEF_CAT = Incorporated OR DEF_CAT = CDP), or all places that are neither CDPs nor incorporated (DEF_CAT = Other). This data is current as of February 2012. At this time, there is no planned maintenance or update process for this dataset. This data was created to serve as base information for use in GIS systems for a variety of planning, reference, and analysis purposes. This data does not represent a legal record.
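The DEF_CAT queries above translate directly into attribute filters; a minimal geopandas sketch follows, with the shapefile name as a hypothetical placeholder.

```python
import geopandas as gpd

# Hypothetical file name for the point dataset described above.
cities = gpd.read_file("az_cities_and_towns.shp")

# The three example queries from the description, as pandas filters.
incorporated = cities[cities["DEF_CAT"] == "Incorporated"]
census_places = cities[cities["DEF_CAT"].isin(["Incorporated", "CDP"])]
other = cities[cities["DEF_CAT"] == "Other"]
print(len(incorporated), len(census_places), len(other))
```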
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This project explores the integration of Geographic Information Systems (GIS) and Natural Language Processing (NLP) to improve job–candidate matching in recruitment. Traditional AI-based e-recruitment systems often ignore geographic constraints. Our hybrid model addresses this gap by incorporating both textual similarity and spatial relevance in matching candidates to job postings.

Data Used
- Candidate Data (CVs). Source: scraped from emploi.ma. Size: 1000 CVs after cleaning. Content: candidate names (anonymized), skills, experiences, locations (coordinates), availability, etc.
- Job Descriptions. Source: publicly available dataset from Kaggle. Size: 1000 job postings selected using the category Morocco. Content: titles, descriptions, required skills, sector labels, and office locations...

All datasets have been cleaned and anonymized for privacy and research ethics compliance.
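The project's exact scoring function is not reproduced here; the sketch below shows one plausible way to blend textual similarity with spatial relevance, with the weight alpha and the 50 km decay scale as illustrative assumptions rather than the project's parameters.

```python
import math

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def hybrid_score(cv_text, job_text, cv_loc, job_loc, alpha=0.7, d0=50.0):
    """Blend TF-IDF cosine similarity with an exponential distance decay."""
    tfidf = TfidfVectorizer().fit_transform([cv_text, job_text])
    text_sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    spatial_sim = math.exp(-haversine_km(*cv_loc, *job_loc) / d0)
    return alpha * text_sim + (1 - alpha) * spatial_sim

# Example: a Casablanca-based candidate scored against a Rabat posting.
print(hybrid_score("python gis sql", "gis analyst python",
                   (33.57, -7.59), (34.02, -6.84)))
```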
https://creativecommons.org/publicdomain/zero/1.0/
This synthetic dataset simulates 300 global cities across 6 major geographic regions, designed specifically for unsupervised machine learning and clustering analysis. It explores how economic status, environmental quality, infrastructure, and digital access shape urban lifestyles worldwide.
| Property | Description | Notes |
|---|---|---|
| 10 Features | Economic, environmental & social indicators | Realistically scaled |
| 300 Cities | Europe, Asia, Americas, Africa, Oceania | Diverse distributions |
| Strong Correlations | Income ↔ Rent (+0.8), Density ↔ Pollution (+0.6) | ML-ready |
| No Missing Values | Clean, preprocessed data | Ready for analysis |
| 4-5 Natural Clusters | Metropolitan hubs, eco-towns, developing centers | Pre-validated |
✅ Realistic Correlations: Income strongly predicts rent (+0.8), internet access (+0.7), and happiness (+0.6)
✅ Regional Diversity: Each region has distinct economic and environmental characteristics
✅ Clustering-Ready: Naturally separable into 4-5 lifestyle archetypes
✅ Beginner-Friendly: No data cleaning required, includes example code
✅ Documented: Comprehensive README with methodology and use cases
```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Load and prepare: drop the identifier columns before scaling.
df = pd.read_csv('city_lifestyle_dataset.csv')
X = df.drop(['city_name', 'country'], axis=1)
X_scaled = StandardScaler().fit_transform(X)

# Cluster
kmeans = KMeans(n_clusters=5, random_state=42)
df['cluster'] = kmeans.fit_predict(X_scaled)

# Analyze: numeric_only avoids errors on the string columns kept in df.
print(df.groupby('cluster').mean(numeric_only=True))
```
After working with this dataset, you will be able to:
1. Apply K-Means, DBSCAN, and Hierarchical Clustering
2. Use PCA for dimensionality reduction and visualization
3. Interpret correlation matrices and feature relationships
4. Create geographic visualizations with cluster assignments
5. Profile and name discovered clusters based on characteristics
| Cluster | Characteristics | Example Cities |
|---|---|---|
| Metropolitan Tech Hubs | High income, density, rent | Silicon Valley, Singapore |
| Eco-Friendly Towns | Low density, clean air, high happiness | Nordic cities |
| Developing Centers | Mid income, high density, poor air | Emerging markets |
| Low-Income Suburban | Low infrastructure, income | Rural areas |
| Industrial Mega-Cities | Very high density, pollution | Manufacturing hubs |
Unlike random synthetic data, this dataset was carefully engineered with:
- ✨ Realistic correlation structures based on urban research
- 🌍 Regional characteristics matching real-world patterns
- 🎯 Optimal cluster separability (validated via silhouette scores)
- 📚 Comprehensive documentation and starter code
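Continuing the starter snippet above (reusing df and X_scaled from it), a short sketch of the silhouette check and PCA projection mentioned here:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# Separability check for the K-Means labels from the snippet above.
print("silhouette:", silhouette_score(X_scaled, df["cluster"]))

# Project to two components for a quick cluster map.
coords = PCA(n_components=2).fit_transform(X_scaled)
plt.scatter(coords[:, 0], coords[:, 1], c=df["cluster"], cmap="tab10", s=15)
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.show()
```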
✓ Learn clustering without data cleaning hassles
✓ Practice PCA and dimensionality reduction
✓ Create beautiful geographic visualizations
✓ Understand feature correlation in real-world contexts
✓ Build a portfolio project with clear business insights
This dataset was designed for educational purposes in machine learning and data science. While synthetic, it reflects real patterns observed in global urban development research.
Happy Clustering! 🎉
Parcel boundary lines in this dataset are published once a year, after the boundary adjustments have been approved by Planning and Zoning and certified through the Assessor's Office. Attribute data is published at different times throughout the year, as detailed below.
*Attribute data excludes ownership and address data in this dataset. If you wish to have these data, please fill out the Public Information request form found in the Download Datasets page of the GIS Portal and email to lfrederick@co.valley.id.us.
ATTRIBUTE DATA - MONTHLY UPDATES
These fields are updated in the dataset monthly. After the public table updates are run by the Assessor's Office, the Valley County GIS analyst exports the tables to append and update the new data values.
ATTRIBUTE DATA - ANNUAL UPDATES
These fields are updated annually after certification of parcel boundaries and valuation have been completed.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Tool and dataset of road networks for 80 of the most populated urban areas in the world. The data consist of a graph edge list for each city and two corresponding GIS shapefiles (i.e., links and nodes). Make your own data with our ArcGIS, QGIS, and Python tools available at: http://csun.uic.edu/codes/GISF2E.html. Please cite: Karduni, A., Kermanshah, A., and Derrible, S., 2016, "A protocol to convert spatial polyline data to network formats and applications to world urban road networks", Scientific Data, 3:160046, available at http://www.nature.com/articles/sdata201646
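The edge lists load directly into standard graph libraries; a minimal networkx sketch follows, with the file name and column names as assumptions (check each city's files for the actual header).

```python
import networkx as nx
import pandas as pd

# Hypothetical file/column names; inspect a city's edge list for the
# actual header before running.
edges = pd.read_csv("city_edgelist.csv")
G = nx.from_pandas_edgelist(edges, source="START_NODE", target="END_NODE",
                            edge_attr=True)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```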
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will learn to work within the free and open-source R environment with a specific focus on working with and analyzing geospatial data. We will cover a wide variety of data and spatial data analytics topics, and you will learn how to code in R along the way. The Introduction module provides more background info about the course and course setup. This course is designed for someone with some prior GIS knowledge. For example, you should know the basics of working with maps, map projections, and vector and raster data. You should be able to perform common spatial analysis tasks and make map layouts. If you do not have a GIS background, we would recommend checking out the West Virginia View GIScience class. We do not assume that you have any prior experience with R or with coding. So, don't worry if you haven't developed these skill sets yet. That is a major goal in this course. Background material will be provided using code examples, videos, and presentations. We have provided assignments to offer hands-on learning opportunities. Data links for the lecture modules are provided within each module while data for the assignments are linked to the assignment buttons below. Please see the sequencing document for our suggested order in which to work through the material. After completing this course you will be able to:
- prepare, manipulate, query, and generally work with data in R.
- perform data summarization, comparisons, and statistical tests.
- create quality graphs, map layouts, and interactive web maps to visualize data and findings.
- present your research, methods, results, and code as web pages to foster reproducible research.
- work with spatial data in R.
- analyze vector and raster geospatial data to answer a question with a spatial component.
- make spatial models and predictions using regression and machine learning.
- code in the R language at an intermediate level.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Long Island Sound Study developed these digital data from 1:100,000-scale National Oceanic & Atmospheric Administration (NOAA) and United States Geological Survey (USGS) maps as a general reference to the depth of water in Long Island Sound. In 1996, these data were digitized from paper maps by the Long Island Sound Study (http://www.longislandsoundstudy.net) and incorporated into a Long Island Sound GIS database. Not intended for maps printed at map scales greater or more detailed than 1:100,000 scale (1 inch = 1.578 miles). Dataset credit: Applied Geographics, Inc. of Boston, Massachusetts was contracted by the Long Island Sound Study to automate and digitize these bathymetry data for Long Island Sound. Linda Bischoff, GIS Analyst, digitized the data and created the original metadata.
Data set that contains information on archaeological remains of the prehistoric settlement of the Letolo valley on Savaii, Samoa. It is built in ArcMap from ESRI and is based on previously unpublished surveys made by the Peace Corps Volunteer Gregory Jackmond in 1976-78, and to a lesser degree on excavations made by Helene Martinsson Wallin and Paul Wallin. The settlement was in use from at least 1000 AD to about 1700-1800. Since abandonment it has been covered by thick jungle. However, by the time of the survey by Jackmond (1976-78) it was grazed by cattle and the remains were visible. The survey is on file at Auckland War Memorial Museum and has hitherto been unpublished. A copy of the survey was accessed by Olof Håkansson through Martinsson Wallin and Wallin, and as part of a Master's Thesis in Archaeology at Uppsala University it has been digitised.
Olof Håkansson built the database structure in the software from ESRI and digitised the data from 2015 to 2017. One of the aims of the Master's Thesis was to discuss hierarchies. To do this, subsets of the data have been displayed in various ways on maps. Another aim was to discuss archaeological methodology when working with spatial data, but the data in itself can be used without regard to the questions asked in the Master's Thesis. All data that was unclear has been removed in an effort to avoid errors being introduced. Even so, if there are mistakes in the data set they are to be blamed on the researcher, Olof Håkansson. A more comprehensive account of the aim, questions, purpose, and method, as well as the results of the research, is to be found in the Master's Thesis itself. Direct link: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1149265&dswid=9472
Purpose:
The purpose is to examine hierarchies in prehistoric Samoa. The purpose is further to make the produced data sets available for study.
Prehistoric remains of the settlement of Letolo on the Island of Savaii in Samoa in Polynesia
This dataset is used for geospatial data analysis, including remote sensing, GPS, and RFID data.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about books. It has 1 row and is filtered where the book is Learning GIS using open source software : an applied guide for geo-spatial analysis. It features 7 columns including author, publication date, language, and book publisher.
GIS project files and imagery data required to complete the Introduction to Planetary Image Analysis and Geologic Mapping in ArcGIS Pro tutorial. These data cover the area in and around Jezero crater, Mars.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The code is used for measuring the vulnerability of urban street networks based on large-scale region segmentation. The corresponding paper, entitled "Vulnerability analysis of urban street networks: A large-scale region segmentation approach", has been submitted to IJGIS.
https://spdx.org/licenses/CC0-1.0.html
The risk of natural disasters, many of which are amplified by climate change, requires the protection of emergency evacuation routes to permit evacuees safe passage. California has recognized the need through the AB 747 Planning and Zoning Law, which requires each county and city in California to update their general plans to include safety elements addressing unreasonable risks associated with various hazards, specifically evacuation routes and their capacity, safety, and viability under a range of emergency scenarios. These routes must be identified in advance and maintained so they can support evacuations. Today, there is no centralized database of the identified routes or their general assessment. Consequently, this proposal responds to Caltrans’ research priority for “GIS Mapping of Emergency Evacuation Routes.” Specifically, the project objectives are: 1) create a centralized GIS database by collecting and compiling available evacuation route GIS layers and the safety element of the evacuation routes from different jurisdictions, as well as their use in various types of evacuation scenarios such as wildfire, flooding, or landslides; 2) perform network analyses and modeling, based on the team’s experience with road network performance, access restoration, and critical infrastructure modeling, for a set of case studies, as well as assess their performance considering the latest evacuation research; 3) analyze how well current bus and rail routes align with evacuation routes, and for a series of case studies, using data from previous evacuations, evaluate how well aligned the safety elements of the emerging plans are relative to previous evacuation routes; and 4) analyze different metrics of the performance of the evacuation routes for different segments of the population (e.g., elderly, mobility constrained, non-vehicle households, and disadvantaged communities). The database and assessments will help inform infrastructure investment decisions and develop recommendations on how best to maintain State transportation assets and secure safe evacuation routes, as they will identify the road segments with the largest impact on evacuation route/network performance. The project will deliver a GIS of the compiled plans and a report summarizing the creation of the database and the analyses, and will make a final presentation of the study results.

Methods
The project used the following public datasets:
- Open Street Map: The team collected the road network arcs and nodes of the selected localities and will make public the graph used for each locality.
- National Risk Index (NRI): The team used the NRI, obtained publicly from FEMA, at the census tract level.
- American Community Survey (ACS): The team used ACS data to estimate the Social Vulnerability Index at the census block level.

The team then developed a node-level measure of road network performance (RNP) risk, combining the Hansen accessibility index, betweenness centrality, and the NRI. The release provides a set of CSV files with the risk for more than 450 localities in California across around 18 natural hazards, along with regional-level graphs of the RNP risk showing its directionality.
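As a concrete reading of that node-level measure, a small networkx sketch of betweenness centrality and a Hansen-style accessibility index follows; the toy graph, decay parameter beta, and opportunity weights are placeholders, not the project's calibrated values.

```python
import math
import networkx as nx

# Toy road graph; in practice this would be built from the OSM arcs/nodes.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1.2), (1, 2, 0.8), (0, 2, 2.5)],
                          weight="length")

# Node-level betweenness centrality, weighted by link length.
betweenness = nx.betweenness_centrality(G, weight="length")

def hansen_accessibility(G, opportunities, beta=0.1):
    """A_i = sum_j O_j * exp(-beta * d_ij), with d_ij the network distance."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="length"))
    return {i: sum(o * math.exp(-beta * dist[i][j])
                   for j, o in opportunities.items() if j in dist[i])
            for i in G}

print(betweenness, hansen_accessibility(G, {0: 100, 2: 50}))
```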
This dataset combines the work of several different projects to create a seamless data set for the contiguous United States. Data from four regional Gap Analysis Projects and the LANDFIRE project were combined to make this dataset. In the northwestern United States (Idaho, Oregon, Montana, Washington and Wyoming) data in this map came from the Northwest Gap Analysis Project. In the southwestern United States (Colorado, Arizona, Nevada, New Mexico, and Utah) data used in this map came from the Southwest Gap Analysis Project. The data for Alabama, Florida, Georgia, Kentucky, North Carolina, South Carolina, Mississippi, Tennessee, and Virginia came from the Southeast Gap Analysis Project, and the California data was generated by the updated California Gap land cover project. The Hawaii Gap Analysis project provided the data for Hawaii. In areas of the country (central U.S., Northeast, Alaska) that have not yet been covered by a regional Gap Analysis Project, data from the LANDFIRE project was used. Similarities in the methods used by these projects made it possible to combine the data they derived into one seamless coverage. They all used multi-season satellite imagery (Landsat ETM+) from 1999-2001 in conjunction with digital elevation model (DEM) derived datasets (e.g., elevation, landform) to model natural and semi-natural vegetation. Vegetation classes were drawn from NatureServe's Ecological System Classification (Comer et al. 2003) or classes developed by the Hawaii Gap project. Additionally, all of the projects included land use classes that were employed to describe areas where natural vegetation has been altered. In many areas of the country these classes were derived from the National Land Cover Dataset (NLCD). For the majority of classes, and in most areas of the country, a decision tree classifier was used to discriminate ecological system types. In some areas of the country, more manual techniques were used to discriminate small patch systems and systems not distinguishable through topography. The data contains multiple levels of thematic detail. At the most detailed level, natural vegetation is represented by NatureServe's Ecological System classification (or in Hawaii the Hawaii GAP classification). These most detailed classifications have been crosswalked to the five highest levels of the National Vegetation Classification (NVC): Class, Subclass, Formation, Division, and Macrogroup. This crosswalk allows users to display and analyze the data at different levels of thematic resolution. Developed areas, or areas dominated by introduced species, timber harvest, or water, are represented by other classes, collectively referred to as land use classes; these land use classes occur at each of the thematic levels. Raster data in both ArcGIS Grid and ERDAS Imagine format is available for download at http://gis1.usgs.gov/csas/gap/viewer/land_cover/Map.aspx. Six layer files are included in the download packages to assist the user in displaying the data at each of the thematic levels in ArcGIS. In addition to the raster datasets, the data is available in Web Mapping Services (WMS) format for each of the six NVC classification levels (Class, Subclass, Formation, Division, Macrogroup, Ecological System) at the following links.
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Class_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Subclass_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Formation_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Division_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Macrogroup_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_Ecological_Systems_Landuse/MapServer
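The links above are ArcGIS REST MapServer endpoints; if the legacy services are still live, a rendered map image can be requested with the standard export operation. A minimal sketch follows, with the bounding box (WGS84) and image size as arbitrary examples.

```python
import requests

# Standard ArcGIS REST export request against one endpoint listed above;
# assumes the legacy service is still reachable.
url = ("http://gis1.usgs.gov/arcgis/rest/services/gap/"
       "GAP_Land_Cover_NVC_Class_Landuse/MapServer/export")
params = {"bbox": "-125,25,-65,50", "bboxSR": "4326",
          "size": "1200,600", "format": "png", "f": "image"}
resp = requests.get(url, params=params, timeout=60)
resp.raise_for_status()
with open("gap_nvc_class.png", "wb") as f:
    f.write(resp.content)
```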
Traffic Analysis Zones (TAZ) for the COG/TPB Modeled Region from the Metropolitan Washington Council of Governments. The TAZ dataset is used to join several types of zone-based transportation modeling data. For more information, visit https://plandc.dc.gov/page/traffic-analysis-zone.