Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Have you ever wanted to create your own maps, or integrate and visualize spatial datasets to examine changes in trends between locations and over time? Follow along with these training tutorials on QGIS, an open-source geographic information system (GIS), and learn key concepts, procedures and skills for performing common GIS tasks, such as creating maps and joining, overlaying and visualizing spatial datasets. These tutorials are geared towards new GIS users. We'll start with foundational concepts and build towards more advanced topics, demonstrating how, with a few relatively easy steps, you can get quite a lot out of GIS. You can then extend these skills to datasets relevant to the tasks you face in your day-to-day work.
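For a sense of the joining and overlaying tasks the tutorials cover, here is a minimal open-source sketch using GeoPandas rather than the QGIS interface the tutorials teach; the file and column names are placeholders, not part of the tutorial data.

```python
# Illustrative sketch only: an attribute join plus a polygon overlay done with
# GeoPandas instead of the QGIS interface the tutorials use.
# File names and column names (REGION_ID, POPULATION) are placeholders.
import geopandas as gpd
import pandas as pd

regions = gpd.read_file("health_regions.shp")        # polygon layer
stats = pd.read_csv("region_statistics.csv")         # plain table with REGION_ID

# Join the tabular statistics onto the polygons by their shared key
joined = regions.merge(stats, on="REGION_ID", how="left")

# Overlay with a second polygon layer to keep only the intersecting areas
flood_zones = gpd.read_file("flood_zones.shp")
overlap = gpd.overlay(joined, flood_zones, how="intersection")

overlap.plot(column="POPULATION", legend=True)
```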
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will learn to work within the free and open-source R environment with a specific focus on working with and analyzing geospatial data. We will cover a wide variety of data and spatial data analytics topics, and you will learn how to code in R along the way. The Introduction module provides more background information about the course and course setup. This course is designed for someone with some prior GIS knowledge. For example, you should know the basics of working with maps, map projections, and vector and raster data. You should be able to perform common spatial analysis tasks and make map layouts. If you do not have a GIS background, we would recommend checking out the West Virginia View GIScience class. We do not assume that you have any prior experience with R or with coding, so don't worry if you haven't developed these skills yet; that is a major goal of this course. Background material is provided using code examples, videos, and presentations. We have provided assignments to offer hands-on learning opportunities. Data links for the lecture modules are provided within each module, while data for the assignments are linked to the assignment buttons below. Please see the sequencing document for our suggested order in which to work through the material. After completing this course you will be able to:
-Prepare, manipulate, query, and generally work with data in R.
-Perform data summarization, comparisons, and statistical tests.
-Create quality graphs, map layouts, and interactive web maps to visualize data and findings.
-Present your research, methods, results, and code as web pages to foster reproducible research.
-Work with spatial data in R.
-Analyze vector and raster geospatial data to answer a question with a spatial component.
-Make spatial models and predictions using regression and machine learning.
-Code in the R language at an intermediate level.
The National Hydrography Dataset Plus (NHDPlus) maps the lakes, ponds, streams, rivers and other surface waters of the United States. Created by the US EPA Office of Water and the US Geological Survey, the NHDPlus provides mean annual and monthly flow estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus dataset see the NHDPlus v2 User Guide.
Dataset Summary
Phenomenon Mapped: Surface waters and related features of the United States and associated territories, not including Alaska
Geographic Extent: The United States, not including Alaska, Puerto Rico, Guam, US Virgin Islands, Marshall Islands, Northern Marianas Islands, Palau, Federated States of Micronesia, and American Samoa
Projection: Web Mercator Auxiliary Sphere
Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
Source: EPA and USGS
Update Frequency: There is no new data since this 2019 version, so no updates are planned
Publication Date: March 13, 2019
Prior to publication, the NHDPlus network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the NHDPlus Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, On or Off Network (flowlines only), Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original NHDPlus dataset. No-data values -9999 and -9998 were converted to Null values for many of the flowline fields.
What can you do with this layer?
Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.
ArcGIS Online
-Add this layer to a map in the Map Viewer. The layer is limited to scales of approximately 1:1,000,000 or larger, but a vector tile layer created from the same data can be used at smaller scales to produce a web map that displays across the full range of scales. The layer or a map containing it can be used in an application.
-Change the layer's transparency and set its visibility range.
-Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table, and show selected records allows you to view the selected records in the table.
-Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.
-Change the layer's style and symbology.
-Add labels and set their properties.
-Customize the pop-up.
-Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.
ArcGIS Pro
-Add this layer to a 2D or 3D map.
-Use as an input to geoprocessing. For example, copy features allows you to select then export portions of the data to a new feature class.
-Change the symbology and the attribute field used to symbolize the data.
-Open the table and make interactive selections with the map.
-Modify the pop-ups.
-Apply definition queries to create subsets of the layer.
This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
Questions? Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
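As an illustration of the filtering described above, the following sketch queries a hosted flowline layer with the ArcGIS API for Python; the service URL and field names are placeholders and assumptions, so check the layer's actual attributes before using anything like this.

```python
# Illustrative sketch only: query the hosted NHDPlus flowline layer for larger
# rivers using the ArcGIS API for Python. The layer URL is a placeholder and
# the field names (StreamOrde, GNIS_Name) are assumptions to verify first.
from arcgis.gis import GIS
from arcgis.features import FeatureLayer

gis = GIS()  # anonymous connection to ArcGIS Online
flowlines = FeatureLayer("https://<host>/arcgis/rest/services/NHDPlus/FeatureServer/0")

result = flowlines.query(
    where="StreamOrde >= 6",            # assumed stream-order field
    out_fields="GNIS_Name,StreamOrde",
    return_geometry=False,
)
for feature in result.features[:10]:
    print(feature.attributes)
```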
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about books. It has 1 row and is filtered where the book is Learning GIS using open source software : an applied guide for geo-spatial analysis. It features 7 columns including author, publication date, language, and book publisher.
Dataset for the textbook Computational Methods and GIS Applications in Social Science (3rd Edition), 2023, Fahui Wang, Lingbo Liu. Main Book Citation: Wang, F., & Liu, L. (2023). Computational Methods and GIS Applications in Social Science (3rd ed.). CRC Press. https://doi.org/10.1201/9781003292302 KNIME Lab Manual Citation: Liu, L., & Wang, F. (2023). Computational Methods and GIS Applications in Social Science - Lab Manual. CRC Press. https://doi.org/10.1201/9781003304357 KNIME Hub: Dataset and Workflow for Computational Methods and GIS Applications in Social Science - Lab Manual. Update Log: If a Python package is not found in Package Management, use ArcGIS Pro's Python Command Prompt to install it, e.g., conda install -c conda-forge python-igraph leidenalg. NetworkCommDetPro in CMGIS-V3-Tools was updated on July 10, 2024. A spatial adjacency table was added to Florida on June 29, 2024. The dataset and tool for ABM Crime Simulation were updated on August 3, 2023. The toolkits in CMGIS-V3-Tools were updated on August 3, 2023. Report Issues on GitHub: https://github.com/UrbanGISer/Computational-Methods-and-GIS-Applications-in-Social-Science Follow the website of Fahui Wang: http://faculty.lsu.edu/fahui Contents: Chapter 1. Getting Started with ArcGIS: Data Management and Basic Spatial Analysis Tools Case Study 1: Mapping and Analyzing Population Density Pattern in Baton Rouge, Louisiana Chapter 2. Measuring Distance and Travel Time and Analyzing Distance Decay Behavior Case Study 2A: Estimating Drive Time and Transit Time in Baton Rouge, Louisiana Case Study 2B: Analyzing Distance Decay Behavior for Hospitalization in Florida Chapter 3. Spatial Smoothing and Spatial Interpolation Case Study 3A: Mapping Place Names in Guangxi, China Case Study 3B: Area-Based Interpolations of Population in Baton Rouge, Louisiana Case Study 3C: Detecting Spatiotemporal Crime Hotspots in Baton Rouge, Louisiana Chapter 4. Delineating Functional Regions and Applications in Health Geography Case Study 4A: Defining Service Areas of Acute Hospitals in Baton Rouge, Louisiana Case Study 4B: Automated Delineation of Hospital Service Areas in Florida Chapter 5. GIS-Based Measures of Spatial Accessibility and Application in Examining Healthcare Disparity Case Study 5: Measuring Accessibility of Primary Care Physicians in Baton Rouge Chapter 6. Function Fittings by Regressions and Application in Analyzing Urban Density Patterns Case Study 6: Analyzing Population Density Patterns in Chicago Urban Area Chapter 7. Principal Components, Factor and Cluster Analyses and Application in Social Area Analysis Case Study 7: Social Area Analysis in Beijing Chapter 8. Spatial Statistics and Applications in Cultural and Crime Geography Case Study 8A: Spatial Distribution and Clusters of Place Names in Yunnan, China Case Study 8B: Detecting Colocation Between Crime Incidents and Facilities Case Study 8C: Spatial Cluster and Regression Analyses of Homicide Patterns in Chicago Chapter 9. Regionalization Methods and Application in Analysis of Cancer Data Case Study 9: Constructing Geographical Areas for Mapping Cancer Rates in Louisiana Chapter 10. System of Linear Equations and Application of Garin-Lowry in Simulating Urban Population and Employment Patterns Case Study 10: Simulating Population and Service Employment Distributions in a Hypothetical City Chapter 11.
Linear and Quadratic Programming and Applications in Examining Wasteful Commuting and Allocating Healthcare Providers Case Study 11A: Measuring Wasteful Commuting in Columbus, Ohio Case Study 11B: Location-Allocation Analysis of Hospitals in Rural China Chapter 12. Monte Carlo Method and Applications in Urban Population and Traffic Simulations Case Study 12A. Examining Zonal Effect on Urban Population Density Functions in Chicago by Monte Carlo Simulation Case Study 12B: Monte Carlo-Based Traffic Simulation in Baton Rouge, Louisiana Chapter 13. Agent-Based Model and Application in Crime Simulation Case Study 13: Agent-Based Crime Simulation in Baton Rouge, Louisiana Chapter 14. Spatiotemporal Big Data Analytics and Application in Urban Studies Case Study 14A: Exploring Taxi Trajectory in ArcGIS Case Study 14B: Identifying High Traffic Corridors and Destinations in Shanghai Dataset File Structure 1 BatonRouge Census.gdb BR.gdb 2A BatonRouge BR_Road.gdb Hosp_Address.csv TransitNetworkTemplate.xml BR_GTFS Google API Pro.tbx 2B Florida FL_HSA.gdb R_ArcGIS_Tools.tbx (RegressionR) 3A China_GX GX.gdb 3B BatonRouge BR.gdb 3C BatonRouge BRcrime R_ArcGIS_Tools.tbx (STKDE) 4A BatonRouge BRRoad.gdb 4B Florida FL_HSA.gdb HSA Delineation Pro.tbx Huff Model Pro.tbx FLplgnAdjAppend.csv 5 BRMSA BRMSA.gdb Accessibility Pro.tbx 6 Chicago ChiUrArea.gdb R_ArcGIS_Tools.tbx (RegressionR) 7 Beijing BJSA.gdb bjattr.csv R_ArcGIS_Tools.tbx (PCAandFA, BasicClustering) 8A Yunnan YN.gdb R_ArcGIS_Tools.tbx (SaTScanR) 8B Jiangsu JS.gdb 8C Chicago ChiCity.gdb cityattr.csv ...
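The update log above mentions installing python-igraph and leidenalg through ArcGIS Pro's Python Command Prompt. As a rough illustration of what those packages do (this is not the book's NetworkCommDetPro workflow), a minimal Leiden community-detection sketch might look like this, with a made-up toy graph:

```python
# Illustrative sketch only (not the book's NetworkCommDetPro tool): Leiden
# community detection with the python-igraph and leidenalg packages mentioned
# in the update log. The toy edges and weights below are made up.
import igraph as ig
import leidenalg as la

edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
weights = [10, 8, 7, 9, 6, 5, 1]

g = ig.Graph(edges=edges, directed=False)
g.es["weight"] = weights

# Partition the graph into communities by modularity maximization (Leiden)
partition = la.find_partition(g, la.ModularityVertexPartition, weights="weight")
print(partition.membership)  # community label for each vertex
```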
This dataset combines the work of several different projects to create a seamless data set for the contiguous United States. Data from four regional Gap Analysis Projects and the LANDFIRE project were combined to make this dataset. In the northwestern United States (Idaho, Oregon, Montana, Washington and Wyoming) data in this map came from the Northwest Gap Analysis Project. In the southwestern United States (Colorado, Arizona, Nevada, New Mexico, and Utah) data used in this map came from the Southwest Gap Analysis Project. The data for Alabama, Florida, Georgia, Kentucky, North Carolina, South Carolina, Mississippi, Tennessee, and Virginia came from the Southeast Gap Analysis Project, and the California data was generated by the updated California Gap land cover project. The Hawaii Gap Analysis Project provided the data for Hawaii. In areas of the country (central U.S., Northeast, Alaska) that have not yet been covered by a regional Gap Analysis Project, data from the LANDFIRE project was used. Similarities in the methods used by these projects made it possible to combine the data they derived into one seamless coverage. They all used multi-season satellite imagery (Landsat ETM+) from 1999-2001 in conjunction with digital elevation model (DEM) derived datasets (e.g. elevation, landform) to model natural and semi-natural vegetation. Vegetation classes were drawn from NatureServe's Ecological System Classification (Comer et al. 2003) or classes developed by the Hawaii Gap project. Additionally, all of the projects included land use classes that were employed to describe areas where natural vegetation has been altered. In many areas of the country these classes were derived from the National Land Cover Dataset (NLCD). For the majority of classes, and in most areas of the country, a decision tree classifier was used to discriminate ecological system types. In some areas of the country, more manual techniques were used to discriminate small patch systems and systems not distinguishable through topography. The data contains multiple levels of thematic detail. At the most detailed level, natural vegetation is represented by NatureServe's Ecological System classification (or, in Hawaii, the Hawaii GAP classification). These most detailed classifications have been crosswalked to the five highest levels of the National Vegetation Classification (NVC): Class, Subclass, Formation, Division and Macrogroup. This crosswalk allows users to display and analyze the data at different levels of thematic resolution. Developed areas, or areas dominated by introduced species, timber harvest, or water, are represented by other classes, collectively referred to as land use classes; these land use classes occur at each of the thematic levels. Raster data in both ArcGIS Grid and ERDAS Imagine format is available for download at http://gis1.usgs.gov/csas/gap/viewer/land_cover/Map.aspx. Six layer files are included in the download packages to assist the user in displaying the data at each of the thematic levels in ArcGIS. In addition to the raster datasets, the data is available as web map services for each of the six NVC classification levels (Class, Subclass, Formation, Division, Macrogroup, Ecological System) at the following links.
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Class_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Subclass_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Formation_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Division_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Macrogroup_Landuse/MapServer
http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_Ecological_Systems_Landuse/MapServer
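As a quick illustration of how map services like those listed above can be used programmatically, the sketch below requests a rendered image from one endpoint via the standard ArcGIS REST export operation; the bounding box is arbitrary and the legacy USGS service may no longer respond.

```python
# Illustrative sketch only: request a rendered map image from one of the
# MapServer endpoints listed above using the standard ArcGIS REST "export"
# operation. The bounding box is arbitrary and the legacy USGS service may
# no longer be online.
import requests

service = ("http://gis1.usgs.gov/arcgis/rest/services/gap/"
           "GAP_Land_Cover_NVC_Class_Landuse/MapServer")

params = {
    "bbox": "-13900000,3500000,-12700000,4700000",  # xmin, ymin, xmax, ymax
    "bboxSR": 3857,        # spatial reference of the bbox (Web Mercator)
    "size": "800,800",     # output image size in pixels
    "format": "png",
    "f": "image",          # return the image bytes directly
}

resp = requests.get(f"{service}/export", params=params, timeout=60)
resp.raise_for_status()
with open("gap_nvc_class.png", "wb") as out:
    out.write(resp.content)
```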
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains the recreation opportunity information that the Forest Service collects through the Recreation Portal and shares with the public on https://www.recreation.gov, the Forest Service World Wide Web pages (https://www.fs.usda.gov/) and the Interactive Visitor Map. This recreation data contains detailed descriptions of recreational sites, areas, activities and facilities. This published dataset consists of one point feature class for recreational areas, one spatial view, and three related tables: activities, facilities, and rec area advisories. The purpose of each related table is described below.
RECAREAACTIVITIES: This related table contains information about the activities that are associated with the rec area.
RECAREAFACILITIES: This related table contains information about the amenities that are associated with the rec area.
RECAREAADVISORIES: This table contains events, news, alerts and warnings that are associated with the rec area.
RECAREAACTIVITIES_V: This spatial view/feature class is generated by joining the RECAREAACTIVITIES table to the RECREATION OPPORTUNITIES feature class.
Please note that the RECAREAID is the unique identifier present in the point feature class and in the related tables as well. The RECAREAID is used as a foreign key to access related records. This published data is updated nightly from an XML feed maintained by the CIO Rec Portal team. This data is intended for public use and distribution.
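Because RECAREAID is the shared key between the point feature class and the related tables, a simple tabular join reproduces the logic of the RECAREAACTIVITIES_V spatial view. A minimal sketch, assuming the layers have been exported to CSV (file names are placeholders):

```python
# Illustrative sketch only: join the related activities table to the
# recreation area points on the RECAREAID key described above, assuming both
# have been exported to CSV. File names are placeholders.
import pandas as pd

rec_areas = pd.read_csv("recreation_opportunities.csv")   # point feature class export
activities = pd.read_csv("recareaactivities.csv")         # related table export

# RECAREAID is the unique identifier in the point layer and the foreign key
# in the related tables, so a left join attaches activities to each rec area.
rec_with_activities = rec_areas.merge(activities, on="RECAREAID", how="left")
print(rec_with_activities.head())
```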
High resolution land cover dataset for the City of Boston, MA. Seven land cover classes were mapped: (1) tree canopy, (2) grass/shrub, (3) bare earth, (4) water, (5) buildings, (6) roads, and (7) other paved surfaces. The primary sources used to derive this land cover layer were 2013 LiDAR data, 2014 orthoimagery, and 2016 NAIP imagery. Ancillary data sources included GIS data provided by the City of Boston, MA or created by the UVM Spatial Analysis Laboratory. Object-based image analysis (OBIA) techniques were employed to extract land cover information using the best available remotely sensed and vector GIS datasets. OBIA systems work by grouping pixels into meaningful objects based on their spectral and spatial properties, while taking into account boundaries imposed by existing vector datasets. Within the OBIA environment, a rule-based expert system was designed to effectively mimic the process of manual image analysis by incorporating the elements of image interpretation (color/tone, texture, pattern, location, size, and shape) into the classification process. A series of morphological procedures were employed to ensure that the end product is both accurate and cartographically pleasing. Following the automated OBIA mapping, a detailed manual review of the dataset was carried out at a scale of 1:2500 and all observable errors were corrected.
Credits: University of Vermont Spatial Analysis Laboratory in collaboration with the City of Boston, Trust for Public Lands, and City of Cambridge.
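A minimal sketch of summarizing the seven mapped classes from a raster export of this dataset; the file name, and the assumption that pixel values 1 through 7 encode the classes in the order listed above, are placeholders.

```python
# Illustrative sketch only: tabulate the area of the seven land cover classes
# from a raster export of this dataset. The file name and the assumption that
# pixel values 1-7 encode the classes in the order listed above are placeholders.
import numpy as np
import rasterio

classes = {1: "tree canopy", 2: "grass/shrub", 3: "bare earth", 4: "water",
           5: "buildings", 6: "roads", 7: "other paved surfaces"}

with rasterio.open("boston_landcover.tif") as src:
    data = src.read(1)
    cell_area_m2 = abs(src.transform.a * src.transform.e)  # cell width x height (assumes a CRS in meters)

for value, name in classes.items():
    count = int(np.count_nonzero(data == value))
    print(f"{name}: {count * cell_area_m2 / 1e6:.2f} km^2")
```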
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this course, you will explore a variety of open-source technologies for working with geospatial data, performing spatial analysis, and undertaking general data science. The first component of the class focuses on the use of QGIS and associated technologies (GDAL, PROJ, GRASS, SAGA, and Orfeo Toolbox). The second component of the class introduces Python and associated open-source libraries and modules (NumPy, Pandas, Matplotlib, Seaborn, GeoPandas, Rasterio, WhiteboxTools, and Scikit-Learn) used by geospatial scientists and data scientists. We also provide an introduction to Structured Query Language (SQL) for performing table and spatial queries. This course is designed for individuals who have a background in GIS, such as working in the ArcGIS environment, but no prior experience using open-source software and/or coding. You will be asked to work through a series of lecture modules and videos broken into several topic areas, as outlined below. Fourteen assignments and the required data have been provided as hands-on opportunities to work with data and the discussed technologies and methods. If you have any questions or suggestions, feel free to contact us. We hope to continue to update and improve this course. This course was produced by West Virginia View (http://www.wvview.org/) with support from AmericaView (https://americaview.org/). This material is based upon work supported by the U.S. Geological Survey under Grant/Cooperative Agreement No. G18AP00077. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey. After completing this course you will be able to:
-Apply QGIS to visualize, query, and analyze vector and raster spatial data.
-Use available resources to further expand your knowledge of open-source technologies.
-Describe and use a variety of open data formats.
-Code in Python at an intermediate level.
-Read, summarize, visualize, and analyze data using open Python libraries.
-Create spatial predictive models using Python and associated libraries.
-Use SQL to perform table and spatial queries at an intermediate level.
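As a small taste of the open-source workflow the course teaches, the sketch below reads a vector layer with GeoPandas, applies an attribute query, and plots the result with Matplotlib; the file and column names are placeholders.

```python
# Illustrative sketch only of the open-source workflow the course covers:
# read a vector layer with GeoPandas, filter with an attribute query, and plot
# with Matplotlib. The shapefile and column names are placeholders.
import geopandas as gpd
import matplotlib.pyplot as plt

counties = gpd.read_file("counties.shp")           # any vector format GDAL can read
large = counties[counties["POP2020"] > 100_000]    # simple attribute query

ax = counties.plot(color="lightgrey", edgecolor="white", figsize=(8, 6))
large.plot(ax=ax, color="steelblue")
ax.set_title("Counties with population over 100,000")
plt.savefig("counties.png", dpi=150)
```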
This specialized location dataset delivers detailed information about marina establishments. Maritime industry professionals, coastal planners, and tourism researchers can leverage precise location insights to understand maritime infrastructure, analyze recreational boating landscapes, and develop targeted strategies.
How Do We Create Polygons?
-All our polygons are manually crafted using advanced GIS tools like QGIS, ArcGIS, and similar applications. This involves leveraging aerial imagery, satellite data, and street-level views to ensure precision.
-Beyond visual data, our expert GIS data engineers integrate venue layout/elevation plans sourced from official company websites to construct highly detailed polygons. This meticulous process ensures maximum accuracy and consistency.
-We verify our polygons through multiple quality assurance checks, focusing on accuracy, relevance, and completeness.
What's More?
-Custom Polygon Creation: Our team can build polygons for any location or category based on your requirements. Whether it's a new retail chain, transportation hub, or niche point of interest, we've got you covered.
-Enhanced Customization: In addition to polygons, we capture critical details such as entry and exit points, parking areas, and adjacent pathways, adding greater context to your geospatial data.
-Flexible Data Delivery Formats: We provide datasets in industry-standard GIS formats like WKT, GeoJSON, Shapefile, and GDB, making them compatible with various systems and tools.
-Regular Data Updates: Stay ahead with our customizable refresh schedules, ensuring your polygon data is always up-to-date for evolving business needs.
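For reference, polygon data delivered in the formats listed above can be loaded with standard open-source tools. A minimal sketch (the WKT string and file name are made up):

```python
# Illustrative sketch only: loading polygon data delivered in two of the
# formats named above (WKT and GeoJSON). The WKT string and file name are
# made up; Shapefile and GDB deliveries load the same way via geopandas.read_file().
import geopandas as gpd
from shapely import wkt

# A single polygon delivered as WKT text
marina = wkt.loads(
    "POLYGON ((-71.05 42.36, -71.04 42.36, -71.04 42.37, -71.05 42.37, -71.05 42.36))"
)
print(marina.bounds)

# A full dataset delivered as GeoJSON
marinas = gpd.read_file("marinas.geojson")
print(marinas.head())
```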
Unlock the Power of POI and Geospatial Data
With our robust polygon datasets and point-of-interest data, you can:
-Perform detailed market and location analyses to identify growth opportunities.
-Pinpoint the ideal locations for your next store or business expansion.
-Decode consumer behavior patterns using geospatial insights.
-Execute location-based marketing campaigns for better ROI.
-Gain an edge over competitors by leveraging geofencing and spatial intelligence.
Why Choose LocationsXYZ?
LocationsXYZ is trusted by leading brands to unlock actionable business insights with our accurate and comprehensive spatial data solutions. Join our growing network of successful clients who have scaled their operations with precise polygon and POI datasets. Request your free sample today and explore how we can help accelerate your business growth.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Literature review dataset
This table lists the surveyed papers concerning the application of spatial analysis, GIS (Geographic Information Systems), general geographic approaches, and geostatistics to the assessment of CoViD-19 dynamics. The period of survey is from January 1st, 2020 to December 15th, 2020. The first column lists the reference. The second lists the date of publication (preferably, the date of online publication). The third column lists the country or countries and/or the subnational entities investigated. The fourth column lists the epidemiological data utilized in each paper. The fifth column lists other types of data utilized for the analysis. The sixth column lists the more traditional statistical methods, if utilized. The seventh column lists the geostatistical, GIS or geographic methods, if utilized. The eighth column sums up the findings of each paper. The papers are also classified within seven thematic categories. The full references are available at the end of the table in alphabetical order.
This table was the basis for a comprehensive geographic literature review. It aims to be a useful tool to ease the due-diligence work of researchers interested in the spatial analysis of the pandemic.
The reference to cite the related paper is the following:
Pranzo, A.M.R., Dai Prà, E. & Besana, A. Epidemiological geography at work: An exploratory review about the overall findings of spatial analysis applied to the study of CoViD-19 propagation along the first pandemic year. GeoJournal (2022). https://doi.org/10.1007/s10708-022-10601-y
To read the manuscript please follow this link: https://doi.org/10.1007/s10708-022-10601-y
The Local Employment Dynamics (LED) Partnership is a voluntary federal-state enterprise created for the purpose of merging employee and employer data to provide a set of enhanced labor market statistics known collectively as Quarterly Workforce Indicators (QWI). The QWI are a set of economic indicators including employment, job creation, earnings, and other measures of employment flows. For the purposes of this dataset, LED data for 2018 is aggregated to Census Summary Level 070 (State + County + County Subdivision + Place/Remainder) and joined with the Emergency Solutions Grantee (ESG) areas spatial dataset for FY2018. The Emergency Solutions Grants (ESG) program, formerly the Emergency Shelter Grants program, is designed to identify sheltered and unsheltered homeless persons, as well as those at risk of homelessness, and provide the services necessary to help those persons quickly regain stability in permanent housing after experiencing a housing crisis and/or homelessness. The ESG is a non-competitive formula grant awarded to recipients which are state governments, large cities, urban counties, and U.S. territories. Recipients make these funds available to eligible sub-recipients, which can be either local government agencies or private nonprofit organizations. The recipient agencies and organizations, which actually run the homeless assistance projects, apply for ESG funds to the governmental grantee, and not directly to HUD. Please note that this version of the data does not include Community Planning and Development (CPD) entitlement grantees. LED data for CPD entitlement areas can be obtained from the LED for CDBG Grantee Areas feature service. To learn more about the Local Employment Dynamics (LED) Partnership visit https://lehd.ces.census.gov/; for questions about the spatial attribution of this dataset, please reach out to us at GISHelpdesk@hud.gov. Data Dictionary: DD_LED for ESG Grantee Areas
Date of Coverage: ESG-2021/LED-2018
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Note: The schema changed in February 2025 - please see below. We will post a roadmap of upcoming changes, but service URLs and schema are now stable. For deployment status of new services beginning in February 2025, see https://gis.data.ca.gov/pages/city-and-county-boundary-data-status. Additional roadmap and status links are at the bottom of this metadata. This dataset is regularly updated as the source data from CDTFA is updated, as often as many times a month. If you require unchanging point-in-time data, export a copy for your own use rather than using the service directly in your applications.
Purpose
County boundaries along with third-party identifiers used to join in external data. Boundaries are from the California Department of Tax and Fee Administration (CDTFA). These boundaries are the best available statewide data source in that CDTFA receives changes in incorporation and boundary lines from the Board of Equalization, who receives them from local jurisdictions for tax purposes. Boundary accuracy is not guaranteed, and though CDTFA works to align boundaries based on historical records and local changes, errors will exist. If you require a legal assessment of boundary location, contact a licensed surveyor. This dataset joins in multiple attributes and identifiers from the US Census Bureau and Board on Geographic Names to facilitate adding additional third-party data sources. In addition, we attach attributes of our own to ease and reduce common processing needs and questions. Finally, coastal buffers are separated into separate polygons, leaving the land-based portions of jurisdictions and coastal buffers in adjacent polygons. This feature layer is for public use.
Related Layers
This dataset is part of a grouping of many datasets:
-Cities: Only the city boundaries and attributes, without any unincorporated areas (With Coastal Buffers; Without Coastal Buffers)
-Counties: Full county boundaries and attributes, including all cities within as a single polygon (With Coastal Buffers - this dataset; Without Coastal Buffers)
-Cities and Full Counties: A merge of the other two layers, so polygons overlap within city boundaries. Some customers require this behavior, so we provide it as a separate service. (With Coastal Buffers; Without Coastal Buffers)
-City and County Abbreviations
-Unincorporated Areas (Coming Soon)
-Census Designated Places
-Cartographic Coastline (Polygon; Line source - Coming Soon)
-State Boundary (With Bay Cuts; Without Bay Cuts)
Working with Coastal Buffers
The dataset you are currently viewing includes the coastal buffers for cities and counties that have them in the source data from CDTFA. In the versions where they are included, they remain as a second polygon on cities or counties that have them, with all the same identifiers, and a value in the COASTAL field indicating whether it's an ocean or a bay buffer. If you wish to have a single polygon per jurisdiction that includes the coastal buffers, you can run a Dissolve on the version that has the coastal buffers on all the fields except OFFSHORE and AREA_SQMI to get a version with the correct identifiers.
Point of Contact
California Department of Technology, Office of Digital Services, gis@state.ca.gov
Field and Abbreviation Definitions
CDTFA_COUNTY: CDTFA county name. For counties, this will be the name of the polygon itself. For cities, it is the name of the county the city polygon is within.
CDTFA_COPRI: County number followed by the 3-digit city primary number used in the Board of Equalization's 6-digit tax rate area numbering system. The boundary data originate with CDTFA's teams managing tax rate information, so this field is preserved and flows into this dataset.
CENSUS_GEOID: Numeric geographic identifiers from the US Census Bureau.
CENSUS_PLACE_TYPE: City, County, or Town, stripped off the census name for identification purposes.
GNIS_PLACE_NAME: Board on Geographic Names authorized nomenclature for area names published in the Geographic Name Information System.
GNIS_ID: The numeric identifier from the Board on Geographic Names that can be used to join these boundaries to other datasets utilizing this identifier.
CDT_COUNTY_ABBR: Abbreviations of county names - originally derived from CalTrans Division of Local Assistance and now managed by CDT. Abbreviations are 3 characters.
CDT_NAME_SHORT: The name of the jurisdiction (city or county) with the word "City" or "County" stripped off the end. Some changes may come to how we process this value to make it more consistent.
AREA_SQMI: The area of the administrative unit (city or county) in square miles, calculated in EPSG 3310 California Teale Albers.
OFFSHORE: Indicates if the polygon is a coastal buffer. Null for land polygons. Additional values include "ocean" and "bay".
PRIMARY_DOMAIN: Currently empty/null for all records. Placeholder field for the official URL of the city or county.
CENSUS_POPULATION: Currently null for all records. In the future, it will include the most recent US Census population estimate for the jurisdiction.
GlobalID: While all of the layers we provide in this dataset include a GlobalID field with unique values, we do not recommend you make any use of it. The GlobalID field exists to support offline sync, but is not persistent, so data keyed to it will be orphaned at our next update. Use one of the other persistent identifiers, such as GNIS_ID or GEOID, instead.
Boundary Accuracy
County boundaries were originally derived from a 1:24,000 accuracy dataset, with improvements made in some places to boundary alignments based on research into historical records and boundary changes as CDTFA learns of them. City boundary data are derived from pre-GIS tax maps, digitized at BOE and CDTFA, with adjustments made directly in GIS for new annexations, detachments, and corrections. Boundary accuracy within the dataset varies. While CDTFA strives to correctly include or exclude parcels from jurisdictions for accurate tax assessment, this dataset does not guarantee that a parcel is placed in the correct jurisdiction. When a parcel is in the correct jurisdiction, this dataset cannot guarantee accurate placement of boundary lines within or between parcels or rights of way. This dataset also provides no information on parcel boundaries. For exact jurisdictional or parcel boundary locations, please consult the county assessor's office and a licensed surveyor. CDTFA's data is used as the best available source because BOE and CDTFA receive information about changes in jurisdictions which otherwise would need to be collected independently by an agency or company to compile into usable map boundaries. CDTFA maintains the best available statewide boundary information. CDTFA's source data notes the following about accuracy: City boundary changes and county boundary line adjustments filed with the Board of Equalization per Government Code 54900. This GIS layer contains the boundaries of the unincorporated county and incorporated cities within the state of California. The initial dataset was created in March of 2015 and was based on the State Board of Equalization tax rate area boundaries.
As of April 1, 2024, the maintenance of this dataset is provided by the California Department of Tax and Fee Administration for the purpose of determining sales and use tax rates. The boundaries are continuously being revised to align with aerial imagery when areas of conflict are discovered between the original boundary provided by the California State Board of Equalization and the boundary made publicly available by local, state, and federal government. Some differences may occur between actual recorded boundaries and the boundaries used for sales and use tax purposes. The boundaries in this map are representations of taxing jurisdictions for the purpose of determining sales and use tax rates and should not be used to determine precise city or county boundary line locations.
Boundary Processing
These data make a structural change from the source data. While the full boundaries provided by CDTFA include coastal buffers of varying sizes, many users need boundaries to end at the shoreline of the ocean or a bay. As a result, after examining existing city and county boundary layers, these datasets provide a coastline cut generally along the ocean-facing coastline. For county boundaries in northern California, the cut runs near the Golden Gate Bridge, while for cities, we cut along the bay shoreline and into the edge of the Delta at the boundaries of Solano, Contra Costa, and Sacramento counties. In the services linked above, the versions that include the coastal buffers contain them as a second (or third) polygon for the city or county, with the value in the COASTAL field set to whether it's a bay or ocean polygon. These can be processed back into a single polygon by dissolving on all the fields you wish to keep, since the attributes, other than the COASTAL field and geometry attributes (like areas), remain the same between the polygons for this purpose.
Slivers
In cases where a city or county's boundary ends near a coastline, our coastline data may cross back and forth many times while roughly paralleling the jurisdiction's boundary, resulting in many polygon slivers. We post-process the data to remove these slivers using a city/county boundary priority algorithm. That is, when the data run parallel to each other, we discard the coastline cut and keep the CDTFA-provided boundary, even if it extends into the ocean a small amount. This processing supports consistent boundaries for Fort Bragg, Point Arena, San Francisco, Pacifica, Half Moon Bay, and Capitola, in addition to others. More information on this algorithm will be provided soon.
Coastline Caveats
Some cities have buffers extending into water bodies that we do not cut at the shoreline. These include South Lake Tahoe and Folsom, which extend into neighboring lakes, and San Diego and surrounding cities that extend into San Diego Bay, which our shoreline encloses. If you have feedback on the exclusion of these
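A minimal sketch of the dissolve described above, done in GeoPandas instead of ArcGIS; the exported file name is a placeholder, and GlobalID is also excluded because, as noted in the field definitions, it is not persistent and differs per polygon.

```python
# Illustrative sketch only of the dissolve described above, using GeoPandas
# instead of ArcGIS: collapse each jurisdiction's land and coastal-buffer
# polygons into one feature. The exported file name is a placeholder; GlobalID
# is also dropped because the field definitions note it is not persistent.
import geopandas as gpd

counties = gpd.read_file("counties_with_coastal_buffers.geojson")

drop = {"OFFSHORE", "AREA_SQMI", "GlobalID", "geometry"}
keep = [c for c in counties.columns if c not in drop]

# dropna=False keeps groups even where identifier fields (e.g. PRIMARY_DOMAIN) are null
dissolved = counties.dissolve(by=keep, as_index=False, dropna=False)
print(len(counties), "polygons before,", len(dissolved), "after")
```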
Data set that contains information on archaeological remains of the prehistoric settlement of the Letolo valley on Savaii in Samoa. It is built in ESRI ArcMap and is based on previously unpublished surveys made by the Peace Corps volunteer Gregory Jackmond in 1976-78, and to a lesser degree on excavations made by Helene Martinsson Wallin and Paul Wallin. The settlement was in use from at least 1000 AD to about 1700-1800. Since abandonment it has been covered by thick jungle. However, by the time of the survey by Jackmond (1976-78) it was grazed by cattle and the remains were visible. The survey is on file at the Auckland War Memorial Museum and has hitherto been unpublished. A copy of the survey was accessed by Olof Håkansson through Martinsson Wallin and Wallin, and as part of a Masters Thesis in Archaeology at Uppsala University it has been digitised.
Olof Håkansson built the database structure in the software from ESRI, and digitised the data in 2015 to 2017. One of the aims of the Masters Thesis was to discuss hierarchies. To do this, subsets of the data have been displayed in various ways on maps. Another aim was to discuss archaeological methodology when working with spatial data, but the data in itself can be used without regard to the questions asked in the Masters Thesis. All data that was unclear has been removed in an effort to avoid errors being introduced. Even so, if there are mistakes in the data set, they are the responsibility of the researcher, Olof Håkansson. A more comprehensive account of the aim, questions, purpose, method, as well as the results of the research, is to be found in the Masters Thesis itself. Direct link: http://uu.diva-portal.org/smash/record.jsf?pid=diva2%3A1149265&dswid=9472
Purpose:
The purpose is to examine hierarchies in prehistoric Samoa. The purpose is further to make the produced data sets available for study.
Prehistoric remains of the settlement of Letolo on the Island of Savaii in Samoa in Polynesia
The Digital Surficial Geologic-GIS Map of Saugus Iron Works National Historic Site, Massachusetts is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) an ESRI file geodatabase (sair_surficial_geology.gdb), 2.) an Open Geospatial Consortium (OGC) geopackage, and 3.) a 2.2 KMZ/KML file for use in Google Earth; however, this format version of the map is limited in the data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with 1.) an ArcGIS Pro 3.X map (.mapx) file (sair_surficial_geology.mapx) and individual Pro 3.X layer (.lyrx) files (for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) a readme file (sair_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (sair_geology.pdf), which contains geologic unit descriptions as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (sair_surficial_geology_metadata_faq.pdf). Please read the sair_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri.htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: U.S. Geological Survey. Detailed information concerning the sources used and their contribution to the GRI product are listed in the Source Citation section(s) of this metadata record (sair_surficial_geology_metadata.txt or sair_surficial_geology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:24,000 and United States National Map Accuracy Standards, features are within (horizontally) 12.2 meters or 40 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS Pro, QGIS or other software used to display this dataset.
All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3. (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
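For users working outside ArcGIS, a minimal sketch of inspecting the file geodatabase named above with open-source tools; the layer index is a placeholder, so pick a real layer from whatever listlayers() reports.

```python
# Illustrative sketch only, for users working outside ArcGIS: list the layers
# in the file geodatabase named above and load one with GeoPandas (GDAL's
# OpenFileGDB driver reads ESRI file geodatabases). The layer index is a
# placeholder; choose a real layer from what listlayers() reports.
import fiona
import geopandas as gpd

gdb = "sair_surficial_geology.gdb"

layers = fiona.listlayers(gdb)
print(layers)

geology = gpd.read_file(gdb, layer=layers[0])
print(geology.crs, len(geology), "features")
```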
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets - Uganda Lakes - are sourced from the Ugandan Energy Sector GIS Working Group Open Data Site, developed and maintained by the Ugandan Energy Sector GIS Working Group. The Ugandan Energy Sector GIS Working Group's mission is to develop a high-quality GIS for the energy sector of Uganda in order to drive informed decision-making. As such, it brings datasets together in one place, organizes them, keeps them updated, and makes public data available to all stakeholders. Link: http://data-energy-gis.opendata.arcgis.com/ The dataset was published on October 23, 2014.
The National Hydrography Dataset Plus High Resolution (NHDPlus High Resolution) maps the lakes, ponds, streams, rivers and other surface waters of the United States. Created by the US Geological Survey, NHDPlus High Resolution provides mean annual flow and velocity estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus High Resolution dataset see the User's Guide for the National Hydrography Dataset Plus (NHDPlus) High Resolution.
Dataset Summary
Phenomenon Mapped: Surface waters and related features of the United States and associated territories
Geographic Extent: The contiguous United States, Hawaii, portions of Alaska, Puerto Rico, Guam, US Virgin Islands, Northern Marianas Islands, and American Samoa
Projection: Web Mercator Auxiliary Sphere
Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
Source: USGS
Update Frequency: Annual
Publication Date: July 2022
This layer was symbolized in the ArcGIS Map Viewer; while the features will draw in the Classic Map Viewer, the advanced symbology will not. Prior to publication, the network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer's pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original dataset. No-data values -9999 and -9998 were converted to Null values.
What can you do with this layer?
Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro. Below are just a few of the things you can do with a feature service in Online and Pro.
ArcGIS Online
-Add this layer to a map in the Map Viewer. The layer or a map containing it can be used in an application.
-Change the layer's transparency and set its visibility range.
-Open the layer's attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table, and show selected records allows you to view the selected records in the table.
-Apply filters. For example, you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.
-Change the layer's style and symbology.
-Add labels and set their properties.
-Customize the pop-up.
-Use as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams, and the extract data tool can be used to create copies of portions of the data.
ArcGIS Pro
-Add this layer to a 2D or 3D map.
-Use as an input to geoprocessing. For example, copy features allows you to select then export portions of the data to a new feature class.
-Change the symbology and the attribute field used to symbolize the data.
-Open the table and make interactive selections with the map.
-Modify the pop-ups.
-Apply definition queries to create subsets of the layer.
This layer is part of the ArcGIS Living Atlas of the World, which provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.
Questions? Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
Geographic Information System (GIS) analyses are an essential part of natural resource management and research. Calculating and summarizing data within intersecting GIS layers is common practice for analysts and researchers. However, the various tools and steps required to complete this process are slow and tedious, requiring many tools iterating over hundreds or even thousands of datasets. USGS scientists will combine a series of ArcGIS geoprocessing capabilities with custom scripts to create tools that will calculate, summarize, and organize large amounts of data that can span many temporal and spatial scales with minimal user input. The tools work with polygons, lines, points, and rasters to calculate relevant summary data and combine them into a single output table that can be easily incorporated into statistical analyses. These tools are useful for anyone interested in using an automated script to quickly compile summary information within all areas of interest in a GIS dataset.
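The released toolbox itself should be used for real work; purely as an illustration of the clip-and-summarize pattern it automates, a stripped-down arcpy sketch might look like the following, with the workspace, layer names, and outputs all placeholders.

```python
# Illustrative sketch only (not the released toolbox): clip several input
# layers to each area of interest and tally feature counts with arcpy.
# The workspace, layer names, and in-memory outputs are placeholders.
import arcpy

arcpy.env.workspace = r"C:\data\example.gdb"

areas = "areas_of_interest"                   # polygon feature class of analysis areas
inputs = ["streams", "roads", "nest_points"]  # layers to summarize within each area

summary = []
with arcpy.da.SearchCursor(areas, ["OID@", "SHAPE@"]) as cursor:
    for oid, shape in cursor:
        counts = {}
        for fc in inputs:
            clipped = arcpy.analysis.Clip(fc, shape, f"memory\\clip_{fc}_{oid}")
            counts[fc] = int(arcpy.management.GetCount(clipped)[0])
        summary.append((oid, counts))

for oid, counts in summary:
    print(oid, counts)
```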
Toolbox Use
License
Creative Commons-PDDC
Recommended Citation
Welty JL, Jeffries MI, Arkle RS, Pilliod DS, Kemp SK. 2021. GIS Clipping and Summarization Toolbox: U.S. Geological Survey Software Release. https://doi.org/10.5066/P99X8558
Rauer Group 1:50,000 topographic GIS dataset. Data conforms to the SCAR Feature Catalogue, which can be searched. 10-metre contour interval on rock; 20-metre contour interval on ice up to 100 metres; 100-metre contour interval on ice above 100 metres.
This dataset represents a unique compiled environmental data set for the circumpolar Arctic Ocean region, 45N to 90N. It consists of 170 layers (mostly marine, some terrestrial) in ArcGIS 10 format to be used with a Geographic Information System (GIS), which are listed below in detail. Most layers are long-term average raster GRIDs for the summer season, often by ocean depth, and represent value-added products that are easy to use. The sources of the data are manifold, such as the World Ocean Atlas 2009 (WOA09), the International Bathymetric Chart of the Arctic Ocean (IBCAO), Canadian Earth System Model 2 (CanESM2) data (the newest generation of models available), and data sources such as plankton databases and OBIS. Ocean layers were modeled and predicted into the future, and zooplankton species were modeled based on future data: Calanus hyperboreus (AphiaID 104467), Metridia longa (AphiaID 104632), M. pacifica (AphiaID 196784) and Thysanoessa raschii (AphiaID 110711). Some layers are derived within ArcGIS. Layers have pixel sizes between 1215.819573 meters and 25257.72929 meters for the best pooled model, and between 224881.2644 and 672240.4095 meters for future climate data. Data were then reprojected into North Pole Stereographic projection in meters (WGS84 as the geographic datum). Also, future layers are included as a selected subset of proposed future climate layers from the Canadian CanESM2 for the next 100 years (scenario runs rcp26 and rcp85). The following layer groups are available: bathymetry (depth, derived slope and aspect); proximity layers (to glaciers, sea ice, protected areas, wetlands, shelf edge); dissolved oxygen, apparent oxygen, percent oxygen, nitrogen, phosphate, salinity, silicate (all for August and for 9 depth classes); runoff (proximity, annual and August); sea surface temperature; waterbody temperature (12 depth classes); modeled ocean boundary layers (H1, H2, H3 and Wx). This dataset is used for an M.Sc. thesis by the author and is freely available upon request. For questions and details we suggest contacting the authors. Process_Description: Please contact Moritz Schmid for the thesis and detailed explanations. Short version: We model predicted here for the first time ocean layers in the Arctic Ocean based on a unique dataset of physical oceanography. Moreover, we developed presence/random absence models that indicate where the studied zooplankton species are most likely to be present in the Arctic Ocean. Apart from that, we developed the first spatially explicit models known to science that describe the depth at which the studied zooplankton species are most likely to be found, as well as their distribution of life stages. We did not only do this for one present-day scenario: we modeled five different scenarios and future climate data. First, we model predicted ocean layers using the most up-to-date data from various open access sources, referred to here as best-pooled model data. We decided to model this set of stratification layers after discussions and input of expert knowledge by Professor Igor Polyakov from the International Arctic Research Center at the University of Alaska Fairbanks. We predicted those stratification layers because those are the boundaries and layers that the plankton has to cross for diel vertical migration, and a change in those would most likely affect the migration. I assigned four variables to the stratification layers: H1, H2, H3 and Wx. H1 is the lower boundary of the mixed layer depth.
Above this layer, atmospheric disturbance causes a lot of mixing of the water, giving the mixed layer its name. H2, the middle of the halocline, is important because in this part of the ocean a strong gradient in salinity and temperature separates water layers. H3, the isotherm, is important because beneath it flows denser and colder Atlantic water. Wx summarizes the overall width of the described water column. Ocean layers were predicted using machine learning algorithms (TreeNet, Salford Systems). Second, ocean layers were included as predictors and used to predict the presence/random absence, most likely depth, and life stage layers for the zooplankton species Calanus hyperboreus, Metridia longa, Metridia pacifica and Thysanoessa raschii. This process was repeated for future predictions based on the CanESM2 data (see the data section). For the zooplankton species, the following layers were developed, including for the future. C. hyperboreus: best-pooled model as well as future predictions (rcp26 including ocean layers (also excluding), rcp85 including ocean layers (also excluding)) for 2010 and 2100, for parameters: presence/random absence, most likely depth and life stage layers. M. longa: best-pooled model as well as future predictions (rcp26 including ocean layers (also excluding), rcp85 including ocean layers (also excluding)) for 2010 and 2100. For parameters: Presence/rand... Visit https://dataone.org/datasets/f63d0f6c-7d53-46ce-b755-42a368007601 for complete metadata about this dataset.
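The species models described above were fit with TreeNet (Salford Systems). Purely as an open-source illustration of the presence/random-absence idea, not the thesis workflow, a sketch with scikit-learn gradient boosting might look like this; the CSV file and predictor columns are made up.

```python
# Illustrative sketch only (the thesis used TreeNet from Salford Systems):
# a presence/random-absence model fit with scikit-learn gradient boosting.
# The CSV file and its predictor columns are made up for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("c_hyperboreus_samples.csv")
predictors = ["depth", "salinity", "water_temp", "dissolved_oxygen", "distance_to_ice"]
X, y = df[predictors], df["presence"]        # 1 = presence, 0 = random absence

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```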