The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. The current format is an ArcGIS file geodatabase, but older formats may exist as shapefiles. An ArcInfo(tm) (ESRI) GIS database was designed for WICA using the National Park GIS Database Design, Layout, and Procedures created by the BOR. It was created through Arc Macro Language (AML) scripts that helped automate the transfer process and ensure that all spatial and attribute data were consistent and stored properly. Actual transfer of information from the interpreted aerial photographs to a digital, geo-referenced format involved two techniques: scanning (for the vegetation classes) and on-screen digitizing (for the land-use classes). Both techniques required the use of 14 digital black-and-white orthophoto quarter quadrangles (DOQQs) covering the study area. Transferred information was used to create vegetation polygon coverages and ancillary linear coverages in ArcInfo(tm) for each WICA DOQQ. Attribute information, including vegetation map unit, location, and aerial photo number, was subsequently entered for all polygons.
The files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. The current format is an ArcGIS file geodatabase, but older formats may exist as shapefiles. An ArcInfo (copyright ESRI) GIS database was designed for THRO using the National Park GIS Database Design, Layout, and Procedures created by RSGIG. It was created through Arc Macro Language (AML) scripts that helped automate the transfer process and ensure that all spatial and attribute data were consistent and stored properly. Actual transfer of information from the interpreted aerial photographs to a digital, geo-referenced format involved two techniques: scanning (for the vegetation classes) and on-screen digitizing (for the land-use classes). Transferred information used to create vegetation polygon coverages and linear coverages in ArcInfo was based on quarter-quad borders. Attribute information, including vegetation map unit, location, and aerial photo number, was subsequently entered for all polygons. In addition, the spatial database has an FGDC-compliant metadata file.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Abstract A number of research studies have analysed the production of social housing (SH) in Brazil, verifying its poor performance and its inability to satisfy residents’ needs, particularly when the total time of permanence in the housing units is considered. The designs lack both diversity and flexibility, and have serious functional problems due to the small dimensions of the different areas. Hence, the aim of this study is to develop a practical method to support decision making in SH projects. The investigation adopted the Design Science Research approach and was structured in five main steps: (1) identifying the problem, (2) understanding the theme, (3) proposing the tools, (4) evaluating the tools, and (5) organising the contributions. The conceptual method is oriented to decision making in projects developed with BIM (Building Information Modeling), focusing on functionality and flexibility. The method consists of a set of articulated instruments that allow the evaluation and guidance of designers so that they comply with the criteria of functionality and flexibility. The method proved effective in use when applied in a design workshop with professionals, and was also evaluated by researchers specialised in the subject. The research contributed a review and systematisation of functionality and flexibility requirements and, in the practical field, a set of operational instruments intended for use by social housing architecture professionals.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
ABSTRACT Objective: To identify geographically the beneficiaries categorized as prone to Type 2 Diabetes Mellitus, using pattern recognition on the database of a health plan operator, through data mining. Method: The following steps were developed: the initial step, an information survey; development, construction of the process of extraction, transformation, and loading (ETL) of the database; deployment, presentation of the geographical information through a georeferencing tool. Results: The mapping of Paraná according to its health care network and the concentration of Type 2 Diabetes Mellitus is presented, enabling the identification of cause-and-effect relationships. Conclusion: The analysis of georeferenced information, linked to health information obtained through data mining, can be an excellent tool for the health management of a health plan operator, contributing to the decision-making process in Health.
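The deployment step described above can be illustrated with a minimal sketch in Python, assuming a hypothetical CSV of scored beneficiaries with "lat", "lon", and "dm2_risk" columns; the file name and column names are illustrative, not from the study, which used its own georeferencing tool:

```python
# A minimal sketch: map beneficiaries flagged by a data mining stage.
# Assumes a hypothetical CSV with columns "lat", "lon", "dm2_risk".
import pandas as pd
import folium

beneficiaries = pd.read_csv("beneficiaries_scored.csv")  # hypothetical file

# Center the map on the mean coordinate of the scored beneficiaries.
m = folium.Map(location=[beneficiaries["lat"].mean(),
                         beneficiaries["lon"].mean()],
               zoom_start=7)

# Plot only beneficiaries flagged as prone to Type 2 Diabetes Mellitus.
for _, row in beneficiaries[beneficiaries["dm2_risk"] == 1].iterrows():
    folium.CircleMarker(location=[row["lat"], row["lon"]],
                        radius=3, fill=True).add_to(m)

m.save("dm2_risk_map.html")
```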
The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because it is one of the most highly characterized geothermal systems in the Basin and Range whose data are in the public domain, with a considerable amount of geoscience and, most importantly, well data. This Baseline Conceptual Model report summarizes the results of the first three project tasks: (1) collect and assess the existing public domain geoscience data; (2) design and populate a GIS database; and (3) develop a baseline (existing-data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1 km above sea level (asl) to -4 km asl for the Calibration Area (Dixie Valley Geothermal Wellfield) to identify EGS drilling targets at a scale of 5 km x 5 km. It presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC; (2) a re-interpretation of these data as required; (3) an exploratory geostatistical data analysis; (4) the baseline geothermal conceptual model; and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region, including compression and dilation zones associated with faulting.
Building information modeling (BIM) allows representation of detailed information regarding building elements, while geographic information systems (GIS) allow representation of spatial information about buildings and their surroundings. Overlapping these domains combines their individual features and supports important activities such as building emergency response, construction site safety, construction supply chain management, and sustainable urban design. Interoperability through open data standards is one method of connecting software tools from the BIM and GIS domains. However, no single open data standard available today can support all information from the two domains. As a result, many researchers have been working to overlap or connect different open data standards to enhance interoperability. An overview of these studies helps identify the different approaches used and determine the approach with the most potential to enhance interoperability. This paper adopted a strong definition of interoperability based on information technology (IT) standards documents. Based on this definition, previous approaches to improving interoperability between BIM and GIS applications through open data standards were studied. The results show that previous approaches have implemented data conversion, data integration, and linked data. Among these methods, linked data emerged as having the most potential to connect open data standards and expand interoperability between BIM and GIS applications, because it allows information exchange without editing the original data. The paper also identifies the main challenges in implementing linked data technologies for interoperability and provides directions for future research.
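The linked data approach the paper favours can be sketched in a few lines of Python with rdflib: a triple in a separate graph links a BIM element to a GIS feature, leaving both source datasets untouched. The URIs and the linking predicate below are illustrative assumptions, not a published vocabulary mapping:

```python
# A minimal sketch of the linked data idea: assert a cross-domain link
# between a BIM (IFC) element and a GIS (CityGML) feature as an RDF triple.
from rdflib import Graph, URIRef, Namespace

g = Graph()
EX = Namespace("http://example.org/link#")  # hypothetical linking vocabulary

bim_wall = URIRef("http://example.org/ifc/Wall_123")         # BIM element URI
gis_footprint = URIRef("http://example.org/citygml/Bldg_7")  # GIS feature URI

# The link lives in a third graph; neither the IFC nor the CityGML data
# is edited, which is the advantage the paper attributes to linked data.
g.add((bim_wall, EX.correspondsTo, gis_footprint))

print(g.serialize(format="turtle"))
```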
Summary: Creating the world’s first open-source, high-resolution land cover map of the world
Storymap metadata page: URL forthcoming
Possible K-12 Next Generation Science Standards addressed:
Grade level(s) K: Standard K-ESS3-1 - Earth and Human Activity - Use a model to represent the relationship between the needs of different plants or animals (including humans) and the places they live
Grade level(s) K: Standard K-ESS3-3 - Earth and Human Activity - Communicate solutions that will reduce the impact of humans on the land, water, air, and/or other living things in the local environment
Grade level(s) 2: Standard 2-ESS2-1 - Earth’s Systems - Compare multiple solutions designed to slow or prevent wind or water from changing the shape of the land
Grade level(s) 2: Standard 2-ESS2-2 - Earth’s Systems - Develop a model to represent the shapes and kinds of land and bodies of water in an area
Grade level(s) 3: Standard 3-LS4-1 - Biological Evolution: Unity and Diversity - Analyze and interpret data from fossils to provide evidence of the organisms and the environments in which they lived long ago
Grade level(s) 3: Standard 3-LS4-4 - Biological Evolution: Unity and Diversity - Make a claim about the merit of a solution to a problem caused when the environment changes and the types of plants and animals that live there may change
Grade level(s) 4: Standard 4-ESS1-1 - Earth’s Place in the Universe - Identify evidence from patterns in rock formations and fossils in rock layers to support an explanation for changes in a landscape over time
Grade level(s) 4: Standard 4-ESS2-2 - Earth’s Systems - Analyze and interpret data from maps to describe patterns of Earth’s features
Grade level(s) 5: Standard 5-ESS2-1 - Earth’s Systems - Develop a model using an example to describe ways the geosphere, biosphere, hydrosphere, and/or atmosphere interact
Grade level(s) 6-8: Standard MS-ESS2-2 - Earth’s Systems - Construct an explanation based on evidence for how geoscience processes have changed Earth’s surface at varying time and spatial scales
Grade level(s) 6-8: Standard MS-ESS2-6 - Earth’s Systems - Develop and use a model to describe how unequal heating and rotation of the Earth cause patterns of atmospheric and oceanic circulation that determine regional climates
Grade level(s) 6-8: Standard MS-ESS3-3 - Earth and Human Activity - Apply scientific principles to design a method for monitoring and minimizing a human impact on the environment
Grade level(s) 9-12: Standard HS-ESS2-1 - Earth’s Systems - Develop a model to illustrate how Earth’s internal and surface processes operate at different spatial and temporal scales to form continental and ocean-floor features
Grade level(s) 9-12: Standard HS-ESS2-7 - Earth’s Systems - Construct an argument based on evidence about the simultaneous coevolution of Earth’s systems and life on Earth
Grade level(s) 9-12: Standard HS-ESS3-4 - Earth and Human Activity - Evaluate or refine a technological solution that reduces impacts of human activities on natural systems
Grade level(s) 9-12: Standard HS-ESS3-6 - Earth and Human Activity - Use a computational representation to illustrate the relationships among Earth systems and how those relationships are being modified due to human activity
Most frequently used words: areas, land, classes
Approximate Flesch-Kincaid reading grade level: 9.7. The FK reading grade level should be considered carefully against the grade level(s) in the NGSS content standards above.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Geoscience Australia has been deriving raster sediment datasets for the continental Australian Exclusive Economic Zone (AEEZ) using existing marine samples collected by Geoscience Australia and …Show full descriptionGeoscience Australia has been deriving raster sediment datasets for the continental Australian Exclusive Economic Zone (AEEZ) using existing marine samples collected by Geoscience Australia and external organisations. Since seabed sediment data are collected at sparsely and unevenly distributed locations, spatial interpolation methods become essential tools for generating spatially continuous information. Previous studies have examined a number of factors that affect the performance of spatial interpolation methods. These factors include sample density, data variation, sampling design, spatial distribution of samples, data quality, correlation of primary and secondary variables, and interaction among some of these factors. Apart from these factors, a spatial reference system used to define sample locations is potentially another factor and is worth investigating. In this study, we aim to examine the degree to which spatial reference systems can affect the predictive accuracy of spatial interpolation methods in predicting marine environmental variables in the continental AEEZ. Firstly, we reviewed spatial reference systems including geographic coordinate systems and projected coordinate systems/map projections, with particular attention paid to map projection classification, distortion and selection schemes; secondly, we selected eight systems that are suitable for the spatial prediction of marine environmental data in the continental AEEZ. These systems include two geographic coordinate systems (WGS84 and GDA94) and six map projections (Lambert Equal-area Azimuthal, Equidistant Azimuthal, Stereographic Conformal Azimuthal, Albers Equal-Area Conic, Equidistant Conic and Lambert Conformal Conic); thirdly, we applied two most commonly used spatial interpolation methods, i.e. inverse distance squared (IDS) and ordinary kriging (OK) to a marine dataset projected using the eight systems. The accuracy of the methods was assessed using leave-one-out cross validation in terms of their predictive errors and, visualization of prediction maps. The difference in the predictive errors between WGS84 and the map projections were compared using paired Mann-Whitney test for both IDW and OK. The data manipulation and modelling work were implemented in ArcGIS and R. The result from this study confirms that the little shift caused by the tectonic movement between WGS84 and GDA94 does not affect the accuracy of the spatial interpolation methods examined (IDS and OK). With respect to whether the unit difference in geographical coordinates or distortions introduced by map projections has more effect on the performance of the spatial interpolation methods, the result shows that the accuracies of the spatial interpolation methods in predicting seabed sediment data in the SW region of AEEZ are similar and the differences are considered negligible, both in terms of predictive errors and prediction map visualisations. Among the six map projections, the slightly better prediction performance from Lambert Equal-Area Azimuthal and Equidistant Azimuthal projections for both IDS and OK indicates that Equal-Area and Equidistant projections with Azimuthal surfaces are more suitable than other projections for spatial predictions of seabed sediment data in the SW region of AEEZ. 
The outcomes of this study have significant implications for spatial predictions in environmental science. Future spatial prediction work using a data density greater than that in this study may use data based on WGS84 directly and may not have to project the data using certain spatial reference systems. The findings are applicable to spatial predictions of both marine and terrestrial environmental variables. You can also purchase hard copies of Geoscience Australia data and other products at http://www.ga.gov.au/products-services/how-to-order-products/sales-centre.html
Geospatial data can play a significant and increasing role in supporting impact evaluations, as well as other stages of the policy process such as design and implementation. The amount of geospatial data is growing rapidly, whether conventional data now tagged with its precise location, or data collected remotely from sensors on the ground, drones flown at low levels, or satellites orbiting the earth. These newly available sources can give us great insight into issues such as the changing coverage of forests, vegetation, crops, water, air quality, buildings and infrastructure; often this can be done remotely, at relatively low cost, and in a very timely fashion. Combining geographical data can also give us a better picture of the spatial relationships between issues, for example helping us develop small-area estimates of poverty and other outcomes. And improved visualization techniques can really bring these analyses to life for decision-makers.
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
GIS datasets for the street networks of Stockholm, Gothenburg and Eskilstuna, produced as part of the Spatial Morphology Lab (SMoL).
The goal of the SMoL project is to develop a strong theory and methodology for urban planning & design research with an analytical approach. Three frequently recurring variables of spatial urban form are studied that together capture and describe the central characteristics and qualities of the built environment well: density, diversity and proximity.
The first measure describes how intensively a place can be used, depending on how much built-up area is found there. The second captures how differentiated the use of a place can be, depending on its division into smaller units such as plots. The third describes how accessible a place is, depending on how it relates to other places. Empirical studies have shown strong links between these metrics and people's use of cities, such as pedestrian movement patterns.
To support this goal, a central objective of the project is the establishment of an international platform of GIS data models for comparative studies in spatial urban form comprising three European capitals: London in the UK, Amsterdam in the Netherlands and Stockholm in Sweden, as well as two additional Swedish cities of smaller size than Stockholm: Gothenburg and Eskilstuna.
The result of the project is a GIS database for the five cities covering the three basic layers of urban form: street network (motorised and non-motorised), buildings and plot systems.
The data is shared via SND to create a research infrastructure that is open to new study initiatives. The datasets for Amsterdam will also be uploaded to SND. The datasets for London cannot be uploaded because of licensing restrictions.
The street network GIS-maps include motorised and non-motorised networks. The non-motorised networks include all streets and paths accessible to people walking or cycling, including those shared with vehicles. Streets where walking and cycling are forbidden, such as motorways, highways, and high-speed tunnels, are excluded from the network.
The non-motorised network layers for Stockholm and Eskilstuna are based on the Swedish national road database, NVDB (Nationell Vägdatabas), downloaded from Trafikverket (https://lastkajen.trafikverket.se, date of download 15-5-2016, last update 8-11-2015). For Gothenburg, the layer is based on OpenStreetMap (openstreetmap.org, http://download.geofabrik.de, date of download 29-4-2016), because the NVDB did not provide the same level of detail for the non-motorised network as in the other cities. The original road-centre-line maps of all cities were edited according to the same basic representational principles and converted into line-segment maps using the following software: FME, MapInfo Professional and PST (Place Syntax Tool). The coordinate system is SWEREF99 TM. In the final line-segment maps (GIS layers), every street or path is represented by a single line irrespective of the number of lanes or type; parallel lines representing a street and a pedestrian or cycle path running alongside it are reduced to one line. The reason is that such parallel lines are not physically or perceptually separated, and are therefore accessible and recognized by pedestrians as one “line of movement” in the street network. Where there are obstacles or a great distance between parallel streets and paths, the multiple lines remain. The aim is a skeletal network that better represents the total space in which pedestrians can move, irrespective of the typical separations or distinctions between streets and paths. This representational choice follows the Space Syntax methodology for representing public space and the street network.
We followed the same editing and generalizing procedure for all maps, aiming to remove errors and increase comparability between networks. This process included removing duplicate and isolated lines, snapping and generalizing. The snapping threshold was 2 m (end points closer than 2 m were snapped together). The generalizing threshold was 1 degree (successive line segments with angular deviation of less than 1 degree were merged into one). In the final editing step, all road polylines were segmented into their constituent line-segments. The aim was to create line-segment maps suitable for analysis with Angular Segment Analysis, a network centrality analysis method introduced in Space Syntax.
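The 2 m snapping step can be sketched in Python with shapely; this is an illustrative reconstruction of the rule, not the FME/PST workflow actually used in the project, and the two input lines are toy data:

```python
# Snap line endpoints that lie within 2 m of each other to a common point,
# assuming LineStrings in a projected CRS with metre units (e.g. SWEREF99 TM).
from shapely.geometry import LineString, Point

SNAP_TOL = 2.0  # metres: endpoints closer than this are snapped together

def snap_endpoints(lines, tol=SNAP_TOL):
    """Cluster endpoints within `tol` and move each to its cluster centroid."""
    endpoints = []
    for ln in lines:
        coords = list(ln.coords)
        endpoints.extend([coords[0], coords[-1]])

    # Greedy clustering (fine for a sketch; production tools use a spatial index).
    clusters = []
    for p in endpoints:
        for c in clusters:
            if Point(p).distance(Point(c[0])) < tol:
                c.append(p)
                break
        else:
            clusters.append([p])

    # Map every endpoint to the centroid of its cluster.
    snap_to = {}
    for c in clusters:
        cx = sum(p[0] for p in c) / len(c)
        cy = sum(p[1] for p in c) / len(c)
        for p in c:
            snap_to[p] = (cx, cy)

    snapped = []
    for ln in lines:
        coords = list(ln.coords)
        coords[0] = snap_to[coords[0]]
        coords[-1] = snap_to[coords[-1]]
        snapped.append(LineString(coords))
    return snapped

lines = [LineString([(0, 0), (10, 0)]), LineString([(10.5, 0.5), (20, 5)])]
print([list(ln.coords) for ln in snap_endpoints(lines)])
```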
All network layers are complemented with an “Unlink points” layer: a GIS point layer with the locations of all non-level intersections, such as pedestrian bridges and tunnels. The Unlink points layer is necessary for network analysis that takes into account the non-planarity of the street network, using software such as PST (Place Syntax Tool).
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Mixed-methods designs, especially those in which case selection is regression-based, have become popular across the social sciences. In this paper, we highlight why tools from spatial analysis—which have largely been overlooked in the mixed-methods literature—can be used for case selection and be particularly fruitful for theory development. We discuss two tools for integrating quantitative and qualitative analysis: (1) spatial autocorrelation in the outcome of interest; and (2) spatial autocorrelation in the residuals of a regression model. The case selection strategies presented here enable scholars to systematically use geography to learn more about their data and select cases that help identify scope conditions, evaluate the appropriate unit or level of analysis, examine causal mechanisms, and uncover previously omitted variables.
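The two diagnostics can be sketched with the PySAL stack (libpysal and esda), a common choice for spatial autocorrelation analysis; this pairing is my illustration, since the paper does not prescribe a specific toolkit, and the lattice data below are synthetic:

```python
# (1) Moran's I on the outcome; (2) Moran's I on regression residuals.
import numpy as np
import libpysal
from esda.moran import Moran

# Toy lattice data: a 10x10 grid of units with contiguity weights.
w = libpysal.weights.lat2W(10, 10)
rng = np.random.default_rng(1)
y = rng.normal(size=100)

# (1) Spatial autocorrelation in the outcome of interest.
mi_y = Moran(y, w)
print("Moran's I (outcome):", round(mi_y.I, 3), "p =", round(mi_y.p_sim, 3))

# (2) Spatial autocorrelation in the residuals of a regression model.
X = np.column_stack([np.ones(100), rng.normal(size=100)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta
mi_res = Moran(residuals, w)
print("Moran's I (residuals):", round(mi_res.I, 3), "p =", round(mi_res.p_sim, 3))
```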
GIS datasets for the street networks of Stockholm, Gothenburg and Eskilstuna, produced as part of the Spatial Morphology Lab (SMoL).
The goal of the SMoL project is to develop a strong theory and methodology for urban planning & design research with an analytical approach. Three frequently recurring variables of spatial urban form are studied that together capture and describe the central characteristics and qualities of the built environment well: density, diversity and proximity. The first measure describes how intensively a place can be used, depending on how much built-up area is found there. The second captures how differentiated the use of a place can be, depending on its division into smaller units such as plots. The third describes how accessible a place is, depending on how it relates to other places. Empirical studies have shown strong links between these metrics and people's use of cities, such as pedestrian movement patterns.
To support this goal, a central objective of the project is the establishment of an international platform of GIS data models for comparative studies in spatial urban form comprising three European capitals: London in the UK, Amsterdam in the Netherlands and Stockholm in Sweden, as well as two additional Swedish cities of smaller size than Stockholm: Gothenburg and Eskilstuna. The result of the project is a GIS database for the five cities covering the three basic layers of urban form: street network (motorised and non-motorised), buildings and plot systems. The data is shared via SND to create a research infrastructure that is open to new study initiatives. The datasets for Amsterdam will also be uploaded to SND. The datasets for London cannot be uploaded because of licensing restrictions.
The street network GIS-maps include motorised and non-motorised networks. The motorised networks exclude all pedestrian-only streets where cars are not allowed. The network layers are based on the Swedish national road database, NVDB (Nationell Vägdatabas), downloaded from Trafikverket (https://lastkajen.trafikverket.se, date of download 15-5-2016, last update 8-11-2015). The original road-centre-line maps of all cities were edited according to the same basic representational principles and converted into line-segment maps using the following software: FME, MapInfo Professional and PST (Place Syntax Tool). The coordinate system is SWEREF99 TM. In the final line-segment maps (GIS layers), all roads are represented with one line irrespective of the number of lanes, except for motorways and highways, which are represented with two lines, one for each direction, again irrespective of the number of lanes.
We followed the same editing and generalizing procedure for all maps, aiming to remove errors and increase comparability between networks. This process included removing duplicate and isolated lines, snapping and generalizing. The snapping threshold was 2 m (end points closer than 2 m were snapped together). The generalizing threshold was 1 degree (successive line segments with angular deviation of less than 1 degree were merged into one). In the final editing step, all road polylines were segmented into their constituent line-segments. The aim was to create line-segment maps suitable for analysis with Angular Segment Analysis, a network centrality analysis method introduced in Space Syntax.
All network layers are complemented with an “Unlink points” layer: a GIS point layer with the locations of all non-level intersections, such as overpasses and underpasses, bridges, tunnels, flyovers and the like. The Unlink points layer is necessary for network analysis that takes into account the non-planarity of the street network, using software such as PST (Place Syntax Tool).
ABSTRACT This paper discusses a multi-criteria GIS-based optimization model that aims to determine, through the use of cost variables, the locations with the highest potential for siting water mains, as well as the best path for the tracing. As a result, it was possible to simulate minimum-cost routes for the pipeline layout, considering criteria related to the slope and altitude of the area, the distances from rivers and flooded areas, and the proximity of highways. The analysis takes into account the importance (weight) of each criterion in the model. To minimize subjectivity in choosing the values of these weights, expert opinion was sought regarding the criteria analyzed, and the HWA (Hierarchical Weight Analysis) method was used to weigh the criteria. The methodology was applied to an excerpt of the Pajeú pipeline in the state of Pernambuco, using a high-definition database from the Pernambuco Three-dimensional Program as well as the SRTM/TOPODATA database. The results obtained through GIS allowed us to identify the areas most suitable for the location of the pipeline and to determine an optimized route for it. In practice, this meant determining a route for the pipeline installation, which suggests that the use of GIS and optimization techniques can help decision making in the design of water supply systems.
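A minimal sketch of the weighted-overlay and least-cost-path idea, assuming criterion rasters already normalised to a common scale; the weights and grids below are illustrative placeholders, not the HWA weights or the Pernambuco data from the study:

```python
# Combine normalised criterion rasters into a weighted cost surface, then
# trace a least-cost route between two cells.
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(2)
slope, altitude, river_dist, road_prox = rng.random((4, 100, 100))

# Weighted sum of normalised criteria -> composite cost surface.
weights = {"slope": 0.4, "altitude": 0.2, "river_dist": 0.2, "road_prox": 0.2}
cost = (weights["slope"] * slope + weights["altitude"] * altitude +
        weights["river_dist"] * river_dist + weights["road_prox"] * road_prox)

# Least-cost route between two cells (e.g., intake and delivery points).
path, total_cost = route_through_array(cost, start=(5, 5), end=(90, 95),
                                       fully_connected=True, geometric=True)
print(f"route of {len(path)} cells, accumulated cost {total_cost:.2f}")
```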
This data layer is an element of the Oregon GIS Framework and has been clipped to the Oregon boundary and reprojected to Oregon Lambert (EPSG:2992). The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released four National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, and 2011. These products provide spatially explicit and reliable information on the Nation’s land cover and land cover change. To continue the legacy of NLCD and further establish a long-term monitoring capability for the Nation’s land resources, the USGS has designed a new generation of NLCD products named NLCD 2016. The NLCD 2016 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2016 at 2–3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2016: a streamlined process for assembling and preprocessing Landsat imagery and geospatial ancillary datasets; a multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for the NLCD 2016 production. The performance of the developed strategies and methods was tested in twenty World Reference System-2 path/rows throughout the conterminous U.S. An overall agreement ranging from 71% to 97% between the land cover classification and reference data was achieved for all tested areas and all years. Results from this study confirm the robustness of this comprehensive and highly automated procedure for NLCD 2016 operational mapping. Questions about the NLCD 2016 land cover product can be directed to the NLCD 2016 land cover mapping team at USGS EROS, Sioux Falls, SD, (605) 594-6151 or mrlc@usgs.gov. See the included spatial metadata for more details.
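The preprocessing this layer describes (clip to the state boundary, reproject to EPSG:2992) can be sketched with rasterio; the file paths are hypothetical and this illustrates the steps, not the Oregon GIS Framework's actual tooling:

```python
# Clip a single-band categorical raster to a boundary, then reproject it.
import rasterio
from rasterio.mask import mask
from rasterio.warp import calculate_default_transform, reproject, Resampling
import fiona

# 1) Clip to the Oregon boundary (hypothetical shapefile path).
with fiona.open("oregon_boundary.shp") as shp:
    shapes = [f["geometry"] for f in shp]

with rasterio.open("nlcd_2016.tif") as src:              # hypothetical path
    clipped, clip_transform = mask(src, shapes, crop=True)
    clip_meta = src.meta.copy()
    clip_meta.update(height=clipped.shape[1], width=clipped.shape[2],
                     transform=clip_transform)

with rasterio.open("nlcd_clip.tif", "w", **clip_meta) as dst:
    dst.write(clipped)

# 2) Reproject the clipped raster to Oregon Lambert (EPSG:2992).
with rasterio.open("nlcd_clip.tif") as src:
    transform, width, height = calculate_default_transform(
        src.crs, "EPSG:2992", src.width, src.height, *src.bounds)
    meta = src.meta.copy()
    meta.update(crs="EPSG:2992", transform=transform,
                width=width, height=height)
    with rasterio.open("nlcd_or_lambert.tif", "w", **meta) as dst:
        reproject(source=rasterio.band(src, 1),
                  destination=rasterio.band(dst, 1),
                  resampling=Resampling.nearest)  # nearest: categorical classes
```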
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
Download: 2019 NLCD (2.28 GB, application/zip)
The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released five National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, 2011 and 2016. The 2016 release also saw land cover created for the additional years of 2003, 2008, and 2013. These products provide spatially explicit and reliable information on the Nation’s land cover and land cover change. To continue the legacy of NLCD and further establish a long-term monitoring capability for the Nation’s land resources, the USGS has designed a new generation of NLCD products named NLCD 2019. The NLCD 2019 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2019 at 2–3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2019: continued integration between impervious surface and all land cover products, with impervious surface being directly mapped as developed classes in the land cover; a streamlined compositing process for assembling and preprocessing Landsat imagery and geospatial ancillary datasets; a multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for the NLCD 2019 production. The performance of the developed strategies and methods was tested in twenty composite reference areas throughout the conterminous U.S. The accuracy assessment from the 2016 publication gives a 91% overall land cover accuracy, with the developed classes also showing 91% accuracy. Results from this study confirm the robustness of this comprehensive and highly automated procedure for NLCD 2019 operational mapping. Questions about the NLCD 2019 land cover product can be directed to the NLCD 2019 land cover mapping team at USGS EROS, Sioux Falls, SD, (605) 594-6151 or mrlc@usgs.gov. See the included spatial metadata for more details.
National Land Cover Database (NLCD) 2019 Impervious Products
National Land Cover Database (NLCD) 2019 Land Cover Products
The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released four National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, and 2011. These products provide spatially explicit and reliable information on the Nation’s land cover and land cover change. To continue the legacy of NLCD and further establish a long-term monitoring capability for the Nation’s land resources, the USGS has designed a new generation of NLCD products named NLCD 2016. The NLCD 2016 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2016 at 2–3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2016: a streamlined process for assembling and preprocessing Landsat imagery and geospatial ancillary datasets; a multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for the NLCD 2016 production. The performance of the developed strategies and methods was tested in twenty World Reference System-2 path/rows throughout the conterminous U.S. An overall agreement ranging from 71% to 97% between the land cover classification and reference data was achieved for all tested areas and all years. Results from this study confirm the robustness of this comprehensive and highly automated procedure for NLCD 2016 operational mapping. Questions about the NLCD 2016 land cover product can be directed to the NLCD 2016 land cover mapping team at USGS EROS, Sioux Falls, SD, (605) 594-6151 or mrlc@usgs.gov. See the included spatial metadata for more details.
Tempe’s roadways are an important means of transportation for residents, the workforce, students, and visitors. Tempe measures the quality and condition of its roadways using a Pavement Quality Index (PQI). This measure, rated from a low of 0 to a high of 100, is used by the City to plan for maintenance and repairs and to allocate resources in the most efficient way possible.
This measure is created using pavement quality data maintained in the RoadMatrix Pavement Management Program. About every three years, the City surveys pavement condition, such as the smoothness of roadways and any signs of distress in the pavement surface. These data are then used to calculate the PQI, which determines roadway maintenance prioritization schedules as well as the optimal road treatment options (such as placing a filler material in the cracks and treating the entire pavement surface, milling and replacing the top layer of the asphalt pavement, or reconstructing the street section).
This page provides data for the performance measure related to PQI. To access geospatial data regarding PQI, please visit https://data.tempe.gov/dataset/pavement-quality-index-segments
The performance measure dashboard is available at 1.22 Pavement Quality Index
This resource represents annual citywide average PQI.
This resource is used in the indicators found in the Safe and Secure Communities dashboard.
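As a minimal sketch, the annual citywide average in this resource could be derived from segment-level PQI data roughly as follows, assuming a hypothetical CSV extract with "year", "segment_id", and "pqi" columns (the actual extract comes from RoadMatrix and is joined to the GIS network):

```python
# Compute annual citywide average PQI from a segment-level extract.
import pandas as pd

segments = pd.read_csv("pqi_segments.csv")          # hypothetical extract

# PQI is on a 0-100 scale; sanity-check the range before averaging.
assert segments["pqi"].between(0, 100).all()

citywide = segments.groupby("year")["pqi"].mean().round(1)
print(citywide)
```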
Additional Information
Source: Stantec/Road Matrix
Contact (author): Isaac Chavira
Contact E-Mail (author): isaac_chavira@tempe.gov
Contact (maintainer): Sue Taaffe
Contact E-Mail (maintainer): sue_taaffe@tempe.gov
Data Source Type: CSV
Preparation Method: Extracted from Roadmatrix and joined to GIS network
Publish Frequency: Annual (Average PQI)/Quarterly (Segment PQI)
Publish Method: Manual
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This project is a component of a broader effort focused on geothermal heating and cooling (GHC), with the aim of illustrating the numerous benefits of incorporating GHC and geothermal heat exchange (GHX) into community energy planning and national decarbonization strategies. To better assist private-sector investment, it is necessary to define and assess the potential of low-temperature geothermal resources. For shallow GHC/GHX fields, there is no formal compilation of subsurface characteristics shared among industry practitioners that could improve system design and operations. Alaska is specifically noted in this work because it has heretofore not received the same focus in geothermal potential evaluations as the contiguous United States. The methodology consists of leveraging relevant data to generate a baseline geospatial dataset of low-temperature resources (less than 150 degrees C) so that anyone trying to understand the potential of GHC/GHX and small-scale low-temperature geothermal power in Alaska (e.g., energy modelers, communities, planners, and policymakers) can compare and analyze the information. Importantly, this project identifies data related to (1) the evaluation of GHC/GHX in the shallow subsurface, and (2) the evaluation of low-temperature geothermal resource availability. Additionally, data are being compiled to assess the repurposing of oil and gas wells to contribute co-produced fluids toward the geothermal direct-use and heating and cooling resource potential. In this work we identified new data from three different datasets of isolated geothermal systems in Alaska, as well as bottom-hole temperature data from oil and gas wells, that can be leveraged for evaluation of low-temperature geothermal resource potential. The goal of this project is to facilitate future deployment of GHC/GHX analysis and community-led programs and to update the low-temperature geothermal resource assessment of Alaska. A better understanding of shallow GHX potential will improve the design and operation of highly efficient GHC systems. The deployment and impact achievable for low-temperature geothermal resources will contribute to decarbonization goals and facilitate widespread electrification by shaving and shifting grid loads.
Most of the data use the WGS84 coordinate system; however, each dataset comes from a different source and has a metadata file with the original coordinate system.
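For example, harmonising a compiled dataset to WGS84 can be sketched with pyproj, assuming a hypothetical source layer in Alaska Albers (EPSG:3338); the per-dataset metadata files mentioned above would supply the actual source coordinate system:

```python
# Transform coordinates from a source CRS to WGS84 (EPSG:4326).
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:3338", "EPSG:4326", always_xy=True)

x, y = 300000.0, 1300000.0            # illustrative Alaska Albers coordinates
lon, lat = transformer.transform(x, y)
print(f"lon={lon:.4f}, lat={lat:.4f}")
```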
In this paper, we present a conceptual framework for flood analysis and terrain manipulation expanded from the outcome of an RMIT MLA Design Research Seminar addressing climate change impacts. Through this process, we were able to understand the exchange between physical properties, spatial factors and design scenarios, coupled with hydrologic and hydraulic modelling for feedback to guide further iterations. This approach allows us to address the complex interactions that govern this particular landscape system through hydrological and spatial constraints, generating nature-based solution (NbS) scenarios at the catchment scale.
The U.S. Geological Survey (USGS), in partnership with several federal agencies, has developed and released five National Land Cover Database (NLCD) products over the past two decades: NLCD 1992, 2001, 2006, 2011 and 2016. These products provide spatially explicit and reliable information on the Nation’s land cover and land cover change.
The NLCD 2019 design aims to provide innovative, consistent, and robust methodologies for production of a multi-temporal land cover and land cover change database from 2001 to 2019 at 2–3-year intervals. Comprehensive research was conducted and resulted in developed strategies for NLCD 2019: continued integration between impervious surface and all land cover products, with impervious surface being directly mapped as developed classes in the land cover; a streamlined compositing process for assembling and preprocessing Landsat imagery and geospatial ancillary datasets; a multi-source integrated training data development and decision-tree based land cover classifications; a temporally, spectrally, and spatially integrated land cover change analysis strategy; a hierarchical theme-based post-classification and integration protocol for generating land cover and change products; a continuous fields biophysical parameters modeling method; and an automated scripted operational system for the NLCD 2019 production.