Facebook
TwitterThe files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. Current format is ArcGIS file geodatabase but older formats may exist as shapefiles. We converted the photointerpreted data into a format usable in a geographic information system (GIS) by employing three fundamental processes: (1) orthorectify, (2) digitize, and (3) develop the geodatabase. All digital map automation was projected in Universal Transverse Mercator (UTM), Zone 16, using the North American Datum of 1983 (NAD83). Orthorectify: We orthorectified the interpreted overlays by using OrthoMapper, a softcopy photogrammetric software for GIS. One function of OrthoMapper is to create orthorectified imagery from scanned and unrectified imagery (Image Processing Software, Inc., 2002). The software features a method of visual orientation involving a point-and-click operation that uses existing orthorectified horizontal and vertical base maps. Of primary importance to us, OrthoMapper also has the capability to orthorectify the photointerpreted overlays of each photograph based on the reference information provided. Digitize: To produce a polygon vector layer for use in ArcGIS (Environmental Systems Research Institute [ESRI], Redlands, California), we converted each raster-based image mosaic of orthorectified overlays containing the photointerpreted data into a grid format by using ArcGIS. In ArcGIS, we used the ArcScan extension to trace the raster data and produce ESRI shapefiles. We digitally assigned map-attribute codes (both map-class codes and physiognomic modifier codes) to the polygons and checked the digital data against the photointerpreted overlays for line and attribute consistency. Ultimately, we merged the individual layers into a seamless layer. Geodatabase: At this stage, the map layer has only map-attribute codes assigned to each polygon. To assign meaningful information to each polygon (e.g., map-class names, physiognomic definitions, links to NVCS types), we produced a feature-class table, along with other supportive tables and subsequently related them together via an ArcGIS Geodatabase. This geodatabase also links the map to other feature-class layers produced from this project, including vegetation sample plots, accuracy assessment (AA) sites, aerial photo locations, and project boundary extent. A geodatabase provides access to a variety of interlocking data sets, is expandable, and equips resource managers and researchers with a powerful GIS tool.
Facebook
TwitterThe establishment of a BES Multi-User Geodatabase (BES-MUG) allows for the storage, management, and distribution of geospatial data associated with the Baltimore Ecosystem Study. At present, BES data is distributed over the internet via the BES website. While having geospatial data available for download is a vast improvement over having the data housed at individual research institutions, it still suffers from some limitations. BES-MUG overcomes these limitations; improving the quality of the geospatial data available to BES researches, thereby leading to more informed decision-making. BES-MUG builds on Environmental Systems Research Institute's (ESRI) ArcGIS and ArcSDE technology. ESRI was selected because its geospatial software offers robust capabilities. ArcGIS is implemented agency-wide within the USDA and is the predominant geospatial software package used by collaborating institutions. Commercially available enterprise database packages (DB2, Oracle, SQL) provide an efficient means to store, manage, and share large datasets. However, standard database capabilities are limited with respect to geographic datasets because they lack the ability to deal with complex spatial relationships. By using ESRI's ArcSDE (Spatial Database Engine) in conjunction with database software, geospatial data can be handled much more effectively through the implementation of the Geodatabase model. Through ArcSDE and the Geodatabase model the database's capabilities are expanded, allowing for multiuser editing, intelligent feature types, and the establishment of rules and relationships. ArcSDE also allows users to connect to the database using ArcGIS software without being burdened by the intricacies of the database itself. For an example of how BES-MUG will help improve the quality and timeless of BES geospatial data consider a census block group layer that is in need of updating. Rather than the researcher downloading the dataset, editing it, and resubmitting to through ORS, access rules will allow the authorized user to edit the dataset over the network. Established rules will ensure that the attribute and topological integrity is maintained, so that key fields are not left blank and that the block group boundaries stay within tract boundaries. Metadata will automatically be updated showing who edited the dataset and when they did in the event any questions arise. Currently, a functioning prototype Multi-User Database has been developed for BES at the University of Vermont Spatial Analysis Lab, using Arc SDE and IBM's DB2 Enterprise Database as a back end architecture. This database, which is currently only accessible to those on the UVM campus network, will shortly be migrated to a Linux server where it will be accessible for database connections over the Internet. Passwords can then be handed out to all interested researchers on the project, who will be able to make a database connection through the Geographic Information Systems software interface on their desktop computer. This database will include a very large number of thematic layers. Those layers are currently divided into biophysical, socio-economic and imagery categories. Biophysical includes data on topography, soils, forest cover, habitat areas, hydrology and toxics. Socio-economics includes political and administrative boundaries, transportation and infrastructure networks, property data, census data, household survey data, parks, protected areas, land use/land cover, zoning, public health and historic land use change. Imagery includes a variety of aerial and satellite imagery. See the readme: http://96.56.36.108/geodatabase_SAL/readme.txt See the file listing: http://96.56.36.108/geodatabase_SAL/diroutput.txt
Facebook
TwitterIn 2007, the California Ocean Protection Council initiated the California Seafloor Mapping Program (CSMP), designed to create a comprehensive seafloor map of high-resolution bathymetry, marine benthic habitats, and geology within California’s State Waters. The program supports a large number of coastal-zone- and ocean-management issues, including the California Marine Life Protection Act (MLPA) (California Department of Fish and Wildlife, 2008), which requires information about the distribution of ecosystems as part of the design and proposal process for the establishment of Marine Protected Areas. A focus of CSMP is to map California’s State Waters with consistent methods at a consistent scale. The CSMP approach is to create highly detailed seafloor maps through collection, integration, interpretation, and visualization of swath sonar data (the undersea equivalent of satellite remote-sensing data in terrestrial mapping), acoustic backscatter, seafloor video, seafloor photography, high-resolution seismic-reflection profiles, and bottom-sediment sampling data. The map products display seafloor morphology and character, identify potential marine benthic habitats, and illustrate both the surficial seafloor geology and shallow (to about 100 m) subsurface geology. It is emphasized that the more interpretive habitat and geology data rely on the integration of multiple, new high-resolution datasets and that mapping at small scales would not be possible without such data. This approach and CSMP planning is based in part on recommendations of the Marine Mapping Planning Workshop (Kvitek and others, 2006), attended by coastal and marine managers and scientists from around the state. That workshop established geographic priorities for a coastal mapping project and identified the need for coverage of “lands” from the shore strand line (defined as Mean Higher High Water; MHHW) out to the 3-nautical-mile (5.6-km) limit of California’s State Waters. Unfortunately, surveying the zone from MHHW out to 10-m water depth is not consistently possible using ship-based surveying methods, owing to sea state (for example, waves, wind, or currents), kelp coverage, and shallow rock outcrops. Accordingly, some of the data presented in this series commonly do not cover the zone from the shore out to 10-m depth. This data is part of a series of online U.S. Geological Survey (USGS) publications, each of which includes several map sheets, some explanatory text, and a descriptive pamphlet. Each map sheet is published as a PDF file. Geographic information system (GIS) files that contain both ESRI ArcGIS raster grids (for example, bathymetry, seafloor character) and geotiffs (for example, shaded relief) are also included for each publication. For those who do not own the full suite of ESRI GIS and mapping software, the data can be read using ESRI ArcReader, a free viewer that is available at http://www.esri.com/software/arcgis/arcreader/index.html (last accessed September 20, 2013). The California Seafloor Mapping Program is a collaborative venture between numerous different federal and state agencies, academia, and the private sector. CSMP partners include the California Coastal Conservancy, the California Ocean Protection Council, the California Department of Fish and Wildlife, the California Geological Survey, California State University at Monterey Bay’s Seafloor Mapping Lab, Moss Landing Marine Laboratories Center for Habitat Studies, Fugro Pelagos, Pacific Gas and Electric Company, National Oceanic and Atmospheric Administration (NOAA, including National Ocean Service–Office of Coast Surveys, National Marine Sanctuaries, and National Marine Fisheries Service), U.S. Army Corps of Engineers, the Bureau of Ocean Energy Management, the National Park Service, and the U.S. Geological Survey. These web services for the Offshore of Coal Oil Point map area includes data layers that are associated to GIS and map sheets available from the USGS CSMP web page at https://walrus.wr.usgs.gov/mapping/csmp/index.html. Each published CSMP map area includes a data catalog of geographic information system (GIS) files; map sheets that contain explanatory text; and an associated descriptive pamphlet. This web service represents the available data layers for this map area. Data was combined from different sonar surveys to generate a comprehensive high-resolution bathymetry and acoustic-backscatter coverage of the map area. These data reveal a range of physiographic including exposed bedrock outcrops, large fields of sand waves, as well as many human impacts on the seafloor. To validate geological and biological interpretations of the sonar data, the U.S. Geological Survey towed a camera sled over specific offshore locations, collecting both video and photographic imagery; these “ground-truth” surveying data are available from the CSMP Video and Photograph Portal at https://doi.org/10.5066/F7J1015K. The “seafloor character” data layer shows classifications of the seafloor on the basis of depth, slope, rugosity (ruggedness), and backscatter intensity and which is further informed by the ground-truth-survey imagery. The “potential habitats” polygons are delineated on the basis of substrate type, geomorphology, seafloor process, or other attributes that may provide a habitat for a specific species or assemblage of organisms. Representative seismic-reflection profile data from the map area is also include and provides information on the subsurface stratigraphy and structure of the map area. The distribution and thickness of young sediment (deposited over the past about 21,000 years, during the most recent sea-level rise) is interpreted on the basis of the seismic-reflection data. The geologic polygons merge onshore geologic mapping (compiled from existing maps by the California Geological Survey) and new offshore geologic mapping that is based on integration of high-resolution bathymetry and backscatter imagery seafloor-sediment and rock samplesdigital camera and video imagery, and high-resolution seismic-reflection profiles. The information provided by the map sheets, pamphlet, and data catalog has a broad range of applications. High-resolution bathymetry, acoustic backscatter, ground-truth-surveying imagery, and habitat mapping all contribute to habitat characterization and ecosystem-based management by providing essential data for delineation of marine protected areas and ecosystem restoration. Many of the maps provide high-resolution baselines that will be critical for monitoring environmental change associated with climate change, coastal development, or other forcings. High-resolution bathymetry is a critical component for modeling coastal flooding caused by storms and tsunamis, as well as inundation associated with longer term sea-level rise. Seismic-reflection and bathymetric data help characterize earthquake and tsunami sources, critical for natural-hazard assessments of coastal zones. Information on sediment distribution and thickness is essential to the understanding of local and regional sediment transport, as well as the development of regional sediment-management plans. In addition, siting of any new offshore infrastructure (for example, pipelines, cables, or renewable-energy facilities) will depend on high-resolution mapping. Finally, this mapping will both stimulate and enable new scientific research and also raise public awareness of, and education about, coastal environments and issues. Web services were created using an ArcGIS service definition file. The ArcGIS REST service and OGC WMS service include all Offshore Coal Oil Point map area data layers. Data layers are symbolized as shown on the associated map sheets.
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This dataset contains model-based place (incorporated and census designated places) level estimates for the PLACES 2022 release in GIS-friendly format. PLACES covers the entire United States—50 states and the District of Columbia (DC)—at county, place, census tract, and ZIP Code Tabulation Area levels. It provides information uniformly on this large scale for local areas at 4 geographic levels. Estimates were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. PLACES was funded by the Robert Wood Johnson Foundation in conjunction with the CDC Foundation. Data sources used to generate these model-based estimates include Behavioral Risk Factor Surveillance System (BRFSS) 2020 or 2019 data, Census Bureau 2010 population estimates, and American Community Survey (ACS) 2015–2019 estimates. The 2022 release uses 2020 BRFSS data for 25 measures and 2019 BRFSS data for 4 measures (high blood pressure, taking high blood pressure medication, high cholesterol, and cholesterol screening) that the survey collects data on every other year. These data can be joined with the 2019 Census TIGER/Line place boundary file in a GIS system to produce maps for 29 measures at the place level. An ArcGIS Online feature service is also available for users to make maps online or to add data to desktop GIS software. https://cdcarcgis.maps.arcgis.com/home/item.html?id=3b7221d4e47740cab9235b839fa55cd7
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
See the Splitgraph documentation for more information.
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This dataset contains model-based county-level estimates for the PLACES 2022 release in GIS-friendly format. PLACES covers the entire United States—50 states and the District of Columbia (DC)—at county, place, census tract, and ZIP Code Tabulation Area levels. It provides information uniformly on this large scale for local areas at 4 geographic levels. Estimates were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. Project was funded by the Robert Wood Johnson Foundation in conjunction with the CDC Foundation. Data sources used to generate these model-based estimates include Behavioral Risk Factor Surveillance System (BRFSS) 2020 or 2019 data, Census Bureau 2020 or 2019 county population estimates, and American Community Survey (ACS) 2016–2020 or 2015–2019 estimates. The 2022 release uses 2020 BRFSS data for 25 measures and 2019 BRFSS data for 4 measures (high blood pressure, taking high blood pressure medication, high cholesterol, and cholesterol screening) that the survey collects data on every other year. These data can be joined with the census 2020 county boundary file in a GIS system to produce maps for 29 measures at the county level. An ArcGIS Online feature service is also available for users to make maps online or to add data to desktop GIS software. https://cdcarcgis.maps.arcgis.com/home/item.html?id=3b7221d4e47740cab9235b839fa55cd7
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
See the Splitgraph documentation for more information.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/
This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.
File Formats
Results are presented in three file formats:
tif Can be imported into a GIS software (such as ARC GIS) csv Human-readable text format, which can also be opened in Excel png Image files that can be viewed in standard desktop software and give a spatial view of results
Input Data
All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.
Hourly Data from 2000 to 2019
Wind -
Copernicus ERA5 dataset
17 by 27.5 km grid
10m wind speed
Wave - Copernicus Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis dataset 3 by 5 km grid
Accessibility
The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site.
The Accessibility layer shows the percentage of time the Hs (Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis) and wind speed (ERA5) are below these limits for the month.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if
the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month.
Environmental data is from the Copernicus data store (https://cds.climate.copernicus.eu/). Wave hourly data is from the 'Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis' dataset.
Wind hourly data is from the ERA 5 dataset.
Availability
A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather
windows and other logistical factors (for example, the availability of repair vessels and personnel.). A 2013 study by O'Connor et al. determined the
relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between
accessibility at Hs of 2m and wind speed of 15.0m/s and availability. This graph is used to calculate the availability layer from the accessibility layer.
The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage time the
environmental conditions, i.e. the Hs (Atlantic -Iberian Biscay Irish - Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined
by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total
number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship
between the two. A mature technology reliability was assumed.
Weather Window
The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the
given duration for the month.
The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset
(0.25° x 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.
The first step in calculating the weather window for a particular set of inputs (Hs, wind speed and duration) is to calculate the accessibility at each timestep.
The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep?
Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather
windows. Here all possible operating periods with a duration matching the required weather-window value are assessed to see if the weather conditions remain
suitable for the entire period. The percentage availability of the weather window is calculated based on the percentage of x-duration windows with suitable
weather conditions for their entire duration.The weather window availability can be considered as the probability of having the required weather window available
at any given point in the month.
Extreme Wind and Wave
The Extreme wave layers show the highest significant wave height expected to occur during the given return period. The Extreme wind layers show the highest wind speed expected to occur during the given return period.
To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model to fit this reduced
portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model
that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values
for the selected return period. All calculations use the python pyextremes library. Two methods are used - Block Maxima and Peaks over threshold.
The Block Maxima methods selects the annual maxima and fits a GEVD probability distribution.
The peaks_over_threshold method has two variable calculation parameters. The first is the percentile above which values must be to be selected as extreme (0.9 or 0.998). The
second input is the time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected
extremes and used to calculate the extreme value for the selected return period.
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This dataset contains model-based census tract level estimates in GIS-friendly format. PLACES covers the entire United States—50 states and the District of Columbia—at county, place, census tract, and ZIP Code Tabulation Area levels. It provides information uniformly on this large scale for local areas at four geographic levels. Estimates were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. PLACES was funded by the Robert Wood Johnson Foundation in conjunction with the CDC Foundation. Data sources used to generate these model-based estimates are Behavioral Risk Factor Surveillance System (BRFSS) 2021 or 2020 data, Census Bureau 2010 population estimates, and American Community Survey (ACS) 2015–2019 estimates. The 2023 release uses 2021 BRFSS data for 29 measures and 2020 BRFSS data for 7 measures (all teeth lost, dental visits, mammograms, cervical cancer screening, colorectal cancer screening, core preventive services among older adults, and sleeping less than 7 hours) that the survey collects data on every other year. These data can be joined with the census tract 2015 boundary file in a GIS system to produce maps for 36 measures at the census tract level. An ArcGIS Online feature service is also available for users to make maps online or to add data to desktop GIS software.
https://cdcarcgis.maps.arcgis.com/home/item.html?id=2c3deb0c05a748b391ea8c9cf9903588
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
See the Splitgraph documentation for more information.
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
The research focus in the field of remotely sensed imagery has shifted from collection and warehousing of data ' tasks for which a mature technology already exists, to auto-extraction of information and knowledge discovery from this valuable resource ' tasks for which technology is still under active development. In particular, intelligent algorithms for analysis of very large rasters, either high resolutions images or medium resolution global datasets, that are becoming more and more prevalent, are lacking. We propose to develop the Geospatial Pattern Analysis Toolbox (GeoPAT) a computationally efficient, scalable, and robust suite of algorithms that supports GIS processes such as segmentation, unsupervised/supervised classification of segments, query and retrieval, and change detection in giga-pixel and larger rasters. At the core of the technology that underpins GeoPAT is the novel concept of pattern-based image analysis. Unlike pixel-based or object-based (OBIA) image analysis, GeoPAT partitions an image into overlapping square scenes containing 1,000'100,000 pixels and performs further processing on those scenes using pattern signature and pattern similarity ' concepts first developed in the field of Content-Based Image Retrieval. This fusion of methods from two different areas of research results in orders of magnitude performance boost in application to very large images without sacrificing quality of the output.
GeoPAT v.1.0 already exists as the GRASS GIS add-on that has been developed and tested on medium resolution continental-scale datasets including the National Land Cover Dataset and the National Elevation Dataset. Proposed project will develop GeoPAT v.2.0 ' much improved and extended version of the present software. We estimate an overall entry TRL for GeoPAT v.1.0 to be 3-4 and the planned exit TRL for GeoPAT v.2.0 to be 5-6. Moreover, several new important functionalities will be added. Proposed improvements includes conversion of GeoPAT from being the GRASS add-on to stand-alone software capable of being integrated with other systems, full implementation of web-based interface, writing new modules to extent it applicability to high resolution images/rasters and medium resolution climate data, extension to spatio-temporal domain, enabling hierarchical search and segmentation, development of improved pattern signature and their similarity measures, parallelization of the code, implementation of divide and conquer strategy to speed up selected modules.
The proposed technology will contribute to a wide range of Earth Science investigations and missions through enabling extraction of information from diverse types of very large datasets. Analyzing the entire dataset without the need of sub-dividing it due to software limitations offers important advantage of uniformity and consistency. We propose to demonstrate the utilization of GeoPAT technology on two specific applications. The first application is a web-based, real time, visual search engine for local physiography utilizing query-by-example on the entire, global-extent SRTM 90 m resolution dataset. User selects region where process of interest is known to occur and the search engine identifies other areas around the world with similar physiographic character and thus potential for similar process. The second application is monitoring urban areas in their entirety at the high resolution including mapping of impervious surface and identifying settlements for improved disaggregation of census data.
Facebook
TwitterAttribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The spearfish sample database is being distributed to provide users with a solid database on which to work for learning the tools of GRASS. This document provides some general information about the database and the map layers available. With the release of GRASS 4.1, the GRASS development staff is pleased to announce that the sample data set spearfish is also being distributed. The spearfish data set covers two topographic 1:24,000 quads in western South Dakota. The names of the quads are Spearfish and Deadwood North, SD. The area covered by the data set is in the vicinity of Spearfish, SD and includes a majority of the Black Hills National Forest (i.e., Mount Rushmore). It is anticipated that enough data layers will be provided to allow users to use nearly all of the GRASS tools on the spearfish data set. A majority of this spearfish database was initially provided to USACERL by the EROS Data Center (EDC) in Sioux Falls, SD. The GRASS Development staff expresses acknowledgement and thanks to: the U.S. Geological Survey (USGS) and EROS Data Center for allowing us to distribute this data with our release of GRASS software; and to the U.S. Census Bureau for their samples of TIGER/Line data and the STF1 data which were used in the development of the TIGER programs and tutorials. Thanks also to SPOT Image Corporation for providing multispectral and panchromatic satellite imagery for a portion of the spearfish data set and for allowing us to distribute this imagery with GRASS software. In addition to the data provided by the EDC and SPOT, researchers at USACERL have dev eloped several new layers, thus enhancing the spearfish data set. To use the spearfish data, when entering GRASS, enter spearfish as your choice for the current location.
This is the classical GRASS GIS dataset from 1993 covering a part of Spearfish, South Dakota, USA, with raster, vector and point data. The Spearfish data base covers two 7.5 minute topographic sheets in the northern Black Hills of South Dakota, USA. It is in the Universal Transverse Mercator Projection. It was originally created by Larry Batten while he was with the U. S. Geological Survey's EROS Data Center in South Dakota. The data base was enhanced by USA/CERL and cooperators.
Facebook
TwitterOpen Government Licence - Canada 2.0https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
Have you ever wanted to create your own maps, or integrate and visualize spatial datasets to examine changes in trends between locations and over time? Follow along with these training tutorials on QGIS, an open source geographic information system (GIS) and learn key concepts, procedures and skills for performing common GIS tasks – such as creating maps, as well as joining, overlaying and visualizing spatial datasets. These tutorials are geared towards new GIS users. We’ll start with foundational concepts, and build towards more advanced topics throughout – demonstrating how with a few relatively easy steps you can get quite a lot out of GIS. You can then extend these skills to datasets of thematic relevance to you in addressing tasks faced in your day-to-day work.
Facebook
TwitterCrimeMapTutorial is a step-by-step tutorial for learning crime mapping using ArcView GIS or MapInfo Professional GIS. It was designed to give users a thorough introduction to most of the knowledge and skills needed to produce daily maps and spatial data queries that uniformed officers and detectives find valuable for crime prevention and enforcement. The tutorials can be used either for self-learning or in a laboratory setting. The geographic information system (GIS) and police data were supplied by the Rochester, New York, Police Department. For each mapping software package, there are three PDF tutorial workbooks and one WinZip archive containing sample data and maps. Workbook 1 was designed for GIS users who want to learn how to use a crime-mapping GIS and how to generate maps and data queries. Workbook 2 was created to assist data preparers in processing police data for use in a GIS. This includes address-matching of police incidents to place them on pin maps and aggregating crime counts by areas (like car beats) to produce area or choropleth maps. Workbook 3 was designed for map makers who want to learn how to construct useful crime maps, given police data that have already been address-matched and preprocessed by data preparers. It is estimated that the three tutorials take approximately six hours to complete in total, including exercises.
Facebook
TwitterU.S. Government Workshttps://www.usa.gov/government-works
License information was derived automatically
This dataset contains model-based county-level estimates for the PLACES project 2020 release in GIS-friendly format. The PLACES project is the expansion of the original 500 Cities project and covers the entire United States—50 states and the District of Columbia (DC)—at county, place, census tract, and ZIP Code tabulation Areas (ZCTA) levels. It represents a first-of-its kind effort to release information uniformly on this large scale for local areas at 4 geographic levels. Estimates were provided by the Centers for Disease Control and Prevention (CDC), Division of Population Health, Epidemiology and Surveillance Branch. The project was funded by the Robert Wood Johnson Foundation (RWJF) in conjunction with the CDC Foundation. Data sources used to generate these model-based estimates include Behavioral Risk Factor Surveillance System (BRFSS) 2018 or 2017 data, Census Bureau 2018 or 2017 county population estimates, and American Community Survey (ACS) 2014-2018 or 2013-2017 estimates. The 2020 release uses 2018 BRFSS data for 23 measures and 2017 BRFSS data for 4 measures (high blood pressure, taking high blood pressure medication, high cholesterol, and cholesterol screening). Four measures are based on the 2017 BRFSS data because the relevant questions are only asked every other year in the BRFSS. These data can be joined with the census 2015 county boundary file in a GIS system to produce maps for 27 measures at the county level. An ArcGIS Online feature service is also available at https://www.arcgis.com/home/item.html?id=8eca985039464f4d83467b8f6aeb1320 for users to make maps online or to add data to desktop GIS software.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power Web applications. For example:
See the Splitgraph documentation for more information.
Facebook
TwitterAttribution 3.0 (CC BY 3.0)https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
GPlates sample data provides open access to GPlates-compatible data files that can be loaded seamlessly in GPlates. GPlates is a desktop software application for the interactive visualisation of plate-tectonics. GPlates is free software (open-source software), developed by EarthByte (part of AuScope) at the University of Sydney, The Division of Geological and Planetary Sciences at CalTech, and the Center for Geodynamics at the Norwegian Geological Survey. GPlates is licensed for distribution under the GNU General Public License (GPL), version 2. GPlates sample data files include feature data, rasters and time-dependent raster images, and global plate polygon files (plate models).
Feature data are available in GPlates Markup Language (.gpml), .PLATES4 (.dat), ESRI Shapefile (.shp) and longditude and latitude with header record (.xy) formats. Files include:
GPlates-compatible present-day rasters and time-dependent raster images include:
Global Plate Polygon Files (plate models) include:
For further information, please refer to the GPlates Sample Data page on the EarthByte website and the GPlates website.
Facebook
Twitterhttps://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
A major objective of plant ecology research is to determine the underlying processes responsible for the observed spatial distribution patterns of plant species. Plants can be approximated as points in space for this purpose, and thus, spatial point pattern analysis has become increasingly popular in ecological research. The basic piece of data for point pattern analysis is a point location of an ecological object in some study region. Therefore, point pattern analysis can only be performed if data can be collected. However, due to the lack of a convenient sampling method, a few previous studies have used point pattern analysis to examine the spatial patterns of grassland species. This is unfortunate because being able to explore point patterns in grassland systems has widespread implications for population dynamics, community-level patterns and ecological processes. In this study, we develop a new method to measure individual coordinates of species in grassland communities. This method records plant growing positions via digital picture samples that have been sub-blocked within a geographical information system (GIS). Here, we tested out the new method by measuring the individual coordinates of Stipa grandis in grazed and ungrazed S. grandis communities in a temperate steppe ecosystem in China. Furthermore, we analyzed the pattern of S. grandis by using the pair correlation function g(r) with both a homogeneous Poisson process and a heterogeneous Poisson process. Our results showed that individuals of S. grandis were overdispersed according to the homogeneous Poisson process at 0-0.16 m in the ungrazed community, while they were clustered at 0.19 m according to the homogeneous and heterogeneous Poisson processes in the grazed community. These results suggest that competitive interactions dominated the ungrazed community, while facilitative interactions dominated the grazed community. In sum, we successfully executed a new sampling method, using digital photography and a Geographical Information System, to collect experimental data on the spatial point patterns for the populations in this grassland community.
Methods 1. Data collection using digital photographs and GIS
A flat 5 m x 5 m sampling block was chosen in a study grassland community and divided with bamboo chopsticks into 100 sub-blocks of 50 cm x 50 cm (Fig. 1). A digital camera was then mounted to a telescoping stake and positioned in the center of each sub-block to photograph vegetation within a 0.25 m2 area. Pictures were taken 1.75 m above the ground at an approximate downward angle of 90° (Fig. 2). Automatic camera settings were used for focus, lighting and shutter speed. After photographing the plot as a whole, photographs were taken of each individual plant in each sub-block. In order to identify each individual plant from the digital images, each plant was uniquely marked before the pictures were taken (Fig. 2 B).
Digital images were imported into a computer as JPEG files, and the position of each plant in the pictures was determined using GIS. This involved four steps: 1) A reference frame (Fig. 3) was established using R2V software to designate control points, or the four vertexes of each sub-block (Appendix S1), so that all plants in each sub-block were within the same reference frame. The parallax and optical distortion in the raster images was then geometrically corrected based on these selected control points; 2) Maps, or layers in GIS terminology, were set up for each species as PROJECT files (Appendix S2), and all individuals in each sub-block were digitized using R2V software (Appendix S3). For accuracy, the digitization of plant individual locations was performed manually; 3) Each plant species layer was exported from a PROJECT file to a SHAPE file in R2V software (Appendix S4); 4) Finally each species layer was opened in Arc GIS software in the SHAPE file format, and attribute data from each species layer was exported into Arc GIS to obtain the precise coordinates for each species. This last phase involved four steps of its own, from adding the data (Appendix S5), to opening the attribute table (Appendix S6), to adding new x and y coordinate fields (Appendix S7) and to obtaining the x and y coordinates and filling in the new fields (Appendix S8).
To determine the accuracy of our new method, we measured the individual locations of Leymus chinensis, a perennial rhizome grass, in representative community blocks 5 m x 5 m in size in typical steppe habitat in the Inner Mongolia Autonomous Region of China in July 2010 (Fig. 4 A). As our standard for comparison, we used a ruler to measure the individual coordinates of L. chinensis. We tested for significant differences between (1) the coordinates of L. chinensis, as measured with our new method and with the ruler, and (2) the pair correlation function g of L. chinensis, as measured with our new method and with the ruler (see section 3.2 Data Analysis). If (1) the coordinates of L. chinensis, as measured with our new method and with the ruler, and (2) the pair correlation function g of L. chinensis, as measured with our new method and with the ruler, did not differ significantly, then we could conclude that our new method of measuring the coordinates of L. chinensis was reliable.
We compared the results using a t-test (Table 1). We found no significant differences in either (1) the coordinates of L. chinensis or (2) the pair correlation function g of L. chinensis. Further, we compared the pattern characteristics of L. chinensis when measured by our new method against the ruler measurements using a null model. We found that the two pattern characteristics of L. chinensis did not differ significantly based on the homogenous Poisson process or complete spatial randomness (Fig. 4 B). Thus, we concluded that the data obtained using our new method was reliable enough to perform point pattern analysis with a null model in grassland communities.
Facebook
TwitterHigh-quality GIS land use maps for the Twin Cities Metropolitan Area for 1968 that were developed from paper maps (no GIS version existed previously).The GIS shapefiles were exported using ArcGIS Quick Import Tool from the Data Interoperability Toolbox. The coverage files was imported into a file geodatabase then exported to a .shp file for long-term use without proprietary software. An example output of the final GIS file is include as a pdf, in addition, a scan of the original 1968 map (held in the UMN Borchert Map Library) is included as a pdf. Metadata was extracted as an xml file. Finally, all associated coverage files and original map scans were zipped into one file for download and reuse. Data was uploaded to ArcGIS Online 3/9/2020. Original dataset available from the Data Repository of the University of Minnesota: http://dx.doi.org/10.13020/D63W22
Facebook
TwitterAttribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Overview:
The Copernicus DEM is a Digital Surface Model (DSM) which represents the surface of the Earth including buildings, infrastructure and vegetation. The original GLO-30 provides worldwide coverage at 30 meters (refers to 10 arc seconds). Note that ocean areas do not have tiles, there one can assume height values equal to zero. Data is provided as Cloud Optimized GeoTIFFs. Note that the vertical unit for measurement of elevation height is meters.
The Copernicus DEM for Europe at 100 meter resolution (EU-LAEA projection) in COG format has been derived from the Copernicus DEM GLO-30, mirrored on Open Data on AWS, dataset managed by Sinergise (https://registry.opendata.aws/copernicus-dem/).
Processing steps:
The original Copernicus GLO-30 DEM contains a relevant percentage of tiles with non-square pixels. We created a mosaic map in VRT format and defined within the VRT file the rule to apply cubic resampling while reading the data, i.e. importing them into GRASS GIS for further processing. We chose cubic instead of bilinear resampling since the height-width ratio of non-square pixels is up to 1:5. Hence, artefacts between adjacent tiles in rugged terrain could be minimized:
gdalbuildvrt -input_file_list list_geotiffs_MOOD.csv -r cubic -tr 0.000277777777777778 0.000277777777777778 Copernicus_DSM_30m_MOOD.vrt
In order to reproject the data to EU-LAEA projection while reducing the spatial resolution to 100 m, bilinear resampling was performed in GRASS GIS (using r.proj and the pixel values were scaled with 1000 (storing the pixels as Integer values) for data volume reduction. In addition, a hillshade raster map was derived from the resampled elevation map (using r.relief, GRASS GIS). Eventually, we exported the elevation and hillshade raster maps in Cloud Optimized GeoTIFF (COG) format, along with SLD and QML style files.
Projection + EPSG code:
ETRS89-extended / LAEA Europe (EPSG: 3035)
Spatial extent:
north: 6874000
south: -485000
west: 869000
east: 8712000
Spatial resolution:
100 m
Pixel values:
meters * 1000 (scaled to Integer; example: value 23220 = 23.220 m a.s.l.)
Software used:
GDAL 3.2.2 and GRASS GIS 8.0.0 (r.proj; r.relief)
Original dataset license:
https://spacedata.copernicus.eu/documents/20126/0/CSCDA_ESA_Mission-specific+Annex.pdf
Processed by:
mundialis GmbH & Co. KG, Germany (https://www.mundialis.de/)
Facebook
TwitterRoad segments representing centerlines of all roadways or carriageways in a local government. Typically, this information is compiled from orthoimagery or other aerial photography sources. This representation of the road centerlines support address geocoding and mapping. It also serves as a source for public works and other agencies that are responsible for the active management of the road network. (From ESRI Local Government Model "RoadCenterline" Feature)**This dataset was significantly revised in August of 2014 to correct for street segments that were not properly split at intersections. There may be issues with using data based off of the original centerline file. ** The column Speed Limit was updated in November 2014 by the Transportation Intern and is believed to be accurate** The column One Way was updated in November of 2014 by core GIS and is believed to be accurate.[MAXIMOID] A unique id field used in a work order management software called Maximo by IBM. Maximo uses GIS CL data to assign locations to work orders using this field. This field is maintained by the Transportation GIS specialists and is auto incremented when new streets are digitized. For example, if the latest digitized street segment MAXIMOID = 999, the next digitized line will receive MAXIMOID = 1000, and so on. STREET NAMING IS BROKEN INTO THREE FIELDS FOR GEOCODING:PREFIX This field is attributed if a street name has a prefix such as W, N, E, or S.NAME Domain with all street names. The name of the street without prefix or suffix.ROAD_TYPE (Text,4) Describes the type of road aka suffix, if applicable. CAPCOG Addressing Guidelines Sec 504 U. states, “Every road shall have corresponding standard street suffix…” standard street suffix abbreviations comply with USPS Pub 28 Appendix C Street Abbreviations. Examples include, but are not limited to, Rd, Dr, St, Trl, Ln, Gln, Lp, CT. LEFT_LOW The minimum numeric address on the left side of the CL segment. Left side of CL is defined as the left side of the line segment in the From-To direction. For example, if a line has addresses starting at 101 and ending at 201 on its left side, this column will be attributed 101.LEFT_HIGH The largest numeric address on the left side of the CL segment. Left side of CL is defined as the left side of the line segment in the From-To direction. For example, if a line has addresses starting at 101 and ending at 201 on its left side, this column will be attributed 201.LOW The minimum numeric address on the RIGHT side of the CL segment. Right side of CL is defined as the right side of the line segment in the From-To direction. For example, if a line has addresses starting at 100 and ending at 200 on its right side, this column will be attributed 100.HIGHThe maximum numeric address on the RIGHT side of the CL segment. Right side of CL is defined as the right side of the line segment in the From-To direction. For example, if a line has addresses starting at 100 and ending at 200 on its right side, this column will be attributed 200.ALIAS Alternative names for roads if known. This field is useful for geocode re-matching. CLASSThe functional classification of the centerline. For example, Minor (Minor Arterial), Major (Major Arterial). THIS FIELD IS NOT CONSISTENTLY FILLED OUT, NEEDS AN AUDIT. FULLSTREET The full name of the street concatenating the [PREFIX], [NAME], and [SUFFIX] fields. For example, "W San Antonio St."ROWWIDTH Width of right-of-way along the CL segment. Data entry from Plat by Planning GIS Or from Engineering PICPs/ CIPs.NUMLANES Number of striped vehicular driving lanes, including turn lanes if present along majority of segment. Does not inlcude bicycle lanes. LANEMILES Describes the total length of lanes for that segment in miles. It is manually field calculated as follows (( [ShapeLength] / 5280) * [NUMLANES]) and maintained by Transportation GIS.SPEEDLIMIT Speed limit of CL segment if known. If not, assume 30 mph for local and minor arterial streets. If speed limit changes are enacted by city council they will be recorded in the Traffic Register dataset, and this field will be updating accordingly. Initial data entry made by CIP/Planning GIS and maintained by Transportation GIS.[YRBUILT] replaced by [DateBuilt] See below. Will be deleted. 4/21/2017LASTYRRECON (Text,10) Is the last four-digit year a major reconstruction occurred. Most streets have not been reconstructed since orignal construction, and will have values. The Transportation GIS Specialist will update this field. OWNER Describes the governing body or private entity that owns/maintains the CL. It is possible that some streets are owned by other entities but maintained by CoSM. Possible attributes include, CoSM, Hays Owned/City Maintained, TxDOT Owned/City Maintained, TxDOT, one of four counties (Hays, Caldwell, Guadalupe, and Comal), TxState, and Private.ST_FROM Centerline segments are split at their intersections with other CL segments. This field names the nearest cross-street in the From- direction. Should be edited when new CL segments that cause splits are added. ST_TO Centerline segments are split at their intersections with other CL segments. This field names the nearest cross-street in the To- direction. Should be edited when new CL segments that cause splits are added. PAV_WID Pavement width of street in feet from back-of-curb to back-of-curb. This data is entered from as-built by CIP GIS. In January 2017 Transportation Dept. field staff surveyed all streets and measured width from face-of-curb to face-of-curb where curb was present, and edge of pavement to edge of pavement where it was not. This data was used to field calculate pavement width where we had values. A value of 1 foot was added to the field calculation if curb and gutter or stand up curb were present (the face-of-curb to back-of-curb is 6 in, multiple that by 2 to find 1 foot). If no curb was present, the value enter in by the field staff was directly copied over. If values were already present, and entered from asbuilt, they were left alone. ONEWAY Field describes direction of travel along CL in relation to digitized direction. If a street allows bi-directional travel it is attributed "B", a street that is one-way in the From_To direction is attributed "F", a street that is one-way in the To_From direction is attributed "T", and a street that does not allow travel in any direction is attibuted "N". ROADLEVEL Field will be aliased to [MINUTES] and be used to calculate travel time along CL segments in minutes using shape length and [SPEEDLIMIT]. Field calculate using the following expression: [MINUTES] = ( ([SHAPE_LENGTH] / 5280) / ( [SPEEDLIMIT] / 60 ))ROWSTATUS Values include "Open" or "Closed". Describes whether a right-of-way is open or closed. If a street is constructed within ROW it is "Open". If a street has not yet been constructed, and there is ROW, it is "Cosed". UPDATE: This feature class only has CL geometries for "Open" rights-of-way. This field should be deleted or re-purposed. ASBUILT field used to hyper link as-built documents detailing construction of the CL. Field was added in Dec. 2016. DateBuilt Date field used to record month and year a road was constructed from Asbuilt. Data was collected previously without month information. Data without a known month is entered as "1/1/YYYY". When month and year are known enter as "M/1/YYYY". Month and Year from asbuilt. Added by Engineering/CIP. ACCEPTED Date field used to record the month, day, and year that a roadway was officially accepted by the City of San Marcos. Engineering signs off on acceptance letters and stores these documents. This field was added in May of 2018. Due to a lack of data, the date built field was copied into this field for older roadways. Going forward, all new roadways will have this date. . This field will typically be populated well after a road has been drawn into GIS. Entered by Engineering/CIP. ****In an effort to make summarizing the data more efficient in Operations Dashboard, a generic date of "1/1/1900" was assigned to all COSM owned or maintained roads that had NULL values. These were roads that either have not been accepted yet, or roads that were expcepted a long time ago and their accepted date is not known. WARRANTY_EXP Date field used to record the expiration date of a newly accepted roadway. Typically this is one year from acceptance date, but can be greater. This field was added in May of 2018, so only roadways that have been excepted since and older roadways with valid warranty dates within this time frame have been populated.
Facebook
Twitterhttps://dataverse.ird.fr/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.23708/LHTEVZhttps://dataverse.ird.fr/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.23708/LHTEVZ
The Millennium Coral Reef Mapping Project provides thematic maps of coral reefs worldwide at geomorphological scale. Maps were created by photo-interpretation of Landsat 7 and Landsat 8 satellite images. Maps are provided as standard Shapefiles usable in GIS software. The geomorphological classification scheme is hierarchical and includes 5 levels. The GIS products include for each polygon a number of attributes. The 5 level geomorphological attributes are provided (numerical codes or text). The Level 1 corresponds to the differentiation between oceanic and continental reefs. Then from Levels 2 to 5, the higher the level, the more detailed the thematic classification is. Other binary attributes specify for each polygon if it belongs to terrestrial area (LAND attribute), and sedimentary or hard-bottom reef areas (REEF attribute). Examples and more details on the attributes are provided in the references cited. The products distributed here were created by IRD, in their last version. Shapefiles for 102 atolls of France (in the Pacific and Indian Oceans) as mapped by the Global coral reef mapping project at geomorphological scale using LANDSAT satellite data (L7 and L8). The data set provides one zip file per region of interest. Global coral reef mapping project at geomorphological scale using LANDSAT satellite data (L7 and L8). Funded by National Aeronautics and Space Administration, NASA grants NAG5-10908 (University of South Florida, PIs: Franck Muller-Karger and Serge Andréfouët) and CARBON-0000-0257 (NASA, PI: Julie Robinson) from 2001 to 2007. Funded by IRD since 2003 (in kind, PI: Serge Andréfouët).
Facebook
TwitterThe Digital Geologic-GIS Map of Olympic National Park and Vicinity, Washington is composed of GIS data layers and GIS tables, and is available in the following GRI-supported GIS data formats: 1.) a 10.1 file geodatabase (olym_geology.gdb), a 2.) Open Geospatial Consortium (OGC) geopackage, and 3.) 2.2 KMZ/KML file for use in Google Earth, however, this format version of the map is limited in data layers presented and in access to GRI ancillary table information. The file geodatabase format is supported with a 1.) ArcGIS Pro map file (.mapx) file (olym_geology.mapx) and individual Pro layer (.lyrx) files (for each GIS data layer), as well as with a 2.) 10.1 ArcMap (.mxd) map document (olym_geology.mxd) and individual 10.1 layer (.lyr) files (for each GIS data layer). The OGC geopackage is supported with a QGIS project (.qgz) file. Upon request, the GIS data is also available in ESRI 10.1 shapefile format. Contact Stephanie O'Meara (see contact information below) to acquire the GIS data in these GIS data formats. In addition to the GIS data and supporting GIS files, three additional files comprise a GRI digital geologic-GIS dataset or map: 1.) this file (olym_geology_gis_readme.pdf), 2.) the GRI ancillary map information document (.pdf) file (olym_geology.pdf) which contains geologic unit descriptions, as well as other ancillary map information and graphics from the source map(s) used by the GRI in the production of the GRI digital geologic-GIS data for the park, and 3.) a user-friendly FAQ PDF version of the metadata (olym_geology_metadata_faq.pdf). Please read the olym_geology_gis_readme.pdf for information pertaining to the proper extraction of the GIS data and other map files. Google Earth software is available for free at: https://www.google.com/earth/versions/. QGIS software is available for free at: https://www.qgis.org/en/site/. Users are encouraged to only use the Google Earth data for basic visualization, and to use the GIS data for any type of data analysis or investigation. The data were completed as a component of the Geologic Resources Inventory (GRI) program, a National Park Service (NPS) Inventory and Monitoring (I&M) Division funded program that is administered by the NPS Geologic Resources Division (GRD). For a complete listing of GRI products visit the GRI publications webpage: For a complete listing of GRI products visit the GRI publications webpage: https://www.nps.gov/subjects/geology/geologic-resources-inventory-products.htm. For more information about the Geologic Resources Inventory Program visit the GRI webpage: https://www.nps.gov/subjects/geology/gri,htm. At the bottom of that webpage is a "Contact Us" link if you need additional information. You may also directly contact the program coordinator, Jason Kenworthy (jason_kenworthy@nps.gov). Source geologic maps and data used to complete this GRI digital dataset were provided by the following: Washington Division of Geology and Earth Resources. Detailed information concerning the sources used and their contribution the GRI product are listed in the Source Citation section(s) of this metadata record (olym_geology_metadata.txt or olym_geology_metadata_faq.pdf). Users of this data are cautioned about the locational accuracy of features within this dataset. Based on the source map scale of 1:100,000 and United States National Map Accuracy Standards features are within (horizontally) 50.8 meters or 166.7 feet of their actual location as presented by this dataset. Users of this data should thus not assume the location of features is exactly where they are portrayed in Google Earth, ArcGIS, QGIS or other software used to display this dataset. All GIS and ancillary tables were produced as per the NPS GRI Geology-GIS Geodatabase Data Model v. 2.3. (available at: https://www.nps.gov/articles/gri-geodatabase-model.htm).
Facebook
TwitterCan your desktop computer crunch the large GIS datasets that are becoming increasingly common across the geosciences? Do you have access to or the know-how to take advantage of advanced high performance computing (HPC) capability? Web based cyberinfrastructure takes work off your desk or laptop computer and onto infrastructure or "cloud" based data and processing servers. This talk will describe the HydroShare collaborative environment and web based services being developed to support the sharing and processing of hydrologic data and models. HydroShare supports the upload, storage, and sharing of a broad class of hydrologic data including time series, geographic features and raster datasets, multidimensional space-time data, and other structured collections of data. Web service tools and a Python client library provide researchers with access to HPC resources without requiring them to become HPC experts. This reduces the time and effort spent in finding and organizing the data required to prepare the inputs for hydrologic models and facilitates the management of online data and execution of models on HPC systems. This presentation will illustrate the use of web based data and computation services from both the browser and desktop client software. These web-based services implement the Terrain Analysis Using Digital Elevation Model (TauDEM) tools for watershed delineation, generation of hydrology-based terrain information, and preparation of hydrologic model inputs. They allow users to develop scripts on their desktop computer that call analytical functions that are executed completely in the cloud, on HPC resources using input datasets stored in the cloud, without installing specialized software, learning how to use HPC, or transferring large datasets back to the user's desktop. These cases serve as examples for how this approach can be extended to other models to enhance the use of web and data services in the geosciences.
Slides for AGU 2015 presentation IN51C-03, December 18, 2015
Facebook
TwitterThe files linked to this reference are the geospatial data created as part of the completion of the baseline vegetation inventory project for the NPS park unit. Current format is ArcGIS file geodatabase but older formats may exist as shapefiles. We converted the photointerpreted data into a format usable in a geographic information system (GIS) by employing three fundamental processes: (1) orthorectify, (2) digitize, and (3) develop the geodatabase. All digital map automation was projected in Universal Transverse Mercator (UTM), Zone 16, using the North American Datum of 1983 (NAD83). Orthorectify: We orthorectified the interpreted overlays by using OrthoMapper, a softcopy photogrammetric software for GIS. One function of OrthoMapper is to create orthorectified imagery from scanned and unrectified imagery (Image Processing Software, Inc., 2002). The software features a method of visual orientation involving a point-and-click operation that uses existing orthorectified horizontal and vertical base maps. Of primary importance to us, OrthoMapper also has the capability to orthorectify the photointerpreted overlays of each photograph based on the reference information provided. Digitize: To produce a polygon vector layer for use in ArcGIS (Environmental Systems Research Institute [ESRI], Redlands, California), we converted each raster-based image mosaic of orthorectified overlays containing the photointerpreted data into a grid format by using ArcGIS. In ArcGIS, we used the ArcScan extension to trace the raster data and produce ESRI shapefiles. We digitally assigned map-attribute codes (both map-class codes and physiognomic modifier codes) to the polygons and checked the digital data against the photointerpreted overlays for line and attribute consistency. Ultimately, we merged the individual layers into a seamless layer. Geodatabase: At this stage, the map layer has only map-attribute codes assigned to each polygon. To assign meaningful information to each polygon (e.g., map-class names, physiognomic definitions, links to NVCS types), we produced a feature-class table, along with other supportive tables and subsequently related them together via an ArcGIS Geodatabase. This geodatabase also links the map to other feature-class layers produced from this project, including vegetation sample plots, accuracy assessment (AA) sites, aerial photo locations, and project boundary extent. A geodatabase provides access to a variety of interlocking data sets, is expandable, and equips resource managers and researchers with a powerful GIS tool.