Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/
This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.
File Formats
Results are presented in three file formats:
tif: raster files that can be imported into GIS software (such as ArcGIS)
csv: human-readable text files that can also be opened in Excel
png: image files that can be viewed in standard desktop software and give a spatial view of results
Input Data
All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.
Hourly Data from 2000 to 2019
Wind: Copernicus ERA5 dataset; 17 by 27.5 km grid; 10 m wind speed
Wave: Copernicus Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis dataset; 3 by 5 km grid
Accessibility
The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site.
The Accessibility layer shows the percentage of time the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5) are below these limits for the month.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if
the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month.
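The per-month calculation described above can be sketched with NumPy (a minimal illustration of the method, not the Selkie implementation; array and function names are assumed):

```python
import numpy as np

def accessibility_percent(hs, wind, hs_limit=2.0, wind_limit=15.0):
    """Percentage of hourly timesteps in which both Hs and wind speed
    are below their operational limits."""
    hs = np.asarray(hs, dtype=float)
    wind = np.asarray(wind, dtype=float)
    ok = (hs < hs_limit) & (wind < wind_limit)  # is the site accessible this hour?
    return 100.0 * ok.sum() / ok.size

# Example: 4 hourly timesteps, only the first is within both limits
print(accessibility_percent([1.5, 2.5, 1.0, 3.0], [10.0, 10.0, 20.0, 5.0]))  # 25.0
```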
Environmental data is from the Copernicus data store (https://cds.climate.copernicus.eu/). Wave hourly data is from the 'Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis' dataset.
Wind hourly data is from the ERA5 dataset.
Availability
A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather
windows and other logistical factors (for example, the availability of repair vessels and personnel). A 2013 study by O'Connor et al. determined the
relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between
accessibility, at an Hs limit of 2 m and a wind speed limit of 15 m/s, and availability. This graph is used to calculate the availability layer from the accessibility layer.
The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage of time the
environmental conditions, i.e. the Hs (Atlantic-Iberian Biscay Irish Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined
by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total
number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship
between the two. Mature-technology reliability was assumed.
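The accessibility-to-availability step can be sketched as a simple interpolation on a curve (the anchor points below are placeholders for illustration only; the real values come from Fig. 1 of O'Connor et al. (2013) and are not reproduced here):

```python
import numpy as np

# Placeholder points sketching an accessibility -> availability curve.
# These are ILLUSTRATIVE values, not the published O'Connor et al. curve.
ACCESS_PTS = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
AVAIL_PTS = np.array([0.0, 35.0, 60.0, 78.0, 90.0, 97.0])

def availability_percent(accessibility):
    """Linearly interpolate availability (%) from accessibility (%)."""
    return float(np.interp(accessibility, ACCESS_PTS, AVAIL_PTS))
```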
Weather Window
The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the
given duration for the month.
The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset
(0.25° × 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.
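Matching each wave grid point to its nearest wind grid point can be sketched as a nearest-neighbour index lookup (a NumPy illustration; with xarray one would typically use `sel(..., method="nearest")` instead):

```python
import numpy as np

def nearest_index(coarse_axis, fine_values):
    """For each fine-grid coordinate, return the index of the nearest
    coarse-grid coordinate (applied per lat/lon axis)."""
    coarse_axis = np.asarray(coarse_axis)
    fine_values = np.asarray(fine_values)
    return np.abs(coarse_axis[None, :] - fine_values[:, None]).argmin(axis=1)

# 0.25-degree wind latitudes versus 0.05-degree wave latitudes
wind_lat = np.arange(48.0, 56.0, 0.25)
wave_lat = np.arange(48.0, 56.0, 0.05)
idx = nearest_index(wind_lat, wave_lat)  # one wind row per wave row
```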
The first step in calculating the weather window for a particular set of inputs (Hs, wind speed and duration) is to calculate the accessibility at each timestep.
The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep?
Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather
windows. Here all possible operating periods with a duration matching the required weather-window value are assessed to see if the weather conditions remain
suitable for the entire period. The percentage availability of the weather window is calculated based on the percentage of x-duration windows with suitable
weather conditions for their entire duration. The weather window availability can be considered as the probability of having the required weather window available
at any given point in the month.
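The window-scanning step described above can be sketched with a sliding window over the boolean accessibility series (a minimal NumPy illustration, not the Selkie implementation):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def weather_window_percent(hs, wind, duration, hs_limit=2.0, wind_limit=15.0):
    """Percentage of all possible `duration`-hour windows in which Hs and
    wind speed stay below their limits for every timestep."""
    ok = (np.asarray(hs) < hs_limit) & (np.asarray(wind) < wind_limit)
    windows = sliding_window_view(ok, duration)  # every possible start hour
    sustained = windows.all(axis=1)              # window fully accessible?
    return 100.0 * sustained.sum() / sustained.size
```

For example, with one bad hour in an otherwise calm 8-hour series and a 3-hour window, 3 of the 6 possible windows survive, giving 50 %.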
Extreme Wind and Wave
The Extreme wave layers show the highest significant wave height expected to occur during the given return period. The Extreme wind layers show the highest wind speed expected to occur during the given return period.
To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model to fit this reduced
portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model
that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values
for the selected return period. All calculations use the Python pyextremes library. Two methods are used: Block Maxima and peaks over threshold.
The Block Maxima method selects the annual maxima and fits a GEVD (Generalised Extreme Value Distribution).
The peaks_over_threshold method has two variable calculation parameters. The first is the percentile threshold above which values are selected as extreme (0.9 or 0.998). The
second is the minimum time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected
extremes and used to calculate the extreme value for the selected return period.
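The Block Maxima stage can be sketched with SciPy as a self-contained stand-in for the pyextremes calls actually used (the annual maxima below are synthetic, illustrative values):

```python
import numpy as np
from scipy.stats import genextreme

def return_value_bm(annual_maxima, return_period_years):
    """Fit a GEVD to annual maxima and return the value expected to be
    exceeded on average once per `return_period_years`."""
    shape, loc, scale = genextreme.fit(annual_maxima)
    # Exceedance probability 1/T per year -> the (1 - 1/T) quantile
    return genextreme.ppf(1.0 - 1.0 / return_period_years, shape, loc=loc, scale=scale)

rng = np.random.default_rng(0)
maxima = 8.0 + 1.5 * rng.gumbel(size=20)  # 20 years of synthetic annual maxima (m)
print(return_value_bm(maxima, 50))        # 50-year return value
```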
The City of Seattle Transportation GIS Datasets | https://data-seattlecitygis.opendata.arcgis.com/datasets?t=transportation | Lifecycle status: Production | Purpose: to enable open access to SDOT GIS data. This website includes over 60 transportation-related GIS datasets from categories such as parking, transit, pedestrian, bicycle, and roadway assets. | PDDL: https://opendatacommons.org/licenses/pddl/ | The City of Seattle makes no representation or warranty as to its accuracy. The City of Seattle has created this service for our GIS Open Data website. We do reserve the right to alter, suspend, re-host, or retire this service at any time and without notice. | Datasets: 2007 Traffic Flow Counts, 2008 Traffic Flow Counts, 2009 Traffic Flow Counts, 2010 Traffic Flow Counts, 2011 Traffic Flow Counts, 2012 Traffic Flow Counts, 2013 Traffic Flow Counts, 2014 Traffic Flow Counts, 2015 Traffic Flow Counts, 2016 Traffic Flow Counts, 2017 Traffic Flow Counts, 2018 Traffic Flow Counts, Areaways, Bike Racks, Blockface, Bridges, Channelization File Geodatabase, Collisions, Crash Cushions, Curb Ramps, dotMaps Active Projects, Dynamic Message Signs, Existing Bike Facilities, Freight Network, Greater Downtown Alleys, Guardrails, High Impact Areas, Intersections, Marked Crosswalks, One-Way Streets, Paid Area Curbspaces, Pavement Moratoriums, Pay Stations, Peak Hour Parking Restrictions, Planned Bike Facilities, Public Garages or Parking Lots, Radar Speed Signs, Restricted Parking Zone (RPZ) Program, Retaining Walls, SDOT Capital Projects Input, Seattle On Street Paid Parking-Daytime Rates, Seattle On Street Paid Parking-Evening Rates, Seattle On Street Paid Parking-Morning Rates, Seattle Streets, SidewalkObservations, Sidewalks, Snow Ice Routes, Stairways, Street Design Concept Plans, Street Ends (Shoreline), Street Furnishings, Street Signs, Street Use Permits Use Addresses, Streetcar Lines, Streetcar Stations, Traffic Beacons, Traffic Cameras, Traffic Circles, Traffic Detectors, 
Traffic Lanes, Traffic Signals, Transit Classification, Trees.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For the complete collection of data and models, see https://doi.org/10.21942/uva.c.5290546. Original model developed in 2016-17 in ArcGIS by Henk Pieter Sterk (www.rfase.org), with minor updates in 2021 by Stacy Shinneman and Henk Pieter Sterk. Model used to generate publication results: 'Hierarchical geomorphological mapping in mountainous areas', Matheus G.G. De Jong, Henk Pieter Sterk, Stacy Shinneman & Arie C. Seijmonsbergen, submitted to Journal of Maps 2020, revisions made in 2021. This model creates tiers (columns) of geomorphological features (Tier 1, Tier 2 and Tier 3) in the landscape of Vorarlberg, Austria, each with an increasing level of detail. The input dataset needed to create this 'three-tier legend' is a geomorphological map of Vorarlberg with a Tier 3 category (e.g. 1111, for glacially eroded bedrock). The model then automatically adds Tier 1, Tier 2 and Tier 3 categories based on the Tier 3 code in the 'Geomorph' field. The model replaces the input file with an updated shapefile of the geomorphology of Vorarlberg, now including three tiers of geomorphological features. Python script files and .lyr symbology files are also provided here.
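The tier assignment can be sketched as follows, assuming purely for illustration that Tier 1 and Tier 2 are the leading one and two digits of the Tier 3 code (the actual lookup is defined inside the model):

```python
def add_tiers(geomorph_code):
    """Derive Tier 1 and Tier 2 categories from a 4-digit Tier 3 code.
    ASSUMPTION: tiers are nested digit prefixes (illustrative only)."""
    code = str(geomorph_code)
    return {"Tier1": code[:1], "Tier2": code[:2], "Tier3": code}

print(add_tiers(1111))  # {'Tier1': '1', 'Tier2': '11', 'Tier3': '1111'}
```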
Geographic Information System (GIS) analyses are an essential part of natural resource management and research. Calculating and summarizing data within intersecting GIS layers is common practice for analysts and researchers. However, the various tools and steps required to complete this process are slow and tedious, requiring many tools iterating over hundreds, or even thousands of datasets. USGS scientists will combine a series of ArcGIS geoprocessing capabilities with custom scripts to create tools that will calculate, summarize, and organize large amounts of data that can span many temporal and spatial scales with minimal user input. The tools work with polygons, lines, points, and rasters to calculate relevant summary data and combine them into a single output table that can be easily incorporated into statistical analyses. These tools are useful for anyone interested in using an automated script to quickly compile summary information within all areas of interest in a GIS dataset.
Toolbox Use
License
Creative Commons-PDDC
Recommended Citation
Welty JL, Jeffries MI, Arkle RS, Pilliod DS, Kemp SK. 2021. GIS Clipping and Summarization Toolbox: U.S. Geological Survey Software Release. https://doi.org/10.5066/P99X8558
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Input shapefiles for the Weighted Overlay Lab of UWSP's WATR 391 GIS course.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the processed geospatial data files needed to run the Open-Source Spatial Electrification Tool (OnSSET), for 20 countries in Sub Sahara Africa. These data files were created by the KTH team for the Global Electrification Platform (GEP) model (https://electrifynow.energydata.info/). To access result files and geospatial population clusters, please go to https://energydata.info/dataset/?q=gep.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Grid Garage Toolbox is designed to help you undertake the Geographic Information System (GIS) tasks required to process GIS data (geodata) into a standard, spatially aligned format. This format is required by most grid (raster) spatial modelling tools, such as the Multi-criteria Analysis Shell for Spatial Decision Support (MCAS-S). Grid Garage contains 36 tools designed to save you time by batch processing repetitive GIS tasks, as well as diagnosing problems with data and capturing a record of processing steps and any errors encountered. Grid Garage provides tools that use a list-based approach to batch processing, where both inputs and outputs are specified in tables to enable selective batch processing and detailed result reporting. In many cases the tools simply extend the functionality of standard ArcGIS tools, providing some or all of the inputs required by these tools via the input table to enable batch processing on a 'per item' basis. This approach differs slightly from normal batch processing in ArcGIS: instead of manually selecting single items or a folder on which to apply a tool or model, you provide a table listing target datasets. In summary, the Grid Garage allows you to: list, describe and manage very large volumes of geodata; batch process repetitive GIS tasks, such as managing (renaming, describing etc.) or processing (clipping, resampling, reprojecting etc.) many geodata inputs such as time-series geodata derived from satellite imagery or climate models; and record any errors when batch processing and diagnose errors by interrogating the input geodata that failed.
Develop your own models in ArcGIS ModelBuilder that allow you to automate any GIS workflow utilising one or more of the Grid Garage tools that can process an unlimited number of inputs. Automate the process of generating MCAS-S TIP metadata files for any number of input raster datasets. The Grid Garage is intended for use by anyone with an understanding of GIS principles and an intermediate to advanced level of GIS skills. Using the Grid Garage tools in ArcGIS ModelBuilder requires skills in the use of the ArcGIS ModelBuilder tool. Download Instructions: Create a new folder on your computer or network and then download and unzip the zip file from the GitHub Release page for each of the following items in the 'Data and Resources' section below. There is a folder in each zip file that contains all the files. See the Grid Garage User Guide for instructions on how to install and use the Grid Garage Toolbox with the sample data provided.
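The list-based batch approach described above can be sketched in plain Python (illustrative only; the real toolbox runs inside ArcGIS and records results to its own tables):

```python
import csv

def batch_process(table_path, process, log_path):
    """Run `process` on each input listed in a CSV table, recording
    per-item success or the error encountered (list-based batching)."""
    with open(table_path, newline="") as f, open(log_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["input", "status"])
        for row in csv.DictReader(f):
            try:
                process(row["input"])
                writer.writerow([row["input"], "ok"])
            except Exception as exc:  # capture the error, keep the batch going
                writer.writerow([row["input"], f"error: {exc}"])
```

Failed items stay in the log with their error message, so the offending geodata can be interrogated afterwards rather than aborting the whole run.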
This mapping tool provides a representation of the general watershed boundaries for stream systems declared fully appropriated by the State Water Board. The boundaries were created by Division of Water Rights staff by delineating FASS critical reaches and consolidating HUC 12 sub-watersheds to form FASS Watershed boundaries. As such, the boundaries are in most cases conservative with respect to the associated stream system. However, users should check neighboring FASS Watersheds to ensure the stream system of interest is not restricted by other FASS listings. For more information regarding the Declaration of Fully Appropriated Stream Systems, visit the Division of Water Rights’ Fully Appropriated Streams webpage. How to Use the Interactive Mapping Tool: If it is your first time viewing the map, you will need to click the “OK” box on the splash screen and agree to the disclaimer before continuing. Navigate to your point of interest by either using the search bar or by zooming in on the map. You may enter a stream name, street address, or watershed ID in the search bar. Click on the map to identify the location of interest, and one or more pop-up boxes may appear with information about the fully appropriated stream systems within the general watershed boundaries of the identified location. The information provided in the pop-up box may include: (a) stream name, (b) tributary, (c) season declared fully appropriated, (d) Board Decisions/Water Right Orders, and/or (e) court references/adjudications. You may toggle the FAS Streams reference layer on and off to find representative critical reaches associated with the FASS Watershed layer. Please note that this layer is for general reference purposes only; ultimately the critical reach listed in Appendix A of Water Rights Order 98-08, together with any associated footnotes, controls. Note: A separate FAS Watershed boundary layer was created for the Bay-Delta Watershed.
The Bay-Delta Watershed layer should be toggled on to check if the area of interest is fully appropriated under State Water Board Decision 1594.
This digital dataset was created as part of a U.S. Geological Survey study, done in cooperation with the Monterey County Water Resource Agency, to conduct a hydrologic resource assessment and develop an integrated numerical hydrologic model of the hydrologic system of Salinas Valley, CA. As part of this larger study, the USGS developed this digital dataset of geologic data and three-dimensional hydrogeologic framework models, referred to here as the Salinas Valley Geological Framework (SVGF), that define the elevation, thickness, extent, and lithology-based texture variations of nine hydrogeologic units in Salinas Valley, CA. The digital dataset includes a geospatial database that contains two main elements as GIS feature datasets: (1) input data to the 3D framework and textural models, within a feature dataset called “ModelInput”; and (2) interpolated elevation, thicknesses, and textural variability of the hydrogeologic units stored as arrays of polygonal cells, within a feature dataset called “ModelGrids”. The model input data in this data release include stratigraphic and lithologic information from water, monitoring, and oil and gas wells, as well as data from selected published cross sections, point data derived from geologic maps and geophysical data, and data sampled from parts of previous framework models. Input surface and subsurface data have been reduced to points that define the elevation of the top of each hydrogeologic unit at x,y locations; these point data, stored in a GIS feature class named “ModelInputData”, serve as digital input to the framework models. The locations of wells used as sources of subsurface stratigraphic and lithologic information are stored within the GIS feature class “ModelInputData”, but are also provided as separate point feature classes in the geospatial database. Faults that offset hydrogeologic units are provided as a separate line feature class.
Borehole data are also released as a set of tables, each of which may be joined or related to well location through a unique well identifier present in each table. Tables are in Excel and ASCII comma-separated value (CSV) format and include separate but related tables for well location, stratigraphic information on the depths to the top and base of hydrogeologic units intercepted downhole, downhole lithologic information reported at 10-foot intervals, and information on how lithologic descriptors were classed as sediment texture. Two types of geologic frameworks were constructed and released within a GIS feature dataset called “ModelGrids”: (1) a hydrostratigraphic framework where the elevation, thickness, and spatial extent of the nine hydrogeologic units were defined based on interpolation of the input data, and (2) a textural model for each hydrogeologic unit based on interpolation of classed downhole lithologic data. Each framework is stored as an array of polygonal cells: essentially a “flattened”, two-dimensional representation of a digital 3D geologic framework. The elevation and thickness of the hydrogeologic units are contained within a single polygon feature class, SVGF_3DHFM, which contains a mesh of polygons representing model cells with multiple attributes, including XY location and the elevation and thickness of each hydrogeologic unit. Textural information for each hydrogeologic unit is stored in a second array of polygonal cells called SVGF_TextureModel. The spatial data are accompanied by non-spatial tables that describe the sources of geologic information, a glossary of terms, and a description of the nine hydrogeologic units modeled in this study. A data dictionary defines the structure of the dataset, defines all fields in all spatial data attribute tables and all columns in all non-spatial tables, and duplicates the Entity and Attribute information contained in the metadata file. Spatial data are also presented as shapefiles.
Downhole data from boreholes are released as a set of tables related by a unique well identifier; tables are in Excel and ASCII comma-separated value (CSV) format.
This dataset was updated April 2024. This ownership dataset was generated primarily from CPAD data, which already tracks the majority of ownership information in California. CPAD is utilized without any snapping or clipping to FRA/SRA/LRA. CPAD has some important data gaps, so additional data sources are used to supplement the CPAD data. Currently this includes the most currently available data from BIA, DOD, and FWS. Additional sources may be added in subsequent versions. Decision rules were developed to identify priority layers in areas of overlap. Starting in 2022, the ownership dataset was compiled using a new methodology. Previous versions attempted to match federal ownership boundaries to the FRA footprint, and used a manual process for checking and tracking Federal ownership changes within the FRA, with CPAD ownership information only being used for SRA and LRA lands. The manual portion of that process was proving difficult to maintain, and the new method (described below) was developed in order to decrease the manual workload and increase accountability by using an automated process by which any final ownership designation can be traced back to a specific dataset. The current process for compiling the data sources includes: Clipping input datasets to the California boundary. Filtering the FWS data on the Primary Interest field to exclude lands that are managed by but not owned by FWS (ex: leases, easements, etc.). Supplementing the BIA Pacific Region Surface Trust lands data with the Western Region portion of the LAR dataset, which extends into California. Filtering the BIA data on the Trust Status field to exclude areas that represent mineral rights only.
Filtering the CPAD data on the Ownership Level field to exclude areas that are privately owned (ex: HOAs). In the case of overlap, sources were prioritized as follows: FWS > BIA > CPAD > DOD. As an exception to the above, DOD lands on FRA which overlapped with CPAD lands that were incorrectly coded as non-Federal were treated as an override, such that the DOD designation could win out over CPAD. In addition to this ownership dataset, a supplemental _source dataset is available which designates the source that was used to determine the ownership in this dataset. Data Sources: GreenInfo Network's California Protected Areas Database (CPAD2023a). https://www.calands.org/cpad/; https://www.calands.org/wp-content/uploads/2023/06/CPAD-2023a-Database-Manual.pdf US Fish and Wildlife Service FWSInterest dataset (updated December 2023). https://gis-fws.opendata.arcgis.com/datasets/9c49bd03b8dc4b9188a8c84062792cff_0/explore Department of Defense Military Bases dataset (updated September 2023) https://catalog.data.gov/dataset/military-bases Bureau of Indian Affairs, Pacific Region, Surface Trust and Pacific Region Office (PRO) land boundaries data (2023) via John Mosley John.Mosley@bia.gov Bureau of Indian Affairs, Land Area Representations (LAR) and BIA Regions datasets (updated Oct 2019) https://biamaps.doi.gov/bogs/datadownload.html Data Gaps & Changes: Known gaps include several BOR, ACE and Navy lands which were not included in CPAD nor the DOD MIRTA dataset. Our hope for future versions is to refine the process by pulling in additional data sources to fill in some of those data gaps. Additionally, any feedback received about missing or inaccurate data can be taken back to the appropriate source data where appropriate, so fixes can occur in the source data, instead of just in this dataset. 24_1: Input datasets this year included numerous changes since the previous version, particularly the CPAD and DOD inputs.
Of particular note was the re-addition of Camp Pendleton to the DOD input dataset, which is reflected in this version of the ownership dataset. We were unable to obtain an updated input for tribal data, so the previous input was used for this version. 23_1: A few discrepancies were discovered between data changes that occurred in CPAD when compared with parcel data. These issues will be taken to CPAD for clarification in future updates, but ownership23_1 reflects the data as it was coded in CPAD at the time. In addition, there was a change in the DOD input data between last year and this year, with the removal of Camp Pendleton. An inquiry was sent for clarification on this change, but ownership23_1 reflects the data per the DOD input dataset. 22_1: Represents an initial version of ownership with a new methodology which was developed under a short timeframe. A comparison with previous versions of ownership highlighted some data gaps in the current version. Some of these known gaps include several BOR, ACE and Navy lands which were not included in CPAD nor the DOD MIRTA dataset. Our hope for future versions is to refine the process by pulling in additional data sources to fill in some of those data gaps. In addition, any topological errors (like overlaps or gaps) that exist in the input datasets may carry over to the ownership dataset. Ideally, any feedback received about missing or inaccurate data can be taken back to the relevant source data where appropriate, so fixes can occur in the source data, instead of just in this dataset.
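The overlap decision rule (FWS > BIA > CPAD > DOD) amounts to a simple priority pick, sketched below (illustrative only; the DOD-on-FRA override exception described above is omitted):

```python
PRIORITY = ["FWS", "BIA", "CPAD", "DOD"]  # highest priority first

def resolve_owner(overlapping_sources):
    """Return the winning source for an area covered by several datasets."""
    for source in PRIORITY:
        if source in overlapping_sources:
            return source
    return None  # no contributing source covers this area

print(resolve_owner({"DOD", "CPAD"}))  # CPAD
```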
AddressNC has been prioritized by the North Carolina Geographic Information Coordinating Council (GICC) as a critical framework dataset. The AddressNC Program runs parallel to and is derived from the North Carolina 911 Board Next Generation 911 (NG911) Program. Address data has been identified as mission critical for validation and accurate call routing within NG911 and the AddressNC Program completes a full-circle approach of address maintenance and sustainability through applied enhancements and quality control beyond 911 requirements. A primary goal of AddressNC is to continually develop and maintain quality address points on a continuous cycle through updates published in NG911. Various agencies in federal, state, and local government can benefit by applying practical applications of quality addressing in their own programs, negating the need to rely on outdated statewide addressing data and/or using paid address data sets from third party sources.
Accessibility is defined as the travel time to a location of interest using land (road/off-road) or water (navigable river, lake and ocean) based travel. This accessibility is computed using a cost-distance algorithm which computes the “cost” of traveling between two locations on a regular raster grid. Generally this cost is measured in units of time. The input GIS data and underlying model were developed by Andrew Nelson in the GEM (Global Environment Monitoring) unit, in collaboration with the World Bank’s Development Research Group, between October 2007 and May 2008. Pixel values represent minutes of travel time. Available dataset: Joint Research Centre - Land Resource Management Unit
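A cost-distance computation of this kind can be sketched with Dijkstra's algorithm on a raster grid (a minimal illustration; real tools also handle diagonal moves, cell size, projections and anisotropy):

```python
import heapq

def cost_distance(cost, start):
    """Minimum cumulative travel cost (e.g. minutes) from `start` to every
    cell of a 2D cost raster, moving between 4-connected neighbours."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[start[0]][start[1]] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # cost of stepping between cell centres: average of both cells
                nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist
```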
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The GRASS GIS database containing the input raster layers needed to reproduce the results from the manuscript entitled:
"Mapping forests with different levels of naturalness using machine learning and landscape data mining" (under review)
Abstract:
To conserve biodiversity, it is imperative to maintain and restore sufficient amounts of functional habitat networks. Hence, locating remaining forests with natural structures and processes over landscapes and large regions is a key task. We integrated machine learning (Random Forest) and wall-to-wall open landscape data to scan all forest landscapes in Sweden with a 1 ha spatial resolution with respect to the relative likelihood of hosting High Conservation Value Forests (HCVF). Using independent spatial stand- and plot-level validation data we confirmed that our predictions (ROC AUC in the range of 0.89 - 0.90) correctly represent forests with different levels of naturalness, from deteriorated to those with high and associated biodiversity conservation values. Given ambitious national and international conservation objectives, and increasingly intensive forestry, our model and the resulting wall-to-wall mapping fills an urgent gap for assessing fulfilment of evidence-based conservation targets, spatial planning, and designing forest landscape restoration.
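The modelling step in the abstract can be sketched with scikit-learn (a generic Random Forest illustration on synthetic data, standing in for the study's landscape predictors and HCVF labels):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for per-hectare landscape predictors and HCVF labels
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Train on one part, evaluate ROC AUC on held-out data
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
likelihood = model.predict_proba(X[400:])[:, 1]  # relative HCVF likelihood
print(roc_auc_score(y[400:], likelihood))
```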
This database was compiled from the following sources:
source: https://geodata.naturvardsverket.se/nedladdning/skogliga_vardekarnor_2016.zip
source: https://www.lantmateriet.se/en/geodata/geodata-products/product-list/terrain-model-download-grid-50/
source: https://glad.earthengine.app
source: https://doi.org/10.6084/m9.figshare.9828827.v2
source: https://www.scb.se/en/services/open-data-api/open-geodata/grid-statistics/
To learn more about the GRASS GIS database structure, see:
The National Hydrography Dataset Plus High Resolution (NHDPlus High Resolution) maps the lakes, ponds, streams, rivers and other surface waters of the United States. Created by the US Geological Survey, NHDPlus High Resolution provides mean annual flow and velocity estimates for rivers and streams. Additional attributes provide connections between features, facilitating complicated analyses. For more information on the NHDPlus High Resolution dataset, see the User’s Guide for the National Hydrography Dataset Plus (NHDPlus) High Resolution.
Dataset Summary
Phenomenon Mapped: Surface waters and related features of the United States and associated territories
Geographic Extent: The Contiguous United States, Hawaii, portions of Alaska, Puerto Rico, Guam, US Virgin Islands, Northern Marianas Islands, and American Samoa
Projection: Web Mercator Auxiliary Sphere
Visible Scale: Visible at all scales, but the layer draws best at scales larger than 1:1,000,000
Source: USGS
Update Frequency: Annual
Publication Date: July 2022
This layer was symbolized in the ArcGIS Map Viewer; while the features will draw in the Classic Map Viewer, the advanced symbology will not. Prior to publication, the network and non-network flowline feature classes were combined into a single flowline layer. Similarly, the Area and Waterbody feature classes were merged under a single schema. Attribute fields were added to the flowline and waterbody layers to simplify symbology and enhance the layer’s pop-ups. Fields added include Pop-up Title, Pop-up Subtitle, Esri Symbology (waterbodies only), and Feature Code Description. All other attributes are from the original dataset. No-data values -9999 and -9998 were converted to Null values.
What can you do with this layer?
Feature layers work throughout the ArcGIS system. Generally your workflow with feature layers will begin in ArcGIS Online or ArcGIS Pro.
Below are just a few of the things you can do with a feature service in Online and Pro.ArcGIS OnlineAdd this layer to a map in the map viewer. The layer or a map containing it can be used in an application. Change the layer’s transparency and set its visibility rangeOpen the layer’s attribute table and make selections. Selections made in the map or table are reflected in the other. Center on selection allows you to zoom to features selected in the map or table and show selected records allows you to view the selected records in the table.Apply filters. For example you can set a filter to show larger streams and rivers using the mean annual flow attribute or the stream order attribute.Change the layer’s style and symbologyAdd labels and set their propertiesCustomize the pop-upUse as an input to the ArcGIS Online analysis tools. This layer works well as a reference layer with the trace downstream and watershed tools. The buffer tool can be used to draw protective boundaries around streams and the extract data tool can be used to create copies of portions of the data.ArcGIS ProAdd this layer to a 2d or 3d map.Use as an input to geoprocessing. For example, copy features allows you to select then export portions of the data to a new feature class.Change the symbology and the attribute field used to symbolize the dataOpen table and make interactive selections with the mapModify the pop-upsApply Definition Queries to create sub-sets of the layerThis layer is part of the ArcGIS Living Atlas of the World that provides an easy way to explore the landscape layers and many other beautiful and authoritative maps on hundreds of topics.Questions?Please leave a comment below if you have a question about this layer, and we will get back to you as soon as possible.
A Groundwater Nitrate Decision Support Tool (GW-NDST) for wells in Wisconsin was developed to assist resource managers with assessing how legacy and possible future nitrate leaching rates, combined with groundwater lag times and potential denitrification, influence nitrate concentrations in wells (Juckem et al. 2024). The GW-NDST relies on several support models, including machine-learning models that require numerous GIS input files. This data release contains all GIS files required to run the GW-NDST and its machine-learning support models. The GIS files are packaged into three ZIP files (WI_County.zip, WT-ML.zip, and WI_Buff1km.zip) which are contained in this data release. Before running the GW-NDST, these ZIP files need to be downloaded and unzipped inside the "data_in/GIS/" subdirectory of the GW-NDST. The GW-NDST can be downloaded from the official software release on GitLab (https://doi.org/10.5066/P13ETB4Q). Further instructions for running the GW-NDST, and for acquiring requisite files, can be found in the software's readme file.
This submission contains an ESRI map package (.mpk) with an embedded geodatabase for GIS resources used or derived in the Nevada Machine Learning project, meant to accompany the final report. The package includes layer descriptions, layer grouping, and symbology. Layer groups include: new/revised datasets (paleo-geothermal features, geochemistry, geophysics, heat flow, slip and dilation, potential structures, geothermal power plants, positive and negative test sites), machine learning model input grids, machine learning models (Artificial Neural Network (ANN), Extreme Learning Machine (ELM), Bayesian Neural Network (BNN), Principal Component Analysis (PCA/PCAk), Non-negative Matrix Factorization (NMF/NMFk) - supervised and unsupervised), original NV Play Fairway data and models, and NV cultural/reference data. See layer descriptions for additional metadata. Smaller GIS resource packages (by category) can be found in the related datasets section of this submission. A submission linking the full codebase for generating machine learning output models is available through the "Related Datasets" link on this page, and contains results beyond the top picks present in this compilation.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This is the GIS output for input into R / Stata for analysis.
This dataset represents a unique compiled environmental data set for the circumpolar Arctic Ocean region, 45N to 90N. It consists of 170 layers (mostly marine, some terrestrial) in ArcGIS 10 format to be used with a Geographic Information System (GIS) and which are listed below in detail. Most layers are long-term average raster GRIDs for the summer season, often by ocean depth, and represent value-added products that are easy to use. The sources of the data are manifold, such as the World Ocean Atlas 2009 (WOA09), the International Bathymetric Chart of the Arctic Ocean (IBCAO), Canadian Earth System Model 2 (CanESM2) data (the newest generation of models available) and data sources such as plankton databases and OBIS. Ocean layers were modeled and predicted into the future, and zooplankton species were modeled based on future data: Calanus hyperboreus (AphiaID 104467), Metridia longa (AphiaID 104632), M. pacifica (AphiaID 196784) and Thysanoessa raschii (AphiaID 110711). Some layers are derived within ArcGIS. Layers have pixel sizes between 1215.819573 meters and 25257.72929 meters for the best pooled model, and between 224881.2644 and 672240.4095 meters for future climate data. Data was then reprojected into North Pole Stereographic projection in meters (WGS84 as the geographic datum). Also, future layers are included as a selected subset of proposed future climate layers from the Canadian CanESM2 for the next 100 years (scenario runs rcp26 and rcp85). The following layer groups are available: bathymetry (depth, derived slope and aspect); proximity layers (to glaciers, sea ice, protected areas, wetlands, shelf edge); dissolved oxygen, apparent oxygen, percent oxygen, nitrogen, phosphate, salinity, silicate (all for August and for 9 depth classes); runoff (proximity, annual and August); sea surface temperature; waterbody temperature (12 depth classes); modeled ocean boundary layers (H1, H2, H3 and Wx). This dataset is used for a M.Sc.
thesis by the author, and is freely available upon request. For questions and details we suggest contacting the authors. Process_Description: Please contact Moritz Schmid for the thesis and detailed explanations. Short version: We modeled and predicted here, for the first time, ocean layers in the Arctic Ocean based on a unique dataset of physical oceanography. Moreover, we developed presence/random-absence models that indicate where the studied zooplankton species are most likely to be present in the Arctic Ocean. Apart from that, we developed the first spatially explicit models known to science that describe the depth at which the studied zooplankton species are most likely to be, as well as their distribution of life stages. We did not do this for only one present-day scenario: we modeled five different scenarios, including future climate data. First, we modeled and predicted ocean layers using the most up-to-date data from various open-access sources, referred to here as the best-pooled model data. We decided to model this set of stratification layers after discussions and input of expert knowledge by Professor Igor Polyakov from the International Arctic Research Center at the University of Alaska Fairbanks. We predicted those stratification layers because those are the boundaries and layers that the plankton has to cross for diel vertical migration, and a change in those would most likely affect the migration. We assigned four variables to the stratification layers: H1, H2, H3 and Wx. H1 is the lower boundary of the mixed layer depth; above this layer, atmospheric disturbance causes mixing of the water, giving the mixed layer its name. H2, the middle of the halocline, is important because in this part of the ocean a strong gradient in salinity and temperature separates water layers. H3, the isotherm, is important because beneath it flows denser and colder Atlantic water. Wx summarizes the overall width of the described water column.
Ocean layers were predicted using machine learning algorithms (TreeNet, Salford Systems). Second, ocean layers were included as predictors and used to predict the presence/random-absence, most-likely-depth and life-stage layers for the zooplankton species Calanus hyperboreus, Metridia longa, Metridia pacifica and Thysanoessa raschii. This process was repeated for future predictions based on the CanESM2 data (see the data section). For the zooplankton species, the following layers were developed, including for the future. C. hyperboreus: best-pooled model as well as future predictions (rcp26 including ocean layers (also excluding), rcp85 including ocean layers (also excluding)) for 2010 and 2100. For parameters: presence/random absence, most likely depth and life stage layers. M. longa: best-pooled model as well as future predictions (rcp26 including ocean layers (also excluding), rcp85 including ocean layers (also excluding)) for 2010 and 2100. For parameters: Presence/rand... Visit https://dataone.org/datasets/f63d0f6c-7d53-46ce-b755-42a368007601 for complete metadata about this dataset.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A conference paper describing GIS tools developed in support of the blast loss estimation capability for the Australian Reinsurance Pool Corporation. The paper focuses on GIS tools developed for exposure database construction and integration of a number of datasets, including 3D building geometry.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data was prepared as input for the Selkie GIS-TE tool. This GIS tool aids site selection, logistics optimization and financial analysis of wave or tidal farms in the Irish and Welsh maritime areas. Read more here: https://www.selkie-project.eu/selkie-tools-gis-technoeconomic-model/
This research was funded by the Science Foundation Ireland (SFI) through MaREI, the SFI Research Centre for Energy, Climate and the Marine and by the Sustainable Energy Authority of Ireland (SEAI). Support was also received from the European Union's European Regional Development Fund through the Ireland Wales Cooperation Programme as part of the Selkie project.
File Formats
Results are presented in three file formats:
tif - Can be imported into GIS software (such as ArcGIS)
csv - Human-readable text format, which can also be opened in Excel
png - Image files that can be viewed in standard desktop software and give a spatial view of results
Input Data
All calculations use open-source data from the Copernicus store and the open-source software Python. The Python xarray library is used to read the data.
Hourly Data from 2000 to 2019
Wind - Copernicus ERA5 dataset, 17 by 27.5 km grid, 10 m wind speed
Wave - Copernicus Atlantic-Iberian Biscay Irish- Ocean Wave Reanalysis dataset, 3 by 5 km grid
Accessibility
The maximum limits for Hs and wind speed are applied when mapping the accessibility of a site.
The Accessibility layer shows the percentage of time that the Hs (from the Atlantic-Iberian Biscay Irish- Ocean Wave Reanalysis) and the wind speed (from ERA5) are below these limits for the month.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined by checking if
the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total number of hours for the month.
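The per-month accessibility calculation described above can be sketched in a few lines of Python. This is a minimal illustration with NumPy; the limit values shown are hypothetical user inputs, not fixed defaults of the tool:

```python
import numpy as np

# Hypothetical operational limits; in the tool these are user-selected inputs.
HS_LIMIT = 2.0     # significant wave height limit, metres
WIND_LIMIT = 15.0  # 10 m wind speed limit, m/s

def accessibility_pct(hs: np.ndarray, wind: np.ndarray) -> float:
    """Percentage of hourly timesteps in the month where both the Hs
    and the wind speed are below their respective limits."""
    within_limits = (hs < HS_LIMIT) & (wind < WIND_LIMIT)
    return 100.0 * within_limits.sum() / within_limits.size

# Tiny worked example: 4 hourly timesteps, 2 of which are within limits.
hs = np.array([1.0, 1.5, 3.0, 1.0])
wind = np.array([10.0, 16.0, 10.0, 5.0])
print(accessibility_pct(hs, wind))  # -> 50.0
```

In the real workflow this boolean reduction is applied per grid cell across all hours of each calendar month in the 2000-2019 record.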
Environmental data is from the Copernicus data store (https://cds.climate.copernicus.eu/). Wave hourly data is from the 'Atlantic-Iberian Biscay Irish- Ocean Wave Reanalysis' dataset.
Wind hourly data is from the ERA5 dataset.
Availability
A device's availability to produce electricity depends on the device's reliability and the time to repair any failures. The repair time depends on weather
windows and other logistical factors (for example, the availability of repair vessels and personnel). A 2013 study by O'Connor et al. determined the
relationship between the accessibility and availability of a wave energy device. The resulting graph (see Fig. 1 of their paper) shows the correlation between
accessibility, at an Hs limit of 2 m and a wind speed limit of 15 m/s, and availability. This graph is used to calculate the availability layer from the accessibility layer.
The input value, accessibility, measures how accessible a site is for installation or operation and maintenance activities. It is the percentage of time the
environmental conditions, i.e. the Hs (Atlantic-Iberian Biscay Irish- Ocean Wave Reanalysis) and wind speed (ERA5), are below operational limits.
Input data is 20 years of hourly wave and wind data from 2000 to 2019, partitioned by month. At each timestep, the accessibility of the site was determined
by checking if the Hs and wind speed were below their respective limits. The percentage accessibility is the number of hours within limits divided by the total
number of hours for the month. Once the accessibility was known, the percentage availability was calculated using the O'Connor et al. graph of the relationship
between the two. Mature-technology reliability was assumed.
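Because the availability layer is read off a published accessibility-availability curve, the lookup reduces to interpolation. The sketch below illustrates the idea; the curve points are purely illustrative placeholders, not the values from O'Connor et al.'s Fig. 1:

```python
import numpy as np

# Illustrative sample points for an accessibility -> availability curve.
# The real values would be digitised from Fig. 1 of O'Connor et al. (2013)
# for the mature-technology reliability case; these numbers are made up.
ACCESS_PCT = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
AVAIL_PCT = np.array([0.0, 35.0, 60.0, 78.0, 90.0, 97.0])

def availability_pct(accessibility: float) -> float:
    """Linearly interpolate availability from monthly accessibility."""
    return float(np.interp(accessibility, ACCESS_PCT, AVAIL_PCT))

print(availability_pct(50.0))  # halfway between 60.0 and 78.0 -> 69.0
```

Piecewise-linear interpolation with np.interp is a natural choice here because the source is a digitised curve rather than an analytic formula.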
Weather Window
The weather window availability is the percentage of possible x-duration windows where weather conditions (Hs, wind speed) are below maximum limits for the
given duration for the month.
The resolution of the wave dataset (0.05° × 0.05°) is higher than that of the wind dataset
(0.25° × 0.25°), so the nearest wind value is used for each wave data point. The weather window layer is at the resolution of the wave layer.
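The nearest-neighbour matching of the coarser wind grid onto the finer wave grid can be sketched with plain NumPy index arithmetic (with xarray, the equivalent would be a `wind.sel(latitude=..., longitude=..., method="nearest")` call; the grid coordinates below are invented for illustration):

```python
import numpy as np

def nearest_wind(wave_lats, wave_lons, wind_lats, wind_lons, wind_field):
    """For each point of the finer wave grid, take the wind value at the
    nearest point of the coarser wind grid (nearest-neighbour lookup on
    each axis independently, valid for regular lat/lon grids)."""
    lat_idx = np.abs(wind_lats[:, None] - wave_lats[None, :]).argmin(axis=0)
    lon_idx = np.abs(wind_lons[:, None] - wave_lons[None, :]).argmin(axis=0)
    return wind_field[np.ix_(lat_idx, lon_idx)]

# Invented example: a 3x2 wind grid sampled at two wave points per axis.
wind_lats = np.array([50.0, 50.25, 50.5])
wind_lons = np.array([-6.0, -5.75])
wind_field = np.arange(6.0).reshape(3, 2)
out = nearest_wind(np.array([50.04, 50.26]), np.array([-5.99, -5.8]),
                   wind_lats, wind_lons, wind_field)
# Each wave point now carries the wind value of its nearest wind cell.
```

Sampling per axis keeps the lookup cheap on regular grids; no reprojection or averaging of the wind field is implied.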
The first step in calculating the weather window for a particular set of inputs (Hs, wind speed and duration) is to calculate the accessibility at each timestep.
The accessibility is based on a simple boolean evaluation: are the wave and wind conditions within the required limits at the given timestep?
Once the time series of accessibility is calculated, the next step is to look for periods of sustained favourable environmental conditions, i.e. the weather
windows. Here all possible operating periods with a duration matching the required weather-window value are assessed to see if the weather conditions remain
suitable for the entire period. The percentage availability of the weather window is calculated based on the percentage of x-duration windows with suitable
weather conditions for their entire duration. The weather window availability can be considered as the probability of having the required weather window available
at any given point in the month.
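The window search described above — test every possible start hour and keep the windows where conditions stay within limits for the whole duration — can be sketched with a moving sum over the boolean accessibility series (a minimal illustration; the actual tool applies this per grid cell to the hourly data):

```python
import numpy as np

def weather_window_pct(accessible: np.ndarray, duration: int) -> float:
    """Percentage of all possible `duration`-hour windows in which every
    hour is accessible. `accessible` is the per-hour boolean series."""
    n_windows = accessible.size - duration + 1
    if n_windows <= 0:
        return 0.0
    # Moving sum of the 0/1 series; a window is good iff its sum == duration.
    csum = np.concatenate(([0], np.cumsum(accessible.astype(int))))
    window_sums = csum[duration:] - csum[:-duration]
    return 100.0 * (window_sums == duration).sum() / n_windows

# 6 hours, 2-hour windows: the windows starting at h0, h3 and h4 qualify.
accessible = np.array([True, True, False, True, True, True])
print(weather_window_pct(accessible, 2))  # -> 60.0
```

The cumulative-sum trick keeps the check O(n) regardless of the window duration, which matters when scanning 20 years of hourly data per grid cell.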
Extreme Wind and Wave
The Extreme wave layers show the highest significant wave height expected to occur during the given return period. The Extreme wind layers show the highest wind speed expected to occur during the given return period.
To predict extreme values, we use Extreme Value Analysis (EVA). EVA focuses on the extreme part of the data and seeks to determine a model to fit this reduced
portion accurately. EVA consists of three main stages. The first stage is the selection of extreme values from a time series. The next step is to fit a model
that best approximates the selected extremes by determining the shape parameters for a suitable probability distribution. The model then predicts extreme values
for the selected return period. All calculations use the Python pyextremes library. Two methods are used: Block Maxima and Peaks over Threshold.
The Block Maxima method selects the annual maxima and fits a GEVD (Generalised Extreme Value Distribution) to them.
The peaks_over_threshold method has two variable calculation parameters. The first is the percentile above which values must lie to be selected as extreme (0.9 or 0.998). The
second input is the time difference between extreme values for them to be considered independent (3 days). A Generalised Pareto Distribution is fitted to the selected
extremes and used to calculate the extreme value for the selected return period.
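In pyextremes the threshold selection and declustering are handled internally (roughly: build an `EVA` model, call `get_extremes(method="POT", ...)`, `fit_model()` and `get_return_value(...)`). The core selection idea — keep exceedances above the percentile threshold, and merge exceedances closer together than the independence gap into a single peak — can be sketched in plain NumPy. This is a simplified illustration of the declustering step only, not the library's exact algorithm, and the GPD fit itself is omitted:

```python
import numpy as np

def pot_extremes(values, times_h, percentile=99.5, min_gap_h=72.0):
    """Peaks-over-threshold selection with a simple independence rule:
    exceedances closer together than `min_gap_h` hours are treated as one
    event, and only the largest value in each cluster is kept."""
    values = np.asarray(values, dtype=float)
    times_h = np.asarray(times_h, dtype=float)
    threshold = np.percentile(values, percentile)
    exceed = np.flatnonzero(values > threshold)
    peaks, cluster = [], []
    for i in exceed:
        # A gap of at least min_gap_h hours starts a new independent event.
        if cluster and times_h[i] - times_h[cluster[-1]] >= min_gap_h:
            peaks.append(max(cluster, key=lambda j: values[j]))
            cluster = []
        cluster.append(i)
    if cluster:
        peaks.append(max(cluster, key=lambda j: values[j]))
    return threshold, values[np.array(peaks, dtype=int)]

# Synthetic hourly series: flat sea state with three storm spikes, two of
# which (hours 100 and 110) fall within 72 h and so count as one event.
times = np.arange(1000.0)
series = np.ones(1000)
series[100], series[110], series[500] = 5.0, 6.0, 7.0
_, peaks = pot_extremes(series, times)
print(sorted(peaks.tolist()))  # -> [6.0, 7.0]
```

The GPD would then be fitted to the surviving peaks (minus the threshold) to extrapolate to the chosen return period.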