This service is an image service created from a mosaic dataset that references TIFF imagery taken in 2023. Overviews were created in the mosaic dataset to optimize rendering at small scales.
Coastwide vegetation surveys have been conducted multiple times over the past 50 years (e.g., Chabreck and Linscombe 1968, 1978, 1988, 1997, 2001, and 2013) by the Louisiana Department of Wildlife and Fisheries (LDWF) in support of coastal management activities. The last survey was conducted in 2013 and was funded by the Louisiana Coastal Protection and Restoration Authority (CPRA) and the U.S. Geological Survey (USGS) as a part of the Coastal Wetlands Planning, Protection, and Restoration Act (CWPPRA) monitoring program. These surveys provide important data that have been utilized by federal, state, and local resource managers. The surveys provide information on the condition of Louisiana’s coastal marshes by mapping plant species composition and vegetation change through time. During the summer of 2021, the U.S. Geological Survey, Louisiana State University, and the Louisiana Department of Wildlife and Fisheries jointly completed a helicopter survey to collect data on 2021 vegetation types using the same field methodology at previously sampled data points. Plant species were identified and their abundance classified at each point. Based on species composition and abundance, each marsh sampling station was assigned a marsh type: fresh, intermediate, brackish, or saline marsh. The field point data were interpolated to classify marsh vegetation into polygons and map the distribution of vegetation types. We then used the 2021 polygons with additional remote sensing data to create the final raster dataset. We used the polygon marsh type zones (available in this data release), as well as National Land Cover Database (NLCD; https://www.usgs.gov/centers/eros/science/national-land-cover-database) and NOAA Coastal Change Analysis Program (CCAP; https://coast.noaa.gov/digitalcoast/data/ccapregional.html) datasets to create a composite raster dataset. 
The composite raster was created to provide more detail than the polygon dataset, particularly for the “Other”, “Swamp”, and “Water” categories. The overall boundary of the raster product was extended beyond that of past surveys to better delineate swamp, water, and other boundaries across the coast. Published NLCD and CCAP classifications from the 2010-2019 period were used, rather than a raster classification created specifically for 2021, because of a preference for using published datasets. Users are cautioned that the raster dataset, while more detailed than the polygon dataset, is still generalized. This data release includes three datasets: the point field data collected by the helicopter survey team, the polygon data developed from the point data, and the raster data developed from the polygon data plus additional remote sensing data as described above.
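The compositing step, filling in detail from the ancillary NLCD/CCAP-style classes where the polygon-derived raster is generic, can be sketched as a conditional overlay. This is a minimal illustration with invented class codes, not the release's actual attribute values:

```python
import numpy as np

# Illustrative class codes (assumptions, not the release's actual values):
# marsh raster: 1-4 = marsh types, 9 = generic "Other"
# ancillary raster (NLCD/CCAP-style): 20 = swamp, 30 = open water, 40 = developed
marsh = np.array([[1, 2, 9],
                  [9, 3, 4],
                  [9, 9, 1]])
ancillary = np.array([[0, 0, 20],
                      [30, 0, 0],
                      [40, 20, 0]])

# Where the marsh raster holds the generic "Other" class, take the more
# detailed ancillary class; elsewhere keep the interpolated marsh type.
composite = np.where(marsh == 9, ancillary, marsh)
```

The same cell-by-cell logic scales to full rasters once the inputs share a grid and projection.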
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The present dataset is part of the Alaiz Experiment-2017 (ALEX17). The information is divided into two groups based on their source. 1) Two raster-type GeoTIFF files containing the Digital Elevation Model (DEM) and Digital Surface Model (DSM) data of the ALEX17 domain. The models were built by TRACASA (https://tracasa.es/all-about-us/), a company that is part of the Navarra Government. The original dataset is cropped to fit the ALEX17 experimental domain, with the following spatial coverage: 607700, 4720300 to 628010, 4738800. The datasets were generated from airborne lidar scans taken during the years 2011 and 2012 and updated by photogrammetry with orthophotos from 2014. The original lidar scans (2011-2012) have a density of 1 point/m^2. The raw data were then processed, converted to orthometric heights (from the original ellipsoidal heights), and projected onto a 2 x 2 m grid with spatial reference EPSG:25830. The conversion from ellipsoidal to orthometric height was carried out with the EGM2008_REDNAP model, generated by the Spanish National Geographic Institute and available at: ftp://ftp.geodesia.ign.es/geoide/ 2) The second dataset is also a raster-type file, containing the approximate annual mean of the aerodynamic roughness length in meters. The map was created from two data sources: visual estimation of the roughness length values and zones, and the Corine Land Cover (CLC) 2006 data. 2.1) The visual estimation of roughness values was carried out using both orthophotos gathered from the National Geographic Institute of Spain (IGN) and site visits. These values were assigned to the Alaiz mountain region, while 2.2) the CLC-derived roughness was set for the rest of the domain area. The orthophotos were obtained from the National Plan for Aerial Orthophotography (PNOA) program (available at http://www.ign.es/ign/layoutIn/faimgsataerea.do). These photos have a pixel size of 50 cm and were taken in summer 2014.
On the other hand, the Corine Land Cover (CLC) 2006 raster dataset has a 100 m grid size. These data are available at http://www.eea.europa.eu/data-and-maps/data/corine-land-cover-2006-raster-3 (g100_06.zip file). The roughness values were derived from the land cover data, mostly based on the relation between CLC classes and aerodynamic roughness length applied by the Finnish wind atlas (http://www.tuuliatlas.fi/modelling/mallinnus_3.html). The final composite roughness raster map was built by nearest-neighbor interpolation of the two data sources onto a 10 x 10 m grid. The map is projected with the same spatial reference as the DEM/DSM data described above.
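The nearest-neighbor assignment of sparse roughness values onto a regular 10 m grid can be sketched as follows. The sample locations and z0 values here are invented for illustration; the actual map used the visually estimated and CLC-derived zones described above:

```python
import numpy as np

# Hypothetical roughness samples: (x, y) locations and z0 values in meters
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
z0 = np.array([0.03, 0.4, 0.1])

# Target 10 x 10 m grid over a small window of the domain
xs, ys = np.meshgrid(np.arange(0, 101, 10.0), np.arange(0, 101, 10.0))
grid = np.column_stack([xs.ravel(), ys.ravel()])

# Nearest-neighbor interpolation: each cell takes the z0 of its closest sample
d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
rough = z0[d.argmin(axis=1)].reshape(xs.shape)
```

For domain-sized grids a spatial index (e.g. a k-d tree) replaces the brute-force distance matrix, but the assignment rule is the same.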
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Soil acidity is a natural process that can be exacerbated in farming systems. Current knowledge and data on the extent and severity of acidic soils in south-western Victoria is limited. This makes inferences on the impacts to production across the region difficult. Furthermore, improved mapping is required in order to define the opportunities to address soil acidity in southern Victoria and increase production potential. The availability of soil site data managed in the Victorian Soil Information System (VSIS) and spatially exhaustive ancillary datasets (i.e. environmental covariate map data such as elevation, rainfall and gamma radiometrics) support the application of predictive modelling techniques to produce soil pH maps at finer scales and qualities previously unattainable.
The digital soil maps of soil pH for the South West region of Victoria have been produced by modelling the spatial relationships between points (soil sites) of measured or estimated soil pH and their environment (defined by a comprehensive set of covariates). A 10-fold cross validation procedure was used to produce average predictions for the upper, lower and mean values. The mapping provides predictions of soil pH at 50 m pixel resolution for six set depths from the surface down to two metres. The six set depths have been chosen to align to the Global Soil Map specifications, www.globalsoilmap.net.
In total, data from 3,668 sites were identified for application in spatial models across south-western Victoria. These data have been sourced from land studies dating back to the 1950s, and the 670 samples collected by this project are now accessible as part of this larger dataset. Spatial covariate datasets used in modelling include climate (e.g. annual rainfall, evaporation, Prescott index), landscape (e.g. clay mineral maps), organisms (e.g. MODIS time series, LANDSAT scenes), relief (e.g. elevation, slope, topographic wetness index) and parent material (e.g. terrain weathering index). In total, 71 covariate raster datasets were used in generating the soil pH maps.
The maps are for soil pH measured in a 1:5 soil-to-water suspension (pHw), with possible addition of a salt solution (typically calcium chloride, CaCl2). The raster datasets (maps) include a mean, lower and upper uncertainty prediction for each depth interval.
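The fold-averaged prediction idea behind the 10-fold cross-validation can be sketched as below. The linear least-squares model and synthetic covariates are stand-ins for the project's actual model and covariate rasters, and the fold min/max band is purely illustrative, not the project's uncertainty method:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))   # stand-in covariates (e.g. rainfall, elevation)
true_coef = np.array([1.0, -2.0, 0.5])
y = X @ true_coef + rng.normal(scale=0.1, size=200)   # stand-in soil pH signal

k = 10
folds = np.array_split(rng.permutation(len(y)), k)
grid_X = rng.normal(size=(50, 3))   # covariates at prediction locations
preds = []
for i in range(k):
    # train on all folds except fold i, then predict at every grid location
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    preds.append(grid_X @ coef)
preds = np.array(preds)              # k predictions per grid location

mean_map = preds.mean(axis=0)        # averaged prediction across folds
lower_map = preds.min(axis=0)        # illustrative spread across folds only
upper_map = preds.max(axis=0)
```

Each grid cell ends up with one prediction per fold; averaging them yields the mean map, with the per-cell spread giving a rough sense of model stability.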
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.
This resource contains raster datasets created using ArcGIS to analyse groundwater levels in the Namoi subregion.
This is an update to some of the data that is registered here: http://data.bioregionalassessments.gov.au/dataset/7604087e-859c-4a92-8548-0aa274e8a226
These data layers were created in ArcGIS as part of the analysis to investigate surface water - groundwater connectivity in the Namoi subregion. The data layers provide several of the figures presented in the Namoi 2.1.5 Surface water - groundwater interactions report.
Extracted points inside the Namoi subregion boundary. Converted bore and pipe values to Hydrocode format, changed the heading of the 'Value' column to 'Waterlevel', removed unnecessary columns, then joined to Updated_NSW_GroundWaterLevel_data_analysis_v01\NGIS_NSW_Bore_Join_Hydmeas_unique_bores.shp, clipped to include only those bores within the Namoi subregion.
Selected only those bores with sample dates between >=26/4/2012 and <31/7/2012. Then removed 4 gauges due to anomalous ref_pt_height values or WaterElev values higher than Land_Elev values.
Then added new columns of calculations:
WaterElev = TsRefElev - Water_Level
DepthWater = WaterElev - Ref_pt_height
Ref_pt_height = TsRefElev - LandElev
Alternatively - Selected only those bores with sample dates between >=1/5/2006 and <1/7/2006
2012_Wat_Elev - This raster was created by interpolating Water_Elev field points from HydmeasJune2012_only.shp using the Spatial Analyst Topo to Raster tool, with the alluvium boundary (NAM_113_Aquifer1_NamoiAlluviums.shp) as a boundary input source.
12_dw_olp_enf - Selected only those bores that are in both source files. Then created a raster using DepthWater in Topo to Raster, with the alluvium as the boundary, the ENFORCE field chosen, and using only those bores present in both the 2012 and 2006 datasets.
2012dw1km_alu - Clipped the WatercourseLines layer to the Namoi subregion, then selected 'Major' water courses only. Then used the Geoprocessing 'Buffer' tool to create a polygon delineating an area 1 km around all the major streams in the Namoi subregion.
Selected points from HydmeasJune2012_only.shp that were within 1 km of features in the WatercourseLines layer, then used the selected points and the 1 km buffer around the major water courses with the Topo to Raster tool in Spatial Analyst to create the raster.
Then used the alluvium boundary to truncate the raster, to limit to the area of interest.
12_minus_06 - Selected bores from the 2006 dataset that are also in the 2012 dataset. Then created a raster using DepthWater in Topo to Raster, with the ENFORCE field chosen to remove sinks and the alluvium as the boundary. Then, using Map Algebra - Raster Calculator, subtracted the newly created raster from 12_dw_olp_enf.
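The final Raster Calculator step is a cell-by-cell subtraction of the 2006-based raster from the 2012 raster. A minimal numpy sketch with invented depth-to-water values (NaN standing in for NoData outside the alluvium boundary):

```python
import numpy as np

# Hypothetical depth-to-water rasters (m) for the overlapping bores;
# NaN marks cells outside the alluvium boundary (NoData)
dw_2012 = np.array([[2.0, 3.5], [np.nan, 4.0]])   # stands in for 12_dw_olp_enf
dw_2006 = np.array([[1.5, 3.0], [np.nan, 5.0]])   # 2006-based raster

# Raster Calculator equivalent: 2012 raster minus 2006 raster;
# NaN (NoData) propagates wherever either input has no value
change = dw_2012 - dw_2006
```

Positive cells indicate a greater depth to water in 2012 than in 2006 under this sign convention.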
Bioregional Assessment Programme (2017) Namoi bore analysis rasters - updated. Bioregional Assessment Derived Dataset. Viewed 10 December 2018, http://data.bioregionalassessments.gov.au/dataset/effa0039-ba15-459e-9211-232640609d44.
Derived From Bioregional Assessment areas v02
Derived From Gippsland Project boundary
Derived From Bioregional Assessment areas v04
Derived From Upper Namoi groundwater management zones
Derived From Natural Resource Management (NRM) Regions 2010
Derived From Bioregional Assessment areas v03
Derived From Victoria - Seamless Geology 2014
Derived From GIS analysis of HYDMEAS - Hydstra Groundwater Measurement Update: NSW Office of Water - Nov2013
Derived From Bioregional Assessment areas v01
Derived From GEODATA TOPO 250K Series 3, File Geodatabase format (.gdb)
Derived From GEODATA TOPO 250K Series 3
Derived From NSW Catchment Management Authority Boundaries 20130917
Derived From Geological Provinces - Full Extent
Derived From Hydstra Groundwater Measurement Update - NSW Office of Water, Nov2013
Statewide 2016 Lidar points colorized with 2018 NAIP imagery as a scene created by Esri using ArcGIS Pro for the entire State of Connecticut. This service provides the colorized lidar points in interactive 3D for visualization, interaction, and the ability to make measurements without downloading. Lidar is referenced at https://cteco.uconn.edu/data/lidar/ and can be downloaded at https://cteco.uconn.edu/data/download/flight2016/. Metadata: https://cteco.uconn.edu/data/flight2016/info.htm#metadata. The Connecticut 2016 Lidar was captured between March 11, 2016 and April 16, 2016. It covers 5,240 square miles and is divided into 23,381 tiles. It was acquired by the Capitol Region Council of Governments with funding from multiple state agencies. It was flown and processed by Sanborn. The delivery included classified point clouds and 1 meter QL2 DEMs. The 2016 Lidar is published on the Connecticut Environmental Conditions Online (CT ECO) website. CT ECO is the collaborative work of the Connecticut Department of Energy and Environmental Protection (DEEP) and the University of Connecticut Center for Land Use Education and Research (CLEAR) to share environmental and natural resource information with the general public. CT ECO's mission is to encourage, support, and promote informed land use and development decisions in Connecticut by providing local, state and federal agencies, and the public with convenient access to the most up-to-date and complete natural resource information available statewide.
Process used:
Extract Building Footprints from Lidar
1. Prepare Lidar - Download the 2016 Lidar from CT ECO and create a LAS Dataset.
2. Extract Building Footprints from Lidar - Use the LAS Dataset in the Classify LAS Building tool in ArcGIS Pro 2.4.
Colorize Lidar
Colorizing the lidar points means that each point in the point cloud is given a color based on the imagery color value at that exact location.
1. Prepare Imagery - Acquire the 2018 NAIP tif tiles from UConn (originally from USDA NRCS) and create a mosaic dataset of the NAIP imagery.
2. Prepare and Analyze Lidar Points
- Change the coordinate system of each of the lidar tiles to the Projected Coordinate System CT NAD 83 (2011) Feet (EPSG 6434), because the downloaded tiles come into ArcGIS with a custom projection that cannot be published as a Point Cloud Scene Layer Package.
- Convert the lidar to zLAS format and rearrange.
- Create LAS Datasets of the lidar tiles.
- Colorize the lidar using the Colorize LAS tool in ArcGIS Pro.
- Create a new LAS dataset divided into an eastern half and a western half, due to the 500 GB size limit per scene layer package.
- Create scene layer packages (.slpk) using Create Point Cloud Scene Layer Package.
- Load each package to ArcGIS Online using Share Package.
- Publish on ArcGIS.com and delete the scene layer package to save storage cost.
Additional layers added: visit https://cteco.uconn.edu/projects/lidar3D/layers.htm for a complete list and links.
- 3D Buildings and Trees extracted by Esri from the lidar
- Shaded Relief from CT ECO
- Impervious Surface 2012 from CT ECO
- NAIP Imagery 2018 from CT ECO
- Contours (2016) from CT ECO
- Lidar 2016 Download Link derived from https://www.cteco.uconn.edu/data/download/flight2016/index.htm
This module walks through the process of creating a Reference for a raster dataset.
Overview: Actual Natural Vegetation (ANV): probability of occurrence for the Sweet chestnut in its realized environment for the period 2000 - 2021. Traceability (lineage): This is an original dataset produced with a machine learning framework which used a combination of point datasets and raster datasets as inputs. The point dataset is a harmonized collection of tree occurrence data, comprising observations from National Forest Inventories (EU-Forest), GBIF and LUCAS. The complete dataset is available on Zenodo. The raster datasets used as input are: harmonized and gapfilled time series of seasonal aggregates of the Landsat GLAD ARD dataset (bands and spectral indices); monthly time series of air and surface temperature and precipitation from a reprocessed version of the Copernicus ERA5 dataset; long term averages of bioclimatic variables from CHELSA; tree species distribution maps from the European Atlas of Forest Tree Species; elevation, slope and other elevation-derived metrics; and long term monthly averages of snow probability and cloud fraction from MODIS. For a more comprehensive list refer to Bonannella et al. (2022) (in review, preprint available at: https://doi.org/10.21203/rs.3.rs-1252972/v1). Scientific methodology: Probability and uncertainty maps were the output of a spatiotemporal ensemble machine learning framework based on stacked regularization. Three base models (random forest, gradient-boosted trees and generalized linear models) were first trained on the input dataset and their predictions were used to train an additional model (logistic regression) which provided the final predictions. More details on the whole workflow are available in the listed publication. Usability: Probability maps can be used to detect potential forest degradation and compositional change across the time period analyzed. Some possible applications for these topics are explained in the listed publication.
Uncertainty quantification: Uncertainty is quantified by taking the standard deviation of the probabilities predicted by the three components of the spatiotemporal ensemble model. Data validation approaches: Distribution maps were validated using a spatial 5-fold cross validation following the workflow detailed in the listed publication. Completeness: The raster files completely cover the entire Geo-harmonizer region as defined by the landmask raster dataset available here. Consistency: Areas which are outside of the calibration area of the point dataset (Iceland, Norway) usually have high uncertainty values. This is not only a problem of extrapolation but also of poor representation, in the feature space available to the model, of the conditions present in these countries. Positional accuracy: The rasters have a spatial resolution of 30 m. Temporal accuracy: The maps cover the period 2000 - 2020; each map covers a certain number of years according to the following scheme: (1) 2000--2002, (2) 2002--2006, (3) 2006--2010, (4) 2010--2014, (5) 2014--2018 and (6) 2018--2020. Thematic accuracy: Both probability and uncertainty maps contain values from 0 to 100: in the case of probability maps, they indicate the probability of occurrence of a single individual of the target species, while uncertainty maps indicate the standard deviation of the ensemble model.
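The uncertainty computation, a per-pixel standard deviation across the three base-model probability maps, can be sketched as follows (the probability values are invented for illustration):

```python
import numpy as np

# Hypothetical occurrence probabilities (0-100) from the three base models
# for a 2 x 2 block of pixels
p_rf  = np.array([[80, 10], [55, 0]], dtype=float)   # random forest
p_gbt = np.array([[70, 20], [60, 0]], dtype=float)   # gradient-boosted trees
p_glm = np.array([[90, 15], [50, 0]], dtype=float)   # generalized linear model

# Per-pixel standard deviation across the three models: where the models
# disagree, uncertainty is high; where they agree exactly, it is zero
stack = np.stack([p_rf, p_gbt, p_glm])
uncertainty = stack.std(axis=0)
```

Pixels where all three models agree (e.g. probability 0 everywhere) get zero uncertainty, matching the interpretation given above.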
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Moisture Index (MI) for the state of Utah is calculated from a spatial raster of annual actual (ETact) and potential (PET) evapotranspiration data from 2000 to 2013 derived from the MODIS instrumentation (Mu, Zhao, & Running, 2011; Mu, Zhao, & Running, 2013; Numerical Terradynamic Simulation Group, 2013). Moisture Index (MI) was created to compare the suitability of settlement locations throughout Utah to explain initial Euro-American settlement of the region. MI is one of two proxies created specifically for Utah for comparison of environmental productivity throughout the state. Moisture index (MI) was originally used by Ramankutty et al. (2002) on a global scale to understand probability of cultivation based on a series of environmental factors. The Ramankutty et al. (2002) methods were used to build a regional proxy of agricultural suitability for the state of Utah. Adapting the methods in Ramankutty et al. (2002), we were able to create a higher resolution dataset of MI specific to the state of Utah. Unlike S, MI only accounts for evapotranspiration rates. The Moisture Index is calculated as: MI = ETact / PET, where ETact is the actual evapotranspiration and PET is the potential evapotranspiration. This calculation results in a zero-to-one index representing variation in moisture. MI is calculated for the study area (Utah) using a raster of annual actual (ETact) and potential (PET) evapotranspiration data from 2000 to 2013 derived from the MODIS instrumentation (Mu, Zhao, & Running, 2011; Mu, Zhao, & Running, 2013; Numerical Terradynamic Simulation Group, 2013). Using the ArcMap 10.3.1 Raster Calculator (Spatial Analyst), a raster dataset is created at a resolution of 2.6 square kilometers, which contains values representative of the average Moisture Index for Utah over a fourteen-year period (ESRI, 2015).
The data were collected remotely by satellite (MODIS) and represent reflective surfaces (urban areas, lakes, and the Utah Salt Flats) as null values in the dataset. Areas of null values that were not bodies of water were interpolated using Inverse Distance Weighting (3D Analyst) in ArcMap 10.3.1 (ESRI, 2015). Download the moisture index (MI) data below. If you have any questions or concerns, please contact me at PYaworsky89@gmail.com. Citations: ESRI. (2015). ArcGIS Desktop: Release (Version 10.3.1). Redlands, CA: Environmental Systems Research Institute. Mu, Q., Zhao, M., & Running, S. W. (2013). MODIS Global Terrestrial Evapotranspiration (ET) Product (NASA MOD16A2/A3). Algorithm Theoretical Basis Document, Collection, 5. Retrieved from http://www.ntsg.umt.edu/sites/ntsg.umt.edu/files/MOD16_ATBD.pdf Mu, Q., Zhao, M., & Running, S. W. (2011). Improvements to a MODIS global terrestrial evapotranspiration algorithm. Remote Sensing of Environment, 115(8), 1781–1800. Numerical Terradynamic Simulation Group. (2013, July 29). MODIS Global Evapotranspiration Project (MOD16). University of Montana. Ramankutty, N., Foley, J. A., Norman, J., & Mcsweeney, K. (2002). The global distribution of cultivable lands: current patterns and sensitivity to possible climate change. Global Ecology and Biogeography, 11(5), 377–392. http://doi.org/10.1046/j.1466-822x.2002.00294.x
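The MI calculation itself is a cell-by-cell ratio, the numpy equivalent of the Raster Calculator expression. The ET values below are invented for illustration; NaN stands in for the reflective-surface nulls described above:

```python
import numpy as np

# Hypothetical annual evapotranspiration rasters (mm);
# NaN marks reflective-surface null cells (urban areas, lakes, salt flats)
et_act = np.array([[300.0, 150.0], [np.nan, 600.0]])
pet    = np.array([[900.0, 750.0], [800.0, 600.0]])

# MI = ETact / PET, a zero-to-one index; NaN propagates through the ratio,
# leaving null cells to be filled later by IDW interpolation
mi = et_act / pet
```

Cells where actual evapotranspiration equals the potential reach the maximum MI of 1.0.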
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This collection provides a seamlessly merged, hydrologically robust Digital Elevation Model (DEM) for the Murray Darling Basin (MDB), Australia, at 5 m and 25 m grid cell resolution.
This composite DEM has been created from all the publicly available high resolution DEMs in the Geoscience Australia (GA) elevation data portal Elvis (https://elevation.fsdf.org.au/) as at November 2022. The input DEMs, also sometimes referred to as digital terrain models (DTMs), are bare-earth products which represent the ground surface with buildings and vegetation removed. The DEMs were either from lidar (0.5 to 2 m resolution) or photogrammetry (5 m resolution) and totalled 852 individual DEMs.
The merging process involved ranking the DEMs, pairing DEMs that overlap, and adjusting and smoothing the elevations of the lower-ranked DEM to make its edge elevations compatible with the higher-ranked DEM. This method is adapted from Gallant (2019), with modifications to work with hundreds of DEMs and to allow a variable number of Gaussian smoothing steps.
Where there were gaps in the high-resolution DEM extents, the Forest And Buildings removed Copernicus DEM (FABDEM; Hawker et al. 2022), a bare-earth, radar-derived, 1 arc-second resolution global elevation model, was used as the underlying base DEM. FABDEM is based on the Copernicus global digital surface model.
Additionally, hillshade datasets created from both the 5 m and 25 m DEMs are provided.
Note: the FABDEM dataset is available publicly for non-commercial purposes and consequently the data files available with this Collection are also available with a Creative Commons NonCommercial ShareAlike 4.0 Licence (CC BY-NC-SA 4.0). See https://data.bris.ac.uk/datasets/25wfy0f9ukoge2gs7a5mqpq2j7/license.txt Lineage: For a more detailed lineage see the supporting document Composite_MDB_DEM_Lineage.
DATA SOURCES 1. Geoscience Australia elevation data (https://elevation.fsdf.org.au/) via an Amazon Web Services S3 bucket. Of the 852 digital elevation models (DEMs) from the GA elevation data portal, 601 DEMs were from lidar and 251 were from photogrammetry. The latest date of download was Nov 2022. The oldest input DEM was from 2008 and the newest from 2022.
METHODS
Part I. Preprocessing
The input DEMs were prepared for merging with the following steps:
1. Metadata for all input DEMs was collated in a single file and the DEMs were ranked from finest resolution/newest to coarsest resolution/oldest.
2. Tiled input DEMs were combined into single files.
3. Input DEMs were reprojected to the GA LCC (Lambert conformal conic) projection (EPSG:7845) and bilinearly resampled to 5 m.
4. Input DEMs were shifted vertically to the Australian Vertical Working Surface (AVWS; EPSG:9458).
5. The input DEMs were stacked (without any merging and/or smoothing at DEM edges) based on rank so that higher ranking DEMs preceded the lower ranking DEMs, i.e. the elevation value in a grid cell came from the highest-ranked DEM which had a value in that cell.
6. An index raster dataset was produced, where the value assigned to each grid cell was the rank of the DEM which contributed the elevation value to the stacked DEM (see Collection Files - Index_5m_resolution).
7. A metadata file describing each input dataset was linked to the index dataset via the rank attribute (see Collection Files - Metadata).
Vertical height reference surface https://icsm.gov.au/australian-vertical-working-surface
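The rank-based stacking and index-raster steps of the preprocessing can be sketched with a few hypothetical tiles (NaN marks cells a DEM does not cover):

```python
import numpy as np

# Three hypothetical co-registered 5 m DEM tiles, rank 1 = best;
# NaN marks cells with no coverage
dems = [
    np.array([[np.nan, 10.0], [np.nan, np.nan]]),   # rank 1 (newest/finest)
    np.array([[20.0, 21.0], [np.nan, np.nan]]),     # rank 2
    np.array([[30.0, 31.0], [32.0, 33.0]]),         # rank 3 (base layer)
]

stacked = np.full(dems[0].shape, np.nan)
index = np.zeros(dems[0].shape, dtype=int)          # 0 = no DEM contributed
for rank, dem in enumerate(dems, start=1):
    # fill only cells still empty, so higher-ranked DEMs take precedence
    fill = np.isnan(stacked) & ~np.isnan(dem)
    stacked[fill] = dem[fill]
    index[fill] = rank                              # record contributing DEM
```

Each cell of `stacked` comes from the highest-ranked DEM covering it, and `index` plays the role of the index raster linking cells back to the input-DEM metadata.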
Part II. DEM Merging The method for seamlessly merging DEMs to create a composite dataset is based on Gallant 2019, with modifications to work with hundreds of input DEMs. Within DEM pairs, the elevations of the lower ranked DEM are adjusted and smoothed to make the edge elevations compatible with the higher-ranked DEM. Processing was on the CSIRO Earth Analytics and Science Innovation (EASI) platform. Code was written in python and dask was used for task scheduling.
Part III. Postprocessing
1. A minor correction was made to the 5 m composite DEM in southern Queensland to replace some erroneous elevation values (-8000 m a.s.l.) with the nearest values from the surrounding grid cells.
2. A 25 m version of the composite DEM was created by aggregating the 5 m DEM, using a 5 x 5 grid cell window and calculating the mean elevation.
3. Hillshade datasets were produced for the 5 m and 25 m DEMs using python code from https://github.com/UP-RS-ESP/DEM-Consistency-Metrics.
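The 5 m to 25 m aggregation amounts to a block mean over non-overlapping 5 x 5 windows, which can be sketched with a reshape on a toy array:

```python
import numpy as np

# Hypothetical 5 m DEM whose sides are multiples of 5 cells
dem_5m = np.arange(100, dtype=float).reshape(10, 10)

# Aggregate to 25 m: mean elevation over each non-overlapping 5 x 5 window
h, w = dem_5m.shape
dem_25m = dem_5m.reshape(h // 5, 5, w // 5, 5).mean(axis=(1, 3))
```

The reshape splits each axis into (blocks, cells-per-block), so averaging over the two inner axes yields one mean value per 5 x 5 window.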
Part IV. Validation Six validation areas were selected across the MDB for qualitative checking of the output at input dataset boundaries. The hillshade datasets were used to look for linear artefacts. Flow direction and flow accumulation rasters and drainage lines were derived from the stacked DEM (step 5 in preprocessing) and the post-merge composite DEM. These were compared to determine whether the merging process had introduced additional errors.
OUTPUTS
1. Seamlessly merged composite DEMs at 5 m and 25 m resolutions (GeoTIFF)
2. Hillshade datasets for the 5 m and 25 m DEMs (GeoTIFF)
3. Index raster dataset at 5 m resolution (GeoTIFF)
4. Metadata file containing input dataset information and rank (the rank column values link to the index raster dataset values)
5. Figure showing a map of the index dataset and 5 m composite DEM (JPEG)
DATA QUALITY STATEMENT Note that we did not attempt to improve the quality of the input DEMs; they were not corrected prior to merging, and any errors will be retained in the composite DEM.
This dataset combines the work of several different projects to create a seamless data set for the contiguous United States. Data from four regional Gap Analysis Projects and the LANDFIRE project were combined to make this dataset. In the northwestern United States (Idaho, Oregon, Montana, Washington and Wyoming) data in this map came from the Northwest Gap Analysis Project. In the southwestern United States (Colorado, Arizona, Nevada, New Mexico, and Utah) data used in this map came from the Southwest Gap Analysis Project. The data for Alabama, Florida, Georgia, Kentucky, North Carolina, South Carolina, Mississippi, Tennessee, and Virginia came from the Southeast Gap Analysis Project, and the California data was generated by the updated California Gap land cover project. The Hawaii Gap Analysis Project provided the data for Hawaii. In areas of the country (central U.S., Northeast, Alaska) that have not yet been covered by a regional Gap Analysis Project, data from the LANDFIRE project was used. Similarities in the methods used by these projects made it possible to combine the data they derived into one seamless coverage. They all used multi-season satellite imagery (Landsat ETM+) from 1999-2001 in conjunction with digital elevation model (DEM) derived datasets (e.g. elevation, landform) to model natural and semi-natural vegetation. Vegetation classes were drawn from NatureServe's Ecological System Classification (Comer et al. 2003) or classes developed by the Hawaii Gap project. Additionally, all of the projects included land use classes that were employed to describe areas where natural vegetation has been altered. In many areas of the country these classes were derived from the National Land Cover Dataset (NLCD). For the majority of classes, and in most areas of the country, a decision tree classifier was used to discriminate ecological system types.
In some areas of the country, more manual techniques were used to discriminate small patch systems and systems not distinguishable through topography. The data contains multiple levels of thematic detail. At the most detailed level natural vegetation is represented by NatureServe's Ecological System classification (or in Hawaii the Hawaii GAP classification). These most detailed classifications have been crosswalked to the five highest levels of the National Vegetation Classification (NVC): Class, Subclass, Formation, Division and Macrogroup. This crosswalk allows users to display and analyze the data at different levels of thematic resolution. Developed areas, or areas dominated by introduced species, timber harvest, or water are represented by other classes, collectively referred to as land use classes; these land use classes occur at each of the thematic levels. Raster data in both ArcGIS Grid and ERDAS Imagine format is available for download at http://gis1.usgs.gov/csas/gap/viewer/land_cover/Map.aspx Six layer files are included in the download packages to assist the user in displaying the data at each of the thematic levels in ArcGIS. In addition to the raster datasets, the data is available as Web Mapping Services (WMS) for each of the six NVC classification levels (Class, Subclass, Formation, Division, Macrogroup, Ecological System) at the following links. http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Class_Landuse/MapServer http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Subclass_Landuse/MapServer http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Formation_Landuse/MapServer http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Division_Landuse/MapServer http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_NVC_Macrogroup_Landuse/MapServer http://gis1.usgs.gov/arcgis/rest/services/gap/GAP_Land_Cover_Ecological_Systems_Landuse/MapServer
From the site: "This raster dataset has been created using the original data "Pennsylvania conservation gap fish habitat models" as originated by the Environmental Resources Research Institute of The Pennsylvania State University. Conservation values were then assigned to species as determined by SmartConservation® methodology and combined to create an overall conservation value raster for fish. The resulting raster was then reclassified into 10 quantiles as follows:
Old Value -> New Value
0 -> 0
1-59 -> 1
60-67 -> 2
68-81 -> 3
82-92 -> 4
93 -> 5
94-122 -> 6
123-125 -> 7
126 -> 8
127-177 -> 9
178-202 -> 10
Conservation values were determined by experts gathered by Natural Lands Trust through SmartConservation®. This data set is one of several that have been combined to create an overall aquatic resources conservation value raster for the Central Appalachian Forest Ecoregion. Therefore the values were determined as a relative rank, comparable in value only to the other input aquatic resources data. Conservation value ranges from 1 - 10 with 10 being the highest value."
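The reclassification described in the quoted table can be reproduced with a digitize over the class upper bounds; a minimal numpy sketch:

```python
import numpy as np

# Upper bound of each class from the reclassification table;
# values <= 0 map to class 0, 1-59 to class 1, ..., 178-202 to class 10
bounds = np.array([0, 59, 67, 81, 92, 93, 122, 125, 126, 177, 202])

# Sample raster values (flattened for illustration)
vals = np.array([0, 45, 93, 126, 200])

# right=True makes each interval include its upper bound
new = np.digitize(vals, bounds, right=True)
```

Applied to the full raster array, this yields the 0-10 conservation-value classes in one vectorized pass.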
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data shows areas where merged survey bathymetry and backscatter data exist and allows you to download the data. The data were collected between 2001 and 2021. Bathymetry is the measurement of how deep the sea is: the study of the shape and features of the seabed. The name comes from Greek words meaning "deep" and "measure". Bathymetry is collected on board boats working at sea and airplanes flying over land and coastline. The boats use special equipment called a multibeam echosounder, a type of sonar that is used to map the seabed. Sound waves are emitted in a fan shape beneath the boat. The amount of time it takes for the sound waves to bounce off the bottom of the sea and return to a receiver is used to determine water depth. The strength of the returned sound wave is used to determine how hard the bottom of the sea is; in other words, backscatter is the measure of sound that is reflected by the seafloor and received by the sonar. A strong return signal indicates a hard surface (rocks, gravel), and a weak return signal indicates a soft surface (silt, mud). LiDAR is another way to map the seabed, using airplanes. Two laser light beams are emitted from a sensor on board an airplane. The red beam reaches the water surface and bounces back, while the green beam penetrates the water, hits the seabed, and bounces back. The difference in time between the two beams returning allows the water depth to be calculated. LiDAR is only suitable for shallow waters (up to 30 m depth). This data shows areas which have data available for download in Irish waters; these are areas where several surveys have been merged together. It is a vector dataset. Vector data portray the world using points, lines, and polygons (areas). This data is shown as polygons.
Each polygon holds information on the data type (bathymetry or backscatter), the format of data available for download (GeoTIFF, ESRI GRID), its resolution, projection, and last update, and provides links to download the data. The data available for download are raster datasets. Raster data is another name for gridded data: it stores information in pixels (grid cells), with each raster grid making up a matrix of cells organised into rows and columns. This data was collected using a boat or plane. Data is output in xyz format, where X and Y are the location and Z is the depth or backscatter value. A software package converts it into gridded data. The grid cell size varies; most of this data is available at 10 m resolution, meaning each cell (pixel) represents an area of 10 metres by 10 metres. ESRI GRID datasets contain the depth value, so you can click on a location and get its depth. GeoTIFFs are images of the data and only record colour values. We use software to create a 3D effect of what the seabed looks like. By using vertical exaggeration, artificial sun-shading (mostly as if there is a light source in the northwest) and colouring the depths using colour maps, it is possible to highlight the subtle relief of the seabed. Darker shading represents deeper depths and lighter shading represents shallower depths. This data shows areas that have been surveyed; there are plans to fill in the missing areas between 2020 and 2026. The deeper offshore waters were mapped as part of the Irish National Seabed Survey (INSS) between 1999 and 2005. INtegrated Mapping FOr the Sustainable Development of Ireland's MArine Resource (INFOMAR) is mapping the inshore areas (2006-2026).
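The echosounder principle described above converts the two-way travel time of a sound pulse into a water depth. A toy sketch, assuming a nominal sound speed of 1500 m/s (real surveys apply measured sound-velocity profiles rather than a constant):

```python
# Convert multibeam echosounder two-way travel time to water depth.
SOUND_SPEED_M_PER_S = 1500.0  # assumed nominal speed of sound in seawater

def depth_from_travel_time(two_way_time_s):
    """Depth in metres: the pulse travels down and back, so halve the path."""
    return SOUND_SPEED_M_PER_S * two_way_time_s / 2.0

# A pulse returning after 0.04 s corresponds to 30 m of water --
# roughly the maximum depth at which bathymetric LiDAR is effective.
print(depth_from_travel_time(0.04))
```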
Airborne lidar data of Wax Lake Delta, Louisiana, USA, a delta prograding into Atchafalaya Bay, Gulf of Mexico. Rasters were produced from the 2013 airborne lidar data after trees and vegetation were removed. The data were then rasterized and saved to GeoTIFF using the software package CloudCompare.
Note: The point cloud data were not initially classified for vegetation removal, so the investigators removed vegetation themselves to create the raster datasets. This removal process will not match the vegetation classification performed on the data later, which has been uploaded to OpenTopography. To get the same rasters as the ones available on OpenTopography, start with the complete unclassified point cloud and follow the methods described in the full metadata file.
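The gridding step described above (point cloud to raster) can be sketched by averaging point elevations within square cells. This is an illustrative stand-in only; the authors used CloudCompare, not this code:

```python
# Rasterize an (x, y, z) point cloud by averaging z within grid cells.
from collections import defaultdict
import math

def rasterize(points, cell_size):
    """Return {(col, row): mean z} for a list of (x, y, z) points."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, z in points:
        key = (math.floor(x / cell_size), math.floor(y / cell_size))
        sums[key] += z
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

points = [(0.2, 0.3, 1.0), (0.8, 0.6, 3.0), (1.5, 0.5, 2.0)]
grid = rasterize(points, cell_size=1.0)
# The first two points share cell (0, 0), so their elevations are averaged.
```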
This collection of files is part of a larger dataset uploaded in support of Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin (GPFA-AB, DOE Project DE-EE0006726). Phase 1 of the GPFA-AB project identified potential Geothermal Play Fairways within the Appalachian basin of Pennsylvania, West Virginia and New York. This was accomplished through analysis of 4 key criteria: thermal quality, natural reservoir productivity, risk of seismicity, and heat utilization. Each of these analyses represents a distinct project task, with the fifth task encompassing combination of the 4 risk factors. Supporting data for all five tasks have been uploaded into the Geothermal Data Repository node of the National Geothermal Data System (NGDS).
This submission comprises the data for Thermal Quality Analysis (project task 1) and includes all of the necessary shapefiles, rasters, datasets, code, and references to code repositories that were used to create the thermal resource and risk factor maps as part of the GPFA-AB project. The identified Geothermal Play Fairways are also provided with the larger dataset. Figures (.png) are provided as examples of the shapefiles and rasters. The regional standardized 1 square km grid used in the project is also provided as points (cell centers), polygons, and as a raster. Two ArcGIS toolboxes are available: 1) RegionalGridModels.tbx for creating resource and risk factor maps on the standardized grid, and 2) ThermalRiskFactorModels.tbx for use in making the thermal resource maps and cross sections. These toolboxes contain item description documentation for each model within the toolbox, and for the toolbox itself. This submission also contains three R scripts: 1) AddNewSeisFields.R to add seismic risk data to attribute tables of seismic risk, 2) StratifiedKrigingInterpolation.R for the interpolations used in the thermal resource analysis, and 3) LeaveOneOutCrossValidation.R for the cross validations used in the thermal interpolations.
Some file descriptions make reference to various 'memos'. These are contained within the final report submitted October 16, 2015.
Each zipped file in the submission contains an 'about' document describing the full Thermal Quality Analysis content available, along with key sources, authors, citation, use guidelines, and assumptions, with the specific file(s) contained within the .zip file highlighted.
UPDATE: Newer version of the Thermal Quality Analysis has been added here: https://gdr.openei.org/submissions/879 (Also linked below) Newer version of the Combined Risk Factor Analysis has been added here: https://gdr.openei.org/submissions/880 (Also linked below) This is one of sixteen associated .zip files relating to thermal resource interpolation results within the Thermal Quality Analysis task of the Low Temperature Geothermal Play Fairway Analysis for the Appalachian Basin. This file contains the binary grid (raster) for the predicted depth to 100 degrees C.
The sixteen files contain the results of the thermal resource interpolation as binary grid (raster) files, images (.png) of the rasters, and toolbox of ArcGIS Models used. Note that raster files ending in “pred” are the predicted mean for that resource, and files ending in “err” are the standard error of the predicted mean for that resource. Leave one out cross validation results are provided for each thermal resource.
Several models were built in order to process the well database with outliers removed. The ArcGIS toolbox ThermalRiskFactorModels contains the ArcGIS processing tools used. First, the WellClipsToWormSections model was used to clip the wells to the worm sections (interpolation regions). Then, the 1 square km gridded regions (see the series of 14 Worm Based Interpolation Boundaries .zip files), along with the wells in those regions, were loaded into R using the rgdal package. A stratified kriging algorithm implemented in the R gstat package was then used to create rasters of the predicted mean and the standard error of the predicted mean. The code used to make these rasters is called StratifiedKrigingInterpolation.R. Details about the interpolation and exploratory data analysis on the well data are provided in 9_GPFA-AB_InterpolationThermalFieldEstimation.pdf (Smith, 2015), contained within the final report.
The output rasters from R are brought into ArcGIS for further spatial processing. First, the BufferedRasterToClippedRaster tool is used to clip the interpolations back to the Worm Sections. Then, the Mosaic tool in ArcGIS is used to merge all predicted mean rasters into a single raster, and all error rasters into a single raster for each thermal resource.
A leave one out cross validation was performed on each of the thermal resources. The code used to implement the cross validation is provided in the R script LeaveOneOutCrossValidation.R. The results of the cross validation are given for each thermal resource.
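The leave-one-out procedure described above withholds each well in turn and predicts it from the remainder. The sketch below illustrates that loop with a simple inverse-distance-weighted interpolator standing in for the stratified kriging actually used (which was done in R with gstat); all names and sample values here are hypothetical:

```python
# Leave-one-out cross validation of a spatial interpolator.
import math

def idw_predict(x, y, samples, power=2.0):
    """Inverse-distance-weighted prediction at (x, y) from (x, y, z) samples.
    This stands in for kriging purely to illustrate the LOOCV loop."""
    num = den = 0.0
    for sx, sy, sz in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return sz
        w = 1.0 / d ** power
        num += w * sz
        den += w
    return num / den

def loocv_errors(samples):
    """Prediction error for each sample when it is left out of the fit."""
    errors = []
    for i, (x, y, z) in enumerate(samples):
        rest = samples[:i] + samples[i + 1:]
        errors.append(idw_predict(x, y, rest) - z)
    return errors

# Hypothetical wells: (x, y, measured thermal value).
wells = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 11.0), (1, 1, 13.0)]
errs = loocv_errors(wells)
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
```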
Other tools provided in this toolbox are useful for creating cross sections of the thermal resource. ExtractThermalPropertiesToCrossSection model extracts the predicted mean and the standard error of predicted mean to the attribute table of a line of cross section. The AddExtraInfoToCrossSection model is then used to add any other desired information, such as state and county boundaries, to the cross section attribute table. These two functions can be combined as a single function, as provided by the CrossSectionExtraction model.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Data provided by the Marine Institute; it may also incorporate data from other agencies and bodies. This dataset shows the distribution of fishing effort by fishing vessels according to the gear type used. Fishing effort is defined as the time spent engaged in fishing operations or time spent at sea; this time may be multiplied by a measure of fishing capacity, e.g. engine power. In this dataset fishing effort is measured as average hours spent actively fishing per kilometre square, per year. Data from years 2014 to 2018 were used to produce this data product for the Marine Institute publication the "Atlas of Commercial Fisheries around Ireland, third edition" (https://oar.marine.ie/handle/10793/1432). Effort for offshore fisheries is based on the following two primary data types, data on vessel positioning and data on gear types used: Vessel Monitoring Systems (VMS) data supplied by the Irish Naval Service provide geographical position and speed of vessel at intervals of two hours or less (Commission Regulation (EC) No. 2244/2003). The data are available for all EU vessels of 12 m and larger operating inside the Irish EEZ; outside this zone, only Irish VMS data are routinely available. VMS data do not record whether a vessel is fishing, steaming or inactive. Logbooks collected by the Sea-Fisheries Protection Authority and supplied by the Department of Agriculture, Food & the Marine were the primary data source for information on landings and gear types used by Irish vessels. The EU Fleet Register provides information for non-Irish vessels and for Irish vessels for which the gear was not known from the logbooks. Note that if vessels use more than one gear, it is possible that the gear type assigned to them was not the one actually used. The fishing gear data were classified into eight main groups: demersal otter trawls; beam trawls; demersal seines; gill and trammel nets; longlines; dredges; pots; and pelagic trawls.
The VMS data were analysed using the approach described by Gerritsen and Lordan (IJMS 68(1)). This approach assigns effort to each of the VMS data points: the effort of a VMS data point is defined as the time interval since the previous data point. Next, the data are filtered for fishing activity using speed criteria; vessels were assumed to be actively fishing if their speed fell within a certain range (depending on the fishing gear used). The points that remain are then aggregated into a spatial grid to produce a raster dataset showing fishing effort (in hours) per kilometre square per year for each gear type group. The data are available for all countries combined and for Irish vessels only.
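The effort calculation described above can be sketched as follows. This is assumed, simplified logic for illustration only, not the Marine Institute's production code; the ping values and speed range are invented:

```python
# Assign effort to VMS pings (interval since previous ping), keep only
# pings whose speed falls in the gear's fishing-speed range, then sum
# effort onto a square spatial grid.
from collections import defaultdict

def effort_grid(pings, speed_range, cell_km=1.0):
    """pings: time-ordered (hours_since_start, x_km, y_km, speed_knots)."""
    lo, hi = speed_range
    grid = defaultdict(float)
    prev_time = None
    for t, x, y, speed in pings:
        if prev_time is not None and lo <= speed <= hi:
            cell = (int(x // cell_km), int(y // cell_km))
            grid[cell] += t - prev_time  # hours fished in this cell
        prev_time = t
    return dict(grid)

pings = [(0.0, 0.5, 0.5, 10.0),   # steaming (too fast to be fishing)
         (2.0, 0.6, 0.4, 3.0),    # fishing
         (4.0, 1.2, 0.5, 3.5),    # fishing, in the next cell east
         (6.0, 1.3, 0.6, 11.0)]   # steaming again
effort = effort_grid(pings, speed_range=(2.0, 4.5))
```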
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset is the Building Exposure Type (BuildingExposure) raster for the United States. It is part of the data publication Wildfire Risk to Communities: Spatial datasets of wildfire risk for populated areas in the United States. Exposure is the spatial coincidence of wildfire likelihood and intensity with communities. The BuildingExposure layer delineates whether buildings at each pixel are directly exposed to wildfire from adjacent wildland vegetation (pixel value of 1000), indirectly exposed to wildfire from indirect sources such as embers and home-to-home ignition (pixel values between 0 and 1000), or not exposed to wildfire due to distance from direct and indirect ignition sources (pixel value of 0). It is similar to Exposure Type in the companion data publication (Scott et al. 2020), but only in places where housing units or other buildings are present. Note: Pixel values in this image service have been altered from the original raster dataset due to data requirements in web services. The service is intended primarily for data visualization. Relative values and spatial patterns have been largely preserved in the service, but users are encouraged to download the source data for quantitative analysis. Short, Karen C.; Finney, Mark A.; Vogler, Kevin C.; Scott, Joe H.; Gilbertson-Day, Julie W.; Grenfell, Isaac C. 2020. Spatial datasets of probabilistic wildfire risk components for the United States (270m). 2nd Edition. Fort Collins, CO: Forest Service Research Data Archive. https://doi.org/10.2737/RDS-2016-0034-2
U.S. Government Works: https://www.usa.gov/government-works
Introduction and Rationale: Due to our increasing understanding of the role the surrounding landscape plays in ecological processes, a detailed characterization of land cover, including both agricultural and natural habitats, is ever more important for both researchers and conservation practitioners. Unfortunately, in the United States, different types of land cover data are split across thematic datasets that emphasize agricultural or natural vegetation, but not both. To address this data gap and reduce duplicative efforts in geospatial processing, we merged two major datasets, the LANDFIRE National Vegetation Classification (NVC) and USDA-NASS Cropland Data Layer (CDL), to produce an integrated land cover map. Our workflow leveraged strengths of the NVC and the CDL to produce detailed rasters comprising both agricultural and natural land-cover classes. We generated these maps for each year from 2012-2021 for the conterminous United States, quantified agreement between input layers and accuracy of our merged product, and published the complete workflow necessary to update these data. In our validation analyses, we found that approximately 5.5% of NVC agricultural pixels conflicted with the CDL, but we resolved a majority of these conflicts based on surrounding agricultural land, leaving only 0.6% of agricultural pixels unresolved in our merged product.
Contents: spatial data; attribute table for merged rasters; technical validation data (number and proportion of mismatched pixels; number and proportion of unresolved pixels; Producer's and User's accuracy values and coverage of reference data).
Resources in this dataset:
Resource Title: Attribute table for merged rasters. File Name: CombinedRasterAttributeTable_CDLNVC.csv. Resource Description: Raster attribute table for the merged raster product. Class names and the recommended color map were taken from the USDA-NASS Cropland Data Layer and the LANDFIRE National Vegetation Classification. Class values are also identical to the source data, except that classes from the CDL are now negative values to avoid overlapping NVC values.
Resource Title: Number and proportion of mismatched pixels. File Name: pixel_mismatch_byyear_bycounty.csv. Resource Description: Number and proportion of pixels that were mismatched between the Cropland Data Layer and National Vegetation Classification, per year from 2012-2021, per county in the conterminous United States.
Resource Title: Number and proportion of unresolved pixels. File Name: unresolved_conflict_byyear_bycounty.csv. Resource Description: Number and proportion of unresolved pixels in the final merged rasters, per year from 2012-2021, per county in the conterminous United States. Unresolved pixels are a result of mismatched pixels that we could not resolve based on surrounding agricultural land (no agriculture within a 90 m radius).
Resource Title: Producer's and User's accuracy values and coverage of reference data. File Name: accuracy_datacoverage_byyear_bycounty.csv. Resource Description: Producer's and User's accuracy values and coverage of reference data, per year from 2012-2021, per county in the conterminous United States. We defined coverage of reference data as the proportional area of land cover classes that were included in the reference data published by USDA-NASS and LANDFIRE for the Cropland Data Layer and National Vegetation Classification, respectively. CDL and NVC classes with reference data also had published accuracy statistics.
Resource Title: Data Dictionary. File Name: Data_Dictionary_RasterMerge.csv
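The negative-value convention noted in the attribute-table description (CDL classes stored as negative values so they cannot collide with positive NVC class values) can be illustrated with a toy per-pixel merge. The NVC agriculture code and the merge rule below are hypothetical simplifications of the published workflow:

```python
# Toy illustration of the merged-raster value convention: CDL (crop)
# classes become negative; NVC (natural vegetation) classes stay positive.
NVC_AGRICULTURE = 7  # hypothetical NVC code meaning "agriculture"

def merge_pixel(nvc_value, cdl_value):
    """Merged class value for one pixel (assumed simplified logic)."""
    if nvc_value == NVC_AGRICULTURE:
        return -cdl_value  # take the crop class, negated
    return nvc_value       # keep the natural-vegetation class

nvc_row = [3, 7, 7, 5]
cdl_row = [0, 1, 24, 0]   # e.g. CDL 1 = corn, 24 = winter wheat
merged = [merge_pixel(n, c) for n, c in zip(nvc_row, cdl_row)]
# merged == [3, -1, -24, 5]
```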
The Willamette Lowland basin-fill aquifers (hereinafter referred to as the Willamette aquifer) is located in Oregon and in southern Washington. The aquifer is composed of unconsolidated deposits of sand and gravel, which are interlayered with clay units. The aquifer thickness varies from less than 100 feet to 800 feet. The aquifer is underlain by basaltic rock. Cities such as Portland, Oregon, depend on the aquifer for public and industrial use (HA 730-H). This product provides source data for the Willamette aquifer framework, including: Georeferenced images: 1. i_08WLMLWD_bot.tif: Georeferenced figure of altitude contour lines representing the bottom of the Willamette aquifer. The original figure was from Professional Paper 1424-A, Plate 2 (1424-A-P2). The contour lines from this figure were digitized to make the file c_08WLMLWD_bot.shp, and the fault lines were digitized to make f_08WLMLWD_bot.shp. Extent shapefiles: 1. p_08WLMLWD.shp: Polygon shapefile containing the areal extent of the Willamette aquifer (Willamette_AqExtent). The original shapefile was modified to create the shapefile included in this data release; it was modified to include only the Willamette Lowland portion of the aquifer. The extent file contains no aquifer subunits. Contour line shapefiles: 1. c_08WLMLWD_bot.shp: Contour line dataset containing altitude values, in feet, referenced to the National Geodetic Vertical Datum of 1929 (NGVD29), across the bottom of the Willamette aquifer. These data were used to create the ra_08WLMLWD_bot.tif raster dataset. Fault line shapefiles: 1. f_08WLMLWD_bot.shp: Fault line dataset containing fault lines across the bottom of the Willamette aquifer. These data were not used in raster creation but are included as supplementary information. Altitude raster files: 1. ra_08WLMLWD_top.tif: Altitude raster dataset of the top of the Willamette aquifer. The altitude values are in meters referenced to the North American Vertical Datum of 1988 (NAVD88).
The top of the aquifer is assumed to be land surface based on available data and was interpolated from the digital elevation model (DEM) dataset (NED, 100-meter). 2. ra_08WLMLWD_bot.tif: Altitude raster dataset of the bottom of the Willamette aquifer. The altitude values are in meters referenced to NAVD88. This raster was interpolated from the c_08WLMLWD_bot.shp dataset. Depth raster files: 1. rd_08WLMLWD_top.tif: Depth raster dataset of the top of the Willamette aquifer. The depth values are in meters below land surface (NED, 100-meter). The top of the aquifer is assumed to be land surface based on available data. 2. rd_08WLMLWD_bot.tif: Depth raster dataset of the bottom of the Willamette aquifer. The depth values are in meters below land surface (NED, 100-meter).
Studies utilizing Global Positioning System (GPS) telemetry rarely result in 100% fix success rates (FSR). Many assessments of wildlife resource use do not account for missing data, either assuming data loss is random or because of a lack of a practical treatment for systematic data loss. Several studies have explored how the environment, technological features, and animal behavior influence rates of missing data in GPS telemetry, but previous spatially explicit models developed to correct for sampling bias have been specified for small study areas, for a small range of data loss, or to be species-specific, limiting their general utility. Here we explore environmental effects on GPS fix acquisition rates across a wide range of environmental conditions and detection rates for bias correction of terrestrial GPS-derived, large mammal habitat use. We also evaluate patterns in missing data that relate to potential animal activities that change the orientation of the antennae, and characterize home-range probability of GPS detection for 4 focal species: cougars (Puma concolor), desert bighorn sheep (Ovis canadensis nelsoni), Rocky Mountain elk (Cervus elaphus ssp. nelsoni), and mule deer (Odocoileus hemionus). Part 1, Positive Openness Raster (raster dataset): Openness is an angular measure of the relationship between surface relief and horizontal distance. For angles less than 90 degrees it is equivalent to the internal angle of a cone with its apex at a DEM location, constrained by neighboring elevations within a specified radial distance. A 480 meter search radius was used for this calculation of positive openness. Openness incorporates the terrain line-of-sight, or viewshed, concept and is calculated from multiple zenith and nadir angles, here along eight azimuths. Positive openness measures openness above the surface, with high values for convex forms and low values for concave forms (Yokoyama et al. 2002).
We calculated positive openness using a custom Python script, following the methods of Yokoyama et al. (2002), using a USGS National Elevation Dataset as input. Part 2, Northern Arizona GPS Test Collar (csv): Bias correction in GPS telemetry datasets requires a strong understanding of the mechanisms that result in missing data. We tested wildlife GPS collars in a variety of environmental conditions to derive a predictive model of fix acquisition. We found terrain exposure and tall overstory vegetation are the primary environmental features that affect GPS performance. Model evaluation showed a strong correlation (0.924) between observed and predicted fix success rates (FSR) and showed little bias in predictions. The model's predictive ability was evaluated using two independent datasets from stationary test collars of different make/model and fix interval programming, placed at different study sites. No statistically significant differences (95% CI) between predicted and observed FSRs suggest changes in technological factors have minor influence on the model's ability to predict FSR in new study areas in the southwestern US. The model training data are provided here for fix attempts by hour. This table can be linked with the site location shapefile using the site field. Part 3, Probability Raster (raster dataset): Bias correction in GPS telemetry datasets requires a strong understanding of the mechanisms that result in missing data. We tested wildlife GPS collars in a variety of environmental conditions to derive a predictive model of fix acquisition. We found terrain exposure and tall overstory vegetation are the primary environmental features that affect GPS performance. Model evaluation showed a strong correlation (0.924) between observed and predicted fix success rates (FSR) and showed little bias in predictions.
The model's predictive ability was evaluated using two independent datasets from stationary test collars of different make/model and fix interval programming, placed at different study sites. No statistically significant differences (95% CI) between predicted and observed FSRs suggest changes in technological factors have minor influence on the model's ability to predict FSR in new study areas in the southwestern US. We evaluated GPS telemetry datasets by comparing the mean probability of a successful GPS fix across study animals' home-ranges to the actual observed FSR of GPS-downloaded deployed collars on cougars (Puma concolor), desert bighorn sheep (Ovis canadensis nelsoni), Rocky Mountain elk (Cervus elaphus ssp. nelsoni) and mule deer (Odocoileus hemionus). Comparing the mean probability of acquisition within study animals' home-ranges and observed FSRs of GPS-downloaded collars resulted in an approximately 1:1 linear relationship with an r-squared of 0.68. Part 4, GPS Test Collar Sites (shapefile): Bias correction in GPS telemetry datasets requires a strong understanding of the mechanisms that result in missing data. We tested wildlife GPS collars in a variety of environmental conditions to derive a predictive model of fix acquisition. We found terrain exposure and tall overstory vegetation are the primary environmental features that affect GPS performance. Model evaluation showed a strong correlation (0.924) between observed and predicted fix success rates (FSR) and showed little bias in predictions. The model's predictive ability was evaluated using two independent datasets from stationary test collars of different make/model and fix interval programming, placed at different study sites. No statistically significant differences (95% CI) between predicted and observed FSRs suggest changes in technological factors have minor influence on the model's ability to predict FSR in new study areas in the southwestern US.
Part 5, Cougar Home Ranges (shapefile): Cougar home-ranges were calculated to compare the mean probability of GPS fix acquisition across the home-range to the actual fix success rate (FSR) of the collar, as a means of evaluating whether characteristics of an animal's home-range have an effect on observed FSR. We estimated home-ranges using the Local Convex Hull (LoCoH) method with the 90th isopleth. Only data obtained from GPS download of retrieved units were used. Satellite-delivered data were omitted from the analysis for animals where the collar was lost or damaged, because satellite delivery tends to lose approximately an additional 10% of data. Comparisons with home-range mean probability of fix were also used as a reference for assessing whether the frequency with which animals use areas of low GPS acquisition rates may play a role in observed FSRs. Part 6, Cougar Fix Success Rate by Hour (csv): Cougar GPS collar fix success varied by hour of day, suggesting circadian rhythms with bouts of rest during daylight hours may change the orientation of the GPS receiver, affecting the ability to acquire fixes. Raw data of overall fix success rates (FSR) and FSR by hour were used to predict relative reductions in FSR. Data only include direct GPS download datasets. Satellite-delivered data were omitted from the analysis for animals where the collar was lost or damaged, because satellite delivery tends to lose approximately an additional 10% of data. Part 7, Openness Python Script version 2.0: This Python script was used to calculate positive openness using a 30 meter digital elevation model for a large geographic area in Arizona, California, Nevada and Utah. A scientific research project used the script to explore environmental effects on GPS fix acquisition rates across a wide range of environmental conditions and detection rates for bias correction of terrestrial GPS-derived, large mammal habitat use.
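Positive openness as described in Part 1 can be sketched directly from its definition: along each of eight azimuths, find the maximum elevation angle to the surface within the search radius, then subtract the mean of those eight angles from 90 degrees. This is an illustrative reimplementation, not the authors' script (which used a 480 m search radius on a USGS DEM):

```python
# Positive openness (after Yokoyama et al. 2002) at one DEM cell.
import math

# Unit steps for the eight azimuths (E, NE, N, NW, W, SW, S, SE).
AZIMUTHS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def positive_openness(dem, row, col, radius_cells, cell_size=1.0):
    """90 degrees minus the mean of the max elevation angles per azimuth."""
    max_angles = []
    for dr, dc in AZIMUTHS:
        max_angle = -90.0
        for step in range(1, radius_cells + 1):
            r, c = row + dr * step, col + dc * step
            if not (0 <= r < len(dem) and 0 <= c < len(dem[0])):
                break  # stop at the raster edge
            dist = math.hypot(dr * step, dc * step) * cell_size
            angle = math.degrees(math.atan2(dem[r][c] - dem[row][col], dist))
            max_angle = max(max_angle, angle)
        max_angles.append(max_angle)
    return 90.0 - sum(max_angles) / len(max_angles)

# On a flat surface every elevation angle is 0, so openness is 90 degrees;
# convex forms (hilltops) score above 90, concave forms (pits) below.
flat = [[0.0] * 5 for _ in range(5)]
print(positive_openness(flat, 2, 2, radius_cells=2))
```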