Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes a series of R scripts required to carry out some of the practical exercises in the book “Land Use Cover Datasets and Validation Tools”, available in open access.
The scripts have been designed within the context of the R Processing Provider, a plugin that integrates the R processing environment into QGIS. For all the information about how to use these scripts in QGIS, please refer to Chapter 1 of the book referred to above.
The dataset includes 15 different scripts, which implement the calculation of the following metrics in QGIS:
Change statistics such as absolute change, relative change and annual rate of change (Change_Statistics.rsx)
Areal and spatial agreement metrics, either overall (Overall Areal Inconsistency.rsx, Overall Spatial Agreement.rsx, Overall Spatial Inconsistency.rsx) or per category (Individual Areal Inconsistency.rsx, Individual Spatial Agreement.rsx)
The four components of change (gross gains, gross losses, net change and swap) proposed by Pontius Jr. (2004) (LUCCBudget.rsx)
The intensity analysis proposed by Aldwaik and Pontius (2012) (Intensity_analysis.rsx)
The Flow matrix proposed by Runfola and Pontius (2013) (Stable_change_flow_matrix.rsx, Flow_matrix_graf.rsx)
Pearson and Spearman correlations (Correlation.rsx)
The Receiver Operating Characteristic (ROC) (ROCAnalysis.rsx)
The Goodness of Fit (GOF) calculated using the MapCurves method proposed by Hargrove et al. (2006) (MapCurves_raster.rsx, MapCurves_vector.rsx)
The spatial distribution of overall, user and producer’s accuracies, obtained through Geographical Weighted Regression methods (Local accuracy assessment statistics.rsx).
Descriptions of all these methods can be found in different chapters of the aforementioned book.
The dataset also includes a readme file listing all the scripts provided, detailing their authors and the references on which their methods are based.
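As an illustration of the change statistics in the list above (absolute change, relative change, annual rate of change), a minimal Python sketch is given below. It is not the Change_Statistics.rsx implementation itself, and the compound formulation of the annual rate is an assumption about what the script computes:

```python
def change_statistics(area_t1, area_t2, years):
    """Absolute change, relative change (%), and annual rate of change (%/yr).

    One common (compound) formulation of the annual rate is assumed here:
    r = ((A2 / A1) ** (1 / t) - 1) * 100.
    """
    absolute = area_t2 - area_t1
    relative = 100.0 * absolute / area_t1
    annual_rate = 100.0 * ((area_t2 / area_t1) ** (1.0 / years) - 1.0)
    return absolute, relative, annual_rate

# Example: a category growing from 100 km2 to 121 km2 over 2 years gives
# +21 km2 absolute change, +21% relative change, and a 10%/yr compound rate.
```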
This dataset was constructed for Phase 2 research analyzing spatial relationships between Amazon geoglyphs and environmental conditions. The analysis includes NDVI and NDMI calculations and grid-based anomaly detection.
Data sources:
- Sentinel-2 Composites: forobs.jrc.ec.europa.eu/sentinel/sentinel2_composite (pan-tropical cloud-free annual composites, 2020)
- jqjacobs.net: Archaeogeodesy Placemarks (Amazon geoglyph category extracted from Google Earth KML)
amazon_geoglyphs_analysis/
├── data/
│ ├── sites_geoglyphs.gpkg # Site locations (extracted geoglyph coordinates)
│ ├── focus_rgb_swir1_nir_red.tif # Sentinel-2 composite (RGB: SWIR1, NIR, RED channels)
│ ├── focus_ndvi.tif # NDVI index (vegetation greenness)
│ ├── focus_ndmi.tif # NDMI index (vegetation moisture)
│ ├── focus_area.gpkg # Analysis boundary (study area extent)
│ ├── amazon_grid_anomaly.gpkg # Grid-based anomaly analysis
│ └── amazon_basin.gpkg # Amazon basin boundaries
└── analysis_project.qgz # QGIS project (integrated analysis workflow)
Derived layers and QGIS raster calculator expressions (bands of focus_rgb_swir1_nir_red.tif: 1 = SWIR1, 2 = NIR, 3 = RED):

NDVI (focus_ndvi.tif):
Expression: ("focus_rgb_swir1_nir_red@2" - "focus_rgb_swir1_nir_red@3") / ("focus_rgb_swir1_nir_red@2" + "focus_rgb_swir1_nir_red@3")

NDMI (focus_ndmi.tif):
Expression: ("focus_rgb_swir1_nir_red@2" - "focus_rgb_swir1_nir_red@1") / ("focus_rgb_swir1_nir_red@2" + "focus_rgb_swir1_nir_red@1")

Grid anomaly flag (amazon_grid_anomaly.gpkg):
Expression: "ndvi_mean" <= "ndvi_p10" AND "ndmi_mean" <= "ndmi_p10"
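The three expressions reduce to a normalized difference over the composite bands plus a per-cell threshold test. A minimal Python sketch (the single-pixel reflectance values are illustrative only):

```python
def normalized_difference(a, b):
    # Generic normalized-difference index: (a - b) / (a + b), applied per pixel
    return (a - b) / (a + b)

# Band order in focus_rgb_swir1_nir_red.tif: band 1 = SWIR1, band 2 = NIR, band 3 = RED
nir, red, swir1 = 0.5, 0.1, 0.2           # illustrative reflectances for one pixel
ndvi = normalized_difference(nir, red)    # matches the NDVI expression (bands @2, @3)
ndmi = normalized_difference(nir, swir1)  # matches the NDMI expression (bands @2, @1)

def is_anomaly(ndvi_mean, ndvi_p10, ndmi_mean, ndmi_p10):
    # Grid-cell anomaly rule: bottom decile of both NDVI and NDMI
    return ndvi_mean <= ndvi_p10 and ndmi_mean <= ndmi_p10
```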
This dataset was constructed for the Phase 2 research described in the write-up document, analyzing the spatial relationships between geoglyphs (ancient earthwork structures) in the Amazon basin and hydrological environments to identify potential geoglyph locations.
Data sources
2_1_plan_research_area/
├── scripts/
│ └── kmz_point_extractor.py # Data extraction script (Archaeogeodesy KMZ → geoglyph coordinates)
├── data/
│ ├── amazon_basin.gpkg # Watershed boundaries (HydroBASINS Level 3 Amazon basin)
│ ├── amazon_gloric.gpkg # River data (GloRiC clipped to basin extent)
│ ├── amazon_grid_gloric.gpkg # Grid statistics (0.5° grid-based river environment statistics)
│ ├── sites_geoglyphs.gpkg # Site locations (extracted geoglyph points)
│ ├── survey_area.gpkg # Administrative areas (Brazil/Peru/Bolivia states of interest)
│ └── focus_area.gpkg # Analysis area (potential geoglyph survey target region)
└── plan_research_area.qgz # QGIS project (integrated layer management)
This dataset serves as the foundation for Phase 2 research utilizing environmental filtering and Sentinel-2 multispectral analysis to identify potential geoglyph locations.
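In outline, the KMZ-to-point extraction step performed by scripts/kmz_point_extractor.py might look like the following standard-library sketch. The actual script's logic (including the geoglyph-category filtering) is not reproduced here, so treat this as an assumption about the general approach:

```python
import zipfile
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def extract_kmz_points(kmz_path):
    """Extract (name, lon, lat) tuples from Placemark points in a KMZ archive."""
    with zipfile.ZipFile(kmz_path) as kmz:
        # A KMZ is a zip archive containing at least one KML document
        kml_name = next(n for n in kmz.namelist() if n.endswith(".kml"))
        root = ET.fromstring(kmz.read(kml_name))
    ns = {"kml": KML_NS}
    points = []
    for pm in root.iter(f"{{{KML_NS}}}Placemark"):
        name = pm.findtext("kml:name", default="", namespaces=ns)
        coords = pm.findtext(".//kml:Point/kml:coordinates", namespaces=ns)
        if coords:  # KML coordinates are "lon,lat[,alt]"
            lon, lat = map(float, coords.strip().split(",")[:2])
            points.append((name, lon, lat))
    return points
```

The extracted tuples could then be written to sites_geoglyphs.gpkg with a GIS library.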
This project explores the feasibility of integrating solar-powered infrastructure into bike pathways as a sustainable energy and transportation solution for California. Using tools such as ArcGIS (for spatial analysis), PVWatts, SAM, and JEDI, this study evaluates the economic, environmental, and technical implications through a conceptual case study based in Riverside. Insights drawn from global case studies and stakeholder feedback highlight challenges such as financial constraints, regulatory complexities, and technical design considerations, while also identifying opportunities for renewable energy generation, greenhouse gas emission reductions, and enhanced urban mobility. The conceptual case study serves as a framework for assessing potential benefits and informing actionable strategies. Recommendations address barriers and align implementation with California’s climate action and sustainability goals, offering a roadmap for integrating renewable energy with active transportation systems.

The data collection and processing methods for this project utilized a combination of publicly available tools and resources to ensure accuracy and usability. Key geospatial, energy modeling, and economic analysis data were gathered using reliable tools such as ArcGIS, SAM, JEDI, and PVWatts, with outputs systematically processed into accessible formats. This approach enabled comprehensive analysis of bike path integration, energy performance, and economic impacts.
Data Collection:
BikePaths_Riverside.qgz: Geospatial data detailing bike paths in Riverside was gathered from publicly available sources and initially analyzed using ArcGIS Pro. To ensure open access and reusability, the data has been converted to a .qgz project file compatible with QGIS (version 3.42), a free and open-source GIS platform.
SAM_Input_Variable_Values.csv: Input parameters were collected based on standard system specifications, financial assumptions, and default or adjusted inputs available in the System Advisor Model (SAM).

# Data for: Solar bike path feasibility study in California
https://doi.org/10.5061/dryad.4tmpg4fn1
The data was collected to evaluate the feasibility, technical requirements, and potential impacts of integrating solar-powered infrastructure into bike pathways. The study utilized geospatial data from ArcGIS for spatial analysis and site evaluation, combined with energy modeling tools such as PVWatts and SAM to estimate energy production, greenhouse gas reductions, and financial metrics. The JEDI model was employed to assess economic and job creation impacts. These efforts were guided by a conceptual case study in Riverside, California, to simulate real-world scenarios and inform actionable strategies for renewable energy integration. Feedback from stakeholders further shaped the analysis, addressing technical, economic, and regulatory challenges while aligning with California's sustainability goals.
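As a rough first-order illustration of the kind of estimate PVWatts produces for such an installation, the sketch below uses the textbook yield formula E = A × η × H × PR. It is not the study's actual model, and all parameter values shown are assumptions:

```python
def annual_pv_energy_kwh(area_m2, efficiency, annual_insolation_kwh_m2,
                         performance_ratio=0.75):
    """First-order annual PV yield: E = A * eta * H * PR.

    performance_ratio lumps wiring, inverter, soiling, and temperature losses;
    0.75 is a conventional default, not a value from the study.
    """
    return area_m2 * efficiency * annual_insolation_kwh_m2 * performance_ratio

# Hypothetical example: 100 m2 of 20%-efficient panels over a bike path in a
# region receiving 2000 kWh/m2/yr yields roughly 30,000 kWh/yr.
```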
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: The forward intersection method is already widely used in the geodetic survey of coordinates of inaccessible points, especially when only angle measurements are available, in which case it is also called the triangulation method. However, the mathematical solution of the 3D forward intersection with the analytical definition of spatial lines, resolved by the Minimum Distances Method, is still not widespread in the academic and professional environment. This mathematical modeling determines the 3D coordinates of a point located in the middle of the minimum distance between two or more spatial lines, which spatially "intersect" towards the observed point. This solution is more accurate than others presented in the literature because it simultaneously solves the problem of 3D determination of a point by the method of least squares, in addition to providing estimates of the coordinate precisions that are inherent to the adjustment. This work, therefore, has the objective of explaining the Minimum Distances Method for the spatial intersection of targeted measurements with a Total Station from two or more known observation points for the 3D determination of inaccessible points located at corners of buildings. For the analysis of the method, a Python tool was developed for QGIS that calculates the 3D coordinates and generates the adjustment processing report; it was applied to real observations from the geodetic survey of the SUDENE building in Recife-PE. The methodology developed in this work proved to be suitable for measurements of large structures, achieving spherical precision better than ±1.0 cm, in compliance with the Brazilian standards for urban cadastre.
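The two-line case of the minimum-distance construction described in the abstract has a closed form. The sketch below (not the authors' QGIS tool, which solves the general n-line case by least squares and also yields the precision estimates) computes the midpoint of the minimum-distance segment between two spatial lines:

```python
def line_intersection_midpoint(p1, d1, p2, d2):
    """Midpoint of the minimum-distance segment between two 3D lines.

    Each line is given by a point p and a direction vector d. This closed-form
    two-line case illustrates the geometric idea behind the Minimum Distances
    Method for "intersecting" targeted rays from known observation points.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # zero only for parallel lines
    t = (b * e - c * d) / denom        # parameter of closest point on line 1
    s = (a * e - b * d) / denom        # parameter of closest point on line 2
    q1 = [p + t * u for p, u in zip(p1, d1)]
    q2 = [p + s * u for p, u in zip(p2, d2)]
    return [(x + y) / 2.0 for x, y in zip(q1, q2)]
```

For example, the x-axis and a line through (0, 0, 1) along the y-axis are skew; their closest points are the two origins of the lines, so the midpoint is (0, 0, 0.5).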
This specialized location dataset delivers detailed information about marina establishments. Maritime industry professionals, coastal planners, and tourism researchers can leverage precise location insights to understand maritime infrastructure, analyze recreational boating landscapes, and develop targeted strategies.
How Do We Create Polygons?
- All our polygons are manually crafted using advanced GIS tools like QGIS, ArcGIS, and similar applications. This involves leveraging aerial imagery, satellite data, and street-level views to ensure precision.
- Beyond visual data, our expert GIS data engineers integrate venue layout/elevation plans sourced from official company websites to construct highly detailed polygons. This meticulous process ensures maximum accuracy and consistency.
- We verify our polygons through multiple quality assurance checks, focusing on accuracy, relevance, and completeness.
What's More?
- Custom Polygon Creation: Our team can build polygons for any location or category based on your requirements. Whether it’s a new retail chain, transportation hub, or niche point of interest, we’ve got you covered.
- Enhanced Customization: In addition to polygons, we capture critical details such as entry and exit points, parking areas, and adjacent pathways, adding greater context to your geospatial data.
- Flexible Data Delivery Formats: We provide datasets in industry-standard GIS formats like WKT, GeoJSON, Shapefile, and GDB, making them compatible with various systems and tools.
- Regular Data Updates: Stay ahead with our customizable refresh schedules, ensuring your polygon data is always up-to-date for evolving business needs.
Unlock the Power of POI and Geospatial Data
With our robust polygon datasets and point-of-interest data, you can:
- Perform detailed market and location analyses to identify growth opportunities.
- Pinpoint the ideal locations for your next store or business expansion.
- Decode consumer behavior patterns using geospatial insights.
- Execute location-based marketing campaigns for better ROI.
- Gain an edge over competitors by leveraging geofencing and spatial intelligence.
Why Choose LocationsXYZ?
LocationsXYZ is trusted by leading brands to unlock actionable business insights with our accurate and comprehensive spatial data solutions. Join our growing network of successful clients who have scaled their operations with precise polygon and POI datasets. Request your free sample today and explore how we can help accelerate your business growth.
The Global GIS Mapping Tools Market is poised for significant expansion, projected to reach a substantial market size of $10 billion by 2025, with an anticipated Compound Annual Growth Rate (CAGR) of 12.5% through 2033. This robust growth trajectory is fueled by the increasing demand for advanced spatial analysis and visualization capabilities across a multitude of sectors. Key drivers include the escalating need for accurate geological exploration to identify and manage natural resources, the critical role of GIS in planning and executing complex water conservancy projects for sustainable water management, and the indispensable application of GIS in urban planning for efficient city development and infrastructure management. Furthermore, the burgeoning adoption of cloud-based and web-based GIS solutions is democratizing access to powerful mapping tools, enabling broader use by organizations of all sizes. The market is also benefiting from advancements in data processing, artificial intelligence integration, and the growing availability of open-source GIS platforms.

Despite the optimistic outlook, certain restraints could temper the market's full potential. High initial investment costs for sophisticated GIS software and hardware, coupled with a shortage of skilled GIS professionals in certain regions, may pose challenges. However, the overwhelming benefits of enhanced decision-making, improved operational efficiency, and the ability to gain deep insights from spatial data are compelling organizations to overcome these hurdles. The competitive landscape is dynamic, featuring established players like Esri and Autodesk alongside innovative providers such as Mapbox and CARTO, all vying for market share by offering specialized features, user-friendly interfaces, and integrated solutions. The continuous evolution of GIS technology, driven by the integration of remote sensing data, big data analytics, and real-time information, will continue to shape the market's future.
This in-depth report provides a panoramic view of the global GIS Mapping Tools market, meticulously analyzing its landscape from the Historical Period (2019-2024) through to the Forecast Period (2025-2033), with 2025 serving as both the Base Year and the Estimated Year. The study period encompasses 2019-2033, offering a robust historical context and forward-looking projections. The market is valued in the millions of US dollars, with detailed segment-specific valuations and growth trajectories. The report is structured to deliver actionable intelligence to stakeholders, covering market concentration, key trends, regional dominance, product insights, and critical industry dynamics. It delves into the intricate interplay of companies such as Esri, Hexagon, Autodesk, CARTO, and Mapbox, alongside emerging players like Geoway and Shenzhen Edraw Software, across diverse applications including Geological Exploration, Water Conservancy Projects, and Urban Planning. The analysis also differentiates between Cloud Based and Web Based GIS solutions, providing a granular understanding of market segmentation.
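The headline projection above is simple compound growth. For illustration (the $10 billion base and 12.5% CAGR figures come from the summary; the arithmetic is mine):

```python
def project_market_size(base_value, cagr, years):
    # Compound annual growth: V_t = V_0 * (1 + CAGR) ** t
    return base_value * (1.0 + cagr) ** years

# $10B in 2025 growing at a 12.5% CAGR for 8 years (through 2033)
projected_2033 = project_market_size(10.0, 0.125, 8)  # roughly $25.7B
```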
Xtract.io's location data for home and electronics retailers delivers a comprehensive view of the retail sector. Retail analysts, industry researchers, and business developers can utilize this dataset to understand market distribution, identify potential opportunities, and develop strategic insights into home and electronics retail landscapes.
How Do We Create Polygons?
- All our polygons are manually crafted using advanced GIS tools like QGIS, ArcGIS, and similar applications. This involves leveraging aerial imagery and street-level views to ensure precision.
- Beyond visual data, our expert GIS data engineers integrate venue layout/elevation plans sourced from official company websites to construct detailed indoor polygons. This meticulous process ensures higher accuracy and consistency.
- We verify our polygons through multiple quality checks, focusing on accuracy, relevance, and completeness.

What's More?
- Custom Polygon Creation: Our team can build polygons for any location or category based on your specific requirements. Whether it’s a new retail chain, transportation hub, or niche point of interest, we’ve got you covered.
- Enhanced Customization: In addition to polygons, we capture critical details such as entry and exit points, parking areas, and adjacent pathways, adding greater context to your geospatial data.
- Flexible Data Delivery Formats: We provide datasets in industry-standard formats like WKT, GeoJSON, Shapefile, and GDB, making them compatible with various systems and tools.
- Regular Data Updates: Stay ahead with our customizable refresh schedules, ensuring your polygon data is always up-to-date for evolving business needs.

Unlock the Power of POI and Geospatial Data
With our robust polygon datasets and point-of-interest data, you can:
- Perform detailed market analyses to identify growth opportunities.
- Pinpoint the ideal location for your next store or business expansion.
- Decode consumer behavior patterns using geospatial insights.
- Execute targeted, location-driven marketing campaigns for better ROI.
- Gain an edge over competitors by leveraging geofencing and spatial intelligence.

Why Choose LocationsXYZ?
LocationsXYZ is trusted by leading brands to unlock actionable business insights with our spatial data solutions. Join our growing network of successful clients who have scaled their operations with precise polygon and POI data. Request your free sample today and explore how we can help accelerate your business growth.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background
Landscaping studies related to public health education in India do not exclusively focus on the most common Master of Public Health (MPH) program. The field of public health faces challenges due to the absence of a professional council, resulting in fragmented documentation of these programs. This study was undertaken to map all MPH programs offered across various institutes in India in terms of their geographic distribution, accreditation status, and administration patterns.

Methodology
An exhaustive internet search using various keywords was conducted to identify all MPH programs offered in India. Websites were explored for their details. A data extraction tool was developed for recording demographic and other data. Information was extracted from these websites as per the tool and collated in a matrix. Geographic coordinates were obtained from Google Maps, and QGIS software facilitated map generation.

Results
The search identified 116 general and 13 specialized MPH programs offered by different universities and institutes across India. India is divided into six zones, and the distribution of MPH programs across them is as follows: the central zone has 20 programs; the east zone, 11; the north zone, 35; the north-east zone, 7; the south zone, 26; and the west zone, 17. While 107 programs are offered by University Grants Commission (UGC)-approved universities and institutes, only 46 are conducted by universities and institutes that are both UGC-approved and National Assessment and Accreditation Council (NAAC)-accredited. Five universities are categorized as central universities; 22 are deemed universities; 51 are private universities; and 29 are state universities. Nine are considered institutions of national importance by the UGC, and four are recognized as institutions of eminence. All general MPH programs span 2 years and are administered under various faculties, with only 27 being conducted within dedicated schools or centers of public health.

Conclusion
The MPH programs in India show considerable diversity in their geographic distribution, accreditation status, and administration pattern.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for: Regional Correlations in the layered deposits of Arabia Terra, Mars
Overview:
This repository contains the map-projected HiRISE Digital Elevation Models (DEMs) and the map-projected HiRISE image for each DEM and for each site in the study. Also contained in the repository is a GeoPackage file (beds_2019_08_28_09_29.gpkg) that contains the dip corrected bed thickness measurements, longitude and latitude positions, and error information for each bed measured in the study. GeoPackage files supersede shapefiles as a standard geospatial data format and can be opened in a variety of open source tools including QGIS, and proprietary tools such as recent versions of ArcGIS. For more information about GeoPackage files, please use https://www.geopackage.org/ as a resource. A more detailed description of columns in the beds_2019_08_28_09_29.gpkg file is described below in a dedicated section. Table S1 from the supplementary is also included as an excel spreadsheet file (table_s1.xlsx).
HiRISE DEMs and Images:
Each HiRISE DEM, and corresponding map-projected image used in the study are included in this repository as GeoTiff files (ending with .tif). The file names correspond to the combination of the HiRISE Image IDs listed in Table 1 that were used to produce the DEM for the site, with the image with the smallest emission angle (most-nadir) listed first. Files ending with “_align_1-DEM-adj.tif” are the DEM files containing the 1 meter per pixel elevation values, and files ending with “_align_1-DRG.tif” are the corresponding map-projected HiRISE (left) image. Table 1 Image Pairs correspond to filenames in this repository in the following way: In Table 1, Sera Crater corresponds to HiRISE Image Pair: PSP_001902_1890/PSP_002047_1890, which corresponds to files: “PSP_001902_1890_PSP_002047_1890_align_1-DEM-adj.tif” for the DEM file and “PSP_001902_1890_PSP_002047_1890_align_1-DRG.tif” for the map-projected image file. Each site is listed below with the DEM and map-projected image filenames that correspond to the site as listed in Table 1. The DEM and Image files can be opened in a variety of open source tools including QGIS, and proprietary tools such as recent versions of ArcGIS.
· Sera
o DEM: PSP_001902_1890_PSP_002047_1890_align_1-DEM-adj.tif
o Image: PSP_001902_1890_PSP_002047_1890_align_1-DRG.tif
· Banes
o DEM: ESP_013611_1910_ESP_014033_1910_align_1-DEM-adj.tif
o Image: ESP_013611_1910_ESP_014033_1910_align_1-DRG.tif
· Wulai 1
o DEM: ESP_028129_1905_ESP_028195_1905_align_1-DEM-adj.tif
o Image: ESP_028129_1905_ESP_028195_1905_align_1-DRG.tif
· Wulai 2
o DEM: ESP_028129_1905_ESP_028195_1905_align_1-DEM-adj.tif
o Image: ESP_028129_1905_ESP_028195_1905_align_1-DRG.tif
· Jiji
o DEM: ESP_016657_1890_ESP_017013_1890_align_1-DEM-adj.tif
o Image: ESP_016657_1890_ESP_017013_1890_align_1-DRG.tif
· Alofi
o DEM: ESP_051825_1900_ESP_051970_1900_align_1-DEM-adj.tif
o Image: ESP_051825_1900_ESP_051970_1900_align_1-DRG.tif
· Yelapa
o DEM: ESP_015958_1835_ESP_016235_1835_align_1-DEM-adj.tif
o Image: ESP_015958_1835_ESP_016235_1835_align_1-DRG.tif
· Danielson 1
o DEM: PSP_002733_1880_PSP_002878_1880_align_1-DEM-adj.tif
o Image: PSP_002733_1880_PSP_002878_1880_align_1-DRG.tif
· Danielson 2
o DEM: PSP_008205_1880_PSP_008930_1880_align_1-DEM-adj.tif
o Image: PSP_008205_1880_PSP_008930_1880_align_1-DRG.tif
· Firsoff
o DEM: ESP_047184_1820_ESP_039404_1820_align_1-DEM-adj.tif
o Image: ESP_047184_1820_ESP_039404_1820_align_1-DRG.tif
· Kaporo
o DEM: PSP_002363_1800_PSP_002508_1800_align_1-DEM-adj.tif
o Image: PSP_002363_1800_PSP_002508_1800_align_1-DRG.tif
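The filename convention described above maps each Table 1 image pair to its DEM and image files. A small helper capturing that stated convention (the function itself is hypothetical, not part of the repository):

```python
def dem_and_image_filenames(image_pair):
    """Map a Table 1 HiRISE image pair, e.g. "PSP_001902_1890/PSP_002047_1890",
    to the DEM and map-projected image filenames used in this repository."""
    stem = "_".join(image_pair.split("/"))
    return (f"{stem}_align_1-DEM-adj.tif",  # 1 m/px elevation values
            f"{stem}_align_1-DRG.tif")      # map-projected (left) HiRISE image
```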
Description of beds_2019_08_28_09_29.gpkg:
The GeoPackage file “beds_2019_08_28_09_29.gpkg” contains the dip corrected bed thickness measurements among other columns described below. The file can be opened in a variety of open source tools including QGIS, and proprietary tools such as recent versions of ArcGIS.
(Column_Name: Description)
sitewkn: Site name corresponding to the bed (e.g., Danielson 1)
section: Section ID of the bed (sections contain multiple beds)
meansl: The mean slope (dip) in degrees for the section
meanaz: The mean azimuth (dip-direction) in degrees for the section
ang_error: Angular error for a section derived from individual azimuths in the section
B_1: Plane coefficient 1 for the section
B_2: Plane coefficient 2 for the section
lon: Longitude of the centroid of the Bed
lat: Latitude of the centroid of the Bed
thickness: Thickness of the bed BEFORE dip correction
dipcor_thick: Dip-corrected bed thickness
lon1: Longitude of the centroid of the lower layer for the bed (each bed has a lower and upper layer)
lon2: Longitude of the centroid of the upper layer for the bed
lat1: Latitude of the centroid of the lower layer for the bed
lat2: Latitude of the centroid of the upper layer for the bed
meanc1: Mean stratigraphic position of the lower layer for the bed
meanc2: Mean stratigraphic position of the upper layer for the bed
uuid1: Universally unique identifier of the lower layer for the bed
uuid2: Universally unique identifier of the upper layer for the bed
stdc1: Standard deviation of the stratigraphic position of the lower layer for the bed
stdc2: Standard deviation of the stratigraphic position of the upper layer for the bed
sl1: Individual Slope (dip) of the lower layer for the bed
sl2: Individual Slope (dip) of the upper layer for the bed
az1: Individual Azimuth (dip-direction) of the lower layer for the bed
az2: Individual Azimuth (dip-direction) of the upper layer for the bed
meanz: Mean elevation of the bed
meanz1: Mean elevation of the lower layer for the bed
meanz2: Mean elevation of the upper layer for the bed
rperr1: Regression error for the plane fit of the lower layer for the bed
rperr2: Regression error for the plane fit of the upper layer for the bed
rpstdr1: Standard deviation of the residuals for the plane fit of the lower layer for the bed
rpstdr2: Standard deviation of the residuals for the plane fit of the upper layer for the bed
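The relation between the thickness and dipcor_thick columns is a stratigraphic dip correction. The cosine form below is the standard correction for a thickness measured vertically on a dipping bed; whether it matches the study's exact formulation is an assumption:

```python
import math

def dip_corrected_thickness(apparent_thickness, dip_deg):
    # True (stratigraphic) thickness = apparent (vertical) thickness * cos(dip)
    return apparent_thickness * math.cos(math.radians(dip_deg))

# Example: a 2 m apparent thickness on a bed dipping 60 degrees
# corresponds to a 1 m true thickness.
```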
Xtract.io’s Pharmacy & Drug Store Location Data provides a complete geospatial view of pharmaceutical retail across the United States and Canada. This dataset includes handcrafted polygons and geocoded coordinates for each pharmacy location, making it a powerful resource for healthcare planners, market researchers, and retail strategists.
Organizations can leverage this dataset to:
Conduct healthcare accessibility mapping and identify underserved areas.
Evaluate market penetration and retail coverage across regions.
Analyze the competitive landscape in pharmaceutical retail.
Support site selection and expansion strategies.
How We Build Pharmacy Polygons
Manually crafted polygons created using GIS tools like QGIS and ArcGIS, with aerial and street-level imagery.
Integration of venue layouts and elevation plans from official sources for enhanced accuracy.
Rigorous multi-stage quality checks ensure accuracy, completeness, and relevance.
What Else We Offer
Custom polygon creation for any retail chain, healthcare facility, or point of interest.
Enhanced metadata including entry/exit points, parking areas, and surrounding context.
Flexible formats: WKT, GeoJSON, Shapefile, and GDB for smooth system integration.
Regular updates tailored to client needs (30, 60, 90 days).
Unlock the Power of Healthcare Geospatial Data
With detailed pharmacy polygon data and POI datasets, businesses can:
Map healthcare service coverage and accessibility.
Identify growth opportunities in underserved communities.
Decode consumer behavior in the pharmaceutical retail space.
Strengthen location-driven strategies with spatial intelligence.
Why Choose LocationsXYZ?
LocationsXYZ is trusted by enterprises worldwide to deliver 95% accurate, handcrafted POI and polygon data. With our pharma dataset, you gain actionable insights to support healthcare planning, retail expansion, and competitive benchmarking.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This data package includes two related data files, an Excel parameter file and a GeoTiff biotope raster map, that can be used as input for habitat network analyses on amphibians using a specific habitat network analysis tool (HNAT; v0.1.2-alpha).
HNAT is a plugin for the open-source Geographic Information System QGIS (https://qgis.org/en/site/). HNAT can be downloaded at https://github.com/SMoG-Chalmers/hnat/releases/tag/v0.1.2-alpha. To run the habitat network analyses based on the input data provided in this package, one must install the HNAT plugin into QGIS. This software has been created by Chalmers within a research project financed by the Swedish government research council for sustainable development, Formas (FR-2021/0004), within the framework of the national research program "From research to implementation for a sustainable society 2021". The Excel file contains the parameters for amphibians, and the GeoTiff file represents a biotope raster map covering the Gothenburg region in western Sweden. SRID=3006 (Sweref99 TM). Pixel size = 10x10 metres. The pixel values of the biotope map correspond to the biotope codes listed in the parameter file (see column “BiotopeCode”). For each biotope, the parameter file holds biotope-specific parameter values for two alternative amphibian models, denoted “Amphibians_NMDWater_ponds” and “Amphibians_NMDWater_ponds_NoFriction”. The two alternative parameter settings can be used to demonstrate the difference in model prediction with or without the assumption that amphibian movements are affected by barrier effects caused by roads, buildings and certain biotope types. The “NoFriction” version assumes that amphibian dispersal probability declines exponentially with increasing Euclidean distance, whereas the other set assumes dispersal to be affected by barriers. Read the readme file for details on each parameter provided in the parameter file.
The GeoTiff-file is a biotope mape which has been created by combining a couple of publicly available geodata sets. As a base for the biotope map the Swedish land cover map NMD was used (https://geodata.naturvardsverket.se/nedladdning/marktacke/NMD2018/NMD2018_basskikt_ogeneraliserad_Sverige_v1_1.zip). To achieve a greater cartographic representation of small ponds, streams, buildings and transport infrastructure relevant for amphibian dispersal, reproduction and foraging, NMD was complemented by information from a number of vector layers. In total, 20 new biotope classes representing buildings of different height ranging from less than 5 m up to 100 m, were added to the basic land cover map. The heights were obtained by analyzing the LiDAR data provided by Swedish Land Survey (for details see Berghauser Pont et al., 2019). The data was rasterized and added on top of existing pixels representing buildings in the Swedish land cover map. The roads were separated into 101 new biotope classes with different expected number of vehicles per day. Instead of using statistics from the Swedish Transport Administration on observed number of vehicles per day relative traffic volumes were predicted based on angular betweenness centrality values calculated from the road network using PST (Place Syntax Tool, Stavroulaki et al. 2023). PST is an open-source plugin for QGIS (https://www.smog.chalmers.se/pst). Traffic volumes are expected to be correlated to the centrality values (Serra and Hillier, 2019). The vector layer with the centrality values was buffered by 15 m prior to rasterization. After that the new pixel values were added to the basic Land cover raster in sequence following the order of centrality values. Information on small streams with a maximum width of 6 m was added from a vector layer of Swedish streams (https://www.lantmateriet.se/en/geodata/geodata-products/product-list/topography-50-download-vector/). 
These lines were rasterized and added to the land cover raster by replacing the underlying pixel values with new class-specific pixel values. Small pond-like waterbodies were identified from the NMD data by selecting contiguous fragments of the original NMD biotope class 61 with an area smaller than 1 hectare. Pixels representing these smaller water bodies were then changed to 201.
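The pond-extraction step above (contiguous class-61 fragments smaller than 1 hectare, i.e. fewer than 100 pixels at 10 m resolution, relabelled to 201) can be sketched with a pure-Python flood fill on a toy grid; the actual workflow runs on the full biotope raster.

```python
from collections import deque

# Toy sketch: relabel contiguous fragments of water class 61 that are
# smaller than 1 ha (<100 pixels at 10x10 m) to the new pond class 201.
def reclassify_small_ponds(grid, water=61, pond=201, max_pixels=100):
    rows, cols = len(grid), len(grid[0])
    seen = set()
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != water or (r, c) in seen:
                continue
            # BFS collects one contiguous water fragment (4-connectivity)
            frag, queue = [], deque([(r, c)])
            seen.add((r, c))
            while queue:
                y, x = queue.popleft()
                frag.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == water and (ny, nx) not in seen):
                        seen.add((ny, nx))
                        queue.append((ny, nx))
            if len(frag) < max_pixels:  # fragment smaller than 1 ha
                for y, x in frag:
                    grid[y][x] = pond
    return grid

grid = [[61, 61, 0],
        [0, 0, 0],
        [0, 0, 61]]
print(reclassify_small_ponds(grid))  # both fragments are < 100 px -> 201
```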
References:
Berghauser Pont M, Stavroulaki G, Bobkova E, et al. (2019). The spatial distribution and frequency of street, plot and building types across five European cities. Environment and Planning B: Urban Analytics and City Science 46(7): 1226-1242.
Serra M and Hillier B (2019). Angular and Metric Distance in Road Network Analysis: A nationwide correlation study. Computers, Environment and Urban Systems 74: 194-207.
Stavroulaki I, Berghauser Pont M, Fitger M, et al. (2023). PST Documentation_v.3.2.5_20231128. DOI:10.13140/RG.2.2.32984.67845.
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Urban areas are expanding rapidly, with the majority of the global and US population inhabiting them. Urban forests are critically important for providing ecosystem services to the growing urban populace, but their health is threatened by invasive insects. Insect density and damage are highly variable in different sites across urban landscapes, such that trees in some sites experience outbreaks and are severely damaged while others are relatively unaffected. To protect urban forests against damage from invasive insects and support future delivery of ecosystem services, we must first understand the factors that affect insect density and damage to their hosts across urban landscapes. This study explores how a variety of environmental factors that vary across urban habitats influence density of invasive insects. Specifically, we evaluate how vegetational complexity, distance to buildings, impervious surface, canopy temperature, host availability, and density of co-occurring herbivores impact three invasive pests of elm trees: the elm leaf beetle (Xanthogaleruca luteola), the elm flea weevil (Orchestes steppensis), and the elm leafminer (Fenusa ulmi). Except for building distance, all environmental factors were associated with density of at least one pest species. Furthermore, insect responses to these factors were species-specific, with direction and strength of associations influenced by insect life history. These findings can be used to inform future urban pest management and tree care efforts, making urban forests more resilient in an era where globalization and climate change make them particularly vulnerable to attack.

Keywords: urban forest, invasive species, impervious surface, temperature, species interactions.

Methods

Insect Density
At each sampling period, we measured insect density on four branches of each tree, one branch in each cardinal direction (N, S, E, and W).
The sampling unit was a 30 cm terminal branch (Dahlsten et al., 1993; Rodrigo et al., 2019), and we assumed equal leaf area per branch. All sampled branches were in the lower canopy up to 3 meters from the ground, and branches that could not be reached from the ground were accessed using a ladder. Sampled branches were haphazardly chosen from a distance where insects were not distinguishable to avoid sampling bias. On each tree branch, we counted individuals of each observable insect stage: beetle eggs, larvae, and adults (the beetle pupates in cryptic locations such as under bark or in the soil, and thus pupae were not counted); weevil leaf mines and adults; and the number of leaves with leafminer mines. Individual leafminer mines were not counted because adult females lay multiple eggs per leaf, and it is common for mines to merge and become indistinguishable from one another as larvae develop. Thus, it was not possible to count the number of individual mines for this species. Leafminer adults were not counted because this stage had disappeared for the season by the start of the first sampling period. The total number of leaves on each branch was also recorded. In addition to serving as the response variable for our environmental hypotheses, the density of each insect species was also used as a predictor variable for the co-occurring herbivore hypothesis. Tree 0 indicates the end of the dataset.

Urban Site Factors

Host Availability (AllElm_Density)
We measured host availability digitally by counting the number of elm trees within a 100 meter buffer around each tree using QGIS version 3.10.12 (QGIS Development Team, 2022) and a dataset of publicly managed trees provided by municipal forestry departments. We chose a 100 meter radius because significant changes in insect density are detectable for multiple insect species at this spatial scale (Sperry et al., 2001).
Although Siberian elm is a preferred host of the insects in this system, other species of elm may also serve as hosts and were thus included in this data set. Following digital assessment, we verified all counts in situ to capture any visible privately owned trees and verify that trees in the dataset were still alive and present in the field. Despite efforts to avoid spatial autocorrelation, four trees had 100 meter buffers that overlapped with the buffer of another tree (that is, two locations where two trees had overlapping buffers). Because the maximum overlap was <14% of the buffer area, we retained these trees in our analyses.

Vegetational Complexity (SCI_0_500)
We measured the structural complexity of the vegetation in a 10 x 10 meter area around each tree following Shrewsbury & Raupp (2000, 2006). Specifically, we sectioned off a 10 x 10 meter area around each study tree and divided this area into one hundred 1 x 1 meter plots. In each of these plots, we recorded five vegetation categories: ground cover (e.g., mulch or turf grass), herbaceous plants (e.g., garden annuals/perennials, tall native grasses), shrubs (e.g., hydrangea, boxwood, barberry), understory trees (e.g., juniper, plum, crabapple, small Siberian elm), or overstory trees (those with mature canopy including ash, pine, and other elm). One point was awarded for each vegetation type present, resulting in 0-5 points awarded in each plot. To quantify complexity of the vegetation in a continuous way, points were summed for all one hundred plots. Thus, each tree received a vegetational complexity score between 0 and 500.

Building Distance (Building Distance_m)
To assess the local availability of structures for insect overwintering, we measured the distance of each sampled tree to the nearest building in meters as in Speight et al. (1998). This was performed digitally using QGIS version 3.10.12 (QGIS Development Team, 2022) and the ESRI Standard Basemap, which displays built structures.
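The vegetational complexity scoring described above (one point per vegetation category present in each of 100 plots, summed to a 0-500 score) can be sketched as follows; representing each plot as a set of category names is an assumption made for illustration.

```python
# Sketch of the vegetational complexity (SCI_0_500) score: each of the
# 100 plots earns one point per vegetation category present (0-5 points),
# and the points are summed across plots (0-500 total).
CATEGORIES = {"ground_cover", "herbaceous", "shrub",
              "understory_tree", "overstory_tree"}

def complexity_score(plots):
    """plots: iterable of 100 sets of category names; returns a 0-500 score."""
    return sum(len(p & CATEGORIES) for p in plots)

# Example: every plot has ground cover and a shrub -> 2 points x 100 plots
plots = [{"ground_cover", "shrub"} for _ in range(100)]
print(complexity_score(plots))  # 200
```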
Impervious Surface (ImperviousSurface_20m)
Impervious surface data were obtained through the USGS Multi-Resolution Land Characteristics Consortium (Dewitz & US Geological Survey, 2021) on a 30 x 30 meter scale and processed using QGIS version 3.10.12 (QGIS Development Team, 2022). We used the zonal statistics tool to calculate the percentage of impervious surface within a 20 meter buffer surrounding each sampled tree, which is more predictive of herbivorous insect density than impervious surface at larger spatial scales (Just et al., 2019). Although impervious surface data were not available at a smaller spatial scale, the zonal statistics tool allowed us to obtain an estimate of impervious surface within 20 meters of each tree using 30 x 30 meter data by computing an average impervious surface value based on weighted averages of the extent to which each 30 x 30 meter pixel overlapped with the 20 meter buffer around a tree.

Canopy Temperature (MeanTemp_Night)
Canopy temperature at each tree was measured every 1.5 hours via the iButton Thermochron (model DS1921G-F5). Temperature logging began at 7:30AM MST on June 12 and ended at 7:30AM MST on August 25 for a total of 1,185 data points per logger. We placed each logger in a compostable container to prevent contact with direct sunlight and attached them with a zip tie to branches approximately 2-3 meters from the ground. We placed temperature loggers on the east side of the tree wherever possible or on the west side of the tree if a stable eastern location was not available. Despite efforts to minimize contact with direct sunlight, several loggers recorded artificially inflated temperatures. This made mean and maximum temperatures impractical for analysis.
We used mean nighttime temperature in the following analyses (7:30PM-7:30AM MST, n=666 measurements per logger) because the urban heat island effect is less variable, occurs more frequently, and is more intense in urban canopies at night compared to the day (Du et al., 2021; Sun et al., 2019).
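A minimal sketch of the nighttime filtering described above, assuming the logger data are available as (timestamp, temperature) pairs:

```python
from datetime import datetime, time

# Keep readings in the 7:30 PM - 7:30 AM window (inclusive at both ends,
# matching the 1.5 h logging interval) and average them.
def mean_nighttime_temp(readings):
    """readings: iterable of (datetime, temp_C) pairs."""
    night = [temp for ts, temp in readings
             if ts.time() >= time(19, 30) or ts.time() <= time(7, 30)]
    return sum(night) / len(night)

readings = [(datetime(2021, 6, 12, 7, 30), 15.0),   # nighttime (inclusive)
            (datetime(2021, 6, 12, 13, 30), 30.0),  # daytime, excluded
            (datetime(2021, 6, 12, 19, 30), 21.0)]  # nighttime
print(mean_nighttime_temp(readings))  # 18.0
```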
Notice: this is not the latest Heat Island Severity image service. For 2023 data, visit https://tpl.maps.arcgis.com/home/item.html?id=db5bdb0f0c8c4b85b8270ec67448a0b6. This layer contains the relative heat severity for every pixel for every city in the United States. This 30-meter raster was derived from Landsat 8 imagery band 10 (ground-level thermal sensor) from the summers of 2018 and 2019. Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions. The purpose of this layer is to show where certain areas of cities are hotter than the average temperature for that same city as a whole. Severity is measured on a scale of 1 to 5, with 1 being a relatively mild heat area (slightly above the mean for the city), and 5 being a severe heat area (significantly above the mean for the city). The absolute heat above mean values are classified into these 5 classes using the Jenks Natural Breaks classification method, which seeks to reduce the variance within classes and maximize the variance between classes. Knowing where areas of high heat are located can help a city government plan for mitigation strategies. This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same.
We believe that this is useful at the national level, and for cities that don’t have the ability to conduct their own hyper-local temperature survey. Where local data is available, it may be more accurate than this dataset.

Dataset Summary
This dataset was developed using proprietary Python code developed at The Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.

What can you do with this layer?
This layer has query, identify, and export image services available. Since it is served as an image service, it is not necessary to download the data; the service itself is data that can be used directly in any Esri geoprocessing tool that accepts raster data as input.

Using the Urban Heat Island (UHI) Image Services
The data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro. In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms “tpl” and “uhi”. The results returned will be the UHI image services. Right-click on a result and select “Add to current map” from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates.
On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, select “None”, then “OK”. Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used).

Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
- EPA’s Heat Island Resource Center
- Dr. Ladd Keith, University of Arizona
- Dr. Ben McMahan, University of Arizona
- Dr. Jeremy Hoffman, Science Museum of Virginia
- Dr. Hunter Jones, NOAA
- Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency

Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so The Trust for Public Land checked a statistically significant sample size for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). The Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
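The 1-5 severity assignment described above can be sketched as a simple binning step; the break values below are hypothetical stand-ins for the per-city Jenks Natural Breaks.

```python
import numpy as np

# Hypothetical Jenks break values (degrees above the city mean); the real
# breaks are computed per city with the Jenks Natural Breaks method.
breaks = np.array([0.5, 1.5, 2.5, 3.5])  # upper bounds of classes 1-4

heat_above_mean = np.array([0.2, 1.0, 2.0, 3.0, 5.0])
# np.digitize returns bin indices 0..4; shift to severity classes 1..5
severity = np.digitize(heat_above_mean, breaks) + 1
print(severity)  # [1 2 3 4 5]
```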
HISTORECO is a comprehensive database that includes more than 45 geographic, climatic, hydrological, demographic, and economic variables (64 data columns in addition to the first 7 columns identifying the municipality in "Historeco.csv") spanning the 20th and 21st centuries, covering all 8,122 municipalities in Spain, each of which has one value per decade. It is a unique dataset that integrates data from various sources, facilitating the analysis of long-term temporal and spatial trends across multiple disciplines such as climate, geography, and socio-economic development. The dataset combines information from twenty sources (databases/articles), harmonizing and downscaling them to the municipal level using GIS and programming tools (mainly QGIS, R, and Python). This is the most extensive dataset of its kind in terms of temporal depth and spatial granularity available for Spain. This project has been developed thanks to funding from the Ramón Areces Foundation, without which it would not have been possible.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This portfolio includes three sets of data, used and explained in Lazarus, Williams & Goldstein (preprint: https://doi.org/10.31223/X5JH1X):
washover morphometry measured from lidar-derived topographic change along the coastline of New Jersey, USA, following Hurricane Sandy (2012) ('NJ_Sandy_metrics.csv');
the geospatial data layers used to generate those measurements ('WashoverGIS.zip');
and a compilation of washover morphometry reported in the literature ('washover_LAV_literature_examples.csv').
Washover morphometry datasets
NJ_Sandy_metrics.csv – The lidar-derived washover morphometry dataset includes: deposit width (m), intrusion length (m), deposit area (m2), deposit volume (m3), deposit perimeter (m), built fraction, the storm event (Sandy 2012), and a general location note.
washover_LAV_literature_examples.csv – Also included here are 35 measurements of washover morphometry reported in the literature by six different studies, sampling different storm events in different coastal barrier settings (Carruthers et al., 2013; Williams, 2015; Jamison-Todd et al., 2020; Rodriguez et al. 2020; Hansen et al., 2021; Williams & Rains, 2022). The literature-based dataset includes: intrusion length (m), deposit area (m2), deposit volume (m3), the reference (dataset) in which the measurements were reported, and additional notes.
Geospatial data layers
The lidar data underpinning the geospatial data layers here are available from the NOAA Digital Coast Data Viewer (https://coast.noaa.gov/dataviewer/#/): "2012 USGS EAARL-B Lidar: Pre-Sandy" (pre-storm), and "2012 USGS EAARL-B Lidar: Post-Sandy" (post-storm).
Geospatial analysis was done in QGIS version 3.22.5. We masked both the pre- and post-storm surfaces to isolate only positive elevations, and subtracted the pre-storm surface from the post-storm surface to calculate the difference between them; we then retained only the positive differences in the resulting surface to isolate sites of sediment deposition. We manually digitized the perimeters of depositional forms we interpreted as washover, corroborated by aerial imagery (https://storms.ngs.noaa.gov/).
Basic geometric characteristics (perimeter, area) were taken directly from the washover polygons; washover length and width were taken from oriented minimum bounding boxes around each polygon. Volume for each washover polygon was measured using the Volume Calculation Tool (version 0.4) plugin for QGIS (https://github.com/REDcatch/Volume_calculation_for_QGIS3). In built settings, each washover deposit was associated with a locally estimated built fraction (Lazarus et al., 2021). Elements of the built environment (i.e., buildings) were isolated by creating a binary mask of the pre-storm surface, such that all elevations ≥5 m were set to a value = 1, and all elevations <5 m set to zero. Minimum enclosing circles were drawn around each washover polygon, and the total built area (masked value = 1) within each circle summed using the QGIS Zonal Statistics tool. Here, local built fraction is the total built area within a minimum enclosing circle divided by the area of that circle.
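Two of the raster operations described above (positive elevation differencing and the 5 m rooftop threshold) can be sketched on toy numpy arrays; the real analysis was done on the lidar rasters in QGIS.

```python
import numpy as np

# Toy pre- and post-storm elevation surfaces (m); real inputs are the
# EAARL-B lidar rasters.
pre = np.array([[1.0, 6.0], [2.0, 0.5]])
post = np.array([[1.5, 6.0], [1.0, 0.5]])

# (1) Subtract pre from post and keep only positive differences
#     to isolate sites of sediment deposition.
diff = post - pre
deposition = np.where(diff > 0, diff, 0.0)

# (2) Binary rooftop mask: elevations >= 5 m set to 1, else 0.
rooftops = (pre >= 5.0).astype(int)

print(deposition)
print(rooftops)
```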
Geospatial files here include:
NJ_north_wash_metrics.shp // NJ_south_wash_metrics.shp – shapefiles of the digitized washover deposits, with morphometric characteristics compiled in their attribute tables
NJ_north_BBs.shp // NJ_south_BBs.shp – oriented bounding boxes to determine deposit intrusion length & width
NJ_north_MECs.shp // NJ_south_MECs.shp – minimum enclosing circles, used for calculating local built fraction
NJ_north_dSandy_POS.tif // NJ_south_dSandy_POS.tif – positive [post-storm - pre-storm] elevation differences
NJ_north_rooftops_th05.tif // NJ_south_rooftops_th05.tif – binary masks based on the pre-storm lidar layer ("2012 USGS EAARL-B Lidar: Pre-Sandy") used for calculating built fraction, in which all topographic elements >= 5 m are set = 1, and all < 5 m are set = 0
Reason for Selection
Protected natural areas in urban environments provide urban residents a nearby place to connect with nature and offer refugia for some species. They help foster a conservation ethic by providing opportunities for people to connect with nature, and also support ecosystem services like offsetting heat island effects (Greene and Millward 2017, Simpson 1998), water filtration, stormwater retention, and more (Hoover and Hopton 2019). In addition, parks, greenspace, and greenways can help improve physical and psychological health in communities (Gies 2006). Urban park size complements the equitable access to potential parks indicator by capturing the value of existing parks.

Input Data
- Southeast Blueprint 2024 extent
- FWS National Realty Tracts, accessed 12-13-2023
- Protected Areas Database of the United States (PAD-US): PAD-US 3.0 national geodatabase - Combined Proclamation Marine Fee Designation Easement, accessed 12-6-2023
- 2020 Census Urban Areas from the Census Bureau’s urban-rural classification; download the data, read more about how urban areas were redefined following the 2020 census
- OpenStreetMap data “multipolygons” layer, accessed 12-5-2023. A polygon from this dataset is considered a beach if the value in the “natural” tag attribute is “beach”. Data for coastal states (VA, NC, SC, GA, FL, AL, MS, LA, TX) were downloaded in .pbf format and translated to an ESRI shapefile using R code. OpenStreetMap® is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF). Additional credit to OSM contributors.
Read more on the OSM copyright page.
- 2021 National Land Cover Database (NLCD): Percent developed imperviousness
- 2023 NOAA coastal relief model: volumes 2 (Southeast Atlantic), 3 (Florida and East Gulf of America), 4 (Central Gulf of America), and 5 (Western Gulf of America), accessed 3-27-2024

Mapping Steps
- Create a seamless vector layer to constrain the extent of the urban park size indicator to inland and nearshore marine areas <10 m in depth. The deep offshore areas of marine parks do not meet the intent of this indicator to capture nearby opportunities for urban residents to connect with nature. Shallow areas are more accessible for recreational activities like snorkeling, which typically has a maximum recommended depth of 12-15 meters. This step mirrors the approach taken in the Caribbean version of this indicator.
- Merge all coastal relief model rasters (.nc format) together using QGIS “create virtual raster”.
- Save the merged raster to .tif and import it into ArcGIS Pro.
- Reclassify the NOAA coastal relief model data to assign areas with an elevation from land down to -10 m a value of 1. Assign all other areas (deep marine) a value of 0.
- Convert the raster produced above to vector using the “RasterToPolygon” tool.
- Clip to 2024 subregions using the “Pairwise Clip” tool.
- Break apart multipart polygons using the “Multipart to single parts” tool.
- Hand-edit to remove the deep marine polygon.
- Dissolve the resulting data layer. This produces a seamless polygon defining land and shallow marine areas.
- Clip the Census urban area layer to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
- Clip PAD-US 3.0 to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
- Remove the following areas from PAD-US 3.0, which are outside the scope of this indicator to represent parks: all School Trust Lands in Oklahoma and Mississippi (Loc Des = “School Lands” or “School Trust Lands”).
These extensive lands are leased out and are not open to the public.
- All tribal and military lands (“Des_Tp” = "TRIBL" or “Des_Tp” = "MIL"). Generally, these lands are not intended for public recreational use.
- All BOEM marine lease blocks (“Own_Name” = "BOEM"). These Outer Continental Shelf lease blocks do not represent actively protected marine parks, but serve as the “legal definition for BOEM offshore boundary coordinates...for leasing and administrative purposes” (BOEM).
- All lands designated as “proclamation” (“Des_Tp” = "PROC"). These typically represent the approved boundary of public lands, within which land protection is authorized to occur, but not all lands within the proclamation boundary are necessarily currently in a conserved status.
- Retain only selected attribute fields from PAD-US to remove irrelevant attributes.
- Merge the filtered PAD-US layer produced above with the OSM beaches and FWS National Realty Tracts to produce a combined protected areas dataset.
- The resulting merged data layer contains overlapping polygons; remove them using the Dissolve function.
- Clip the resulting data layer to the inland and nearshore extent.
- Process all multipart polygons (e.g., separate parcels within a National Wildlife Refuge) to single parts (referred to in Arc software as an “explode”).
- Select all polygons that intersect the Census urban extent within 0.5 miles. We chose 0.5 miles to represent a reasonable walking distance based on input and feedback from park access experts. Assuming a moderate-intensity walking pace of 3 miles per hour, as defined by the U.S. Department of Health and Human Services’ physical activity guidelines, the 0.5 mi distance also corresponds to the 10-minute walk threshold used in the equitable access to potential parks indicator.
- Dissolve all the park polygons that were selected in the previous step.
- Process all multipart polygons to single parts (“explode”) again.
- Add a unique ID to the selected parks.
This value will be used in a later step to join the parks to their buffers.
- Create a 0.5 mi (805 m) buffer ring around each park using the multiring plugin in QGIS. Ensure that “dissolve buffers” is disabled so that a single 0.5 mi buffer is created for each park.
- Assess the amount of overlap between the buffered park and the Census urban area using “overlap analysis”. This step is necessary to identify parks that do not intersect the urban area, but which lie within an urban matrix (e.g., Umstead Park in Raleigh, NC and Davidson-Arabia Mountain Nature Preserve in Atlanta, GA). This step creates a table that is joined back to the park polygons using the UniqueID.
- Remove parks that had ≤10% overlap with the urban areas when buffered. This excludes mostly non-urban parks that do not meet the intent of this indicator to capture parks that provide nearby access for urban residents. Note: the 10% threshold is a judgement call based on testing which known urban parks and urban National Wildlife Refuges are captured at different overlap cutoffs, and is intended to be as inclusive as possible.
- Calculate the GIS acres of each remaining park unit using the Add Geometry Attributes function.
- Buffer the selected parks by 15 m. Buffering prevents very small and narrow parks from being left out of the indicator when the polygons are converted to raster.
- Reclassify the parks based on their area into the 7 classes seen in the final indicator values below. These thresholds were informed by park classification guidelines from the National Recreation and Park Association, which classify neighborhood parks as 5-10 acres, community parks as 30-50 acres, and large urban parks as optimally 75+ acres (Mertes and Hall 1995).
- Assess the impervious surface composition of each park using the NLCD 2021 impervious layer and the Zonal Statistics “MEAN” function. Retain only the mean percent impervious value for each park.
- Extract only parks with a mean impervious pixel value <80%.
This step excludes parks that do not meet the intent of the indicator to capture opportunities to connect with nature and offer refugia for species (e.g., the Superdome in New Orleans, LA, the Astrodome in Houston, TX, and City Plaza in Raleigh, NC).
- Extract again to the inland and nearshore extent.
- Export the final vector file to a shapefile and import it into ArcGIS Pro.
- Convert the resulting polygons to raster using the ArcPy Feature to Raster function and the area class field.
- Assign a value of 0 to all other pixels in the Southeast Blueprint 2024 extent not already identified as an urban park in the mapping steps above. Zero values are intended to help users better understand the extent of this indicator and make it perform better in online tools.
- Use the land and shallow marine layer and the “extract by mask” tool to save the final version of this indicator.
- Add color and legend to the raster attribute table.
- As a final step, clip to the spatial extent of Southeast Blueprint 2024.

Note: For more details on the mapping steps, the code used to create this layer is available in the Southeast Blueprint Data Download under > 6_Code.

Final indicator values
Indicator values are assigned as follows:
6 = 75+ acre urban park
5 = 50 to <75 acre urban park
4 = 30 to <50 acre urban park
3 = 10 to <30 acre urban park
2 = 5 to <10 acre urban park
1 = <5 acre urban park
0 = Not identified as an urban park

Known Issues
This indicator does not include park amenities that influence how well the park serves people and should not be the only tool used for parks and recreation planning. Park standards should be determined at a local level to account for various community issues, values, needs, and available resources. This indicator includes some protected areas that are not open to the public and not typically thought of as “parks”, like mitigation lands, private easements, and private golf courses.
While we experimented with excluding them using the public access attribute in PAD-US, numerous inaccuracies meant this inadvertently removed protected lands that are known to be publicly accessible. As a result, we erred on the side of including the non-publicly accessible lands. The NLCD percent impervious layer contains classification inaccuracies. As a result, this indicator may exclude parks that are mostly natural because they are misclassified as mostly impervious. Conversely, this indicator may include parks that are mostly impervious because they are misclassified as mostly natural.
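The acreage-to-class reclassification behind the final indicator values can be sketched as:

```python
# Sketch of the 7-class reclassification of park area into final
# indicator values (classes 1-6 for parks, 0 for non-park pixels).
def park_class(acres):
    if acres >= 75: return 6
    if acres >= 50: return 5
    if acres >= 30: return 4
    if acres >= 10: return 3
    if acres >= 5: return 2
    return 1  # <5 acre urban park

print([park_class(a) for a in (3, 7, 20, 40, 60, 100)])  # [1, 2, 3, 4, 5, 6]
```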
Cable-based technologies have been a backbone for harvesting on steep slopes. The layout of a single cable road is challenging because one must identify intermediate support locations and heights that guarantee structural safety and operational efficiency while minimizing set-up and dismantling costs. Seilaplan (short for Cable Road Layout Planner) optimizes the layout of a cable road. Seilaplan is able to calculate the optimal rope line layout (position and height of the supports) between defined start and end coordinates on the basis of a digital elevation model (DEM). The program is designed for Central European conditions and is based on a fixed suspension rope anchored at both ends. For the calculation of the properties of the load path curve, an iterative method is used, which was described by Zweifel (1960) and was developed especially for standing skylines. When testing the feasibility of the cable line, care is taken that 1) the maximum permissible stresses in the skyline are not exceeded, 2) there is a minimum distance between the load path and the ground, and 3) when using a gravitational system, there is a minimum inclination in the load path. The solution is selected which has the minimum number of supports as the first priority and minimizes the support height as the second priority. The newly developed method calculates the load path curve and the forces occurring in it more accurately than tools available on the market to date (status 2019) and is able to determine the optimum position and height of the intermediate supports. The reason for the more accurate results of the new tool is the assumption that the skyline is anchored at both end points. Forest cable yarders used in Europe have a skyline that is fixed at both ends. The behaviour of fixed-anchored suspension ropes is very difficult to describe mathematically and cannot be solved analytically.
For this reason, simplified linearized assumptions have so far been used in the forestry sector, which correspond to the behaviour of a weight-tensioned suspension rope and are known as the Pestal method (1961). Weight-tensioned suspension ropes are used for passenger transport. For the calculation of the load path curve we use an iterative method, which was described by Zweifel (1960) and developed especially for fixed-anchored suspension ropes. This makes the mathematics much more demanding, but leads to more accurate and realistic results. Since there are no current models which describe the installation costs with adequate accuracy, the solution sought is the one which has the minimum number of supports as the first priority and minimises the support height as the second priority (Figure 2). The presented method is the first which starts from a fixed-anchored supporting rope and at the same time identifies the mathematically optimal support layout. In contrast to methods that assume a weight-tensioned suspension rope, this approach achieves more realistic solutions with longer spans and lower support heights, which ultimately leads to lower installation costs. Background information on rope mechanics and calculation methods is documented in Bont and Heinimann (2012). License: GNU General Public License, Version 2 or newer. Literature: Bont, L., & Heinimann, H. R. (2012). Optimum geometric layout of a single cable road. European Journal of Forest Research, 131(5), 1439-1448.
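The solution-ranking rule described above (fewest supports first, then lowest support height) amounts to a lexicographic comparison of feasible layouts; the layout records below are hypothetical.

```python
# Sketch of the lexicographic selection rule: among feasible layouts,
# prefer the minimum number of supports, then the minimum total support
# height. The candidate layouts here are hypothetical.
candidates = [
    {"supports": 3, "total_height_m": 24.0},
    {"supports": 2, "total_height_m": 31.0},
    {"supports": 2, "total_height_m": 27.5},
]

# Tuples compare element by element, giving the two-level priority.
best = min(candidates, key=lambda c: (c["supports"], c["total_height_m"]))
print(best)  # {'supports': 2, 'total_height_m': 27.5}
```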
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
At the onset of the full reopening, in spring 2023, of the Difficult-to-Return Zone of northeastern Japan following the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident of March 2011, several spatial layers were regrouped and compiled to facilitate environmental studies dealing with the redistribution of radiocesium fallout across landscapes.
The current dataset is composed of 23 shapefiles, including the delineations of different spatial zones (Intensive Contamination Survey Areas – ICAs, Special Decontamination Zone – SDZ, Difficult-to-Return Zone – DTRZ, and the FDNPP location) (Evrard et al., 2019), municipalities where mushroom consumption restrictions were enforced (restricted and partially lifted restrictions), river hydrographic networks and their respective drainage areas (Mano, Niida, Ota, Takase, and Ukedo), dam reservoirs and drainage areas (Mano, Ogaki, Takanokura, and Yokokawa), and multiple administrative delineations in Japan (whole-Japan administrative boundaries, prefectures, and municipalities) (GIS, 2016), as well as one raster file of the reconstruction of the initial 137Cs fallout across eastern Japan (from Kato et al., 2019).
The current dataset provides support to a publication submitted to the SOIL journal.
All map processing was carried out using QGIS 3.26.0 (QGIS, 2022) in the WGS 84 coordinate reference system (EPSG:4326).
The 137Cs fallout raster (in Bq m-2, decay-corrected to July 2011) was generated from the point grid of Kato et al. (2019). A total of 126 tiles (0.25 x 0.25 degree) were generated by Inverse Distance Weighted (IDW) interpolation using the 'IDW interpolation' tool with the following settings: distance coefficient P = 1.0 and pixel size (x and y) = 0.0015 degree. The tiles were then merged into a single raster using the 'Merge' tool. The initial point grid footprint was manually delineated to define the spatial applicability zone of the airborne survey. A buffer zone corresponding to half plus 10% of the longest distance between two airborne points (x = 0.002, y = 0.003), i.e. 0.0017 degree, was generated using the 'Buffer' tool. The merged raster was then cut to the footprint of the buffer zone using the 'Clip raster by mask layer' tool. A single-band pseudo-colour scale is provided and displays pixels with a value above 1000 Bq m-2 (equivalent to the global background).
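The IDW interpolation and the buffer-distance rule described above can be sketched in plain Python. This is a minimal illustration of the method with the stated P = 1.0, not the QGIS tool itself; the sample points and query location are invented for the example.

```python
# Sketch of Inverse Distance Weighted (IDW) interpolation with distance
# coefficient P = 1.0, as used for the 137Cs fallout raster.
# The samples below are hypothetical, not actual airborne-survey data.

import math

def idw(points, x, y, p=1.0):
    """Interpolate a value at (x, y) from (px, py, value) samples,
    weighting each sample by 1 / distance**p."""
    num, den = 0.0, 0.0
    for px, py, v in points:
        d = math.hypot(x - px, y - py)
        if d == 0.0:  # query coincides with a sample point
            return v
        w = 1.0 / d ** p
        num += w * v
        den += w
    return num / den

# Invented samples (longitude deg, latitude deg, Bq m-2)
samples = [(0.000, 0.000, 1200.0), (0.002, 0.000, 800.0), (0.000, 0.003, 1000.0)]
estimate = idw(samples, 0.001, 0.001, p=1.0)

# Buffer distance: half plus 10% of the longest spacing between airborne
# points (0.003 degree), giving ~0.0017 degree as stated in the text.
longest_spacing = max(0.002, 0.003)
buffer_deg = 0.5 * longest_spacing * 1.1  # 0.00165 degree
```

With P = 1.0 the weights decay linearly with distance, so distant samples retain more influence than with the more common P = 2.0.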