Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A continuous dataset of Land Surface Temperature (LST) is vital for climatological and environmental studies. LST can be regarded as a combination of seasonal mean temperature (climatology) and daily anomaly, which is attributed mainly to the synoptic-scale atmospheric circulation (weather). To reproduce LST in cloudy pixels, time series (2002-2019) of cloud-free 1km MODIS Aqua LST images were generated and the pixel-based seasonality (climatology) was calculated using temporal Fourier analysis. To add the anomaly, we used the NCEP Climate Forecast System Version 2 (CFSv2) model, which provides air surface temperature under both cloudy and clear sky conditions. The combination of the two sources of data enables the estimation of LST in cloudy pixels.
Data structure
The dataset consists of geo-located continuous LST (Day, Night and Daily), in which LST values are reconstructed for cloudy pixels. The spatial domain of the data is the Eastern Mediterranean, at the resolution of the MYD11A1 product (~1 km). Data are stored in GeoTIFF format as signed 16-bit integers using a scale factor of 0.02, with one file per day, each containing four bands (Night LST Cont., Day LST Cont., Daily Average LST Cont., QA). The QA band stores information about the presence of cloud in the original pixel. If both the original Day LST and Night LST were NoData due to clouds, the QA value is 0. A QA value of 1 indicates NoData in the original Day LST, 2 indicates NoData in the original Night LST, and 3 indicates valid data for both day and night. File names follow this naming convention: LST_
The file LSTcont_validation.tif contains the validation dataset, in which the MAE, RMSE, and Pearson correlation (r) of the validation against true LST are provided. Data are stored in GeoTIFF format as signed 32-bit floats, with the same spatial extent and resolution as the LSTcont dataset, in a single file containing three bands (MAE, RMSE, and Pearson_r). The same data, with the same structure, are also provided in NetCDF format.
How to use
The data can be read in various programming languages such as Python, IDL, and MATLAB, and can be visualized in GIS programs such as ArcGIS or QGIS. A short animation demonstrating how to visualize the data using the open-source QGIS program is available in the project's GitHub code repository.
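For example, the daily GeoTIFFs can be read in Python with the rasterio package. The sketch below is illustrative only: the file name is hypothetical (see the naming convention above), and it simply applies the 0.02 scale factor and the QA convention described in the data structure section.
import numpy as np
import rasterio

# Hypothetical daily file name; see the naming convention above
with rasterio.open("LST_2019_01_01.tif") as src:
    night, day, daily, qa = src.read()  # the four bands described above

# Signed 16-bit integers are converted to Kelvin with the 0.02 scale factor
day_lst_kelvin = day.astype("float32") * 0.02

# QA values 0 and 1 mark pixels where the original Day LST was cloud-masked,
# i.e. pixels reconstructed by LSTcont
reconstructed_day = np.isin(qa, [0, 1])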
Web application
The *LSTcont* web application (https://shilosh.users.earthengine.app/view/continuous-lst) is an Earth Engine app. The interface includes a map and a date picker. The user can select a date (July 2002 – present) and visualize *LSTcont* for that day anywhere on the globe. The web app calculates *LSTcont* on the fly based on ready-made global climatological files. *LSTcont* can be downloaded as a GeoTIFF with 5 bands in this order: Mean daily LSTcont, Night original LST, Night LSTcont, Day original LST, Day LSTcont.
Code availability
Datasets for other regions can easily be produced on the GEE platform with the code provided in the project's GitHub code repository.
https://datacatalog.worldbank.org/public-licenses?fragment=cc
Developed by SOLARGIS and provided by the Global Solar Atlas (GSA), this data resource contains terrain elevation above sea level (ELE) in [m a.s.l.] covering the globe. Data is provided in a geographic spatial reference (EPSG:4326). The resolution (pixel size) of solar resource data (GHI, DIF, GTI, DNI) is 9 arcsec (nominally 250 m), PVOUT and TEMP 30 arcsec (nominally 1 km) and OPTA 2 arcmin (nominally 4 km).
The data is hyperlinked under 'resources' with the following characteristics:
ELE - GISdata (GeoTIFF)
Data format: GEOTIFF
File size : 826.8 MB
There are two temporal representations of solar resource and PVOUT data available:
• Longterm yearly/monthly average of daily totals (LTAym_AvgDailyTotals)
• Longterm average of yearly/monthly totals (LTAym_YearlyMonthlyTotals)
Both types of data are equivalent; select the summarization you prefer. The relation between datasets is described by simple equations:
• LTAy_YearlyTotals = LTAy_DailyTotals * 365.25
• LTAy_MonthlyTotals = LTAy_DailyTotals * Number_of_Days_In_The_Month
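As a quick illustration of these relations (a minimal sketch in Python; the daily-total value below is a placeholder, not taken from the dataset):
import calendar

lta_daily_total = 5.2  # placeholder long-term average daily total, e.g. GHI in kWh/m2
lta_yearly_total = lta_daily_total * 365.25
# multiply by the number of days in the month, e.g. June 2023
lta_monthly_total = lta_daily_total * calendar.monthrange(2023, 6)[1]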
*For individual country or regional data downloads please see: https://globalsolaratlas.info/download (use the drop-down menu to select country or region of interest)
*For data provided in AAIGrid please see: https://globalsolaratlas.info/download/world.
For more information and terms of use, please read the metadata provided in PDF and XML format for each data layer in the download file. For other data formats, resolutions, or time aggregations, please visit the Solargis website. Data can be used for visualization, further processing, and geo-analysis in all mainstream GIS software with raster data processing capabilities (such as open-source QGIS, commercial Esri ArcGIS products, and others).
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The High Resolution Digital Elevation Model (HRDEM) product is derived from airborne LiDAR data (mainly in the south) and satellite images in the north. The complete coverage of the Canadian territory is gradually being established. It includes a Digital Terrain Model (DTM), a Digital Surface Model (DSM) and other derived data. For DTM datasets, derived data available are slope, aspect, shaded relief, color relief and color shaded relief maps, and for DSM datasets, derived data available are shaded relief, color relief and color shaded relief maps. The productive forest line is used to separate the northern and the southern parts of the country. This line is approximate and may change based on requirements. In the southern part of the country (south of the productive forest line), DTM and DSM datasets are generated from airborne LiDAR data. They are offered at a 1 m or 2 m resolution and projected to the UTM NAD83 (CSRS) coordinate system and the corresponding zones. The datasets at a 1 m resolution cover an area of 10 km x 10 km while datasets at a 2 m resolution cover an area of 20 km by 20 km. In the northern part of the country (north of the productive forest line), due to the low density of vegetation and infrastructure, only DSM datasets are generally generated. Most of these datasets have optical digital images as their source data. They are generated at a 2 m resolution using the Polar Stereographic North coordinate system referenced to the WGS84 horizontal datum or the UTM NAD83 (CSRS) coordinate system. Each dataset covers an area of 50 km by 50 km. For some locations in the north, DSM and DTM datasets can also be generated from airborne LiDAR data. In this case, these products are generated with the same specifications as those generated from airborne LiDAR in the southern part of the country. The HRDEM product is referenced to the Canadian Geodetic Vertical Datum of 2013 (CGVD2013), which is now the reference standard for heights across Canada. Source data for HRDEM datasets is acquired through multiple projects with different partners. Since data is being acquired by project, there is no integration or edgematching done between projects. The tiles are aligned within each project. The product High Resolution Digital Elevation Model (HRDEM) is part of the CanElevation Series created in support of the National Elevation Data Strategy implemented by NRCan. Collaboration is a key factor in the success of the National Elevation Data Strategy. Refer to the “Supporting Document” section to access the list of the different partners including links to their respective data.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Related article: Bergroth, C., Järv, O., Tenkanen, H., Manninen, M., Toivonen, T., 2022. A 24-hour population distribution dataset based on mobile phone data from Helsinki Metropolitan Area, Finland. Scientific Data 9, 39.
In this dataset:
We present temporally dynamic population distribution data from the Helsinki Metropolitan Area, Finland, at the level of 250 m by 250 m statistical grid cells. Three hourly population distribution datasets are provided for regular workdays (Mon – Thu), Saturdays and Sundays. The data are based on aggregated mobile phone data collected by the biggest mobile network operator in Finland. Mobile phone data are assigned to statistical grid cells using an advanced dasymetric interpolation method based on ancillary data about land cover, buildings and a time use survey. The data were validated against population register data from Statistics Finland for night-time hours and against a daytime workplace registry. The resulting 24-hour population data can be used to reveal the temporal dynamics of the city and examine population variations relevant to, for instance, spatial accessibility analyses, crisis management and planning.
Please cite this dataset as:
Bergroth, C., Järv, O., Tenkanen, H., Manninen, M., Toivonen, T., 2022. A 24-hour population distribution dataset based on mobile phone data from Helsinki Metropolitan Area, Finland. Scientific Data 9, 39. https://doi.org/10.1038/s41597-021-01113-4
Organization of data
The dataset is packaged into a single zip file, Helsinki_dynpop_matrix.zip, which contains the following files:
HMA_Dynamic_population_24H_workdays.csv represents the dynamic population for an average workday in the study area.
HMA_Dynamic_population_24H_sat.csv represents the dynamic population for an average Saturday in the study area.
HMA_Dynamic_population_24H_sun.csv represents the dynamic population for an average Sunday in the study area.
target_zones_grid250m_EPSG3067.geojson represents the statistical grid in ETRS89/ETRS-TM35FIN projection that can be used to visualize the data on a map using e.g. QGIS.
Column names
YKR_ID : a unique identifier for each statistical grid cell (n=13,231). The identifier is compatible with the statistical YKR grid cell data by Statistics Finland and Finnish Environment Institute.
H0, H1 ... H23 : Each field represents the proportional distribution of the total population in the study area between grid cells during a one-hour period. In total, 24 fields are formatted as “Hx”, where x stands for the hour of the day (values ranging from 0-23). For example, H0 stands for the first hour of the day: 00:00 - 00:59. The sum of all cell values for each field equals 100 (i.e. 100% of the total population for each one-hour period).
In order to visualize the data on a map, the result tables can be joined with the target_zones_grid250m_EPSG3067.geojson data. The data can be joined by using the field YKR_ID as a common key between the datasets.
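A minimal sketch of this join in Python, assuming the geopandas and pandas packages are installed (file names as listed above):
import geopandas as gpd
import pandas as pd

grid = gpd.read_file("target_zones_grid250m_EPSG3067.geojson")
workdays = pd.read_csv("HMA_Dynamic_population_24H_workdays.csv")

# YKR_ID is the common key between the grid and the hourly population tables
joined = grid.merge(workdays, on="YKR_ID")

# e.g. map the population share during 08:00-08:59
# joined.plot(column="H8")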
License Creative Commons Attribution 4.0 International.
Related datasets
Järv, Olle; Tenkanen, Henrikki & Toivonen, Tuuli. (2017). Multi-temporal function-based dasymetric interpolation tool for mobile phone data. Zenodo. https://doi.org/10.5281/zenodo.252612
Tenkanen, Henrikki, & Toivonen, Tuuli. (2019). Helsinki Region Travel Time Matrix [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3247564
Open Government Licence - Canada 2.0 https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
CanVec contains more than 60 topographic feature classes organized into 8 themes: Transport Features, Administrative Features, Hydro Features, Land Features, Manmade Features, Elevation Features, Resource Management Features and Toponymic Features. This multiscale product originates from the best available geospatial data sources covering Canadian territory. It offers quality topographic information in vector format complying with international geomatics standards. CanVec can be used in Web Map Services (WMS) and geographic information systems (GIS) applications and used to produce thematic maps. Because of its many attributes, CanVec allows for extensive spatial analysis.
Related Products:
Constructions and Land Use in Canada - CanVec Series - Manmade Features
Lakes, Rivers and Glaciers in Canada - CanVec Series - Hydrographic Features
Administrative Boundaries in Canada - CanVec Series - Administrative Features
Mines, Energy and Communication Networks in Canada - CanVec Series - Resources Management Features
Wooded Areas, Saturated Soils and Landscape in Canada - CanVec Series - Land Features
Transport Networks in Canada - CanVec Series - Transport Features
Elevation in Canada - CanVec Series - Elevation Features
Map Labels - CanVec Series - Toponymic Features
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for: Bedding scale correlation on Mars in western Arabia Terra
A.M. Annex et al.
Data Product Overview
This repository contains all source data for the publication. Below is a description of each general data product type, software that can load the data, and a list of the file names along with the short description of the data product.
HiRISE Digital Elevation Models (DEMs).
HiRISE DEMs produced using the Ames Stereo Pipeline are in GeoTIFF format ending with ‘*X_0_DEM-adj.tif’, where the “X” prefix denotes the spatial resolution of the data product in meters. GeoTIFF files can be read by free GIS software such as QGIS.
HiRISE map-projected imagery (DRGs).
Map-projected HiRISE images produced using the Ames Stereo Pipeline are in GeoTIFF format ending with ‘*0_Y_DRG-cog.tif’, where the “Y” prefix denotes the spatial resolution of the data product in centimeters. GeoTIFF files can be read by free GIS software such as QGIS. The DRG files are formatted as cloud-optimized GeoTIFFs (COGs) for enhanced compression and ease of use.
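As an illustration, a DEM/DRG pair can be read in Python with the rasterio package; the file names below are hypothetical placeholders following the patterns described above:
import rasterio

with rasterio.open("example_1_0_DEM-adj.tif") as dem_src:   # hypothetical DEM file
    dem = dem_src.read(1, masked=True)  # elevation values in meters
    crs = dem_src.crs
with rasterio.open("example_0_25_DRG-cog.tif") as drg_src:  # hypothetical DRG file
    drg = drg_src.read(1, masked=True)  # map-projected image values

print(crs, dem.shape, drg.shape)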
3D Topography files (.ply).
Triangular mesh versions of the HiRISE/CTX topography data used for 3D figures, in “.ply” format. Meshes are greatly geometrically simplified from the source files. Topography files can be loaded in a variety of open-source tools such as ParaView and Meshlab. Textures can be applied using embedded texture coordinates.
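For scripted access (rather than ParaView or Meshlab), a mesh can also be loaded in Python with the open-source trimesh package; the file name is hypothetical:
import trimesh

mesh = trimesh.load("example_topography.ply")
print(mesh.vertices.shape, mesh.faces.shape)  # vertex and face counts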
3D Geological Model outputs (.vtk)
VTK 3D file format files of model output over the spatial domain of each study site. VTK files can be loaded by ParaView open source software. The “block” files contain the model evaluation over a regular grid over the model extent. The “surfaces” files contain just the bedding surfaces as interpolated from the “block” files using the marching cubes algorithm.
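The VTK files can likewise be inspected in Python with the pyvista package (a wrapper around VTK), as a lightweight alternative to ParaView; the file names below are hypothetical:
import pyvista as pv

block = pv.read("example_block.vtk")        # model evaluation on a regular grid
surfaces = pv.read("example_surfaces.vtk")  # bedding surfaces from marching cubes
surfaces.plot()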
Geological Model geologic maps (geologic_map.tif).
Geologic maps from geological models are standard geotiffs readable by conventional GIS software. The maximum value for each geologic map is the “no-data” value for the map. Geologic maps are calculated at a lower resolution than the topography data for storage efficiency.
Beds Geopackage File (.gpkg).
Geopackage vector data file containing all mapped layers and associated metadata including dip corrected bed thickness as well as WKB encoded 3D linestrings representing the sampled topography data to which the bedding orientations were fit. Geopackage files can be read using GIS software like QGIS and ArcGIS as well as the OGR/GDAL suite. A full description of each column in the file is provided below.
Column (Type): Description
uuid (String): unique identifier
stratum_order (Real): 0-indexed bed order
section (Real): section number
layer_id (Real): bed number/index
layer_id_bk (Real): unused backup bed number/index
source_raster (String): DEM file path used
raster (String): DEM file name
gsd (Real): ground sampling distance for DEM
wkn (String): well known name for DEM
rtype (String): raster type
minx (Real): minimum x position of trace in DEM CRS
miny (Real): minimum y position of trace in DEM CRS
maxx (Real): maximum x position of trace in DEM CRS
maxy (Real): maximum y position of trace in DEM CRS
method (String): internal interpolation method
sl (Real): slope in degrees
az (Real): azimuth in degrees
error (Real): maximum error ellipse angle
stdr (Real): standard deviation of the residuals
semr (Real): standard error of the residuals
X (Real): mean x position in CRS
Y (Real): mean y position in CRS
Z (Real): mean z position in CRS
b1 (Real): plane coefficient 1
b2 (Real): plane coefficient 2
b3 (Real): plane coefficient 3
b1_se (Real): standard error plane coefficient 1
b2_se (Real): standard error plane coefficient 2
b3_se (Real): standard error plane coefficient 3
b1_ci_low (Real): plane coefficient 1 95% confidence interval low
b1_ci_high (Real): plane coefficient 1 95% confidence interval high
b2_ci_low (Real): plane coefficient 2 95% confidence interval low
b2_ci_high (Real): plane coefficient 2 95% confidence interval high
b3_ci_low (Real): plane coefficient 3 95% confidence interval low
b3_ci_high (Real): plane coefficient 3 95% confidence interval high
pca_ev_1 (Real): PCA explained variance ratio, PC 1
pca_ev_2 (Real): PCA explained variance ratio, PC 2
pca_ev_3 (Real): PCA explained variance ratio, PC 3
condition_number (Real): condition number for regression
n (Integer64): number of data points used in regression
rls (Integer(Boolean)): unused flag
demeaned_regressions (Integer(Boolean)): centering indicator
meansl (Real): mean section slope
meanaz (Real): mean section azimuth
angular_error (Real): angular error for section
mB_1 (Real): mean plane coefficient 1 for section
mB_2 (Real): mean plane coefficient 2 for section
mB_3 (Real): mean plane coefficient 3 for section
R (Real): mean plane normal orientation vector magnitude
num_valid (Integer64): number of valid planes in section
meanc (Real): mean stratigraphic position
medianc (Real): median stratigraphic position
stdc (Real): standard deviation of stratigraphic index
stec (Real): standard error of stratigraphic index
was_monotonic_increasing_layer_id (Integer(Boolean)): monotonic layer_id after projection to stratigraphic index
was_monotonic_increasing_meanc (Integer(Boolean)): monotonic meanc after projection to stratigraphic index
was_monotonic_increasing_z (Integer(Boolean)): monotonic z increasing after projection to stratigraphic index
meanc_l3sigma_std (Real): lower 3-sigma meanc standard deviation
meanc_u3sigma_std (Real): upper 3-sigma meanc standard deviation
meanc_l2sigma_sem (Real): lower 3-sigma meanc standard error
meanc_u2sigma_sem (Real): upper 3-sigma meanc standard error
thickness (Real): difference in meanc
thickness_fromz (Real): difference in Z value
dip_cor (Real): dip correction
dc_thick (Real): thickness after dip correction
dc_thick_fromz (Real): z thickness after dip correction
dc_thick_dev (Integer(Boolean)): dc_thick <= total mean dc_thick
dc_thick_fromz_dev (Integer(Boolean)): dc_thick <= total mean dc_thick_fromz
thickness_fromz_dev (Integer(Boolean)): dc_thick <= total mean thickness_fromz
dc_thick_dev_bg (Integer(Boolean)): dc_thick <= section mean dc_thick
dc_thick_fromz_dev_bg (Integer(Boolean)): dc_thick <= section mean dc_thick_fromz
thickness_fromz_dev_bg (Integer(Boolean)): dc_thick <= section mean thickness_fromz
slr (Real): slope in radians
azr (Real): azimuth in radians
meanslr (Real): mean slope in radians
meanazr (Real): mean azimuth in radians
angular_error_r (Real): angular error of section in radians
pca_ev_1_ok (Integer(Boolean)): pca_ev_1 < 99.5%
pca_ev_2_3_ratio (Real): pca_ev_2/pca_ev_3
pca_ev_2_3_ratio_ok (Integer(Boolean)): pca_ev_2_3_ratio > 15
xyz_wkb_hex (String): hex-encoded WKB geometry for all points used in regression
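As a sketch of reading this file in Python (assuming geopandas and shapely are installed; the file name below is a placeholder), the xyz_wkb_hex column can be decoded into 3D geometries:
import geopandas as gpd
from shapely import wkb

beds = gpd.read_file("beds.gpkg")  # placeholder file name
# Decode the hex-encoded WKB 3D linestrings of the sampled topography points
beds["xyz_points"] = beds["xyz_wkb_hex"].apply(lambda h: wkb.loads(h, hex=True))
print(beds[["layer_id", "sl", "az", "dc_thick"]].head())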
Geological Model input files (.gpkg).
Four geopackage (.gpkg) files represent the input dataset for the geological models, one per study site as specified in the name of the file. The files contain most of the columns described above in the Beds geopackage file, with the following additional columns. The final seven columns (azimuth, dip, polarity, formation, X, Y, Z) constitute the actual parameters used by the geological model (GemPy).
Column (Type): Description
azimuth_mean (String): Mean section dip azimuth
azimuth_indi (Real): Individual bed azimuth
azimuth (Real): Azimuth of trace used by the geological model
dip (Real): Dip for the trace used by the geological model
polarity (Real): Polarity of the dip vector normal vector
formation (String): String representation of layer_id required for GemPy models
X (Real): X position in the CRS of the sampled point on the trace
Y (Real): Y position in the CRS of the sampled point on the trace
Z (Real): Z position in the CRS of the sampled point on the trace
Stratigraphic Column Files (.gpkg).
Stratigraphic columns computed from the Geological Models come in three kinds of Geopackage vector files indicated by the postfixes _sc, rbsc, and rbssc. File names include the wkn site name.
sc (_sc.gpkg).
Geopackage vector data file containing measured bed thicknesses from the Geological Model joined with the corresponding Beds Geopackage file, partially subsetted. The columns largely overlap with the list above for the Beds Geopackage, but with the following additions:
Column (Type): Description
X (Real): X position of thickness measurement
Y (Real): Y position of thickness measurement
Z (Real): Z position of thickness measurement
formation (String): Model-required string representation of bed index
bed thickness (m) (Real): difference of bed elevations
azimuths (Real): azimuth as measured from model in degrees
dip_degrees (Real): dip as measured from model in degrees
Notice: this is the latest Heat Island Anomalies image service. This layer contains the relative degrees Fahrenheit difference between any given pixel and the mean heat value for the city in which it is located, for every city in the contiguous United States, Alaska, Hawaii, and Puerto Rico. This 30-meter raster was derived from Landsat 8 imagery band 10 (ground-level thermal sensor) from the summer of 2023.
To explore previous versions of the data, visit the links below:
Full Range Heat Anomalies - USA 2022
Full Range Heat Anomalies - USA 2021
Full Range Heat Anomalies - USA 2020
Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions.
The purpose of this layer is to show where certain areas of cities are hotter or cooler than the average temperature for that same city as a whole. This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same. We believe that this is useful at the national level, and for cities that don’t have the ability to conduct their own hyper local temperature survey. Where local data is available, it may be more accurate than this dataset.
Dataset Summary
This dataset was developed using proprietary Python code developed at The Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.
In order to click on the image service and see the raw pixel values in a map viewer, you must be signed in to ArcGIS Online, then Enable Pop-Ups and Configure Pop-Ups.
Using the Urban Heat Island (UHI) Image Services
The data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro.
In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms “tpl” and “uhi”. The results returned will be the UHI image services. Right click on a result and select “Add to current map” from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, and select “None”, then “OK”.
Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used):
A typical operation at this point is to clip out your area of interest. To do this, add your polygon shapefile or feature class to the map view, and use the Clip Raster tool to export your area of interest as a geoTIFF raster (file extension ".tif"). In the environments tab for the Clip Raster tool, click the dropdown for "Extent" and select "Same as Layer:", and select the name of your polygon. If you then need to convert the output raster to a polygon shapefile or feature class, run the Raster to Polygon tool, and select "Value" as the field.
Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
EPA’s Heat Island Resource Center
Dr. Ladd Keith, University of Arizona
Dr. Ben McMahan, University of Arizona
Dr. Jeremy Hoffman, Science Museum of Virginia
Dr. Hunter Jones, NOAA
Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency
Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so The Trust for Public Land checked a statistically significant sample size for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). The Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was generated by the Remote Sensing Group of the TU Wien Department of Geodesy and Geoinformation (https://mrs.geo.tuwien.ac.at/), within a dedicated project by the European Space Agency (ESA). Rights are reserved with ESA. Open use is granted under the CC BY 4.0 license.
With this dataset publication, we open up a new perspective on Earth's land surface, providing a normalised microwave backscatter map from spaceborne Synthetic Aperture Radar (SAR) observations. The Sentinel-1 Global Backscatter Model (S1GBM) describes Earth for the period 2016-17 by the mean C-band radar cross section in VV- and VH-polarization at a 10 m sampling, giving a high-quality impression of surface structures and patterns.
At TU Wien, we processed 0.5 million Sentinel-1 scenes totaling 1.1 PB and performed semi-automatic quality curation and backscatter harmonisation related to orbit geometry effects. The overall mosaic quality exceeds that of the (few) existing datasets, with minimised imprinting from orbit discontinuities and successful angle normalisation in large parts of the world. Supporting the design and verification of upcoming radar sensors, the obtained S1GBM data potentially also serve land cover classification and determination of vegetation and soil states, as well as water body mapping.
We invite developers from the broader user community to exploit this novel data resource and to integrate S1GBM parameters in models for various variables of land cover, soil composition, or vegetation structure.
Please refer to our peer-reviewed article at TODO: LINK TO BE PROVIDED for details, generation methods, and an in-depth dataset analysis. In this publication, we demonstrate – as an example of the S1GBM's potential use – the mapping of permanent water bodies and evaluate the results against the Global Surface Water (GSW) benchmark.
Dataset Record
The VV and VH mosaics are sampled at 10 m pixel spacing, georeferenced to the Equi7Grid and divided into six continental zones (Africa, Asia, Europe, North America, Oceania, South America), which are further divided into square tiles of 100 km extent ("T1" tiles). With this setup, the S1GBM consists of 16071 tiles over six continents, for VV and VH each, totaling a compressed data volume of 2.67 TB.
The tiles' file format is LZW-compressed GeoTIFF holding 16-bit integer values, with tagged metadata on encoding and georeference. Compatibility with common geographic information systems such as QGIS or ArcGIS, and geodata libraries such as GDAL, is given.
In this repository, we provide each mosaic as tiles organised in a folder structure per continent. With this, twelve zipped dataset collections (one per continent and polarisation) are available for download.
Web-Based Data Viewer
In addition to the data provision here, a web-based data viewer is set up at the facilities of the Earth Observation Data Centre (EODC) under http://s1map.eodc.eu/. It offers an intuitive pan-and-zoom exploration of the full S1GBM VV and VH mosaics. It has been designed to quickly browse the S1GBM, providing an easy and direct visual impression of the mosaics.
Code Availability
We encourage users to use the open-source Python package yeoda, a datacube storage access layer that offers functions to read, write, search, filter, split and load data from the S1GBM datacube. The yeoda package is openly accessible on GitHub at https://github.com/TUW-GEO/yeoda. Furthermore, for the usage of the Equi7Grid we provide data and tools via the Python package available on GitHub at https://github.com/TUW-GEO/Equi7Grid. More details on the grid reference can be found in https://www.sciencedirect.com/science/article/pii/S0098300414001629.
Acknowledgements
This study was partly funded by the project "Development of a Global Sentinel-1 Land Surface Backscatter Model", ESA Contract No. 4000122681/17/NL/MP for the European Union Copernicus Programme. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC). We further would like to thank our colleagues at TU Wien and EODC for supporting us on technical tasks to cope with such a large and complex data set. Last but not least, we appreciate the kind assistance and swift support of the colleagues from the TU Wien Center for Research Data Management.
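Independently of the tools above, a single tile can also be inspected directly in Python with rasterio. This is only a sketch: the file name is hypothetical, and it assumes the integer encoding is declared via standard GeoTIFF scale/offset tags as part of the tagged metadata mentioned in the Dataset Record.
import rasterio

with rasterio.open("S1GBM_VV_tile.tif") as src:  # hypothetical tile name
    raw = src.read(1, masked=True)               # 16-bit integer backscatter values
    scale = src.scales[0] or 1.0                 # decoding factors from the tags, if present
    offset = src.offsets[0] or 0.0
backscatter = raw.astype("float32") * scale + offset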
Notice: this is not the latest Heat Island Severity image service. For 2023 data, visit https://tpl.maps.arcgis.com/home/item.html?id=db5bdb0f0c8c4b85b8270ec67448a0b6. This layer contains the relative heat severity for every pixel for every city in the contiguous United States. This 30-meter raster was derived from Landsat 8 imagery band 10 (ground-level thermal sensor) from the summer of 2021, patched with data from 2020 where necessary.
Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions.
The purpose of this layer is to show where certain areas of cities are hotter than the average temperature for that same city as a whole. Severity is measured on a scale of 1 to 5, with 1 being a relatively mild heat area (slightly above the mean for the city), and 5 being a severe heat area (significantly above the mean for the city). The absolute heat above mean values are classified into these 5 classes using the Jenks Natural Breaks classification method, which seeks to reduce the variance within classes and maximize the variance between classes. Knowing where areas of high heat are located can help a city government plan for mitigation strategies.
This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same. We believe that this is useful at the national level, and for cities that don’t have the ability to conduct their own hyper local temperature survey. Where local data is available, it may be more accurate than this dataset.
Dataset Summary
This dataset was developed using proprietary Python code developed at The Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.
What can you do with this layer?
This layer has query, identify, and export image services available. Since it is served as an image service, it is not necessary to download the data; the service itself is data that can be used directly in any Esri geoprocessing tool that accepts raster data as input.
In order to click on the image service and see the raw pixel values in a map viewer, you must be signed in to ArcGIS Online, then Enable Pop-Ups and Configure Pop-Ups.
Using the Urban Heat Island (UHI) Image Services
The data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro.
In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms “tpl” and “uhi”. The results returned will be the UHI image services. Right click on a result and select “Add to current map” from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, and select “None”, then “OK”.
Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used):
Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
EPA’s Heat Island Resource Center
Dr. Ladd Keith, University of Arizona
Dr. Ben McMahan, University of Arizona
Dr. Jeremy Hoffman, Science Museum of Virginia
Dr. Hunter Jones, NOAA
Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency
Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so The Trust for Public Land checked a statistically significant sample size for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). The Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Digital Cadastre is the spatial representation of every current parcel of land in Queensland, and its legal Lot on Plan description and relevant attributes. It provides the map base for systems dealing with land-related information. The Digital Cadastre is considered to be the point of truth for the graphical representation of property boundaries. It is not the point of truth for the legal property boundary or related attribute information; that will always be the plan of survey or the related titling information and administrative data sets. This data is updated weekly on Sunday.
Data dictionary: https://www.publications.qld.gov.au/dataset/queensland-digital-cadastral-database-supporting-documents/resource/b59bb1a1-3818-4754-8dc4-3669f0ec3f8b
Spatial cadastre accuracy map: https://www.publications.qld.gov.au/dataset/queensland-digital-cadastral-database-supporting-documents/resource/d6f029ad-b3a4-428b-bcf1-2f7c7326132b
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
Python tools to read, generate, and visualize the dataset,
3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
Benjamin Sertolli: Major contributions to our DevKit during his master thesis
Niels Maier: Measurement campaign for data collection and data preparation
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
Map items are stored as lists of items in JSON format. In particular, we provide:
traffic signs,
traffic lights,
pole-like objects,
construction site locations,
construction site obstacles (point-like such as cones, and line-like such as fences),
line-shaped markings (solid, dashed, etc.),
polygon-shaped markings (arrows, stop lines, symbols, etc.),
lanes (ordinary and temporary),
relations between elements (only for construction sites, e.g., sign to lane association).
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation for each tile as a polygon on the map. You can view the respective tile definition using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a “shape file” (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information, provided in a different format used in our Python API.
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
x-coordinates: 4 byte integer
y-coordinates: 4 byte integer
z-coordinates: 4 byte integer
intensity of reflected beams: 2 byte unsigned integer
ground classification flag: 1 byte unsigned integer
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['<i4', '<i4', '<i4', '<u2', 'u1']

with open(file_path, "rb") as fid:  # binary mode
    # First 4 bytes: number of points contained in the file
    num_points = np.fromfile(fid, count=1, dtype='<u4')[0]
    # Read all arrays (stored sequentially: all x-values, then all y-values, and so on)
    for k, t in zip(key_list, type_list):
        pc_dict[k] = np.fromfile(fid, count=num_points, dtype=t)

# Unnormalize to UTM32N coordinates and physical values
pc_dict['x'] = (pc_dict['x'] / 1000) + 500000
pc_dict['y'] = (pc_dict['y'] / 1000) + 5000000
pc_dict['z'] = (pc_dict['z'] / 1000)
pc_dict['intensity'] = pc_dict['intensity'] / 2**16
pc_dict['is_ground'] = pc_dict['is_ground'].astype(np.bool_)
print(pc_dict)

# Center the coordinates for the viewer and visualize with pptk
x_utm = pc_dict['x'] - np.mean(pc_dict['x'])
y_utm = pc_dict['y'] - np.mean(pc_dict['y'])
z_utm = pc_dict['z']
xyz = np.column_stack((x_utm, y_utm, z_utm))
viewer = pptk.viewer(xyz)
viewer.attributes(pc_dict['intensity'])
viewer.set(point_size=0.03)
We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.
These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
OP1 – OP5 cover the majority of the map with 5 trajectories.
RH1 – RH10 cover the majority of the map with 10 trajectories.
Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for: Regional Correlations in the layered deposits of Arabia Terra, Mars
Overview:
This repository contains the map-projected HiRISE Digital Elevation Models (DEMs) and the map-projected HiRISE image for each DEM and for each site in the study. Also contained in the repository is a GeoPackage file (beds_2019_08_28_09_29.gpkg) that contains the dip-corrected bed thickness measurements, longitude and latitude positions, and error information for each bed measured in the study. GeoPackage files supersede shapefiles as a standard geospatial data format and can be opened in a variety of open-source tools, including QGIS, and proprietary tools such as recent versions of ArcGIS. For more information about GeoPackage files, please use https://www.geopackage.org/ as a resource. A more detailed description of the columns in the beds_2019_08_28_09_29.gpkg file is provided below in a dedicated section. Table S1 from the supplementary material is also included as an Excel spreadsheet file (table_s1.xlsx).
HiRISE DEMs and Images:
Each HiRISE DEM, and corresponding map-projected image used in the study are included in this repository as GeoTiff files (ending with .tif). The file names correspond to the combination of the HiRISE Image IDs listed in Table 1 that were used to produce the DEM for the site, with the image with the smallest emission angle (most-nadir) listed first. Files ending with “_align_1-DEM-adj.tif” are the DEM files containing the 1 meter per pixel elevation values, and files ending with “_align_1-DRG.tif” are the corresponding map-projected HiRISE (left) image. Table 1 Image Pairs correspond to filenames in this repository in the following way: In Table 1, Sera Crater corresponds to HiRISE Image Pair: PSP_001902_1890/PSP_002047_1890, which corresponds to files: “PSP_001902_1890_PSP_002047_1890_align_1-DEM-adj.tif” for the DEM file and “PSP_001902_1890_PSP_002047_1890_align_1-DRG.tif” for the map-projected image file. Each site is listed below with the DEM and map-projected image filenames that correspond to the site as listed in Table 1. The DEM and Image files can be opened in a variety of open source tools including QGIS, and proprietary tools such as recent versions of ArcGIS.
· Sera
o DEM: PSP_001902_1890_PSP_002047_1890_align_1-DEM-adj.tif
o Image: PSP_001902_1890_PSP_002047_1890_align_1-DRG.tif
· Banes
o DEM: ESP_013611_1910_ESP_014033_1910_align_1-DEM-adj.tif
o Image: ESP_013611_1910_ESP_014033_1910_align_1-DRG.tif
· Wulai 1
o DEM: ESP_028129_1905_ESP_028195_1905_align_1-DEM-adj.tif
o Image: ESP_028129_1905_ESP_028195_1905_align_1-DRG.tif
· Wulai 2
o DEM: ESP_028129_1905_ESP_028195_1905_align_1-DEM-adj.tif
o Image: ESP_028129_1905_ESP_028195_1905_align_1-DRG.tif
· Jiji
o DEM: ESP_016657_1890_ESP_017013_1890_align_1-DEM-adj.tif
o Image: ESP_016657_1890_ESP_017013_1890_align_1-DRG.tif
· Alofi
o DEM: ESP_051825_1900_ESP_051970_1900_align_1-DEM-adj.tif
o Image: ESP_051825_1900_ESP_051970_1900_align_1-DRG.tif
· Yelapa
o DEM: ESP_015958_1835_ESP_016235_1835_align_1-DEM-adj.tif
o Image: ESP_015958_1835_ESP_016235_1835_align_1-DRG.tif
· Danielson 1
o DEM: PSP_002733_1880_PSP_002878_1880_align_1-DEM-adj.tif
o Image: PSP_002733_1880_PSP_002878_1880_align_1-DRG.tif
· Danielson 2
o DEM: PSP_008205_1880_PSP_008930_1880_align_1-DEM-adj.tif
o Image: PSP_008205_1880_PSP_008930_1880_align_1-DRG.tif
· Firsoff
o DEM: ESP_047184_1820_ESP_039404_1820_align_1-DEM-adj.tif
o Image: ESP_047184_1820_ESP_039404_1820_align_1-DRG.tif
· Kaporo
o DEM: PSP_002363_1800_PSP_002508_1800_align_1-DEM-adj.tif
o Image: PSP_002363_1800_PSP_002508_1800_align_1-DRG.tif
Description of beds_2019_08_28_09_29.gpkg:
The GeoPackage file “beds_2019_08_28_09_29.gpkg” contains the dip corrected bed thickness measurements among other columns described below. The file can be opened in a variety of open source tools including QGIS, and proprietary tools such as recent versions of ArcGIS.
(Column_Name: Description)
sitewkn: Site name corresponding to the bed (i.e. Danielson 1)
section: Section ID of the bed (sections contain multiple beds)
meansl: The mean slope (dip) in degrees for the section
meanaz: The mean azimuth (dip-direction) in degrees for the section
ang_error: Angular error for a section derived from individual azimuths in the section
B_1: Plane coefficient 1 for the section
B_2: Plane coefficient 2 for the section
lon: Longitude of the centroid of the Bed
lat: Latitude of the centroid of the Bed
thickness: Thickness of the bed BEFORE dip correction
dipcor_thick: Dip-corrected bed thickness
lon1: Longitude of the centroid of the lower layer for the bed (each bed has a lower and upper layer)
lon2: Longitude of the centroid of the upper layer for the bed
lat1: Latitude of the centroid of the lower layer for the bed
lat2: Latitude of the centroid of the upper layer for the bed
meanc1: Mean stratigraphic position of the lower layer for the bed
meanc2: Mean stratigraphic position of the upper layer for the bed
uuid1: Universally unique identifier of the lower layer for the bed
uuid2: Universally unique identifier of the upper layer for the bed
stdc1: Standard deviation of the stratigraphic position of the lower layer for the bed
stdc2: Standard deviation of the stratigraphic position of the upper layer for the bed
sl1: Individual Slope (dip) of the lower layer for the bed
sl2: Individual Slope (dip) of the upper layer for the bed
az1: Individual Azimuth (dip-direction) of the lower layer for the bed
az2: Individual Azimuth (dip-direction) of the upper layer for the bed
meanz: Mean elevation of the bed
meanz1: Mean elevation of the lower layer for the bed
meanz2: Mean elevation of the upper layer for the bed
rperr1: Regression error for the plane fit of the lower layer for the bed
rperr2: Regression error for the plane fit of the upper layer for the bed
rpstdr1: Standard deviation of the residuals for the plane fit of the lower layer for the bed
rpstdr2: Standard deviation of the residuals for the plane fit of the upper layer for the bed
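As a short usage sketch (assuming geopandas is installed), the dip-corrected thicknesses can, for example, be averaged per site directly from this file:
import geopandas as gpd

beds = gpd.read_file("beds_2019_08_28_09_29.gpkg")
mean_thickness = beds.groupby("sitewkn")["dipcor_thick"].mean()
print(mean_thickness)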
Notice: this is not the latest Heat Island Anomalies image service. For 2023 data visit https://tpl.maps.arcgis.com/home/item.html?id=e89a556263e04cb9b0b4638253ca8d10. This layer contains the relative degrees Fahrenheit difference between any given pixel and the mean heat value for the city in which it is located, for every city in the contiguous United States, Alaska, Hawaii, and Puerto Rico. This 30-meter raster was derived from Landsat 8 imagery band 10 (ground-level thermal sensor) from the summer of 2022, with patching from summer of 2021 where necessary.
Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions.
The purpose of this layer is to show where certain areas of cities are hotter or cooler than the average temperature for that same city as a whole. This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same. We believe that this is useful at the national level, and for cities that don’t have the ability to conduct their own hyper local temperature survey. Where local data is available, it may be more accurate than this dataset.
Dataset Summary
This dataset was developed using proprietary Python code developed at The Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.
In order to click on the image service and see the raw pixel values in a map viewer, you must be signed in to ArcGIS Online, then Enable Pop-Ups and Configure Pop-Ups.
Using the Urban Heat Island (UHI) Image Services
The data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro.
In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms “tpl” and “uhi”. The results returned will be the UHI image services. Right click on a result and select “Add to current map” from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, and select “None”, then “OK”.
Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used):
A typical operation at this point is to clip out your area of interest. To do this, add your polygon shapefile or feature class to the map view, and use the Clip Raster tool to export your area of interest as a geoTIFF raster (file extension ".tif"). In the environments tab for the Clip Raster tool, click the dropdown for "Extent" and select "Same as Layer:", and select the name of your polygon. If you then need to convert the output raster to a polygon shapefile or feature class, run the Raster to Polygon tool, and select "Value" as the field.
Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
EPA’s Heat Island Resource Center
Dr. Ladd Keith, University of Arizona
Dr. Ben McMahan, University of Arizona
Dr. Jeremy Hoffman, Science Museum of Virginia
Dr. Hunter Jones, NOAA
Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency
Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so The Trust for Public Land checked a statistically significant sample size for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). The Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
Notice: this is not the latest Heat Island Severity image service. For 2023 data, visit https://tpl.maps.arcgis.com/home/item.html?id=db5bdb0f0c8c4b85b8270ec67448a0b6. This layer contains the relative heat severity for every pixel for every city in the United States. This 30-meter raster was derived from Landsat 8 imagery band 10 (ground-level thermal sensor) from the summers of 2018 and 2019.Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable—children, the elderly, and those with preexisting conditions.The purpose of this layer is to show where certain areas of cities are hotter than the average temperature for that same city as a whole. Severity is measured on a scale of 1 to 5, with 1 being a relatively mild heat area (slightly above the mean for the city), and 5 being a severe heat area (significantly above the mean for the city). The absolute heat above mean values are classified into these 5 classes using the Jenks Natural Breaks classification method, which seeks to reduce the variance within classes and maximize the variance between classes. Knowing where areas of high heat are located can help a city government plan for mitigation strategies.This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day, for example, from building shadows moving. The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is strong correlation between surface temperature and air temperature, they are not the same. We believe that this is useful at the national level, and for cities that don’t have the ability to conduct their own hyper local temperature survey. Where local data is available, it may be more accurate than this dataset. Dataset SummaryThis dataset was developed using proprietary Python code developed at The Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.What can you do with this layer?This layer has query, identify, and export image services available. Since it is served as an image service, it is not necessary to download the data; the service itself is data that can be used directly in any Esri geoprocessing tool that accepts raster data as input.Using the Urban Heat Island (UHI) Image ServicesThe data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro or ArcGIS Desktop, or in QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in Pro.In ArcGIS Pro, in a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas. Search on the acronyms “tpl” and “uhi”. 
The results returned will be the UHI image services. Right-click on a result and select "Add to current map" from the context menu. When the image service is added to the map, right-click on it in the map view, and select Properties. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, select "None", then "OK". Now you will have the actual pixel values displayed in the map, and available to any geoprocessing tool that takes a raster as input. Below is a screenshot of ArcGIS Pro with a UHI image service loaded, color ramp removed, and symbology changed back to a yellow-to-red ramp (a classified renderer can also be used).
Other Sources of Heat Island Information
Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
EPA's Heat Island Resource Center
Dr. Ladd Keith, University of Arizona
Dr. Ben McMahan, University of Arizona
Dr. Jeremy Hoffman, Science Museum of Virginia
Dr. Hunter Jones, NOAA
Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency
Disclaimer/Feedback
With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so The Trust for Public Land checked a statistically significant sample for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (see figure below). The Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
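For readers who want to reproduce the severity classification conceptually, here is a minimal sketch (not The Trust for Public Land's proprietary pipeline). It assumes the third-party jenkspy package and a NumPy array named heat_above_mean holding per-pixel "heat above the city mean" values:

import numpy as np
import jenkspy  # third-party package implementing Jenks Natural Breaks

# Hypothetical per-pixel anomaly values (degrees above the citywide mean)
rng = np.random.default_rng(42)
heat_above_mean = rng.gamma(shape=2.0, scale=1.5, size=10_000)

# Jenks Natural Breaks for 5 classes: minimizes within-class variance and
# maximizes between-class variance, as described above
breaks = jenkspy.jenks_breaks(heat_above_mean.tolist(), 5)

# Map each pixel to a severity class 1 (mild) through 5 (severe)
severity = np.digitize(heat_above_mean, bins=breaks[1:-1], right=True) + 1
print("class breaks:", breaks)
print("pixels per class:", np.bincount(severity)[1:])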
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
This travel time matrix records travel times and travel distances for routes between all centroids (N = 13132) of a 250 × 250 m grid over the populated areas in the Helsinki metropolitan area by walking, cycling, public transportation, and private car. Where applicable, the routes have been calculated for different times of the day (rush hour, midday, off-peak) and assuming different physical abilities (such as walking and cycling speeds); see details below.
The grid follows the geometric properties and enumeration of the versatile Yhdyskuntarakenteen seurantajärjestelmä (YKR) grid used in applications across many domains in Finland, and covers the municipalities of Helsinki, Espoo, Kauniainen, and Vantaa in the Finnish capital region.
Data formats
The data is available in multiple formats that cater to different requirements, such as different software environments. All data formats share a common set of columns (see below) and can be used interchangeably.
Helsinki_Travel_Time_Matrix_2023.csv.zst: comma-separated values (CSV) of all data columns, without geometries. This data set contains all routes in one file, and can be filtered by origin or destination according to the analysis at hand. The data records can also be joined to the geometries available below. The file is compressed using the Zstandard algorithm, which many data science libraries (for instance, pandas) support transparently.
Helsinki_Travel_Time_Matrix_2023_travel_times.gpkg.zip: an OGC GeoPackage standard file containing all data columns and the geometries that relate to the destination grid cell. The data set is delivered as a ZIP archive, which many GIS systems and libraries, e.g., GDAL/OGR, QGIS, or geopandas, support natively.
Helsinki_Travel_Time_Matrix_2023_travel_times.csv.zip: a set of 13132 comma-separated value files containing the routes to one destination grid cell each. The files contain all data columns, no geometry, and can be joined to the geometries available below. Filenames of the individual files within the ZIP archive follow the pattern Helsinki_Travel_Time_Matrix_2023_travel_times_to_5787545.csv, where 5787545 is replaced by the to_id by which the rows in the file are grouped. Use the from_id column to join with the geometries from one of the files below (see the short loading sketch after this list).
Geometry, only:
Helsinki_Travel_Time_Matrix_2023_grid.gpkg.zip: an OGC GeoPackage standard file containing the geometries and IDs of the grid used in the analysis. This file can be joined both to the from_id and to_id columns of the data files. The data set is delivered as a ZIP archive, which many GIS systems and libraries, e.g., GDAL/OGR, QGIS, or geopandas, support natively.
Helsinki_Travel_Time_Matrix_2023_grid.shp.zip: an ESRI Shapefile archive containing the geometries and IDs of the grid used in the analysis. This file can be joined both to the from_id and to_id columns of the data files.
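A short loading sketch, under the assumption that pandas (with the zstandard package installed) and geopandas are available and the files listed above are in the working directory; the grid's ID column name is an assumption and should be checked against the actual file:

import pandas as pd
import geopandas as gpd

# Full matrix: pandas infers Zstandard compression from the .zst suffix
# (requires the zstandard package)
ttm = pd.read_csv("Helsinki_Travel_Time_Matrix_2023.csv.zst")

# Grid geometries; recent GDAL/pyogrio builds read zipped GeoPackages directly,
# otherwise unzip the archive first
grid = gpd.read_file("Helsinki_Travel_Time_Matrix_2023_grid.gpkg.zip")

# Travel times to one destination cell, joined to the origin-cell geometries
to_id = 5787545  # example destination ID from the filename pattern above
subset = grid.merge(
    ttm[ttm["to_id"] == to_id],
    left_on="id",        # grid ID column name is an assumption
    right_on="from_id",
    how="left",
)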
Table structure
from_id: ID number of the origin grid cell
to_id: ID number of the destination grid cell
walk_avg: Travel time in minutes from origin to destination by walking at an average speed
walk_slo: Travel time in minutes from origin to destination by walking slowly
bike_avg: Travel time in minutes from origin to destination by cycling at an average speed
bike_fst: Travel time in minutes from origin to destination by cycling fast
bike_slo: Travel time in minutes from origin to destination by cycling slowly
pt_r_avg: Travel time in minutes from origin to destination by public transportation in rush hour traffic, walking at an average speed
pt_r_slo: Travel time in minutes from origin to destination by public transportation in rush hour traffic, walking at a slower speed
pt_m_avg: Travel time in minutes from origin to destination by public transportation in midday traffic, walking at an average speed
pt_m_slo: Travel time in minutes from origin to destination by public transportation in midday traffic, walking at a slower speed
pt_n_avg: Travel time in minutes from origin to destination by public transportation in nighttime traffic, walking at an average speed
pt_n_slo: Travel time in minutes from origin to destination by public transportation in nighttime traffic, walking at a lower speed
car_r: Travel time in minutes from origin to destination by private car in rush hour traffic
car_m: Travel time in minutes from origin to destination by private car in midday traffic
car_n: Travel time in minutes from origin to destination by private car in nighttime traffic
walk_d: Distance from origin to destination, in meters, on foot
Data for 2013, 2015, and 2018
At the Digital Geography Lab, we started computing travel time matrices in 2013. Our methodology has changed between iterations, and naturally, there are systematic differences between the iterations' results. Not all input data sets are available to recompute the historical matrices with the new methods; however, we were able to repeat the 2018 calculation using the same methods as the 2023 data set. The results are provided below, in the same format.
For the travel time matrices for 2013 and 2015, as well as for 2018 using an older methodology, please refer to DOI:10.5281/zenodo.3247563.
Methodology
Computations were carried out for Wednesday, 15 February 2023, and Monday, 29 January 2018, respectively. 'Rush hour' refers to a 1-hour window between 8 and 9 am, 'midday' to 12 noon to 1 pm, and 'nighttime' to 2-3 am.
All routes have been calculated using r5py, a Python library making use of the R5 engine by Conveyal, with modifications to consider local characteristics of the Helsinki use case and to inform the computation models from local real-world data sets. In particular, we made the following modifications:
Walking
Walking speeds, and in turn walking times, are based on the findings of Willberg et al. (2023), in which we measured walking speeds of people of different age groups in varying road surface conditions in Helsinki. Specifically, we chose to use the average measured walking speed in summer conditions for walk_avg (as well as the respective pt_*_walk_avg), and the slowest quintile of all measured walkers across all conditions for walk_slo (and the respective pt_*_walk_slo).
Cycling
Cycling speeds are derived from two input data sets. First, we averaged cycling speeds per network segment from Strava data and computed a ratio between the speed ridden in each segment and the overall average speed. We then used these ratios to compute fast, slow, and average cycling speeds for each segment, based on the mean overall Strava speed, the mean speeds cycled in the Helsinki City Bike bike-share system, and the mean of the two.
Further, in line with the values observed by Jäppinen (2012), we add a flat 30 seconds each for unlocking and locking the bicycle at the origin and destination.
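As a rough illustration of the segment-speed scaling described above (not the authors' actual pipeline), assuming a pandas DataFrame of per-segment mean Strava speeds and externally supplied city-wide means; all column names and numbers are placeholders:

import pandas as pd

# Placeholder per-segment mean speeds from Strava
segments = pd.DataFrame({
    "segment_id": [1, 2, 3],
    "strava_kmh": [22.0, 15.0, 18.5],
})
mean_strava_kmh = 19.0     # overall mean Strava speed (assumed value)
mean_citybike_kmh = 12.5   # mean Helsinki City Bike speed (assumed value)

# Ratio of each segment's speed to the overall Strava average
segments["ratio"] = segments["strava_kmh"] / mean_strava_kmh

# Scale the ratio by three base speeds to get fast, slow, and average riders
segments["fst_kmh"] = segments["ratio"] * mean_strava_kmh
segments["slo_kmh"] = segments["ratio"] * mean_citybike_kmh
segments["avg_kmh"] = segments["ratio"] * (mean_strava_kmh + mean_citybike_kmh) / 2

# The flat 30 s for unlocking plus 30 s for locking is added to the total trip time
LOCK_UNLOCK_OVERHEAD_S = 30 + 30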
Public Transport
We used public transport schedules in General Transit Feed Specification (GTFS) format published by the Helsinki Regional Transport Authority, and adjusted the walking speeds (for connections between vehicles, as well as for access and egress to and from public transport stops) using the same methods as described above for walking.
Private motorcar
To represent road speeds actually driven in the Helsinki metropolitan region, we used floating car data from a representative sample of the roads in the region to derive the differences between the speed limit and the driven speed by road class and by speed limit; see Perola (2023) for a detailed description of the methodology. Because these per-segment speeds factor in potential waiting times at road crossings, we eliminated turn penalties from R5.
Our modifications were carried out in two ways: some changes can be controlled by preparing input data sets in a certain way, or by setting model parameters outside of R5 or r5py. Other modifications required more profound changes to the source code of the R5 engine.
You can find a fully patched fork of the R5 engine in the Digital Geography Lab's GitHub repositories at github.com/DigitalGeographyLab/r5. The code that handles input data mangling and model parameter estimations is kept together with the logic to read input parameters and to collate output data, in the repository at github.com/DigitalGeographyLab/Helsinki-Travel-Time-Matrices.
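For orientation, a minimal r5py sketch of the kind of computation described above follows. It is not the Digital Geography Lab's patched pipeline; it assumes a recent r5py release (transport modes specified via r5py.TransportMode), a local OSM extract, a GTFS feed, and a GeoDataFrame of grid centroids with an id column, all of which are placeholders:

import datetime

import geopandas as gpd
import r5py

# Placeholder inputs
transport_network = r5py.TransportNetwork("helsinki.osm.pbf", ["helsinki_gtfs.zip"])
centroids = gpd.read_file("grid_centroids.gpkg")  # expects columns: id, geometry (points)

computer = r5py.TravelTimeMatrixComputer(
    transport_network,
    origins=centroids,
    destinations=centroids,
    departure=datetime.datetime(2023, 2, 15, 8, 0),  # the rush-hour window used above
    transport_modes=[r5py.TransportMode.TRANSIT, r5py.TransportMode.WALK],
)
travel_times = computer.compute_travel_times()  # columns: from_id, to_id, travel_time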
The Chicago Historic Resources Survey (CHRS), completed in 1995, was a decade-long research effort by the City of Chicago to analyze the historic and architectural importance of all buildings, objects, structures, and sites constructed in the city prior to 1940. During 12 years of field work and follow-up research that started in 1983, CHRS surveyors identified approximately 9,900 properties considered to have some historic or architectural importance.
Please note that this CHRS dataset is limited and does not include the entire survey: a color-coded ranking system was used to identify historic and architectural significance relative to age, degree of external physical integrity, and level of possible significance, and this dataset only includes buildings identified with the two highest color codes, "Red" and "Orange." Buildings and structures coded "Red" or "Orange" (unless designated as a Chicago Landmark or located within a Chicago Landmark District) are subject to the City of Chicago's Demolition-Delay Ordinance (http://www.cityofchicago.org/city/en/depts/dcd/supp_info/demolition_delay.html), adopted by City Council in 2003.
Only buildings are included in this dataset; structures and objects such as bridges, park structures, monuments, and mausoleums generally are not represented. Likewise, garages, coach houses, and other secondary structures associated with a building may not be consistently depicted or color-coded. If an "Orange"- or "Red"-rated building was demolished after 2008, it may still appear in the map. The CHRS occasionally rated only part of a building or part of a group of joined buildings as "Orange" or "Red"; however, the entire building or group of joined buildings may be incorrectly identified as "Orange" or "Red." Additional information about the CHRS is available at www.cityofchicago.org/Landmarks/ or by contacting the Historic Preservation Division at (312) 744-3200.
To view or use these shapefiles, compression software and GIS software such as ESRI ArcGIS or QGIS are required. To download this file, right-click the "Download" link above and choose "Save link as."
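As an alternative to desktop GIS, the shapefile can be inspected with geopandas; the sketch below is illustrative only, and the name of the color-code attribute (COLOR_CODE here) is hypothetical and must be checked against the layer's actual fields:

import geopandas as gpd

# Placeholder path to the unzipped CHRS shapefile
chrs = gpd.read_file("chicago_historic_resources_survey.shp")
print(chrs.columns)  # confirm the real field names first

# "COLOR_CODE" is a hypothetical field name for the survey's color ranking
rated = chrs[chrs["COLOR_CODE"].isin(["Red", "Orange"])]
print(rated["COLOR_CODE"].value_counts())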
https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
Urban areas are expanding rapidly, with the majority of the global and US population inhabiting them. Urban forests are critically important for providing ecosystem services to the growing urban populace, but their health is threatened by invasive insects. Insect density and damage are highly variable across sites in urban landscapes, such that trees in some sites experience outbreaks and are severely damaged while others are relatively unaffected. To protect urban forests against damage from invasive insects and support future delivery of ecosystem services, we must first understand the factors that affect insect density and damage to their hosts across urban landscapes. This study explores how a variety of environmental factors that vary across urban habitats influence the density of invasive insects. Specifically, we evaluate how vegetational complexity, distance to buildings, impervious surface, canopy temperature, host availability, and density of co-occurring herbivores impact three invasive pests of elm trees: the elm leaf beetle (Xanthogaleruca luteola), the elm flea weevil (Orchestes steppensis), and the elm leafminer (Fenusa ulmi). Except for building distance, all environmental factors were associated with the density of at least one pest species. Furthermore, insect responses to these factors were species-specific, with the direction and strength of associations influenced by insect life history. These findings can be used to inform future urban pest management and tree care efforts, making urban forests more resilient in an era when globalization and climate change make them particularly vulnerable to attack. Keywords: urban forest, invasive species, impervious surface, temperature, species interactions.
Methods
Insect Density
At each sampling period, we measured insect density on four branches of each tree, one branch in each cardinal direction (N, S, E, and W). The sampling unit was a 30 cm terminal branch (Dahlsten et al., 1993; Rodrigo et al., 2019), and we assumed equal leaf area per branch. All sampled branches were in the lower canopy up to 3 meters from the ground, and branches that could not be reached from the ground were accessed using a ladder. Sampled branches were haphazardly chosen from a distance at which insects were not distinguishable, to avoid sampling bias. On each tree branch, we counted individuals of each observable insect stage: beetle eggs, larvae, and adults (the beetle pupates in cryptic locations such as under bark or in the soil, and thus pupae were not counted); weevil leaf mines and adults; and the number of leaves with leafminer mines. Individual leafminer mines were not counted because adult females lay multiple eggs per leaf, and it is common for mines to merge and become indistinguishable from one another as larvae develop. Thus, it was not possible to count the number of individual mines for this species. Leafminer adults were not counted because this stage had disappeared for the season by the start of the first sampling period. The total number of leaves on each branch was also recorded. In addition to serving as the response variable for our environmental hypotheses, the insect density of each species was also used as a predictor variable for the co-occurring herbivore hypothesis. Tree 0 indicates the end of the dataset.
Urban Site Factors
Host Availability (AllElm_Density)
We measured host availability digitally by counting the number of elm trees within a 100 meter buffer around each tree using QGIS version 3.10.12 (QGIS Development Team, 2022) and a dataset of publicly managed trees provided by municipal forestry departments. We chose a 100 meter radius because significant changes in insect density are detectable for multiple insect species at this spatial scale (Sperry et al., 2001). Although Siberian elm is a preferred host of the insects in this system, other species of elm may also serve as hosts and were thus included in this data set. Following digital assessment, we verified all counts in situ to capture any visible privately owned trees and to verify that trees in the dataset were still alive and present in the field. Despite efforts to avoid spatial autocorrelation, four trees had 100 meter buffers that overlapped with the buffer of another tree (that is, two locations where two trees had overlapping buffers). Because the maximum overlap was <14% of the buffer area, we retained these trees in our analyses.
Vegetational Complexity (SCI_0_500)
We measured the structural complexity of the vegetation in a 10 x 10 meter area around each tree following Shrewsbury & Raupp (2000, 2006). Specifically, we sectioned off a 10 x 10 meter area around each study tree and divided this area into one hundred 1 x 1 meter plots. In each of these plots, we recorded five vegetation categories: ground cover (e.g., mulch or turf grass), herbaceous plants (e.g., garden annuals/perennials, tall native grasses), shrubs (e.g., hydrangea, boxwood, barberry), understory trees (e.g., juniper, plum, crabapple, small Siberian elm), or overstory trees (those with mature canopy, including ash, pine, and other elm). One point was awarded for each vegetation type present, resulting in 0-5 points awarded in each plot. To quantify complexity of the vegetation in a continuous way, points were summed for all one hundred plots. Thus, each tree received a vegetational complexity score between 0 and 500.
Building Distance (Building Distance_m)
To assess the local availability of structures for insect overwintering, we measured the distance of each sampled tree to the nearest building in meters, as in Speight et al. (1998). This was performed digitally using QGIS version 3.10.12 (QGIS Development Team, 2022) and the ESRI Standard Basemap, which displays built structures.
Impervious Surface (ImperviousSurface_20m)
Impervious surface data were obtained through the USGS Multi-Resolution Land Characteristics Consortium (Dewitz & US Geological Survey, 2021) at a 30 x 30 meter scale and processed using QGIS version 3.10.12 (QGIS Development Team, 2022). We used the zonal statistics tool to calculate the percentage of impervious surface within a 20 meter buffer surrounding each sampled tree, which is more predictive of herbivorous insect density than impervious surface at larger spatial scales (Just et al., 2019). Although impervious surface data were not available at a smaller spatial scale, the zonal statistics tool allowed us to estimate impervious surface within 20 meters of each tree from the 30 x 30 meter data by computing a weighted average based on the extent to which each 30 x 30 meter pixel overlapped the 20 meter buffer around a tree.
Canopy Temperature (MeanTemp_Night)
Canopy temperature at each tree was measured every 1.5 hours with an iButton Thermochron logger (model DS1921G-F5).
Temperature logging began at 7:30 AM MST on June 12 and ended at 7:30 AM MST on August 25, for a total of 1,185 data points per logger. We placed each logger in a compostable container to prevent contact with direct sunlight and attached it with a zip tie to a branch approximately 2-3 meters from the ground. We placed temperature loggers on the east side of the tree wherever possible, or on the west side of the tree if a stable eastern location was not available. Despite efforts to minimize contact with direct sunlight, several loggers recorded artificially inflated temperatures, which made mean and maximum temperatures impractical for analysis. We therefore used mean nighttime temperature in the following analyses (7:30 PM-7:30 AM MST, n = 666 measurements per logger) because the urban heat island effect is less variable, occurs more frequently, and is more intense in urban canopies at night compared to the day (Du et al., 2021; Sun et al., 2019).
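A rough sketch of the two buffer-based GIS measurements described above (elm counts within 100 m and mean impervious surface within 20 m), using geopandas and rasterstats instead of the QGIS tools named in the text; file, layer, and column names are placeholders, and the layers are assumed to be in a projected CRS with units of meters:

import geopandas as gpd
from rasterstats import zonal_stats

# Placeholder inputs
study_trees = gpd.read_file("study_trees.gpkg")        # sampled elm trees (points)
street_trees = gpd.read_file("municipal_trees.gpkg")   # public tree inventory (points)
nlcd_impervious = "nlcd_2021_impervious.tif"           # 30 m percent-impervious raster

# Host availability: count elms within a 100 m buffer around each study tree
buffers_100 = gpd.GeoDataFrame(geometry=study_trees.geometry.buffer(100), crs=study_trees.crs)
elms = street_trees[street_trees["genus"] == "Ulmus"]  # genus column name is an assumption
hits = gpd.sjoin(elms, buffers_100, predicate="within")
study_trees["AllElm_Density"] = (
    hits.groupby("index_right").size().reindex(study_trees.index, fill_value=0)
)

# Impervious surface: mean NLCD value within a 20 m buffer around each tree
buffers_20 = gpd.GeoDataFrame(geometry=study_trees.geometry.buffer(20), crs=study_trees.crs)
stats = zonal_stats(buffers_20, nlcd_impervious, stats=["mean"])
study_trees["ImperviousSurface_20m"] = [s["mean"] for s in stats]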
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Introduction
This travel time matrix records travel times and travel distances for routes between all centroids (N = 13231) of a 250 × 250 m grid over the populated areas in the Helsinki metropolitan area by walking, cycling, public transportation, and private car. Where applicable, the routes have been calculated for different times of the day (rush hour, midday, off-peak) and assuming different physical abilities (such as walking and cycling speeds); see details below.
The grid follows the geometric properties and enumeration of the versatile Yhdyskuntarakenteen seurantajärjestelmä (YKR) grid used in applications across many domains in Finland, and covers the municipalities of Helsinki, Espoo, Kauniainen, and Vantaa in the Finnish capital region.
Data formats
The data is available in multiple formats that cater to different requirements, such as different software environments. All data formats share a common set of columns (see below) and can be used interchangeably.
Geometry, only:
Table structure
from_id | ID number of the origin grid cell |
to_id | ID number of the destination grid cell |
walk_avg | Travel time in minutes from origin to destination by walking at an average speed |
walk_slo | Travel time in minutes from origin to destination by walking slowly |
bike_avg | Travel time in minutes from origin to destination by cycling at an average speed; incl. extra time (1 min) to unlock and lock bicycle |
bike_fst | Travel time in minutes from origin to destination by cycling fast; incl. extra time (1 min) to unlock and lock bicycle |
bike_slo | Travel time in minutes from origin to destination by cycling slowly; incl. extra time (1 min) to unlock and lock bicycle |
pt_r_avg | Travel time in minutes from origin to destination by public transportation in rush hour traffic, walking at an average speed |
pt_r_slo | Travel time in minutes from origin to destination by public transportation in rush hour traffic, walking at a slower speed |
pt_m_avg | Travel time in minutes from origin to destination by public transportation in midday traffic, walking at an average speed |
pt_m_slo | Travel time in minutes from origin to destination by public transportation in midday traffic, walking at a slower speed |
pt_n_avg | Travel time in minutes from origin to destination by public transportation in nighttime traffic, walking at an average speed |
pt_n_slo | Travel time in minutes from origin to destination by public transportation in nighttime traffic, walking at a lower speed |
car_r | Travel time in minutes from origin to destination by private car in rush hour traffic |
car_m | Travel time in minutes from origin to destination by private car in midday traffic |
car_n | Travel time in minutes from origin to destination by private car in nighttime traffic |
walk_d | Distance from origin to destination, in metres, on foot |
Data for 2013, 2015, and 2018
At the Digital Geography Lab, we started computing travel time matrices in 2013. Our methodology has changed between iterations, and naturally, there are systematic differences between the iterations' results. Not all input data sets are available to recompute the historical matrices with the new methods; however, we were able to repeat the 2018 calculation using the same methods as the 2023 data set. The results are provided below, in the same format.
For the travel time matrices for 2013 and 2015, as well as for 2018 using an older methodology, please refer to DOI:10.5281/zenodo.3247563.
Methodology
Computations were carried out for Wednesday, 15 February 2023, and Monday, 29 January 2018, respectively. 'Rush hour' refers to a 1-hour window between 8 and 9 am, 'midday' to 12 noon to 1 pm, and 'nighttime' to 2-3 am.
All routes have been calculated using r5py, a Python library making use of the R5 engine by Conveyal, with modifications to consider local characteristics of the Helsinki use case and to inform the computation models from local real-world data sets. In particular, we made the following modifications:
Walking
Walking speeds, and in turn walking times, are based on the findings of Willberg et al. (2023), in which we measured walking speeds of people of different age groups in varying road surface conditions in Helsinki. Specifically, we chose to use the average measured walking speed in summer conditions for `walk_avg` (as well as the respective `pt_*_walk_avg`), and the slowest quintile of all measured walkers across all conditions for `walk_slo` (and the respective `pt_*_walk_slo`).
Cycling
Cycling speeds are derived from two input data sets. First, we averaged cycling speeds per network segment from Strava data and computed a ratio between the speed ridden in each segment and the overall average speed. We then used these ratios to compute fast, slow, and average cycling speeds for each segment, based on the mean overall Strava speed, the mean speeds cycled in the Helsinki City Bike bike-share system, and the mean of the two.
Further, in line with the values observed by Jäppinen (2012), we add a flat 30 seconds each for unlocking and locking the bicycle at the origin and destination.
Public Transport
We used public transport schedules in General Transit Feed Specification (GTFS) format published by the Helsinki Regional Transport Authority, and adjusted the walking speeds (for connections between vehicles, as well as for access and egress to and from public transport stops) using the same methods as described above for walking.
Reason for Selection
Protected natural areas in urban environments provide urban residents a nearby place to connect with nature and offer refugia for some species. They help foster a conservation ethic by providing opportunities for people to connect with nature, and also support ecosystem services like offsetting heat island effects (Greene and Millward 2017, Simpson 1998), water filtration, stormwater retention, and more (Hoover and Hopton 2019). In addition, parks, greenspace, and greenways can help improve physical and psychological health in communities (Gies 2006). Urban park size complements the equitable access to potential parks indicator by capturing the value of existing parks.
Input Data
Southeast Blueprint 2024 extent
FWS National Realty Tracts, accessed 12-13-2023
Protected Areas Database of the United States (PAD-US): PAD-US 3.0 national geodatabase - Combined Proclamation Marine Fee Designation Easement, accessed 12-6-2023
2020 Census Urban Areas from the Census Bureau's urban-rural classification; download the data, read more about how urban areas were redefined following the 2020 census
OpenStreetMap data "multipolygons" layer, accessed 12-5-2023. A polygon from this dataset is considered a beach if the value in the "natural" tag attribute is "beach". Data for coastal states (VA, NC, SC, GA, FL, AL, MS, LA, TX) were downloaded in .pbf format and translated to an ESRI shapefile using R code. OpenStreetMap® is open data, licensed under the Open Data Commons Open Database License (ODbL) by the OpenStreetMap Foundation (OSMF). Additional credit to OSM contributors. Read more on the OSM copyright page.
2021 National Land Cover Database (NLCD): Percent developed imperviousness
2023 NOAA coastal relief model: volumes 2 (Southeast Atlantic), 3 (Florida and East Gulf of America), 4 (Central Gulf of America), and 5 (Western Gulf of America), accessed 3-27-2024
Mapping Steps
Create a seamless vector layer to constrain the extent of the urban park size indicator to inland and nearshore marine areas <10 m in depth. The deep offshore areas of marine parks do not meet the intent of this indicator to capture nearby opportunities for urban residents to connect with nature. Shallow areas are more accessible for recreational activities like snorkeling, which typically has a maximum recommended depth of 12-15 meters. This step mirrors the approach taken in the Caribbean version of this indicator.
Merge all coastal relief model rasters (.nc format) together using the QGIS "create virtual raster" tool.
Save the merged raster to .tif and import it into ArcGIS Pro.
Reclassify the NOAA coastal relief model data to assign a value of 1 to areas with an elevation between land and -10 m, and a value of 0 to all other (deep marine) areas.
Convert the raster produced above to vector using the "RasterToPolygon" tool.
Clip to the 2024 subregions using the "Pairwise Clip" tool.
Break apart multipart polygons using the "Multipart to single parts" tool.
Hand-edit to remove the deep marine polygon.
Dissolve the resulting data layer. This produces a seamless polygon defining land and shallow marine areas.
Clip the Census urban area layer to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
Clip PAD-US 3.0 to the bounding box of NoData surrounding the extent of Southeast Blueprint 2024.
Remove the following areas from PAD-US 3.0, which are outside the scope of this indicator to represent parks:
All School Trust Lands in Oklahoma and Mississippi (Loc Des = "School Lands" or "School Trust Lands"). These extensive lands are leased out and are not open to the public.
All tribal and military lands ("Des_Tp" = "TRIBL" or "Des_Tp" = "MIL"). Generally, these lands are not intended for public recreational use.
All BOEM marine lease blocks ("Own_Name" = "BOEM"). These Outer Continental Shelf lease blocks do not represent actively protected marine parks, but serve as the "legal definition for BOEM offshore boundary coordinates...for leasing and administrative purposes" (BOEM).
All lands designated as "proclamation" ("Des_Tp" = "PROC"). These typically represent the approved boundary of public lands, within which land protection is authorized to occur, but not all lands within the proclamation boundary are necessarily currently in a conserved status.
Retain only selected attribute fields from PAD-US to get rid of irrelevant attributes.
Merge the filtered PAD-US layer produced above with the OSM beaches and FWS National Realty Tracts to produce a combined protected areas dataset.
The resulting merged data layer contains overlapping polygons. To remove overlapping polygons, use the Dissolve function.
Clip the resulting data layer to the inland and nearshore extent.
Process all multipart polygons (e.g., separate parcels within a National Wildlife Refuge) to single parts (referred to in Arc software as an "explode").
Select all polygons that intersect the Census urban extent within 0.5 miles. We chose 0.5 miles to represent a reasonable walking distance based on input and feedback from park access experts. Assuming a moderate-intensity walking pace of 3 miles per hour, as defined by the U.S. Department of Health and Human Services' physical activity guidelines, the 0.5 mi distance also corresponds to the 10-minute walk threshold used in the equitable access to potential parks indicator.
Dissolve all the park polygons that were selected in the previous step.
Process all multipart polygons to single parts ("explode") again.
Add a unique ID to the selected parks. This value will be used in a later step to join the parks to their buffers.
Create a 0.5 mi (805 m) buffer ring around each park using the multiring plugin in QGIS. Ensure that "dissolve buffers" is disabled so that a single 0.5 mi buffer is created for each park.
Assess the amount of overlap between the buffered park and the Census urban area using "overlap analysis". This step is necessary to identify parks that do not intersect the urban area but lie within an urban matrix (e.g., Umstead Park in Raleigh, NC and Davidson-Arabia Mountain Nature Preserve in Atlanta, GA). This step creates a table that is joined back to the park polygons using the UniqueID.
Remove parks that had ≤10% overlap with the urban areas when buffered. This excludes mostly non-urban parks that do not meet the intent of this indicator to capture parks that provide nearby access for urban residents. Note: the 10% threshold is a judgement call based on testing which known urban parks and urban National Wildlife Refuges are captured at different overlap cutoffs, and is intended to be as inclusive as possible.
Calculate the GIS acres of each remaining park unit using the Add Geometry Attributes function.
Buffer the selected parks by 15 m. Buffering prevents very small and narrow parks from being left out of the indicator when the polygons are converted to raster.
Reclassify the parks based on their area into the 7 classes seen in the final indicator values below. These thresholds were informed by park classification guidelines from the National Recreation and Park Association, which classify neighborhood parks as 5-10 acres, community parks as 30-50 acres, and large urban parks as optimally 75+ acres (Mertes and Hall 1995).
Assess the impervious surface composition of each park using the NLCD 2021 impervious layer and the Zonal Statistics "MEAN" function. Retain only the mean percent impervious value for each park.
Extract only parks with a mean impervious pixel value <80%. This step excludes parks that do not meet the intent of the indicator to capture opportunities to connect with nature and offer refugia for species (e.g., the Superdome in New Orleans, LA, the Astrodome in Houston, TX, and City Plaza in Raleigh, NC).
Extract again to the inland and nearshore extent.
Export the final vector file to a shapefile and import it into ArcGIS Pro.
Convert the resulting polygons to raster using the ArcPy Feature to Raster function and the area class field.
Assign a value of 0 to all other pixels in the Southeast Blueprint 2024 extent not already identified as an urban park in the mapping steps above. Zero values are intended to help users better understand the extent of this indicator and make it perform better in online tools.
Use the land and shallow marine layer and the "extract by mask" tool to save the final version of this indicator.
Add color and legend to the raster attribute table.
As a final step, clip to the spatial extent of Southeast Blueprint 2024.
Note: For more details on the mapping steps, the code used to create this layer is available in the Southeast Blueprint Data Download under > 6_Code.
Final indicator values
Indicator values are assigned as follows:
6 = 75+ acre urban park
5 = 50 to <75 acre urban park
4 = 30 to <50 acre urban park
3 = 10 to <30 acre urban park
2 = 5 to <10 acre urban park
1 = <5 acre urban park
0 = Not identified as an urban park
Known Issues
This indicator does not include park amenities that influence how well the park serves people and should not be the only tool used for parks and recreation planning. Park standards should be determined at a local level to account for various community issues, values, needs, and available resources.
This indicator includes some protected areas that are not open to the public and not typically thought of as "parks", like mitigation lands, private easements, and private golf courses. While we experimented with excluding them using the public access attribute in PAD, due to numerous inaccuracies, this inadvertently removed protected lands that are known to be publicly accessible. As a result, we erred on the side of including the non-publicly accessible lands.
The NLCD percent impervious layer contains classification inaccuracies. As a result, this indicator may exclude parks that are mostly natural because they are misclassified as mostly impervious. Conversely, this indicator may include parks that are mostly impervious because they are misclassified as mostly natural.
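The buffer-overlap screen and acreage classification from the mapping steps can be approximated in a few lines of geopandas. This is a condensed sketch with placeholder file names (the production code lives in the Southeast Blueprint Data Download under 6_Code), assuming both layers are in a projected CRS with units of meters and geopandas ≥ 1.0:

import geopandas as gpd
import numpy as np

# Placeholder inputs
parks = gpd.read_file("combined_protected_areas.gpkg")   # exploded single-part park polygons
urban = gpd.read_file("census_urban_areas_2020.gpkg")    # 2020 Census urban areas

HALF_MILE_M = 805          # 0.5 mi buffer distance used in the mapping steps
SQM_PER_ACRE = 4046.8564224

# Share of each buffered park that falls within the urban footprint
buffered = parks.geometry.buffer(HALF_MILE_M)
urban_union = urban.geometry.union_all()                 # union_all() needs geopandas >= 1.0
overlap_frac = buffered.intersection(urban_union).area / buffered.area
parks = parks[overlap_frac > 0.10].copy()                # drop parks with <=10% overlap

# Classify the remaining parks into acreage classes 1-6 (0 = not a park is assigned later)
acres = parks.geometry.area / SQM_PER_ACRE
bins = [5, 10, 30, 50, 75]                               # acre thresholds from the values above
parks["indicator"] = np.digitize(acres, bins) + 1        # 1 = <5 acres ... 6 = 75+ acres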