Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This archive reproduces a figure titled "Figure 3.2 Boone County population distribution" from Wang and vom Hofe (2007, p. 60). The archive provides a Jupyter Notebook that uses Python and can be run in Google Colaboratory. The workflow uses the Census API to retrieve data, reproduces the figure, and ensures reproducibility for anyone accessing this archive.

The Python code was developed in Google Colaboratory (Google Colab for short), a JupyterLab-based integrated development environment (IDE) that streamlines package installation, code collaboration, and management. The Census API is used to obtain population counts from the 2000 Decennial Census (Summary File 1, 100% data). Shapefiles are downloaded from the TIGER/Line FTP server. All downloaded data are kept in the notebook's temporary working directory while in use; the data and shapefiles are also stored separately with this archive, and the final map is additionally stored as an HTML file.

The notebook features extensive explanations, comments, code snippets, and code output. It can be viewed as a PDF or downloaded and opened in Google Colab, and references to external resources are provided for its various functional components. The notebook's code performs the following functions:

- install/import the necessary Python packages
- download the census-tract shapefile from the TIGER/Line FTP server
- download Census data via the Census API
- manipulate Census tabular data
- merge the Census data with the TIGER/Line shapefile
- apply a coordinate reference system
- calculate land area and population density
- map the data and export the map to HTML
- export the map to an Esri shapefile
- export the table to CSV

The notebook can be modified to perform the same operations for any county in the United States by changing the state and county FIPS code parameters for the TIGER/Line shapefile and Census API downloads. It can also be adapted to other environments (e.g., a local Jupyter Notebook) and to reading and writing files on a local, shared, or cloud drive (e.g., Google Drive). A minimal sketch of the core retrieve-merge-map steps follows.
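For readers who want to see the shape of this workflow without opening the notebook, here is a minimal sketch of the retrieve-merge-density steps. It is not the archived notebook's code: the SF1 variable (P001001), the tract field name (TRACTCE00), the FIPS codes (21/015 for Boone County, Kentucky), the shapefile filename, and the equal-area CRS (EPSG:5070) are all assumptions for illustration.

```python
# Illustrative sketch only; not the archived notebook's code.
import requests
import pandas as pd
import geopandas as gpd

STATE_FIPS, COUNTY_FIPS = "21", "015"  # assumed: Boone County, Kentucky

# 2000 Decennial Census, Summary File 1; P001001 = total population
url = ("https://api.census.gov/data/2000/dec/sf1"
       f"?get=P001001&for=tract:*&in=state:{STATE_FIPS}%20county:{COUNTY_FIPS}")
rows = requests.get(url).json()
pop = pd.DataFrame(rows[1:], columns=rows[0])
pop["P001001"] = pop["P001001"].astype(int)

# Tract shapefile downloaded beforehand from the TIGER/Line FTP server;
# the filename is a placeholder, and TRACTCE00 is assumed to be the
# tract-code field in the 2000-vintage file.
tracts = gpd.read_file("tl_2010_21015_tract00.shp")
merged = tracts.merge(pop, left_on="TRACTCE00", right_on="tract")

# Project to an equal-area CRS (assumption: EPSG:5070) before measuring area,
# then compute population density per square kilometer.
merged = merged.to_crs(epsg=5070)
merged["area_sqkm"] = merged.geometry.area / 1e6
merged["pop_density"] = merged["P001001"] / merged["area_sqkm"]

merged.to_file("boone_density.shp")                        # Esri shapefile
merged.drop(columns="geometry").to_csv("boone_density.csv", index=False)
```

Changing STATE_FIPS and COUNTY_FIPS (and the corresponding shapefile) is all the parameterization the county swap described above requires.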
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Notice: this is not the latest Heat Island Severity image service.

This layer contains the relative heat severity for every pixel for every city in the United States, including Alaska, Hawaii, and Puerto Rico. Heat Severity is a reclassified version of the Heat Anomalies raster, which is also published on this site. This data is generated from 30-meter Landsat 8 imagery, band 10 (ground-level thermal sensor), from the summer of 2023.

To explore previous versions of the data, visit the links below:
- Heat Severity - USA 2022
- Heat Severity - USA 2021
- Heat Severity - USA 2020
- Heat Severity - USA 2019

Federal statistics over a 30-year period show extreme heat is the leading cause of weather-related deaths in the United States. Extreme heat exacerbated by urban heat islands can lead to increased respiratory difficulties, heat exhaustion, and heat stroke. These heat impacts significantly affect the most vulnerable: children, the elderly, and those with preexisting conditions.

The purpose of this layer is to show where certain areas of cities are hotter than the average temperature for that same city as a whole. Severity is measured on a scale of 1 to 5, with 1 being a relatively mild heat area (slightly above the mean for the city) and 5 being a severe heat area (significantly above the mean for the city). The absolute heat-above-mean values are classified into these 5 classes using the Jenks Natural Breaks classification method, which seeks to reduce the variance within classes and maximize the variance between classes (a small illustrative sketch of this classification follows). Knowing where areas of high heat are located can help a city government plan mitigation strategies.
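To make the classification step concrete, here is a small Python sketch of binning heat-anomaly values into five severity classes with Jenks Natural Breaks. It uses the open-source mapclassify library and synthetic values; it is not Trust for Public Land's proprietary pipeline.

```python
# Illustrative only: synthetic anomalies, not TPL's proprietary code.
import numpy as np
import mapclassify

# Hypothetical per-pixel anomalies: degrees above the city-wide mean
rng = np.random.default_rng(42)
anomalies = rng.gamma(shape=2.0, scale=1.5, size=10_000)

# Jenks Natural Breaks into k=5 classes: reduces within-class variance
# while maximizing variance between classes
nb = mapclassify.NaturalBreaks(anomalies, k=5)
severity = nb.yb + 1          # zero-based class labels shifted to 1..5
print(nb.bins)                # upper bound of each severity class
```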
This dataset represents a snapshot in time. It will be updated yearly, but is static between updates. It does not take into account changes in heat during a single day (for example, from building shadows moving). The thermal readings detected by the Landsat 8 sensor are surface-level, whether that surface is the ground or the top of a building. Although there is a strong correlation between surface temperature and air temperature, they are not the same. We believe this layer is useful at the national level and for cities that don't have the ability to conduct their own hyper-local temperature surveys. Where local data is available, it may be more accurate than this dataset.

Dataset Summary

This dataset was developed using proprietary Python code developed at Trust for Public Land, running on the Descartes Labs platform through the Descartes Labs API for Python. The Descartes Labs platform allows for extremely fast retrieval and processing of imagery, which makes it possible to produce heat island data for all cities in the United States in a relatively short amount of time.

What can you do with this layer?

This layer has query, identify, and export image services available. Since it is served as an image service, it is not necessary to download the data; the service itself is data that can be used directly in any Esri geoprocessing tool that accepts raster data as input. In order to click on the image service and see the raw pixel values in a map viewer, you must be signed in to ArcGIS Online, then Enable Pop-Ups and Configure Pop-Ups.

Using the Urban Heat Island (UHI) Image Services

The data is made available as an image service. There is a processing template applied that supplies the yellow-to-red or blue-to-red color ramp, but once this processing template is removed (you can do this in ArcGIS Pro, ArcGIS Desktop, or QGIS), the actual data values come through the service and can be used directly in a geoprocessing tool (for example, to extract an area of interest). Following are instructions for doing this in ArcGIS Pro:

1. In a Map view, in the Catalog window, click on Portal. In the Portal window, click on the far-right icon representing Living Atlas.
2. Search on the acronyms "tpl" and "uhi". The results returned will be the UHI image services.
3. Right-click on a result and select "Add to current map" from the context menu.
4. When the image service is added to the map, right-click on it in the map view and select Properties.
5. In the Properties window, select Processing Templates. On the drop-down menu at the top of the window, the default Processing Template is either a yellow-to-red ramp or a blue-to-red ramp. Click the drop-down, select "None", then "OK".

Now you will have the actual pixel values displayed in the map, available to any geoprocessing tool that takes a raster as input. (The original page shows a screenshot of ArcGIS Pro with a UHI image service loaded, the color ramp removed, and the symbology changed back to a yellow-to-red ramp; a classified renderer can also be used.)

A typical operation at this point is to clip out your area of interest. To do this, add your polygon shapefile or feature class to the map view, and use the Clip Raster tool to export your area of interest as a GeoTIFF raster (file extension ".tif"). In the Environments tab for the Clip Raster tool, click the dropdown for "Extent", select "Same as Layer:", and select the name of your polygon. If you then need to convert the output raster to a polygon shapefile or feature class, run the Raster to Polygon tool and select "Value" as the field. (A hedged arcpy sketch of this clip-and-convert workflow appears at the end of this entry.)

Other Sources of Heat Island Information

Please see these websites for valuable information on heat islands and to learn about exciting new heat island research being led by scientists across the country:
- EPA's Heat Island Resource Center
- Dr. Ladd Keith, University of Arizona
- Dr. Ben McMahan, University of Arizona
- Dr. Jeremy Hoffman, Science Museum of Virginia
- Dr. Hunter Jones, NOAA
- Daphne Lundi, Senior Policy Advisor, NYC Mayor's Office of Recovery and Resiliency

Disclaimer/Feedback

With nearly 14,000 cities represented, checking each city's heat island raster for quality assurance would be prohibitively time-consuming, so Trust for Public Land checked a statistically significant sample for data quality. The sample passed all quality checks, with about 98.5% of the output cities error-free, but there could be instances where the user finds errors in the data. These errors will most likely take the form of a line of discontinuity where there is no city boundary; this type of error is caused by large temperature differences in two adjacent Landsat scenes, so the discontinuity occurs along scene boundaries (the original page includes a figure illustrating this). Trust for Public Land would appreciate feedback on these errors so that version 2 of the national UHI dataset can be improved. Contact Dale.Watt@tpl.org with feedback.
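As promised above, here is a hedged arcpy sketch of the clip-and-convert workflow. All paths are placeholders, and it assumes the UHI raster has already been exported or referenced locally; Clip and Raster to Polygon are standard ArcGIS geoprocessing tools, but this is a sketch, not TPL's documented procedure.

```python
# Hedged sketch of the clip-and-convert steps; paths are placeholders.
import arcpy

uhi_raster = r"C:\data\uhi_severity.tif"  # hypothetical local export of the service
aoi = r"C:\data\city_boundary.shp"        # your area-of-interest polygon
clipped = r"C:\data\uhi_clip.tif"

# "ClippingGeometry" clips to the polygon outline rather than its bounding box
arcpy.management.Clip(uhi_raster, "#", clipped, aoi,
                      nodata_value="-9999",
                      clipping_geometry="ClippingGeometry")

# Convert the clipped severity raster to polygons on the pixel "Value" field
arcpy.conversion.RasterToPolygon(clipped, r"C:\data\uhi_polys.shp",
                                 "NO_SIMPLIFY", "Value")
```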
The iotrans extension enhances CKAN's capabilities by allowing users to convert datastore resources into various file formats and, for spatial data, transform them between coordinate reference systems (EPSG). This extension addresses the need to download datastore resources in multiple formats and projections beyond CKAN's built-in options, leveraging Python libraries for data conversion. It provides CKAN actions to facilitate file format conversion and data transformation, primarily intended for administrative users.

Key Features:
- Datastore Resource Conversion: Converts CKAN datastore resources to various formats, including CSV, GeoJSON, GPKG, SHP, JSON, and XML.
- Coordinate Reference System Transformation: Transforms spatial data between different EPSG codes, enabling data compatibility across various GIS applications.
- Admin-Only Actions: Introduces two CKAN actions, to_file and prune, accessible only to administrator users for file conversion and temporary file cleanup.
- Disk-Based Processing: Streams data from the CKAN datastore to a temporary CSV file, reducing memory consumption during format conversion.
- Spatial Data Handling: Identifies spatial data based on the presence of a "geometry" attribute and converts non-Multi geometry types to their Multi counterparts (e.g., Point to MultiPoint) to ensure consistent geometry types within output files.
- Shapefile Support: Handles shapefile-specific limitations by truncating column names longer than 10 characters, creating unique column names, and documenting the original-to-truncated name mapping in a zipped text file within the shapefile.
- Temporary File Management: Uses the /tmp directory as a staging area for file conversions, with a prune action to remove files or directories within this location.

Technical Integration: The iotrans extension functions by adding new API actions to CKAN (to_file and prune). These actions interact directly with the CKAN Datastore extension, retrieving data in chunks via sequential calls to CKAN's datastore_search API, converting it to different formats, and storing these in the /tmp directory. It requires the CKAN Datastore extension to be active. A hedged sketch of calling these actions follows below.

Benefits & Impact: The iotrans extension streamlines the process of extracting and transforming data from the CKAN datastore, enabling users to easily access data in preferred formats and coordinate reference systems. This enhances data usability and interoperability, making it easier to integrate CKAN data into other applications and workflows. It is especially useful for organizations that need to provide data in various formats to meet diverse user needs.
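As a rough illustration of the integration, the snippet below calls to_file and prune through CKAN's standard action API using Python's requests. The endpoint pattern (/api/3/action/<name>) is standard CKAN; the payload keys shown (resource_id, target_formats, target_epsgs, path) are assumptions about this extension's parameters, so consult the iotrans README for the authoritative names.

```python
# Sketch only: payload keys below are assumed, not confirmed from iotrans docs.
import requests

CKAN_URL = "https://ckan.example.org"   # placeholder portal URL
API_KEY = "your-sysadmin-api-key"       # both actions are admin-only

resp = requests.post(
    f"{CKAN_URL}/api/3/action/to_file",
    headers={"Authorization": API_KEY},
    json={
        "resource_id": "abc123",                # datastore resource to convert
        "target_formats": ["geojson", "shp"],   # assumed parameter name
        "target_epsgs": [4326, 2952],           # assumed parameter name
    },
)
print(resp.json())

# Later, clean up the staging files written under /tmp
requests.post(
    f"{CKAN_URL}/api/3/action/prune",
    headers={"Authorization": API_KEY},
    json={"path": "/tmp/abc123"},               # assumed payload shape
)
```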
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
LimnoSat-US is an analysis-ready remote sensing database that includes reflectance values spanning 36 years for 56,792 lakes across more than 328,000 Landsat scenes. The database comes pre-processed with cross-sensor standardization and with the effects of clouds, cloud shadows, snow, ice, and macrophytes removed. In total, it contains over 22 million individual lake observations, with an average of 393 +/- 233 (mean +/- standard deviation) observations per lake over the 36-year period. The data and code contained within this repository are as follows:
HydroLakes_DP.shp: A shapefile containing the deepest points of all U.S. lakes within HydroLakes. For more information on the deepest point, see https://doi.org/10.5281/zenodo.4136754 and Shen et al. (2015).
LakeExport.py: Python code to extract reflectance values for U.S. lakes from Google Earth Engine (a simplified sketch of this kind of extraction follows the file list below).
GEE_pull_functions.py: Functions called within LakeExport.py
01_LakeExtractor.Rmd: An R Markdown file that takes the raw data from LakeExport.py and processes it for the final database.
SceneMetadata.csv: A file containing additional information such as scene cloud cover and sun angle for all Landsat scenes within the database. Can be joined to the final database using LandsatID.
srCorrected_us_hydrolakes_dp_20200628: The final LimnoSat-US database containing all cloud-free observations of U.S. lakes from 1984 to 2020. Missing values for bands not shared between sensors (Aerosol and TIR2) are denoted by -99. dWL is the dominant wavelength calculated following Wang et al. (2015). pCount_dswe1 represents the number of high-confidence water pixels within 120 meters of the deepest point. pCount_dswe3 represents the number of vegetated water pixels within 120 meters and can be used as a flag for potential reflectance noise. All reflectance values represent the median value of high-confidence water pixels within 120 meters. The final database is provided in both .csv and .feather formats. It can be linked to SceneMetadata.csv using LandsatID. All reflectance values are derived from USGS T1-SR Landsat scenes.
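For orientation, here is a simplified sketch of the kind of Earth Engine pull that LakeExport.py performs: reducing Landsat surface-reflectance pixels within 120 m of a lake's deepest point to a median value per scene. The collection ID, sample coordinates, and band handling are assumptions for illustration, not the repository's code (and Collection 1 asset IDs have since been superseded by Collection 2).

```python
# Simplified illustration; not the repository's LakeExport.py code.
import ee

ee.Initialize()

# Hypothetical deepest point for one lake, buffered 120 m as in the database docs
point = ee.Geometry.Point([-105.0, 40.0]).buffer(120)

landsat = (ee.ImageCollection("LANDSAT/LT05/C01/T1_SR")  # assumed collection ID
           .filterBounds(point)
           .filterDate("1984-01-01", "2020-12-31"))

def median_reflectance(img):
    # Median reflectance of pixels inside the 120 m buffer, tagged by scene ID
    stats = img.reduceRegion(reducer=ee.Reducer.median(),
                             geometry=point, scale=30)
    return ee.Feature(None, stats).set("LandsatID", img.id())

samples = ee.FeatureCollection(landsat.map(median_reflectance))
print(samples.first().getInfo())
```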
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ArcGIS tool and tutorial to convert shapefiles into network format. The latest version of the tool is available at http://csun.uic.edu/codes/GISF2E.html

Update: we have now added QGIS and Python tools. To download them and learn more, visit http://csun.uic.edu/codes/GISF2E.html (a minimal illustration of the polyline-to-network idea appears below).

Please cite: Karduni, A., Kermanshah, A., and Derrible, S., 2016, "A protocol to convert spatial polyline data to network formats and applications to world urban road networks", Scientific Data, 3:160046. Available at http://www.nature.com/articles/sdata201646
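GISF2E itself ships as ArcGIS, QGIS, and Python tools; purely to illustrate the polyline-to-network idea behind the protocol, here is a minimal geopandas + networkx sketch in which each polyline's endpoints become nodes and the line becomes a weighted edge. The real protocol (Karduni et al., 2016) also splits lines at intersections and handles cases this sketch ignores, and roads.shp is a placeholder filename.

```python
# Minimal illustration of the polyline-to-network idea; not the GISF2E code.
import geopandas as gpd
import networkx as nx

roads = gpd.read_file("roads.shp")   # placeholder: a polyline road shapefile

G = nx.Graph()
for line in roads.geometry:          # assumes simple LineString geometries
    start, end = line.coords[0], line.coords[-1]
    G.add_edge(start, end, length=line.length)

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```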