You will need to download this zipped file and extract it to a folder. The file must be extracted to a folder you can find again later. Personally, I like to keep a folder simply named GIS and place all the files I use for GIS work in it.
The GEOL-QMAPS digital geological mapping solution comprises a QGIS field data entry template, designed as an open-source, collaborative tool with user-driven updates. It integrates with a custom QGIS plugin that facilitates the import of existing field data, fieldwork preparation, and field database management (available at https://github.com/swaxi/WAXI_QF). Although developed within the framework of stage 4 of the West African eXploration Initiative project (https://waxi4.org/), this template and plugin are not region-specific and can be adapted to any mapping guidelines. The downloadable archive includes:
- a folder containing the customised QGIS mapping project template and related files for field data collection,
- a .docx log file providing updates for the various releases of the QGIS template project,
- a .txt file with a link to the User Guide (https://github.com/swaxi/GEOL-QMAPS/blob/main/README.md).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This collection comprises geospatial datasets used to create the Beaverdam Valley Neighborhood Association community map, and the resulting map in PDF and JPEG formats. The scope of the map covers the borders of Buncombe County, North Carolina, the city limits of Asheville, NC, and the three registered neighborhoods of the Beaverdam Valley (Beaverdam Valley, Hills of Beaverdam, and Beaverdam Run). The geospatial data includes the following layers and associated files:
"AVL City Limits.geojson": City of Asheville GIS municipal boundary data
"AVL City Limits.qmd": QGIS metadata file for the above
"AVL Neighborhoods.geojson": City of Asheville GIS registered neighborhood data
"AVL Neighborhoods.qmd": QGIS metadata file for the above
"Buncombe_County_Parcels.geojson": Buncombe County GIS parcel data.
"Buncombe_County_Parcels.qmd": QGIS metadata file for the above
"BV Boundaries.geojson": Beaverdam Valley Neighborhood boundaries.
"BV Boundaries.qmd": QGIS metadata file for the above
"BV Parcel Intersection.geojson": Intersection of the Beaverdam Valley Neighborhood boundaries with the Buncombe County parcel data.
"BV Parcel Intersection.qmd": QGIS metadata file for the above
"BVNA_Map_2022_v2.pdf": BVNA CIP Community Map
"BVNA_Map_2022_v2_825.jpg": BVNA CIP Community Map
"City Limits.geojson": Buncombe County boundaries and city limits boundaries within the county.
"QGIS BVNA CIP.zip": Zip file containing the above layers in a QGIS project folder and file.
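The "BV Parcel Intersection.geojson" layer above is the product of a standard GIS intersection overlay (in QGIS, Vector > Geoprocessing Tools > Intersection). As a minimal, dependency-free illustration of the bounding-box prefilter behind such an overlay (parcel ids and coordinates here are hypothetical; a real overlay clips exact polygon geometry):

```python
# Simplified sketch of the overlay that produced "BV Parcel Intersection.geojson":
# select parcels whose bounding boxes overlap the neighborhood boundary's
# bounding box. A real GIS intersection uses exact polygon geometry; bounding
# boxes are used here only to keep the example dependency-free.

def bboxes_intersect(a, b):
    """a, b are (minx, miny, maxx, maxy) tuples."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def candidate_parcels(parcels, boundary_bbox):
    """Return ids of parcels whose bbox overlaps the neighborhood bbox."""
    return [pid for pid, bbox in parcels.items()
            if bboxes_intersect(bbox, boundary_bbox)]

# Hypothetical parcel ids and coordinates, for illustration only.
parcels = {
    "9641-1": (10, 10, 20, 20),
    "9641-2": (30, 30, 40, 40),
}
boundary = (15, 15, 35, 25)
print(candidate_parcels(parcels, boundary))  # ['9641-1']
```

In practice the candidate set would then be clipped against the exact neighborhood polygons to yield the intersection layer.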
About the Project: The Beaverdam Valley Neighborhood Association (BVNA) Community Informatics Project aims to gain a deeper understanding of the Beaverdam Valley community and to gather and share information about the community and its history. This collection represents a deliverable produced under the 2022-2023 City of Asheville Neighborhood Matching Grant program.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Today, deep neural networks are widely used in many computer vision problems, including those involving geographic information system (GIS) data. This type of data is commonly used for urban analysis and spatial planning. We used orthophotographic images of two residential districts from Kielce, Poland for research on automatic urban sprawl analysis using a Transformer-based neural network. Orthophotomaps were obtained from the Kielce GIS portal. Each map was then manually masked into building and building-surroundings classes. Finally, the orthophotomap and the corresponding classification mask were simultaneously divided into small tiles, a common preprocessing approach for the training phase of machine learning algorithms. The data contains the two original orthophotomaps from the Wietrznia and Pod Telegrafem residential districts with corresponding masks, as well as their tiled versions, ready to serve as training data for machine learning models. A Transformer-based neural network was trained on the Wietrznia dataset for semantic segmentation of the tiles into building and surroundings classes. Afterwards, model inference was used to test the model's generalization ability on the Pod Telegrafem dataset. The efficiency of the model was satisfactory, so it can be used for automatic semantic building segmentation. The tiling process can then be reversed to retrieve the complete classification mask, which can be used for building-area calculations and urban sprawl monitoring if the research is repeated on GIS data spanning a wider time horizon. Since the dataset was collected from the Kielce GIS portal, as part of the data resources of the Polish Main Office of Geodesy and Cartography, it may be used only for non-profit and non-commercial purposes, in private or scientific applications, under the law "Ustawa z dnia 4 lutego 1994 r. o prawie autorskim i prawach pokrewnych (Dz.U. z 2006 r. nr 90 poz 631 z późn. zm.)".
There are no other legal or ethical considerations regarding reuse potential. Data information is presented below.
wietrznia_2019.jpg - orthophotomap of Wietrznia district - used for model's training, as an explanatory image
wietrznia_2019.png - classification mask of Wietrznia district - used for model's training, as a target image
wietrznia_2019_validation.jpg - one image from Wietrznia district - used for model's validation during the training phase
pod_telegrafem_2019.jpg - orthophotomap of Pod Telegrafem district - used for model's evaluation after the training phase
wietrznia_2019 - folder with wietrznia_2019.jpg (image) and wietrznia_2019.png (annotation) images, divided into 810 tiles (512 x 512 pixels each); tiles with no information were manually removed, so the training data contains only informative tiles - tiles were presented to the model during training (images and annotations for fitting the model to the data)
wietrznia_2019_validation - folder with the wietrznia_2019_validation.jpg image divided into 16 tiles (256 x 256 pixels each) - tiles were presented to the model during training (images for validating the model's efficiency); this was not part of the training data
pod_telegrafem_2019 - folder with the pod_telegrafem.jpg image divided into 196 tiles (256 x 256 pixels each) - tiles were presented to the model during inference (images for evaluating the model's robustness)
The dataset was created as described below. Firstly, the orthophotomaps were collected from the Kielce Geoportal (https://gis.kielce.eu). The Kielce Geoportal offers a recent map from April 2019. It is an orthophotomap with a resolution of 5 x 5, constructed from a plane flight at 700 meters above ground, taken with a camera for vertical photos.
Downloading was done via WMS in the open-source QGIS software (https://www.qgis.org), as a 1:500 scale map, then converted to a 1200 dpi PNG image. Secondly, the map of the Wietrznia residential district was manually labelled, also in QGIS, over the same extent as the orthophotomap. Annotation was based on land cover map information, also obtained from the Kielce Geoportal. There are two classes - residential building and surroundings. The second map, of the Pod Telegrafem district, was not annotated, since it was used in the testing phase and imitates the situation where no annotation exists for new data presented to the model. Next, the images were converted to RGB JPG images, and the annotation map was converted to an 8-bit GRAY PNG image. Finally, the Wietrznia data files were tiled into 512 x 512 pixel tiles using the Python PIL library. Tiles with no information or relatively little information (only white background or mostly white background) were manually removed, so from the 29113 x 15938 pixel orthophotomap, only 810 tiles with corresponding annotations remained, ready for training the machine learning model on the semantic segmentation task. The Pod Telegrafem orthophotomap was tiled without manual removal, so the 7168 x 7168 pixel orthophotomap yielded 196 tiles at 256 x 256 pixel resolution. There was also an image of one residential building, used for the model's validation during the training phase; it was not part of the training data, but was part of the Wietrznia residential area. It was a 2048 x 2048 pixel orthophotomap, tiled into 16 tiles of 256 x 256 pixels each.
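The tiling step described above can be sketched as follows. This is a minimal illustration, not the authors' script: it only computes the crop boxes for non-overlapping 512 x 512 tiles; with Pillow, each box would then be passed to Image.crop() to write the tile file.

```python
def tile_boxes(width, height, tile=512):
    """Crop boxes (left, upper, right, lower) for non-overlapping tiles.

    Partial tiles at the right/bottom edges are dropped, matching a
    whole-tile split; with Pillow one would call img.crop(box) per box.
    """
    boxes = []
    for top in range(0, height - tile + 1, tile):
        for left in range(0, width - tile + 1, tile):
            boxes.append((left, top, left + tile, top + tile))
    return boxes

# The Wietrznia orthophotomap is 29113 x 15938 pixels; a whole-tile split
# yields 56 x 31 = 1736 candidate tiles, from which the 810 informative
# tiles were kept after manual filtering.
boxes = tile_boxes(29113, 15938, 512)
print(len(boxes))  # 1736
```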
In 2016-2017, all dolmens (hunebedden) in the Netherlands were re-mapped by the Groningen Institute of Archaeology as georeferenced (DGPS) 3D models based on photogrammetry. This yielded a complete dataset with geographic information for each of the 54 still-existing Dutch dolmens (excluding Delfzijl). For each dolmen, a digital elevation model and an orthophoto (vertical top-down view) are available at very high resolution (DEM of 30 to 100 MP, aerial image up to 400 MP) in coordinates of the Dutch national grid (RD New - EPSG: 28992). For each dolmen, a corrected position (point) and a digital line drawing (dissolved polyline) were also created, following the work of Van Giffen and his team in 1918. The dataset also contains a QGIS file in which the data is brought together. The four ZIP files in this dataset contain: Basisdata.zip (folder) - vector data (point, line and polygon), including a table with point locations of all dolmens and basic metadata; line drawings of all dolmen stones based on the 2017 top-down view; Hondsrug and Rolderrug polygons. (root file) VanGiffen2.qgz - a helper QGIS project that references the other files and folders. Ortho.zip (folder) - all aerial images generated from low-altitude aerial photography, processed into 3D models in Agisoft Photoscan (Metashape) and exported to GeoTIFF at 2 mm pixel resolution; dolmens D01 through D54 and G01. DEM.zip and DEM.z01 (folder, 2 files) - all generated digital elevation models from low-altitude aerial photography, processed into 3D models in Agisoft Photoscan (Metashape) and exported to GeoTIFF at 2 mm pixel resolution; dolmens D01 through D54 and G01. 3D_assets.zip (folder OBJ) - all 3D models in OBJ format, exported from Agisoft Photoscan (Metashape). Apart from the 3D models, all data is imagery prepared for a geographic information system in coordinate system RD New (EPSG: 28992).
The dataset has been tested in the QGIS and ESRI ArcGIS suites.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a compilation of geographic rasters from multiple environmental data sources, intended to make life easier for users of species distribution models (SDMs). All rasters cover the metropolitan French territory, but have varying resolutions and projections. Each directory inside the main directory "0_mydata" contains a single environmental raster. Point-wise extraction of raster values can easily be done for large sets of WGS84 (longitude, latitude) point coordinates, and for multiple rasters at the same time, through the R function get_variables in the script _functions.R from the GitHub repository https://github.com/ChrisBotella/SamplingEffort. All data sources are accessible on the web and free to use, at least for scientific purposes. They have various citation requirements: anyone distributing a work that uses the present data must cite, along with the present DOI, the original source data employed. Those source data are described in the paragraphs below; we provide the articles to cite, where required, and webpages for access.
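The point-wise extraction performed by get_variables can be sketched as follows. This is a dependency-free Python illustration of the same idea (the actual repository uses R), assuming a north-up raster with simple affine georeferencing; function and variable names here are mine, not the repository's.

```python
def sample_raster(grid, origin_x, origin_y, pixel_size, points):
    """Sample a north-up raster at (lon, lat)-style coordinates.

    grid        -- 2D list of values, row 0 at the top (maximum y)
    origin_x/y  -- coordinates of the raster's top-left corner
    pixel_size  -- cell size in the same units as the coordinates
    """
    values = []
    for x, y in points:
        col = int((x - origin_x) / pixel_size)
        row = int((origin_y - y) / pixel_size)  # y decreases downward
        values.append(grid[row][col])
    return values

# Toy 3 x 3 raster with top-left corner at (0, 3) and 1-unit cells.
grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
print(sample_raster(grid, 0.0, 3.0, 1.0, [(0.5, 2.5), (2.5, 0.5)]))  # [1, 9]
```

Real extraction additionally has to reproject WGS84 points into each raster's CRS, which is what makes a helper like get_variables convenient when rasters have varying projections.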
Pedologic Descriptors of the ESDB v2 - 1 km × 1 km Raster Library: The library contains multiple soil pedology (physico-chemical soil properties) descriptor raster layers covering Eurasia at a resolution of 1 km. We selected 11 descriptors from the library. They come from the PTRDB. The PTRDB variables have been directly derived from the initial soil classification of the Soil Geographical Data Base of Europe (SGDBE) using expert rules. For more details, see [1, 2] and [3]. The data is maintained and distributed freely for scientific use by the European Soil Data Centre (ESDAC) at http://eusoils.jrc.ec.europa.eu/content/european-soil-databasev2-raster. The 11 rasters are in the directories "awc_top", "bs_top", "cec_top", "dimp", "crusting", "erodi", "dgh", "text", "vs", "oc_top", "pd_top".
Corine Land Cover 2012, Version 18.5.1, 12/2016: A raster layer describing land cover with 48 categories across Europe (25 countries) at a resolution of 100 m. This database of the European Union is freely accessible online for all uses at http://land.copernicus.eu/pan-european/corine-land-cover/clc-2012. The raster of this variable is in the directory "clc".
Hydrographic Descriptor of BD Carthage v3: BD Carthage is a spatial relational database holding much information on the structure and nature of the French metropolitan hydrological network. For the purpose of modelling plant ecological niches, we focus on the geometric segments representing watercourses and the polygons representing fresh-water hydrographic surfaces. The data has been produced by the Institut National de l'information Géographique et forestière (IGN) from an interpretation of the BD Ortho IGN. It is maintained by the SANDRE under a free license for non-profit use and downloadable at: http://services.sandre.eaufrance.fr/telechargement/geo/ETH/BDCarthage/FX
From this shapefile, we derived a raster containing the binary variable proxi_eau_fast, i.e. proximity to fresh water, over all of France. We used QGIS to rasterize, at a 12.5 m resolution and with a buffer of 50 m, the shapefile COURS_D_EAU.shp on one hand, and the polygons of SURFACES_HYDROGRAPHIQUES.shp with attribute NATURE="Eau douce permanente" on the other hand. We then created the cell-wise maximum of the two rasters (so a value of 1 corresponds to an approximate distance of less than 50 m to a watercourse or a fresh-water hydrographic surface). The raster is in the directory named "proxi_eau_fast".
USGS Digital Elevation Data: The Shuttle Radar Topography Mission, flown in 2000 aboard the Space Shuttle Endeavour, measured elevation at three arc-second resolution over most of the Earth's surface. Raw measurements were post-processed by NASA and the NGA to correct detection anomalies. The data is available from the U.S. Geological Survey and downloadable from EarthExplorer (https://earthexplorer.usgs.gov/). One may refer to https://www.usgs.gov/centers/eros/science/usgs-eros-archive-digital-elevation-shuttle-radar-topography-mission-srtm-void?qt-science_center_objects=0#qt-science_center_objects for more information. The elevation raster is in the directory named "alti".
Potential Evapotranspiration of CGIAR-CSI ETP: The CGIAR-CSI distributes this worldwide monthly potential evapotranspiration raster data. It is derived from a model developed by Antonio Trabucco [4, 5]. Values are estimated by the Hargreaves formula, using mean monthly surface temperatures and standard deviations from WorldClim 1.4 (http://www.worldclim.org/), and top-of-atmosphere radiation. The raster is at 1 km resolution and is freely downloadable for non-profit use at http://www.cgiar-csi.org/data/global-aridity-and-pet-database#description. This raster is in the directory "etp".
Bioclimatic Descriptors of CHELSA Climate Data 1.1: These are raster data with worldwide coverage at 1 km resolution. A mechanistic climatic model is used to make spatial predictions of monthly mean, maximum, and minimum temperatures, mean precipitation, and 19 bioclimatic variables, which are downscaled with statistical models integrating historical measurements from meteorological stations from 1979 to today. The exact method is explained in the reference papers [6] and [7]. The data is under a Creative Commons Attribution 4.0 International License and downloadable at http://chelsa-climate.org/downloads/. The 19 bioclimatic rasters are located in the directories named "chbio_X".
ROUTE500 1.1: This database registers classified road links between cities (highways, national roads, and departmental roads) in France in shapefile format, representing approximately 500,000 km of roads. It is produced under a free license (all uses) by the IGN. Data are available online at http://osm13.openstreetmap.fr/~cquest/route500/. To derive the variable "droute_fast", the distance to the main road network, we computed with QGIS the distance raster to the union of all elements (segments) of the shapefile ROUTES.shp.
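A distance raster of this kind can be sketched with a breadth-first search over grid cells. This is a simplified, dependency-free stand-in for the QGIS proximity tool: it counts 4-neighbour steps rather than true Euclidean distance, which is enough to show the idea.

```python
from collections import deque

def distance_raster(road_mask):
    """Steps (4-neighbour) from each cell to the nearest road cell (value 1)."""
    rows, cols = len(road_mask), len(road_mask[0])
    dist = [[None] * cols for _ in range(rows)]
    queue = deque()
    for r in range(rows):
        for c in range(cols):
            if road_mask[r][c] == 1:
                dist[r][c] = 0
                queue.append((r, c))
    # Multi-source BFS: expands outward from all road cells at once.
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

roads = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(distance_raster(roads))  # [[1, 1, 2], [0, 0, 1], [1, 1, 2]]
```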
References:
[1] Panagos, P. (2006). The European soil database. GEO: connexion, 5(7), 32–33.
[2] Panagos, P., Van Liedekerke, M., Jones, A., Montanarella, L. (2012). European Soil Data Centre: Response to European policy support and public data requirements. Land Use Policy, 29(2), 329–338.
[3] Van Liedekerke, M., Jones, A. & Panagos, P. (2006). ESDBv2 Raster Library - a set of rasters derived from the European Soil Database distribution v2.0. European Commission and the European Soil Bureau Network, CDROM, EUR, 19945.
[4] Zomer, R., Bossio, D., Trabucco, A., Yuanjie, L., Gupta, D. & Singh, V. (2007). Trees and water: smallholder agroforestry on irrigated lands in Northern India.
[5] Zomer, R., Trabucco, A., Bossio, D. & Verchot, L. (2008). Climate change mitigation: A spatial analysis of global land suitability for clean development mechanism afforestation and reforestation. Agriculture, Ecosystems & Environment, 126(1), 67–80.
[6] Karger, D. N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W. & Kessler, M. (2016). Climatologies at high resolution for the earth's land surface areas. arXiv preprint arXiv:1607.00217.
[7] Karger, D. N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W. & Kessler, M. (2016). CHELSA climatologies at high resolution for the earth's land surface areas (Version 1.1).
The data are derived from interpretation of seismic reflection profiles within the offshore Corinth Rift, Greece (the Gulf of Corinth), integrated with scientific ocean drilling borehole data from IODP Expedition 381 (McNeill et al., 2019a, 2019b). The data include rift fault coordinate (location, geometry) information and slip rate and extension rate information for the major faults. Seismic reflection data were published in Taylor et al. (2011) and in Nixon et al. (2016). Preliminary fault interpretations and rate data, prior to IODP drilling, were published in Nixon et al. (2016). Details of datasets: The data can be viewed in GIS software (ArcGIS, QGIS), or the Excel and .dbf files can be used for viewing rate data and importing fault coordinates into other software. The four folders correspond to different time periods, with shapefiles for the N-dipping and S-dipping faults in the offshore Corinth Rift and their respective slip and extension (horizontal) rates. The shapefiles are digitised fault traces for the basement-offsetting faults, picked from the multichannel seismic data collected by the R/V Maurice Ewing. Fault traces are segmented, and each segment has an average throw (vertical) rate (Tavg) in mm/yr. The rates for the segments are averages based on measurements at the ends of each segment. The major fault trace segments also have slip rates (slip_rate) and extension rates (ext_rate or extension_) in mm/yr. All rates, as well as the names of major faults, can be found in the attribute table of the shapefiles along with X- and Y-coordinates. The coordinate system is WGS84 UTM Zone 34N. The shapefiles can be loaded into a GIS (ArcGIS, QGIS, etc.), allowing mapping and visualization of the fault traces and their activity rates. In addition, the attribute tables are .dbf files found within each folder; these have also been provided as .xlsx (Excel) files, which include the fault coordinate information and the slip and extension rates along the major faults.
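The per-segment averaging described above (each segment's average throw rate computed from measurements at its two ends) can be sketched as follows. The values are hypothetical; only the field name Tavg follows the attribute table described in the text.

```python
def segment_tavg(end_rates):
    """Average throw rate (mm/yr) per fault segment from its two end measurements.

    end_rates maps a segment id to the (start, end) throw-rate pair;
    the segment's Tavg is the mean of the two end values.
    """
    return {seg: (a + b) / 2 for seg, (a, b) in end_rates.items()}

# Hypothetical end-point throw rates (mm/yr) for two fault segments.
end_rates = {"seg_1": (1.0, 1.4), "seg_2": (0.6, 1.0)}
print(segment_tavg(end_rates))  # {'seg_1': 1.2, 'seg_2': 0.8}
```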
References:
McNeill, L.C., Shillington, D.J., Carter, G.D.O., and the Expedition 381 Participants, 2019a. Corinth Active Rift Development. Proceedings of the International Ocean Discovery Program, 381: College Station, TX (International Ocean Discovery Program).
McNeill, L.C., Shillington, D.J., et al., 2019b. High-resolution record reveals climate-driven environmental and sedimentary changes in an active rift. Scientific Reports, 9, 3116.
Nixon, C.W., McNeill, L.C., Bull, J.M., Bell, R.E., Gawthorpe, R.L., Henstock, T.J., Christodoulou, D., Ford, M., Taylor, B., Sakellariou, S. et al., 2016. Rapid spatiotemporal variations in rift structure during development of the Corinth Rift, central Greece. Tectonics, 35, 1225–1248.
Taylor, B., J. R. Weiss, A. M. Goodliffe, M. Sachpazi, M. Laigle, and A. Hirn (2011). The structures, stratigraphy and evolution of the Gulf of Corinth Rift, Greece. Geophys. J. Int., 185(3), 1189–1219.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This directory provides the source code, raw data and raw results for the paper titled "3D Walking Accessibility in Practice: Exploring the Imperfections from Data, Method, and Assumptions of Human-Space Interaction" from the International Journal of Geographical Information Science. This workspace includes the R scripts for preprocessing the data, conducting the analysis, and producing the results. The "data" folder includes all of the raw data; the "Readme.txt" inside each subfolder gives the data sources. The "Preprocessed_data" folder within the "data" folder contains all preprocessed data generated by the R script "DataPreprocess.qmd", which records the workflow of preprocessing the data for analysis. The "rFunction.qmd" file records all customised R functions for this study. The "Analysis01_cal_access.qmd" and "Analysis02_formalAnalysis.qmd" files record the R code for conducting the analysis and producing the results. Please first run the code in "rFunction.qmd" to reproduce the data preprocessing and data analysis. "workspace.Rproj" is the R workspace for conducting the computation; you can open this workspace and import the R scripts from "rFunction.qmd", "DataPreprocess.qmd", "Analysis01_cal_access.qmd", and "Analysis02_formalAnalysis_update.qmd". The "result" folder comprises all raw figures, tables, and GIS data. Because R cannot produce the 3D map, the "Qgis" folder comprises the QGIS files for visualising and mapping the produced GIS data; we used the Qgis2threejs plugin to visualise the 3D scene.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Instructions (with screenshots) to replicate results from Section 3 of the manuscript are available in "Step-by-step Instructions to Replicate Results.pdf".
Step 1: Download the replication materials
Download the whole replication folder on figshare containing the code, data and replication files.
Step 2: Replicate Tables in Section 3
All of the data is available inside the sub-folder replication/Data. To replicate Tables 1 and 2 from Section 3 of the manuscript, run the Python file replicate_section3_tables.py locally on your computer. This will produce two .csv files containing Tables 1 and 2 (already provided). Note that it is not necessary to run the code in order to replicate the tables; the output data needed for replication is provided.
Step 3: Replicate Figures in QGIS
The figures must be replicated using QGIS, freely available at https://www.qgis.org/. Open the QGIS project replicate_figures.qgz inside the replication/Replicate Figures sub-folder. It should auto-find the layer data. The figures are replicated as layers in the project.
Step 4: Running the code from scratch
The accompanying code for the manuscript IJGIS-2024-1305, entitled "Route-based Geocoding of Traffic Congestion-Related Social Media Texts on a Complex Network", runs on Google Colab as Python notebooks. Please follow the instructions below to run the entire geocoder and network mapper from scratch. The expected running time is of the order of 10 hours on free-tier Google Colab.
4a) Upload to Google Drive
Upload the entire replication folder to your Google Drive and note the path (location) to which you have uploaded it. There are two Google Colab notebooks that need to be executed in their entirety: Code/Geocoder/The_Geocoder.ipynb and Code/Complex_Network/Complex_network_code.ipynb. They need to be run in order (Geocoder first and Complex Network second).
4b) Set the path
In each Google Colab notebook, set the variable "REPL_PATH" to the location on your Google Drive where you uploaded the replication folder. Include the replication folder itself in the path, for example "/content/drive/MyDrive/replication".
4c) Run the code
The code is available in two sub-folders, replication/Code/Geocoder and replication/Code/Complex_Network. You may simply open the Google Colab notebooks inside each folder, mount your Google Drive, set the path and run all cells.
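Steps 4a-4c amount to a short preamble at the top of each notebook. A sketch is below; the drive.mount call is Colab-only and therefore commented out here, and only REPL_PATH is a variable named in the instructions (the derived directory names are illustrative).

```python
import os

# In Colab, first mount your Drive (uncomment inside a notebook):
# from google.colab import drive
# drive.mount("/content/drive")

# Point REPL_PATH at the uploaded replication folder (include the folder itself).
REPL_PATH = "/content/drive/MyDrive/replication"

# Notebook code can then build paths to the two code sub-folders:
geocoder_dir = os.path.join(REPL_PATH, "Code", "Geocoder")
network_dir = os.path.join(REPL_PATH, "Code", "Complex_Network")
print(geocoder_dir)  # /content/drive/MyDrive/replication/Code/Geocoder
```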
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data package contains supplementary materials related to the development and validation of the Global Urban Heat Vulnerability Index (GUHVI), conducted for 8 Australian capital cities and 9 diverse cities worldwide. This research was initiated by the Global Observatory of Healthy and Sustainable Cities for the 1000 Cities Challenge and for inclusion in the Global Healthy & Sustainable City Indicators (GHSCI) open-source software. This data package contains the following:
- A Jupyter Notebook hosting a custom Python script that uses the Google Earth Engine API and the geemap API to generate an overall heat vulnerability raster, three sub-index rasters, and ten input rasters.
- POPD, SHDI, and IMR input rasters. The remaining inputs to the GUHVI are automatically fetched from cloud storage as the script runs.
- A GUHVI and iHVI visual comparison as a .jpg file.
- A folder for each of the 8 Australian cities involved in the study, containing the 14 GUHVI rasters, the urban centre boundary in .shp format, and the hottest third of the year date range in .txt format in the 'GUHVI Outputs' folder. The 'iHVI Outputs' folder contains the input LST, NDVI, and NDBI .csv files, and the output heat vulnerability and sub-index .csv files. The 'QGIS Data' folder contains the SA1 .shp files attributed with heat vulnerability scores, and the iHVI and GUHVI comparison rasters.
- A folder for each of the 9 international cities involved in the study, containing the 14 GUHVI rasters, the urban centre boundary in .shp format, the hottest third of the year date range in .txt format, and the QGIS project file provided to collaborators for validation.
- R scripts for generating the normalized mean results, the population percentage per heat vulnerability class table, and the combined box, half-violin and strip plots for each city.
- The instructional video provided to collaborators as a .mp4 file, which outlines how to navigate the QGIS project and how to access and record comments in the live spreadsheet.
- The complete validation spreadsheet with comments, included as a .pdf file.
- Supplementary tables listing the urban centre boundary source files for each city, and the OpenStreetMap data source used to perform the coastal pixel overlap methodology, as a .pdf file.
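As an illustration of the kind of per-layer normalization behind the "normalized mean results", here is a generic min-max sketch with an equal-weight mean across sub-indices. This is not the GUHVI authors' exact formula, and the input values are invented.

```python
def minmax_normalize(values):
    """Rescale values to [0, 1]; constant input maps to all zeros."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def combined_index(sub_indices):
    """Equal-weight mean of already-normalized sub-index values, per cell."""
    return [sum(cell) / len(cell) for cell in zip(*sub_indices)]

exposure    = minmax_normalize([30, 35, 40])    # e.g. an LST-derived layer
sensitivity = minmax_normalize([0.2, 0.6, 1.0])
print(combined_index([exposure, sensitivity]))  # [0.0, 0.5, 1.0]
```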
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GIS datasets that were used in the study. The GIS data can be opened in either QGIS or ArcGIS. The files are separated into the folders 'CGF' and 'BGF-DPGF'. The root folders contain the research areas (ROI) and locations. The 'Indicator mineral' folder contains the initial Mineral Mapping raster of the five target minerals (Alu., Chd., Hem., Kln., and Opl.), which were converted to point data for the Two-Class Mineral Maps (including a Non-Prediction dataset for areas without mapped minerals). The folder also includes normalized Mineral Density Maps. The 'Fault' folder contains the initial Fault Traces, Fault Distance Maps, and normalized Fault Density Maps. The 'LST' folder contains normalized Multiclass Temperature Maps, and Two-Class Temperature Maps that include converted point data for both high-temperature (HT) and low-temperature (LT) areas.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Science Case in the Caribbean region presents records of landslides and precipitation, maps used as inputs to hazard models, and drone imagery over the region of interest.
For the Caribbean case study, an analysis of open and proprietary satellite-based datasets was used to facilitate the setup and evaluation of physically-based multi-hazard models. These allow for qualification and quantification of spatio-temporal multi-hazard patterns, and form a crucial input into the general hazard and risk assessment workflow.
Presented here are the datasets employed for Case Study 4 in Deliverable D3.1 with a short description, produced and saved within the following folders:
Dominica_landslide: the landslide datasets mapped by ITC using high-resolution satellite imagery, intended to calibrate and validate the flood and landslide modelling. The folder contains four shapefiles:
· Landslide_Part.shp - Shapefile containing landslide extents, flash flood extents, and their attributes.
· Cloud.shp - Shapefile representing the cloud-covered areas in the satellite imagery where no mapping was possible.
· The other two shapefiles are self-explanatory.
GPM_Maria: NASA Global Precipitation Measurement (GPM) mission precipitation maps processed for model input in LISEM. GPM is a hybrid fusion of satellite datasets for precipitation estimates. Meant as input data to represent precipitation in the landslide and flood modelling.
Maps_Models_Input: Soil, land use and channel maps, largely custom work based on SOILGRIDS and SPOT image classification; all the datasets are ready for model input to OpenLISEM, LISEM Hazard or FastFlood. The dataset is meant to calibrate and validate the flood and landslide modelling.
The raster files are either in GeoTIFF format or PCRaster map format; both can be opened by GIS software such as QGIS (via GDAL). The projection of each file is UTM zone 20N.
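If you need to check programmatically which of the two formats a file is in, the magic bytes give a quick answer. A minimal sketch: GeoTIFF files start with the standard TIFF signature, and anything else here is assumed to be a PCRaster map, which is a simplification for illustration.

```python
def looks_like_tiff(path):
    """True if the file starts with a TIFF signature (GeoTIFF included)."""
    with open(path, "rb") as f:
        head = f.read(4)
    return head in (b"II*\x00", b"MM\x00*")  # little- / big-endian TIFF

# Demo: write a minimal little-endian TIFF header and test it.
with open("demo.tif", "wb") as f:
    f.write(b"II*\x00" + b"\x00" * 4)
print(looks_like_tiff("demo.tif"))  # True
```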
Some key files are:
StakeholderQuestionnaire_Survey_ITC: The stakeholder questionnaires, particularly relating to the tools on rapid hazard modelling developed partly by this project. The Stakeholder Engagement survey and Stakeholder Survey Results were prepared and implemented by Sruthie Rajendran as part of her MSc thesis "Twin Framework For Decision Support In Flood Risk Management", supervised by Dr. M.N. Koeva (Mila) and Dr. B. van den Bout (Bastian), submitted in July 2024.
· Drone_Images_2024: Images captured using a DJI drone over part of the study area in February 2024. The folder covers three different regions: Coulibistrie, Pichelin and Point Michel. The 3D models for Coulibistrie were generated from the nadir drone images using photogrammetric techniques in the Pix4D software. The image coordinate system is WGS 84 (EGM 96 Geoid), but the output coordinate system of the 3D model is WGS 84 / UTM zone 20N (EGM 96 Geoid). The other two folders contain only the drone images captured for Pichelin and Point Michel respectively. The dataset is used together with the other datasets to prepare the digital twin framework tailored for flood risk management in the study area.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is part of the 2021 UN Open GIS Challenge 1 - Training on Satellite Data Analysis and Machine Learning with QGIS (Satellite_QGIS), Exercise 1: Supervised Change Detection: Monitoring deglaciation in Huascaran, Peru.
The folder structure is the following:
Clip: clipped images to the region of interest
Images: original images from Landsat 8, Sentinel-1 and Sentinel-2 satellites.
Preprocess: pre-processed images.
Reports: classification reports of the generated masks.
Results: classification maps.
RGB_Compositions: true color RGB compositions.
Stacks: multiband rasters with all bands stacked from Landsat 8 satellite.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database contains measurements and datasets acquired in Caribou Bog, ME, USA, over a 21-year period. Most of this work was supported by a sequence of grants from the Hydrologic Sciences program of the National Science Foundation (NSF). The datasets were acquired to investigate peatland hydrology and methane cycling in this ombrotrophic peatland. The datasets include coring, hydrological and aqueous chemistry measurements, geophysical measurements, and methane chamber flux measurements. Data included in this database have been reported in a series of publications by the authors of this resource spanning 2002-2020. The database will continue to evolve as work on Caribou Bog by the collaborators continues. The database includes a set of QGIS files that can be used to help visualize many elements of the database.
The extensive files and datasets are organized into folders based around key types of data. The entire set of folders for each dataset is uploaded as a single ZIP file. Users should download the large zip files and extract the folders/subfolders into a master directory. QGIS files are provided to assist with visualizing many of the datatypes in the database.
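The download-and-extract step described above can be scripted. The sketch below is a generic illustration using only the Python standard library; the directory names are placeholders, not the actual dataset file names.

```python
import zipfile
from pathlib import Path

def extract_all(download_dir: str, master_dir: str) -> list:
    """Extract every .zip in download_dir into master_dir, preserving folder structure."""
    master = Path(master_dir)
    master.mkdir(parents=True, exist_ok=True)
    extracted = []
    for zip_path in sorted(Path(download_dir).glob("*.zip")):
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(master)  # folders/subfolders land in the master directory
        extracted.append(zip_path.name)
    return extracted
```

Pointing the QGIS project files at the resulting master directory then keeps the relative layer paths intact.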
Datasets associated with the report titled 'Shore Channel Sedimentary Processes, Passability by Migrating Fish and Habitat Suitability'.
Summary: The current study consists of three parts: (1) an analysis of the sedimentary processes in the shore channels along the longitudinal dams (LTDs) in the River Waal, (2) an assessment of the upstream passability of the shore channel inflows by migratory fish species, and (3) an analysis of the habitat suitability of the shore channels for fish, macroinvertebrates and macrophytes. For the first analysis, light detection and ranging (LiDAR), multibeam echosounder (MBES), and aerial photograph datasets were used to examine geomorphological processes (erosion and deposition), calculate the retreat rate of eroding banklines, and analyze the development of shoreline length over time in the mesohabitats of shore channels and reference study areas. The second part focused on the use of acoustic Doppler current profiler (ADCP) datasets to produce 3D lattices of flow velocity in the inflow openings of shore channels at high river discharge. This was combined with data and linear relations from the scientific literature on the swimming performance of relevant migratory fish species in the Rhine. The third part consisted of assessing the habitat suitability of the shore channels with the data on substrate, water depth and flow velocity collected in 2020. This was done using the species sensitivity distributions (SSDs) available in the scientific literature for fish, macroinvertebrates and macrophytes occurring in the Rhine. The substrate maps produced in May 2020 were compared with substrate maps of April 2019.
The main conclusions of the studies are:
1. The shore channels of the LTDs showed a pattern of aggradation of the bed towards the dams and degradation towards the bank. From 2015 to 2019 there was net sediment loss in all three shore channels, with Wamel having the least and Dreumel the most. Compared to the groyne field areas, the Wamel shore channel is almost stable. The eroding banklines had a retreat rate of 1.6 m/y, and the sand-dominated mesohabitats in the shore channels had longer shorelines.
2. Larger juveniles (TL = 70 mm) of fish species occurring in the Rhine passed some of the study sites during high discharge conditions and performed better during average discharge conditions. Adult fish had no problems passing the inflow openings, with the exception of Gasterosteus aculeatus aculeatus. Fish species were able to pass all 10 3D lattices produced once they reached a minimum TL of about 165 mm. The inflow of the Ophemert shore channel was the least passable of the LTDs.
3. The habitats in the center and bank lines of the shore channels were most suitable for all species groups studied because of substrate heterogeneity, shallow water and relatively low flow velocities. All three shore channels had more mixed substrate types in 2020 than in 2019.
The dataset includes:
1. Sedimentary Processes.zip, which includes:
a. ErosionDeposition.zip: GeoTIFF files of the erosion and deposition analysis results for all periods (file name: SubtractionYear1_Year2Location_Channel or vegetated bank (VegBank).tif; coordinate system: Amersfoort / RD New; opens with ArcGIS or QGIS).
b. ErodedBankline.zip: manually digitized bankline shapefiles per year (file name: Location_Year.shp; coordinate system: Amersfoort / RD New; opens with ArcGIS or QGIS).
c. ShorelineLength.zip: all shoreline length analysis shapefiles (file name: MonthYear_Mesohabitats_NoStony if boulder areas are not included.shp; coordinate system: Amersfoort / RD New; opens with ArcGIS or QGIS).
d. DTMs.zip: GeoTIFF files of the combined LiDAR and MBES DTMs produced (file name: Location_rYear_NN (gridding method Natural Neighbor).tif; coordinate system: Amersfoort / RD New; opens with ArcGIS or QGIS).
2. Passability.zip, which includes:
a. Passability_Lattices_FishSpecies.zip: images (.png; opens with Photos) of the 3D lattices of the final swimming speed for all fish species assessed.
b. FlowVelocity.zip: images (.png; opens with Photos) of the 3D lattices of the flow velocities per year and location assessed.
3. FINAL_Rasters.zip: GeoTIFF files (.tif; reference coordinate system: WGS84; opens with ArcGIS or QGIS) for the water depth (Depth_Clipped_WGS84_March2020 folder), flow velocity (Flow Velocity_Clipped_WGS84) and substrate types for all study sites. Substrate files have the Potentially Occurring Fraction of EPT macroinvertebrates (Substrate_EPTs_WGS84 folder) or mussels (Substrate_Mussels_WGS84 folder) as cell values.
4. SubstrateClassification.zip: all substrate classification polygon shapefiles (reference coordinate system: WGS84; opens with ArcGIS or QGIS).
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Source:
This dataset supports the publication: Martin, H.K., Edmonds, D.A., Yanites, B.J. & Niemi, N.A. (2024) Quantifying landscape change following catastrophic dam failures in Edenville and Sanford, Michigan, USA. Earth Surf. Process. Landforms, Available from: https://doi.org/10.1002/esp.5855. All of the details about how these data were collected and processed are provided in that paper, particularly in the Supporting Information.
Brief Summary:
On May 19, 2020, the Edenville and Sanford dams near Midland, Michigan, USA failed as the result of significant rainfall over the preceding two days. We analyzed the geomorphic impacts of these failures using a pre-failure airborne lidar dataset and three UAV-based lidar surveys collected two weeks ("2020-06"), three months ("2020-08"), and eleven months ("2021-04") post-failure. The pre-failure airborne lidar dataset was merged from two datasets hosted on OpenTopography as part of the USGS 3DEP project; these are linked in the Supporting Information document for the above manuscript. This upload contains data from the three UAV-based surveys we collected.
Structure/Contents:
The .zip file is structured first by dates, with each folder corresponding to one of our three field campaigns.
Within each date folder, there are two folders: one for data collected at Edenville and another for data collected at Sanford.
Within each location folder, there are two items.
1. One is a .las file containing the ground-classified point cloud data. For Edenville, these point clouds are merged from data collected at multiple launch locations. The point cloud file has had corrections applied to it; it corresponds to the output of the processing described in the Supporting Information through the end of Section 2.3: Lidar processing. These files can be opened and worked with using almost any software that handles point clouds; users preferring free and open-source software could consider CloudCompare [https://cloudcompare.org/].
2. The second is a folder containing DEMs, which are raster images derived from the point cloud data in the corresponding location folder. Each pixel contains the elevation of that area in meters above sea level. Two copies are provided: one with the pixel size set to 50 cm by 50 cm, and the other set to 1 m by 1 m. These were created by constructing a triangular lattice between ground-classified points in the point cloud and then sampling the mesh. These maps are provided for convenience for users who prefer to work with raster data over point clouds or would otherwise prefer not to have to rasterize point cloud data themselves. These files can be opened and worked with using any GIS software; users preferring free and open source software could consider using QGIS [https://www.qgis.org/en/site/]. The CRS is NAD83 UTM Zone 16N.
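The DEM construction described above (triangulating the ground-classified points and sampling the resulting mesh) can be illustrated with a short sketch. This is not the authors' processing code; it uses synthetic points and SciPy's Delaunay-based linear interpolator as a stand-in for the same idea.

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic stand-in for ground-classified lidar points: (x, y) positions and elevations z.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 100.0, size=(500, 2))
z = 200.0 + 0.1 * xy[:, 0] + 0.05 * xy[:, 1]  # a simple sloping "terrain"

# Sample the triangulated surface on a regular grid (here 1 m pixels, as in the 1 m DEMs).
xi = np.arange(5.0, 95.0, 1.0)
yi = np.arange(5.0, 95.0, 1.0)
gx, gy = np.meshgrid(xi, yi)
dem = griddata(xy, z, (gx, gy), method="linear")  # linear interpolation over a Delaunay mesh
```

Grid cells falling outside the convex hull of the points come back as NaN, which is why real rasterizations clip the DEM to the surveyed extent.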
Citation/License:
Please use our data! If you do so and any published work results, we ask that you cite the following manuscript as well as this Zenodo dataset:
Martin, H.K., Edmonds, D.A., Yanites, B.J. & Niemi, N.A. (2024) Quantifying landscape change following catastrophic dam failures in Edenville and Sanford, Michigan, USA. Earth Surf. Process. Landforms, Available from: https://doi.org/10.1002/esp.5855.
These data are released under the CC BY-NC-SA 4.0 license [https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en].
In plain language, it means that you should feel free to download these data and use or modify them for whatever purpose you wish, as long as you i) attribute us as original authors of the dataset, ii) do not use them for commercial purposes, and iii) use the same license for any derivative works you create using these data.
For any questions or concerns, please don't hesitate to reach out to Harrison Martin at hkm@caltech.edu, or via https://harrison.studies.rocks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here. Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
Python tools to read, generate, and visualize the dataset,
3DHDNet deep learning pipeline (training, inference, evaluation) for map deviation detection and 3D object detection.
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
Benjamin Sertolli: Major contributions to our DevKit during his master thesis
Niels Maier: Measurement campaign for data collection and data preparation
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
Map items are stored as lists of items in JSON format. In particular, we provide:
traffic signs,
traffic lights,
pole-like objects,
construction site locations,
construction site obstacles (point-like such as cones, and line-like such as fences),
line-shaped markings (solid, dashed, etc.),
polygon-shaped markings (arrows, stop lines, symbols, etc.),
lanes (ordinary and temporary),
relations between elements (only for construction sites, e.g., sign to lane association).
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation for each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shapefile" (a format commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format used in our Python API.
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
x-coordinates: 4 byte integer
y-coordinates: 4 byte integer
z-coordinates: 4 byte integer
intensity of reflected beams: 2 byte unsigned integer
ground classification flag: 1 byte unsigned integer
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['int32', 'int32', 'int32', 'uint16', 'uint8']

# Read the 4-byte point count, then one array per attribute,
# following the binary layout described above.
with open(file_path, 'rb') as f:
    num_points = int(np.fromfile(f, dtype='int32', count=1)[0])
    for key, dtype in zip(key_list, type_list):
        pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)
Thickness of Quaternary deposits in Denmark, calculated as the difference between terrain (2008 terrain model from Geodatastyrelsen) and depth to the top of the pre-Quaternary surface (GEUS map). Data are delivered as an ArcGIS Pro map package file and a folder with files for QGIS.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This zip file contains thirteen thermal orthomosaics depicting an aerial view of eight flying-fox (Pteropus spp.) colonies throughout the Greater Sydney region, Australia. The zip file contains thirteen folders, each containing a georeferenced thermal orthomosaic depicting a flying-fox colony, as well as four semi-automatically classified image products that classify the thermal image into two classes: flying-fox and background. The number of image segments classified as flying-fox was counted to give an estimate of the number of flying-foxes in the original thermal orthomosaic. Each thermal orthomosaic was classified using (1) a computer vision pipeline (denoted CV), (2) object-based image segmentation with Random Forest classification (denoted RF), (3) object-based image segmentation with Support Vector Machines classification (denoted SVM), and (4) object-based image segmentation with Maximum Likelihood classification (denoted ML).
Each folder is named with the colony location and the date of image capture; for example, 'CamelliaGardens200220' contains an orthomosaic and classified image products depicting the Camellia Gardens flying-fox colony on the 20th of February 2020. Within each folder, images are named with their type, colony location and date of image capture. For example, 'CVCamelliaGardens200220' is the computer vision pipeline classified orthomosaic of the Camellia Gardens colony on the 20th of February 2020. This file also contains a fourteenth folder entitled 'Orthomosaic precision study', which contains orthomosaics and other material pertaining to the precision assessment presented in this paper.
All thermal orthomosaics and classified image products contained in this file are .tif files (the orthomosaics are georeferenced) and can be viewed in Geographical Information Systems software such as ArcGIS, QGIS or R, or in any .tif image viewing software. For all colonies depicted here, landowner permission was obtained prior to drone flights.
Public Domain Mark: https://creativecommons.org/share-your-work/public-domain/pdm
This collection consists of geospatial data layers and summary data at the country and country sub-division levels that are part of USAID's Demographic Health Survey Spatial Data Repository. This collection includes geographically linked health and demographic data from the DHS Program and the U.S. Census Bureau for mapping in a geographic information system (GIS). The data include indicators related to fertility, family planning, maternal and child health, gender, HIV/AIDS, literacy, malaria, nutrition, and sanitation. Each set of files is associated with a specific health survey for a given year for over 90 different countries that were part of the following surveys:
Demographic Health Survey (DHS)
Malaria Indicator Survey (MIS)
Service Provisions Assessment (SPA)
Other qualitative surveys (OTH)
Individual files are named with identifiers that indicate country, survey year, survey, and in some cases the name of a variable or indicator. A list of the two-letter country codes is included in a CSV file. Datasets are subdivided into the following folders:
Survey boundaries: polygon shapefiles of administrative subdivision boundaries for countries used in specific surveys.
Indicator data: polygon shapefiles and geodatabases of countries and subdivisions with 25 of the most common health indicators collected in the DHS; estimates generated from survey data.
Modeled surfaces: geospatial raster files that represent gridded population and health indicators generated from survey data, for several countries.
Geospatial covariates: CSV files that link survey cluster locations to ancillary data (known as covariates) on topics including population, climate, and environmental factors.
Population estimates: spreadsheets and polygon shapefiles for countries and subdivisions with 5-year age/sex group population estimates and projections for 2000-2020 from the U.S. Census Bureau, for designated countries in the PEPFAR program.
Workshop materials: a tutorial with sample data for learning how to map health data using DHS SDR datasets with QGIS.
Documentation that is specific to each dataset is included in the subfolders, and a methodological summary for all of the datasets is included in the root folder as an HTML file. File-level metadata is available for most files. Countries for which data are included in the repository: Afghanistan, Albania, Angola, Armenia, Azerbaijan, Bangladesh, Benin, Bolivia, Botswana, Brazil, Burkina Faso, Burundi, Cape Verde, Cambodia, Cameroon, Central African Republic, Chad, Colombia, Comoros, Congo, Congo (Democratic Republic of the), Cote d'Ivoire, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Eswatini (Swaziland), Ethiopia, Gabon, Gambia, Ghana, Guatemala, Guinea, Guyana, Haiti, Honduras, India, Indonesia, Jordan, Kazakhstan, Kenya, Kyrgyzstan, Lesotho, Liberia, Madagascar, Malawi, Maldives, Mali, Mauritania, Mexico, Moldova, Morocco, Mozambique, Myanmar, Namibia, Nepal, Nicaragua, Niger, Nigeria, Pakistan, Papua New Guinea, Paraguay, Peru, Philippines, Russia, Rwanda, Samoa, Sao Tome and Principe, Senegal, Sierra Leone, South Africa, Sri Lanka, Sudan, Tajikistan, Tanzania, Thailand, Timor-Leste, Togo, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Uganda, Ukraine, Uzbekistan, Viet Nam, Yemen, Zambia, Zimbabwe