Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This .las file contains sample LiDAR point cloud data collected by the National Ecological Observatory Network's Airborne Observation Platform. The .las format is a commonly used file format for storing LiDAR point cloud data. This teaching data set is used for several tutorials on the NEON website (neonscience.org). The data set is intended for educational purposes; data for research purposes can be obtained from the NEON Data Portal (data.neonscience.org).
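As a quick illustration of working with such .las files in Python, the sketch below uses the open-source laspy package (one reader among several); the file name is a placeholder, not part of the data set.
import laspy
# Placeholder file name; laspy must be installed separately (pip install laspy).
las = laspy.read("NEON_sample.las")
print(las.header.point_count)                    # number of points in the file
print(las.x[:5], las.y[:5], las.z[:5])           # first few scaled coordinates
print(list(las.point_format.dimension_names))    # per-point attributes stored in the file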
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This data set represents 3D point clouds acquired with LiDAR technology, and related files, from a subregion of approximately 150 × 436 m in the ancient Nakadake Sanroku Kiln Site Center in South Japan. It is a densely vegetated mountainous region with varied topography and vegetation. The data set contains the original point cloud (reduced from a density of 5477 points per square meter to 100 points per square meter), a segmentation of the area based on characteristics of vegetation and topography, filter pipelines for segments with different characteristics, and other necessary data. The data serve to test the AFwizard software, which can create a DTM from the point cloud with varying filter and filter-parameter selections based on varying segment characteristics (https://github.com/ssciwr/afwizard). AFwizard adds flexibility to ground point filtering of 3D point clouds, which is a crucial step in a variety of applications of LiDAR technology. Digital Terrain Models (DTMs) derived from filtered 3D point clouds serve various purposes; therefore, rather than creating one representation of the terrain that is supposed to be "true", a variety of models can be derived from the same point cloud according to the intended use of the DTM. The sample data were acquired during an archaeological research project in a mountainous and densely forested region in South Japan, the Nakadake-Sanroku Kiln Site Center: LiDAR data were acquired in a subregion of 0.5 sq. km, a relatively small area characterized by frequent and sudden changes in topography and vegetation. The point cloud is very dense due to the technology chosen (UAV multicopter GLYPHON DYNAMICS GD-X8-SP; LiDAR scanner RIEGL VUX-1 UAV). Use of the data requires citation of the article mentioned below. Version 2.01: 2023-05-11, article citation updated; 2022-07-21, documentation (HowTo - Minimal Workflow) updated, data files tagged.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
3D point cloud representing all physical features (e.g. buildings, trees and terrain) across the City of Melbourne. The data has been encoded into the .las file format, containing geospatial coordinates and RGB values for each point. The download is a zip file containing compressed .las files for tiles across the city area.
The geospatial data has been captured in the Map Grid of Australia (MGA) Zone 55 projection, which is reflected in the xyz coordinates within each .las file. Also included are RGB (Red, Green, Blue) attributes to indicate the colour of each point.
Capture Information
- Capture Date: May 2018
- Capture Pixel Size: 7.5cm ground sample distance
- Map Projection: MGA Zone 55 (MGA55)
- Vertical Datum: Australian Height Datum (AHD)
- Spatial Accuracy (XYZ): Supplied survey control used for control (Madigan Surveying), 25 cm absolute accuracy
Limitations: Whilst every effort is made to provide data that are as accurate as possible, the content may not be free from errors, omissions or defects.
Sample Data: For an interactive sample of the data please see the link below.
https://cityofmelbourne.maps.arcgis.com/apps/webappviewer3d/index.html?id=b3dc1147ceda46ffb8229117a2dac56d
Download: A zip file containing the .las files representing tiles of point cloud data across the City of Melbourne area. Download Point Cloud Data (4GB)
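As a hedged sketch of working with the MGA Zone 55 coordinates described above, the snippet below reprojects an easting/northing pair to longitude/latitude with pyproj. The GDA94 datum (EPSG:28355) is assumed here; if the tiles are referenced to GDA2020, EPSG:7855 would apply instead, and the example coordinates are illustrative only.
from pyproj import Transformer
# EPSG:28355 (GDA94 / MGA zone 55) is an assumption; the example easting/northing is not taken from the data.
transformer = Transformer.from_crs("EPSG:28355", "EPSG:4326", always_xy=True)
easting, northing = 320000.0, 5813000.0
lon, lat = transformer.transform(easting, northing)
print(lon, lat)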
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (Light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point Cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point ...
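As a minimal sketch of that lossless round trip, the snippet below uses the laspy package with a LAZ backend installed (e.g. laspy[lazrs]); the file names are placeholders.
import laspy
# Placeholder file names; reading .laz requires a LAZ backend such as lazrs or laszip.
las = laspy.read("tile.laz")     # decompress the LAZ tile into memory
las.write("tile.las")            # write the identical points as uncompressed LAS
las.write("tile_copy.laz")       # or recompress; no point information is lost either way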
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Point clouds of the Tram bridge in Schipluiden, Zuid-Holland were acquired for the project "Laser Scanning for Automatic Bridge Assessment" ("BridgeScan"), funded through an H2020 Marie Curie Individual Fellowship, and for an assignment of the course "3D Surveying of Civil and Offshore Infrastructure" in a Master's program at the TU Delft Dept. of Geoscience and Remote Sensing. The Tram bridge is a steel truss bridge that originally carried a tram transporting vegetables; it is now used for light traffic (mainly pedestrians, bikes and motorbikes). The data points of the bridge were acquired using a Leica ScanStation P40 from 14 stations, with a sampling step of 6.3 mm at a measurement range of 10.0 m. The point clouds from different scanning stations were registered using artificial targets in Leica Cyclone software. The point cloud of the bridge was used to reconstruct a 3D model and identify surface damage. The data set was cleaned of irrelevant points and down-sampled with a sampling step of 5 mm.
The Caerbannog Point Clouds provide point-sampled 3D models occluded in clouds of points. We synthesized the 3D point clouds from polygonal models, point-sampling the models and surrounding them with a point cloud such that the shape of the model is occluded in any 2D projection. We obscure our model-of-interest by repeatedly surrounding it with an amorphous cloud of points, giving the overall point cloud a structure of organic nature, like that of shrubbery. In a user study, participants were significantly better at identifying the models when visualized as 3D scatterplots under rotation than in axis-aligned 2D scatterplots. We provide three point clouds in both occluded and unoccluded versions: the Stanford Bunny, the Utah Teapot, and the OSG Cow.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set comprises three zip files containing text files of 3D information from terrestrial laser scanning (TLS) and unmanned aerial vehicle (UAV) aerial imagery for individual Scots pine trees within 27 sample plots at three test sites located in southern Finland.
TLS data acquisition was carried out with a Trimble TX5 3D laser scanner (Trimble Navigation Limited, USA) for all three study sites between September and October 2018. Eight scans were placed in each sample plot, and a scan resolution corresponding to a point distance of approximately 6.3 mm at a 10-m distance was used. Artificial constant-sized spheres (diameter 198 mm) were placed around the sample plots and used as reference objects for registering the eight scans into a single, aligned coordinate system. The registration was carried out with FARO Scene software (version 2018). Aerial images were obtained using a UAV with a Gryphon Dynamics quadcopter frame. Two Sony A7R II digital cameras were mounted on the UAV at +15° and -15° angles. Images were acquired every two seconds, and image locations were recorded for each image. The flights were carried out on October 2, 2018. For each study site, eight ground control points (GCPs) were placed and measured. A flying height of 140 m and a flying speed of 5 m/s were selected for all the flights, resulting in a ground sampling distance of 1.6 cm. A total of 639, 614, and 663 images were captured for study sites 1, 2, and 3, respectively, resulting in forward and side overlaps of 93% and 75%, respectively. Photogrammetric processing of the aerial images was carried out following the workflow presented in Viljanen et al. (2018). The processing produced photogrammetric point clouds for each study site with point densities of 804, 976, and 1030 points/m2 for study sites 1, 2, and 3, respectively.
The sample plots within the three test sites have been managed with different thinning treatments in either 2005 or 2006. The experimental design of the sample plots includes two levels of thinning intensity and three thinning types, resulting in six different thinning treatments, namely i) moderate thinning from below, ii) moderate thinning from above, iii) moderate systematic thinning, iv) intensive thinning from below, v) intensive thinning from above, and vi) intensive systematic thinning, as well as a control plot where no thinning has been carried out since establishment. More information about the study sites and sample plots as well as the thinning treatments can be found in Saarinen et al. (2020a).
The data set includes stem points of individual Scots pine trees extracted from the point clouds. More about the extraction method can be found in Saarinen et al. (2020a, 2020b) and Yrttimaa et al. (2020). The name of each zip file refers to study sites 1, 2, and 3. The name of each text file includes information on the test site, the plot within the test site, and the tree within the plot. The text files contain stem points extracted from the TLS point clouds. The columns "x" and "y" contain the x- and y-coordinates in a local coordinate system (in meters), column "h" contains the height of each point above ground in meters, and "treeID" is the tree identification number. The columns are separated by spaces.
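As an illustration of this layout, the snippet below loads one stem-point file with pandas; the file name is a placeholder, and a header row with the column names is assumed (if the files have no header row, pass header=None and names=["x", "y", "h", "treeID"] instead).
import pandas as pd
# Placeholder file name; assumes the first row holds the column names described above.
stem_points = pd.read_csv("site1_plot1_tree1.txt", sep=" ")
print(stem_points[["x", "y", "h", "treeID"]].head())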
Based on the study site and plot number, files from different thinning treatments can be identified by using the information in Table 1 in Saarinen et al. (2020b).
References
Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020a. Assessing the effects of stand dynamics on stem growth allocation of individual Scots pines. bioRxiv 2020.03.02.972521. https://doi.org/10.1101/2020.03.02.972521
Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020b. Detailed point cloud data on stem size and shape of Scots pine trees. bioRxiv 2020.03.09.983973. https://doi.org/10.1101/2020.03.09.983973
Viljanen, N., Honkavaara, E., Näsi, R., Hakala, T., Niemeläinen, O., Kaivosoja, J. 2018. A Novel Machine Learning Method for Estimating Biomass of Grass Swards Using a Photogrammetric Canopy Height Model, Images and Vegetation Indices Captured by a Drone. Agriculture 8: 70. https://doi.org/10.3390/agriculture8050070
Yrttimaa, T., Saarinen, N., Kankare, V., Hynynen, J., Huuskonen, S., Holopainen, M., Hyyppä, J., Vastaranta, M. 2020. Performance of terrestrial laser scanning to characterize managed Scots pine (Pinus sylvestris L.) stands is dependent on forest structural variation. EarthArXiv. March 5. https://doi.org/10.31223/osf.io/ybs7c
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper (published at ITSC 2022) is available here.
Further, we have applied 3DHD CityScenes to map deviation detection here.
Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank the following interns for their exceptional contributions to our work.
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
1. Dataset
This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json
json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
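The cropping step described above is performed by the DevKit itself; purely as an illustration of the idea, the hypothetical sketch below selects points inside a square window around a sample location, assuming numpy arrays of point coordinates and a sample dictionary with position keys named "x" and "y" (the real field names may differ).
import numpy as np
# Hypothetical illustration only, not the DevKit implementation.
def crop_around_sample(x, y, sample, half_size=25.0):
    mask = (np.abs(x - sample["x"]) <= half_size) & (np.abs(y - sample["y"]) <= half_size)
    return mask  # boolean mask selecting points within the window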
2. HD_Map
Map items are stored as lists of items in JSON format. In particular, we provide:
3. HD_Map_MetaData
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shapefile" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information, provided in a different format used in our Python API.
4. HD_PointCloud_Tiles
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk
file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
type_list = ['<u4', '<u4', '<u4', '<u2', '<u1']  # per-array dtypes were truncated in this description; these are assumptions
with open(file_path, 'rb') as f:
    num_points = np.fromfile(f, dtype='<i4', count=1)[0]  # first 4 bytes: point count
    for key, dtype in zip(key_list, type_list):
        pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)
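Building on the arrays read above, and assuming the coordinate values have already been unnormalized to metric units, the points could then be displayed with pptk roughly as follows.
# Illustrative only: visualize the (unnormalized) coordinates with pptk.
xyz = np.column_stack([pc_dict['x'], pc_dict['y'], pc_dict['z']]).astype(np.float64)
viewer = pptk.viewer(xyz)
viewer.attributes(pc_dict['intensity'])  # color points by intensity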
5. Trajectories
We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.
These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
- OP1 – OP5 cover the majority of the map with 5 trajectories.
- RH1 – RH10 cover the majority of the map with 10 trajectories.
Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
a 3-D image sensor
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Point clouds of the Constructie bridge on Westlandseweg in Delft were acquired for the project "Laser Scanning for Automatic Bridge Assessment" ("BridgeScan"), funded through an H2020 Marie Curie Individual Fellowship, and for an assignment of the course "3D Surveying of Civil and Offshore Infrastructure" in a Master's program at the TU Delft Dept. of Geoscience and Remote Sensing. The Constructie bridge is a concrete bridge carrying mixed vehicle and tram traffic, with a single span over a channel. The data points of the bridge were acquired using a Leica ScanStation P40 from 9 stations (5 below, 2 on top and 1 on each side), with a sampling step of 6.3 mm at a measurement range of 10.0 m. The point clouds from different scanning stations were registered using artificial targets in Leica Cyclone software. The point cloud of the bridge was used to reconstruct a 3D model and identify surface damage. The data set was down-sampled with a sampling step of 5 mm.
This portion of the data release presents topographic point clouds of the intertidal zone at Post Point, Bellingham Bay, WA. The point clouds were derived from structure-from-motion (SfM) processing of aerial imagery collected with an unmanned aerial system (UAS) on 2019-06-06. Two point clouds are presented with different resolutions: one point cloud (PostPoint_2019-06-06_pointcloud.zip) covers the entire survey area and has 145,653,2221 points with an average point density of 1,057 points per square meter; the other point cloud (PostPointHighRes_2019-06-06_pointcloud.zip) has 139,427,055 points with an average point density of 3,487 points per square meter and was derived from a lower-altitude flight covering an inset area within the main survey area. The point clouds are tiled to reduce individual file sizes and grouped within zip files for downloading. Each point in the point clouds contains an explicit horizontal and vertical coordinate, color, intensity, and classification. Water portions of the point cloud were classified using a polygon digitized from the orthomosaic imagery derived from these surveys (also available in this data release). No other classifications were performed. The raw imagery used to create these point clouds was acquired using a UAS fitted with a Ricoh GR II digital camera featuring a global shutter. The UAS was flown on pre-programmed autonomous flight lines spaced to provide approximately 70 percent overlap between images from adjacent lines. The camera was triggered at 1 Hz using a built-in intervalometer. For the main survey area point cloud, the UAS was flown at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample distance (GSD) of 1.8 centimeters per pixel. For the higher-resolution point cloud, the UAS was flown at an approximate altitude of 35 meters AGL, resulting in a nominal ground-sample distance (GSD) of 0.9 centimeters per pixel. The raw imagery was geotagged using positions from the UAS onboard single-frequency autonomous GPS. Nineteen temporary ground control points (GCPs) were distributed throughout each survey area to establish survey control. The GCPs consisted of a combination of small square tarps with black-and-white cross patterns and "X" marks placed on the ground using temporary chalk. The GCP positions were measured using post-processed kinematic (PPK) GPS, using corrections from a GPS base station located approximately 5 kilometers from the study area. The point clouds are formatted in LAZ format (LAS 1.2 specification).
Results for "classification of raw point clouds using deep learning & generating 3d building models"
https://choosealicense.com/licenses/odbl/
This dataset contains the 4D point cloud data from LiDAR sensors collected from fully autonomous and self-driving Indy race cars which raced in the Indy Autonomous Challenge. The dataset is in nuScenes format and is divided into 7,150 sweeps and 1,199 samples, which contain fused sensor data from the 3 LiDARs mounted on the vehicle. This dataset's scenario is the PoliMove team's Multi-Agent Slow scenario on the LVMS racetrack. Each .pcd file contains 4-dimensional data: the (x, y, z) coordinates of the 3D space and… See the full description on the dataset page: https://huggingface.co/datasets/suwesh/RACECAR-multislow_poli.
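As one hedged way to inspect an individual .pcd file, the sketch below uses the Open3D package with a placeholder file name; the fourth per-point value may not be exposed by this basic reader and could require the nuScenes devkit or a custom parser instead.
import numpy as np
import open3d as o3d
# Placeholder file name; Open3D reads the xyz geometry of a .pcd file.
pcd = o3d.io.read_point_cloud("sample_sweep.pcd")
points = np.asarray(pcd.points)   # N x 3 array of x, y, z coordinates
print(points.shape, points[:3])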
The goal of the USGS 3D Elevation Program (3DEP) is to collect elevation data in the form of light detection and ranging (LiDAR) data over the conterminous United States, Hawaii, and the U.S. territories, with data acquired over an 8-year period. This dataset provides two realizations of the 3DEP point cloud data. The first resource is a public access bucket provided in Entwine Point Tiles (EPT) format, which is a lossless, full-density, streamable octree based on LASzip (LAZ) encoding. The second resource is a Requester Pays copy of the original, raw LAZ (compressed LAS) 1.4 3DEP data; it is more complete in coverage, as sources with incomplete or missing CRS will not have an EPT tile generated. Resource names in both buckets correspond to the USGS project names.
Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks.
Although various 3D datasets with different functions and scales have been proposed recently, it remains challenging for individuals to complete the whole pipeline of large-scale data collection, sanitization, and annotation (e.g., semantic and instance labels). Moreover, the created datasets usually suffer from extremely imbalanced class distributions or partially low-quality data samples. Motivated by this, we explore the procedurally synthetic 3D data generation paradigm to equip individuals with the full capability of creating large-scale annotated photogrammetry point clouds. Specifically, we introduce a synthetic aerial photogrammetry point cloud generation pipeline that takes full advantage of open geospatial data sources and off-the-shelf commercial packages. Unlike generating synthetic data in virtual games, where the simulated data usually have limited gaming environments created by artists, the proposed pipeline simulates the reconstruction process of the real environment by following the same UAV flight pattern on a wide variety of synthetic terrain shapes and building densities, which ensures similar quality, noise patterns, and diversity to real data. In addition, the precise semantic and instance annotations can be generated fully automatically, avoiding the expensive and time-consuming manual annotation process. Based on the proposed pipeline, we present a richly annotated synthetic 3D aerial photogrammetry point cloud dataset, termed STPLS3D, with more than 16 km^2 of landscapes and up to 18 fine-grained semantic categories. For verification purposes, we also provide a parallel dataset collected from four areas in the real environment.
This portion of the data release presents a topographic point cloud of the debris flow at South Fork Campground in Sequoia National Park. The point cloud was derived from structure-from-motion (SfM) photogrammetry using aerial imagery acquired during an uncrewed aerial system (UAS) survey on 30 April 2024, conducted under authorization from the National Park Service. The raw imagery was acquired with a Ricoh GR II digital camera featuring a global shutter. The UAS was flown on pre-programmed autonomous flight lines spaced to provide approximately 70 percent overlap between images from adjacent lines, from an approximate altitude of 110 meters above ground level (AGL), resulting in a nominal ground-sample distance (GSD) of 2.9 centimeters per pixel. The imagery was geotagged using positions from the UAS onboard single-frequency autonomous GPS. Survey control was established using temporary ground control points (GCPs) consisting of a combination of small square tarps with black-and-white cross patterns and temporary chalk marks placed on the ground. The GCP positions were measured using dual-frequency real-time kinematic (RTK) GPS with corrections referenced to a static base station operating nearby. The images and GCP positions were used for structure-from-motion (SfM) photogrammetric processing to create a topographic point cloud, a high-resolution orthomosaic image, and a DSM. The point cloud contains 284,906,970 points with an average point spacing of one point every three centimeters. The point cloud has not been classified; however, points with a confidence of less than three (a measure of the number of depth maps used to generate a point) have been assigned a classification value of 7, which represents low noise. The point cloud is provided in a cloud-optimized LAZ format to facilitate cloud-based queries and display.
Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.
Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.
Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.
LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.
Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3
Classifying trees from point cloud data is useful in applications such as high-quality 3D basemap creation, urban planning, and forestry workflows. Trees have a complex geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results.
Using the model
Follow the guide to use the model. The model can be used with the 3D Basemaps solution and ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Input
The model accepts unclassified point clouds with the attributes: X, Y, Z, and Number of Returns.
Note: This model is trained to work on unclassified point clouds that are in a projected coordinate system, where the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The provided deep learning model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting - allowing it to better discriminate points of 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification.
This model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block, and extra attributes should match those of the data originally used for training this model (see Training data section below).
Output
The model will classify the point cloud into the following 2 classes with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS) described below:
- 0 Background
- 5 Trees / High-vegetation
Applicable geographies
This model is expected to work well in all regions globally, with an exception of mountainous regions. However, results can vary for datasets that are statistically dissimilar to training data.
Model architecture
This model uses the PointCNN model architecture implemented in ArcGIS API for Python.
Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
- Class: Trees / High-vegetation (5); Precision: 0.975374; Recall: 0.965929; F1-score: 0.970628
Training data
This model is trained on a subset of UK Environment Agency's open dataset. The training data used has the following characteristics:
- X, Y and Z linear unit: meter
- Z range: -19.29 m to 314.23 m
- Number of Returns: 1 to 5
- Intensity: 1 to 4092
- Point spacing: 0.6 ± 0.3
- Scan angle: -23 to +23
- Maximum points per block: 8192
- Extra attributes: Number of Returns
- Class structure: [0, 5]
Sample results
Here are a few results from the model.
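For reference, the Classify Point Cloud Using Trained Model tool can also be scripted with arcpy; the sketch below is an assumption about the tool's Python name and argument order in recent ArcGIS Pro releases, and all paths are placeholders.
import arcpy
# Hedged sketch; tool name and positional arguments are assumptions, paths are placeholders.
arcpy.CheckOutExtension("3D")  # requires the 3D Analyst extension
arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
    r"C:\data\lidar.lasd",                    # LAS dataset to classify
    r"C:\models\TreeClassification.dlpk",     # this deep learning package
    "5",                                      # write the Trees / High-vegetation class
)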
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and planning climate change response. Buildings can have a complex, irregular geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common NZ building architecture.
Licensing requirements
ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro
Using the model
The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.
The model was trained using a training dataset of classified LiDAR with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting - allowing it to better discriminate points of 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and over scenarios with false positives.
The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block and extra attributes should match those of the data originally used for training this model (see Training data section below).
Output
The model will classify the point cloud into the following classes with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS) described below:
- 0 Background
- 6 Building
Applicable geographies
The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to training data.
- Training dataset: Auckland, Christchurch, Kapiti, Wellington
- Testing dataset: Auckland, Wellington
- Validation/Evaluation dataset: Hutt City
Model architecture
This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.
Accuracy metrics
The table below summarizes the accuracy of the predictions on the validation dataset.
- Class: Never Classified; Precision: 0.984921; Recall: 0.975853; F1-score: 0.979762
- Class: Building; Precision: 0.951285; Recall: 0.967563; F1-score: 0.9584
Training data
This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. The train-test split is {Train: 75%, Test: 25%}; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:
- X, Y, and Z linear unit: meter
- Z range: -137.74 m to 410.50 m
- Number of Returns: 1 to 5
- Intensity: 16 to 65520
- Point spacing: 0.2 ± 0.1
- Scan angle: -17 to +17
- Maximum points per block: 8192
- Block size: 50 meters
- Class structure: [0, 6]
Sample results
Results from classifying a dataset with 23 pts/m density (Wellington city dataset). The model's performance is directly proportional to the dataset point density and to the exclusion of noise from the point clouds.
To learn how to use this model, see this story.