The point cloud was delivered with data in the following classifications: Class 1 - Processed but Unclassified; Class 2 - Bare Earth Ground; Class 3 - Low Vegetation; Class 4 - Medium Vegetation; Class 5 - High Vegetation; Class 6 - Buildings; Class 7 - Low Point (Noise); Class 9 - Water; Class 17 - Bridge Decks; Class 18 - High Noise; Class 20 - Ignored Ground.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
The Ontario Point Cloud (Lidar-Derived) consists of points containing elevation and intensity information derived from returns collected by an airborne topographic lidar sensor. The point cloud is structured into non-overlapping 1 km by 1 km tiles in LAZ format. The following classification codes are applied to the data:
* unclassified
* ground
* water
* high noise
* low noise
This dataset is a compilation of lidar data from multiple acquisition projects, so specifications, parameters, accuracy and sensors may vary by project. The data is intended for geospatial specialists and is used by government, municipalities, conservation authorities and the private sector for land use planning and environmental analysis. Related data: raster derivatives have been created from the point clouds and are available for direct download if they meet your needs. For a representation of bare earth, see the Ontario Digital Terrain Model (Lidar-Derived). For a model representing all surface features, see the Ontario Digital Surface Model (Lidar-Derived).
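The per-point classification codes in the LAZ tiles can be inspected with open-source tooling. Below is a minimal sketch assuming the laspy library with a LAZ backend (e.g. lazrs) is installed and assuming the listed classes follow the standard ASPRS codes (1 unclassified, 2 ground, 7 low noise, 9 water, 18 high noise); the tile filename is a hypothetical placeholder.

```python
# Minimal sketch: tally points per classification code in one 1 km LAZ tile.
# Assumes laspy with a LAZ backend (e.g. lazrs); the filename and the ASPRS code
# mapping in `labels` are illustrative assumptions.
import numpy as np
import laspy

las = laspy.read("ontario_tile_example.laz")
labels = {1: "unclassified", 2: "ground", 7: "low noise", 9: "water", 18: "high noise"}

codes, counts = np.unique(np.asarray(las.classification), return_counts=True)
for code, count in zip(codes, counts):
    print(f"class {code:3d} ({labels.get(code, 'other')}): {count} points")
```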
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
Raw lidar data consist of positions (x, y, z) and intensity values. They must undergo a classification process before individual points can be identified as belonging to ground, building, vegetation, or other feature types. By completing this tutorial, you will become comfortable with the following skills:
* Converting .zlas files to .las for editing
* Reassigning LAS class codes
* Using automated lidar classification tools
* Using 2D and 3D features to classify lidar data
Software Used: ArcGIS Pro 3.3
Time to Complete: 60-90 minutes
File Size: 57 MB
Date Created: September 25, 2020
Last Updated: September 27, 2024
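The tutorial itself is ArcGIS Pro based; as a hedged, script-level illustration of what "reassigning LAS class codes" means, the sketch below uses the open-source laspy library instead. The filenames, bounding box and the class-1-to-class-6 rule are hypothetical.

```python
# Illustrative sketch only (the tutorial uses ArcGIS Pro tools, not this script):
# reassign LAS class codes with laspy. Filenames, the bounding box and the
# reclassification rule are hypothetical.
import numpy as np
import laspy

las = laspy.read("input.las")

# Pick unassigned points (class 1) inside a hypothetical building footprint bbox.
xmin, xmax, ymin, ymax = 500000.0, 500050.0, 4420000.0, 4420060.0
in_bbox = (las.x >= xmin) & (las.x <= xmax) & (las.y >= ymin) & (las.y <= ymax)
to_building = (np.asarray(las.classification) == 1) & in_bbox

# Reassign the class code (6 = Building in the ASPRS scheme) and save a copy.
cls = np.asarray(las.classification).copy()
cls[to_building] = 6
las.classification = cls
las.write("output_reclassified.las")
```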
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. The model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and planning climate change response. Buildings can have complex, irregular geometrical structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training, testing and validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common New Zealand building architecture.
Licensing requirements: ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro.
Using the model: The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets. The model was trained with classified LiDAR, using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block and extra attributes should match those of the data originally used for training this model (see Training data section below).
Output: The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS): 0 Background, 6 Building.
Applicable geographies: The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.
Training dataset: Auckland, Christchurch, Kapiti, Wellington. Testing dataset: Auckland, Wellington. Validation/Evaluation dataset: Hutt City.
Model architecture: This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.
Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class: Precision / Recall / F1-score
Never Classified (0): 0.984921 / 0.975853 / 0.979762
Building (6): 0.951285 / 0.967563 / 0.9584
Training data: This model is trained on a classified dataset originally provided by OpenTopography, with less than 1% manual labelling and correction. Train-test split: 75% train, 25% test; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -137.74 m to 410.50 m
Number of Returns: 1 to 5
Intensity: 16 to 65520
Point spacing: 0.2 ± 0.1
Scan angle: -17 to +17
Maximum points per block: 8192
Block size: 50 meters
Class structure: [0, 6]
Sample results: the model classifying the Wellington city dataset with a density of 23 pts/m. The model's performance is directly proportional to the dataset's point density and improves when noise points are excluded. To learn how to use this model, see this story.
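The accuracy figures above can, in principle, be reproduced for any held-out tile by comparing reference labels with the model's output. A minimal sketch using scikit-learn follows; the filenames are hypothetical and it assumes both files contain the same points in the same order.

```python
# Minimal sketch: per-class precision/recall/F1 for a validation tile, comparing
# reference labels against model predictions. Filenames are hypothetical; assumes
# both files hold the same points in the same order, with class structure [0, 6].
import numpy as np
import laspy
from sklearn.metrics import precision_recall_fscore_support

ref = np.asarray(laspy.read("validation_reference.laz").classification)
pred = np.asarray(laspy.read("validation_predicted.laz").classification)

precision, recall, f1, _ = precision_recall_fscore_support(
    ref, pred, labels=[0, 6], zero_division=0
)
for cls, p, r, f in zip([0, 6], precision, recall, f1):
    print(f"class {cls}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
```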
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This repository contains all of the data used to develop the algorithms for our "Machine learning based region of interest detection in airborne lidar fisheries surveys" paper, which has been published in the SPIE Journal of Applied Remote Sensing. The software that processes the data can be found at DOI 10.5281/zenodo.5021330.
Please cite our journal article if you use the data for research purposes: T. C. Vannoy et al., "Machine learning based region of interest detection in airborne lidar fisheries surveys," SPIE Journal of Applied Remote Sensing 15(3), 038503 (2021). DOI 10.1117/1.JRS.15.038503.
Classifying trees from point cloud data is useful in applications such as high-quality 3D basemap creation, urban planning, and forestry workflows. Trees have a complex geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results.
Using the model: Follow the guide to use the model. The model can be used with the 3D Basemaps solution and ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Input: The model accepts unclassified point clouds with the attributes X, Y, Z, and Number of Returns. Note: this model is trained to work on unclassified point clouds that are in a projected coordinate system, where the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The provided deep learning model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification. This model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block, and extra attributes should match those of the data originally used for training this model (see Training data section below).
Output: The model will classify the point cloud into the following 2 classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS): 0 Background, 5 Trees / High-vegetation.
Applicable geographies: This model is expected to work well in all regions globally, with the exception of mountainous regions. However, results can vary for datasets that are statistically dissimilar to the training data.
Model architecture: This model uses the PointCNN model architecture implemented in ArcGIS API for Python.
Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class: Precision / Recall / F1-score
Trees / High-vegetation (5): 0.975374 / 0.965929 / 0.970628
Training data: This model is trained on a subset of the UK Environment Agency's open dataset. The training data used has the following characteristics:
X, Y and Z linear unit: meter
Z range: -19.29 m to 314.23 m
Number of Returns: 1 to 5
Intensity: 1 to 4092
Point spacing: 0.6 ± 0.3
Scan angle: -23 to +23
Maximum points per block: 8192
Extra attributes: Number of Returns
Class structure: [0, 5]
Sample results: Here are a few results from the model.
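As a hedged post-processing sketch (not part of the model documentation), the tree class can be separated out of a classified tile for downstream forestry work; the filenames are hypothetical and laspy with a LAZ backend is assumed.

```python
# Illustrative post-processing sketch: extract points labelled Trees / High-vegetation
# (ASPRS class 5) from a classified tile into their own file. Filenames are
# hypothetical; assumes laspy with a LAZ backend.
import numpy as np
import laspy

las = laspy.read("classified_tile.laz")
mask = np.asarray(las.classification) == 5

trees = laspy.LasData(las.header)
trees.points = las.points[mask].copy()
trees.write("trees_only.laz")

if mask.any():
    z = np.asarray(trees.z)
    print(f"{mask.sum()} tree points, elevation range {z.min():.1f} m to {z.max():.1f} m")
```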
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
LiDAR_Point_Clouds, Classified. Elevations have been processed to conform to the Australian Height Datum (AHD), and the data have been converted from files collected as swaths into tiles. The file format is LAS.
LAS is an industry format created and maintained by the American Society for Photogrammetry and Remote Sensing (ASPRS). LAS is a published standard file format for the interchange of lidar data. It maintains specific information related to lidar data. It is a way for vendors and clients to interchange data and maintain all information specific to that data. Each LAS file contains metadata of the lidar survey in a header block followed by individual records for each laser pulse recorded. The header portion of each LAS file holds attribute information on the lidar survey itself: data extents, flight date, flight time, number of point records, number of points by return, any applied data offset, and any applied scale factor. The following lidar point attributes are maintained for each laser pulse of a LAS file: x, y, z location information, GPS time stamp, intensity, return number, number of returns, point classification values, scan angle, additional RGB values, scan direction, edge of flight line, user data, point source ID and waveform information. Every lidar point in a LAS file can have a classification code set for it. Classifying lidar data allows you to organize mass points into specific data classes while still maintaining them as a whole data collection in LAS files. Typically, these classification codes represent the type of object that has reflected the laser pulse. Point classification is usually completed by data vendors using semi-automated techniques on the point cloud to assign the feature type associated with each point. Lidar points can be classified into a number of categories including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files. The following table contains the LAS classification codes as defined in the LAS 1.1 standard:
Class code: Classification type
0: Never classified
1: Unassigned
2: Ground
3: Low vegetation
4: Medium vegetation
5: High vegetation
6: Building
7: Noise
8: Model key
9: Water
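The header block and per-pulse attributes described above can be inspected directly; a minimal sketch with the laspy library follows, using a hypothetical filename.

```python
# Minimal sketch: read the LAS header block and a few per-pulse attributes described
# above, using laspy. The filename is a hypothetical placeholder.
import laspy

las = laspy.read("survey_tile.las")
hdr = las.header

print("point format id:           ", hdr.point_format.id)
print("number of point records:   ", hdr.point_count)
print("number of points by return:", list(hdr.number_of_points_by_return))
print("data extents (min / max):  ", hdr.mins, "/", hdr.maxs)
print("scale factors / offsets:   ", hdr.scales, "/", hdr.offsets)

# Per-pulse attributes of the first record (present in every LAS point format).
print("first point:", las.x[0], las.y[0], las.z[0],
      "| intensity", las.intensity[0],
      "| return", las.return_number[0], "of", las.number_of_returns[0],
      "| class", las.classification[0])
```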
Lineage: Fugro Spatial Solutions (FSS) was awarded a contract by Geoscience Australia to carry out an aerial LiDAR survey over Kakadu National Park. The data will be used to examine the potential impacts of climate change and sea level rise on the West Alligator, South Alligator and East Alligator River systems and other minor areas. The project area was flight planned using the parameters specified. An FSS aircraft and aircrew were mobilised to site, and the project area was captured using a Leica ALS60 system positioned using a DGPS base station at Darwin airport. The Darwin base station was positioned by DGPS observations from local control stations. A ground control survey was carried out by FSS surveyors to determine ground positions and heights for control and check points throughout the area. All data were returned to the FSS office in Perth and processed. The deliverable datasets were generated and supplied to Geoscience Australia with this metadata information.
NEDF Metadata
Acquisition Start Date: Saturday, 22 October 2011
Acquisition End Date: Wednesday, 16 November 2011
Sensor: LiDAR
Device Name: Leica ALS60 (S/N: 6145)
Flying Height (AGL): 1409
INS/IMU Used: uIRS-56024477
Number of Runs: 468
Number of Cross Runs: 28
Swath Width: 997
Flight Direction: Non-Cardinal
Swath (side) Overlap: 20
Horizontal Datum: GDA94
Vertical Datum: AHD71
Map Projection: MGA53
Description of Aerotriangulation Process Used: Not Applicable
Description of Rectification Process Used: Not Applicable
Spatial Accuracy Horizontal: 0.8
Spatial Accuracy Vertical: 0.3
Average Point Spacing (per sq m): 2
Laser Return Types: 4 pulses (1st, 2nd, 3rd, 4th and intensity)
Data Thinning: None
Laser Footprint Size: 0.32
Calibration Certification (Manufacturer/Cert. Company): Leica
Limitations of the Data: To project specification
Surface Type: Various
Product Type: Other
Classification Type: C0
Grid Resolution: 2
Distribution Format: Other
Processing/Derivation Lineage: Capture, Geodetic Validation
WMS: Not Applicable
https://www.neonscience.org/data-samples/data-policies-citation
Unclassified three-dimensional point cloud by flightline and classified point cloud by 1 km tile, provided in LAZ format. Classifications follow standard ASPRS definitions. All point coordinates are provided in meters. Horizontal coordinates are referenced in the appropriate UTM zone and the ITRF00 datum. Elevations are referenced to Geoid12A.
This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philip Lenz and Raquel Urtasun and presented in the Proceedings of CVPR 2012, "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". This kernel contains the object detection part of the different datasets they published for Autonomous Driving. It contains a set of images with their bounding box labels and Velodyne point clouds. For more information, visit the website where they published the data (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).
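For reference, each Velodyne scan in the benchmark is stored as a flat binary file of float32 values in (x, y, z, reflectance) order, so it can be loaded with NumPy alone; the path below follows the benchmark's usual folder layout and is a placeholder.

```python
# Minimal sketch: load one KITTI Velodyne scan. Each scan is a flat float32 binary
# file with points stored as (x, y, z, reflectance). The path is a placeholder
# following the benchmark's usual training/velodyne layout.
import numpy as np

scan = np.fromfile("training/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)
xyz, reflectance = scan[:, :3], scan[:, 3]
print(scan.shape[0], "points")
print("x/y/z min:", xyz.min(axis=0).round(1), " max:", xyz.max(axis=0).round(1))
```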
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Set of 10 3D point clouds used in the validation of "Leaf and wood classification framework for terrestrial LiDAR point clouds". This dataset is a collection of single trees scanned around the globe, from different biomes (both forest and urban areas), using the Riegl VZ-400 terrestrial laser scanner.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Set of 200 3D point clouds used in the validation of "Leaf and wood classification framework for terrestrial LiDAR point clouds". This dataset is a collection of point clouds simulated by Monte Carlo ray tracing (librat) using four 3D tree models from the fourth phase of the RAMI exercise (Widlowski et al., 2015).
These files contain classified topo/bathy lidar data. Data are classified as 1 (valid non-ground topographic data), 2 (valid ground topographic data), 23 (submerged aquatic vegetation), and 29 (valid bathymetric data). Classes 1 and 2 are defined in accordance with the American Society for Photogrammetry and Remote Sensing (ASPRS) classification standards. These data were collected by the Coast...
The classification of point cloud datasets to identify distribution wires is useful for identifying vegetation encroachment around power lines. Such workflows are important for preventing fires and power outages and are typically manual, recurring, and labor-intensive. This model is designed to extract distribution wires at the street level. Its predictions for high-tension transmission wires are less consistent with changes in geography as compared to street-level distribution wires. In the case of high-tension transmission wires, a lower recall value is observed as compared to the value observed for low-lying street wires and poles.
Using the model: Follow the guide to use the model. The model can be used with ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
Input: The model accepts unclassified point clouds with point geometry (X, Y and Z values). Note: the model is not dependent on any additional attributes such as Intensity, Number of Returns, etc. This model is trained to work on unclassified point clouds that are in a projected coordinate system, in which the units of X, Y and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the selective/target classification and class preservation functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block and extra attributes should match those of the data originally used for training this model (see Training data section below).
Output: The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):
Class code: Class Description
0: Background
14: Distribution Wires
15: Distribution Tower/Poles
Applicable geographies: The model is expected to work within any geography. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.
Model architecture: This model uses the RandLANet model architecture implemented in ArcGIS API for Python.
Accuracy metrics: The table below summarizes the accuracy of the predictions on the validation dataset.
Class: Precision / Recall / F1-score
Background (0): 0.999679 / 0.999876 / 0.999778
Distribution Wires (14): 0.955085 / 0.936825 / 0.945867
Distribution Poles (15): 0.707983 / 0.553888 / 0.621527
Training data: This model is trained on a manually classified training dataset provided to Esri by the AAM group. The training data used has the following characteristics:
X, Y, and Z linear unit: meter
Z range: -240.34 m to 731.17 m
Number of Returns: 1 to 5
Intensity: 1 to 4095
Point spacing: 0.2 ± 0.1
Scan angle: -42 to +35
Maximum points per block: 20000
Extra attributes: None
Class structure: [0, 14, 15]
Sample results: Here are a few results from the model.
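Once wires and poles are classified, the vegetation-encroachment use case mentioned above can be approximated with a nearest-neighbour search; a hedged sketch follows, where the filename and the 2 m clearance threshold are illustrative assumptions.

```python
# Illustrative sketch of a vegetation-encroachment check after classification:
# flag non-wire, non-pole points lying close to class 14 (distribution wires).
# The filename and the 2 m clearance threshold are assumptions.
import numpy as np
import laspy
from scipy.spatial import cKDTree

las = laspy.read("classified_corridor.laz")
xyz = np.column_stack([las.x, las.y, las.z])
cls = np.asarray(las.classification)

wires = xyz[cls == 14]
other = xyz[(cls != 14) & (cls != 15)]

dist, _ = cKDTree(wires).query(other, k=1)
print(f"{np.count_nonzero(dist < 2.0)} points within 2 m of a distribution wire")
```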
This publication presents lidar data collected over the community of Golovin, on the southern coast of the Seward Peninsula in western Alaska (fig. 1). The original data were collected on November 5, 2013, by Quantum Spatial. The complete, classified lidar dataset was purchased by the State of Alaska Division of Geological & Geophysical Surveys in 2014 in support of coastal vulnerability mapping efforts. For the purposes of open access to lidar datasets in coastal regions of Alaska, this collection is being released as a Raw Data File with an open end-user license. The horizontal datum for this dataset is NAD83 (CORS96), the vertical datum is NAVD88 (Geoid 09), and the data are projected in UTM Zone 3 North. Units are in meters. Data have been classified to Ground (class 2) and Default (class 1).
Original Product: These lidar data are processed Classified LAS 1.4 files, formatted to 654 individual 1000 m x 1000 m tiles; used to create intensity images, 3D breaklines, and hydro-flattened DEMs as necessary.
Original Dataset Geographic Extent: 4 counties (Alameda, Marin, San Francisco, San Mateo) in California, covering approximately 53 total square miles.
Original Dataset Descriptio...
This data represents LiDAR-derived classified LAS points for Columbia County, Wisconsin in 2011. Point classification uses semi-automated techniques on the point cloud to assign the feature type associated with each point. LiDAR points can be classified into a number of categories including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files. This data is also available as a series of tiles to enable downloads of smaller, more specific areas within the county. To access the tiled data, please visit: https://www.sco.wisc.edu/scoapps/lidar/tile-search/?layer=columbia
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Introduction: Unmanned aerial vehicle-based light detection and ranging (UAV-LiDAR) can quickly acquire three-dimensional information over large areas of vegetation and has been widely used in tree species classification.
Methods: UAV-LiDAR point clouds of Populus alba, Populus simonii, Pinus sylvestris, and Pinus tabuliformis from 12 sample plots, 2,622 trees in total, were obtained in North China. Training and testing sets were constructed through data pre-processing, individual tree segmentation, feature extraction, and Non-uniform Grid and Farther Point Sampling (NGFPS), and the four tree species were then classified by two machine learning algorithms and two deep learning algorithms.
Results: PointMLP achieved the best accuracy for identification of the tree species (overall accuracy = 96.94%), followed by RF (overall accuracy = 95.62%), SVM (overall accuracy = 94.89%) and PointNet++ (overall accuracy = 85.65%). In addition, the most suitable number of sampled points per single tree is between 1,024 and 2,048 when using the NGFPS method in the two deep learning models. Furthermore, the elev_percentile_99th feature has an important influence on tree species classification, and tree species with similar crown structures may lead to a higher misidentification rate.
Discussion: The study underscores the efficiency of PointMLP as a robust and streamlined solution, offering novel technological support for tree species classification in forestry resource management.
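NGFPS is the authors' own sampling scheme; as a hedged illustration of just the farthest-point-sampling component it builds on (the non-uniform grid step is not reproduced), a plain NumPy sketch is shown below with synthetic stand-in points.

```python
# Illustrative sketch of plain farthest point sampling (FPS) on one segmented tree.
# Only the generic sampling step that NGFPS builds on is shown; the non-uniform
# grid part of the authors' method is not reproduced. Points are synthetic stand-ins.
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_samples: int) -> np.ndarray:
    """Greedily pick n_samples indices so each new point is farthest from those chosen."""
    chosen = np.zeros(n_samples, dtype=np.int64)
    dist_to_set = np.full(points.shape[0], np.inf)
    for i in range(1, n_samples):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist_to_set = np.minimum(dist_to_set, d)   # distance of every point to the chosen set
        chosen[i] = int(np.argmax(dist_to_set))    # next point: farthest from the set
    return chosen

rng = np.random.default_rng(0)
tree_points = rng.uniform(0.0, 10.0, size=(5000, 3))    # stand-in for one segmented tree
idx = farthest_point_sampling(tree_points, 1024)        # 1,024 matches the range discussed above
print(tree_points[idx].shape)                           # (1024, 3)
```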
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
The classification of airborne lidar data is a relevant task in different disciplines. Information about the geometry and the full waveform can be used to classify the 3D point cloud. In Wadden Sea areas, the classification of lidar data is of particular interest for the scientific monitoring of coastal morphology and habitats, but it becomes a challenging task because the terrain is flat and contains hardly any discriminative objects. For the classification we combine a Conditional Random Fields framework with a Random Forests approach. Classifying in this way lets us benefit from the consideration of context on the one hand and from the ability to use a large number of classification features on the other. We investigate the relevance of different features for the lidar points in coastal areas as well as for the interaction of neighbouring points.
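The CRF context model is beyond a short example, but the Random Forests stage on per-point features can be sketched with scikit-learn; the feature matrix, class labels and parameters below are synthetic stand-ins, not the paper's setup.

```python
# Hedged sketch of the Random Forests stage only (the Conditional Random Fields
# context model from the paper is not reproduced). Features, labels and parameters
# are synthetic stand-ins for per-point geometric/full-waveform descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 12))      # e.g. height, echo width, amplitude, local planarity, ...
y = rng.integers(0, 4, size=10_000)    # stand-in surface/habitat class labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```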
Lidar point cloud data with classifications: unclassified (1), ground (2), low vegetation (3), medium vegetation (4), high vegetation (5), buildings (6), low point - noise (7), reserved - model keypoint (8), high noise (18).