Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This .las file contains sample LiDAR point cloud data collected by the National Ecological Observatory Network's Airborne Observation Platform. The .las file format is a commonly used file format for storing LiDAR point cloud data. This teaching data set is used for several tutorials on the NEON website (neonscience.org). The dataset is for educational purposes; data for research purposes can be obtained from the NEON Data Portal (data.neonscience.org).
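As a hedged illustration (not part of the NEON tutorials themselves), a .las file like this can be read in Python with the laspy package, assuming it is installed and the placeholder file name below is replaced with a real path:

import laspy

las = laspy.read("NEON_sample.las")     # placeholder file name
print(las.header.point_count)           # number of points in the file
print(las.x[:5], las.y[:5], las.z[:5])  # scaled x/y/z coordinates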
3D point cloud representing all physical features (e.g. buildings, trees and terrain) across the City of Melbourne. The data has been encoded into the .las file format, containing geospatial coordinates and RGB values for each point. The download is a zip file containing compressed .las files for tiles across the city area.
The geospatial data has been captured in Map Grid of Australia (MGA) Zone 55 projection and is reflected in the xyz coordinates within each .las file.
Also included are RGB (Red, Green, Blue) attributes to indicate the colour of each point.
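As a minimal sketch (assuming the laspy package; the tile name is a placeholder), the per-point coordinates and colours can be extracted as follows. LAS colour channels are commonly 16-bit, so they are scaled down to 8-bit for display:

import numpy as np
import laspy

las = laspy.read("melbourne_tile.las")              # placeholder file name
xyz = np.vstack([las.x, las.y, las.z]).T            # MGA Zone 55 coordinates
rgb = np.vstack([las.red, las.green, las.blue]).T   # 16-bit colour channels
rgb8 = (rgb / 256).astype(np.uint8)                 # 8-bit colour for display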
Capture Information
- Capture Date: May 2018
- Capture Pixel Size: 7.5cm ground sample distance
- Map Projection: MGA Zone 55 (MGA55)
- Vertical Datum: Australian Height Datum (AHD)
- Spatial Accuracy (XYZ): Supplied survey control used for control (Madigan Surveying) – 25 cm absolute accuracy
Limitations:
Whilst every effort is made to provide the data as accurately as possible, the content may not be free from errors, omissions or defects.
Sample Data:
For an interactive sample of the data please see the link below.
https://cityofmelbourne.maps.arcgis.com/apps/webappviewer3d/index.html?id=b3dc1147ceda46ffb8229117a2dac56d
Preview:
Download:
A zip file containing the .las files representing tiles of point cloud data across City of Melbourne area.
Download Point Cloud Data (4GB)
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (Light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point Cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point ...
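As a hedged sketch of such a lossless round trip (assuming the laspy package with a LAZ backend such as lazrs installed; file names are placeholders):

import laspy

las = laspy.read("3DEP_tile.laz")    # decompress LAZ into memory
las.write("3DEP_tile.las")           # write uncompressed LAS
las.write("3DEP_tile_copy.laz")      # recompress to LAZ without loss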
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview
3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.
Our corresponding paper was published at ITSC 2022 (see the citation below).
Further, we have applied 3DHD CityScenes to map deviation detection.
Moreover, we release code (our 3DHD_DevKit) to facilitate the application of our dataset and the reproducibility of our research.
The DevKit is available here:
https://github.com/volkswagen/3DHD_devkit.
The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.
When using our dataset, you are welcome to cite:
@INPROCEEDINGS{9921866,
  author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and Fingscheidt, Tim},
  booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)},
  title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds},
  year={2022},
  pages={627-634}}
Acknowledgements
We thank our interns for their exceptional contributions to our work.
The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.
The Dataset
After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.
1. Dataset
This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.
During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.
To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.
import json

json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
with open(json_path) as jf:
    data = json.load(jf)
print(data)
2. HD_Map
Map items are stored as lists of items in JSON format.
3. HD_Map_MetaData
Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.
Files with the extensions .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a "shapefile" (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format, as used in our Python API.
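As a hedged sketch (geopandas is an assumption, not part of the DevKit; file names are placeholders following the directory layout above), both variants can be loaded as follows:

import json
import geopandas as gpd

tiles = gpd.read_file(r"E:\3DHD_CityScenes\HD_Map_MetaData\tile_definition.shp")
print(tiles.head())                    # one polygon per point cloud tile

with open(r"E:\3DHD_CityScenes\HD_Map_MetaData\tile_definition.json") as jf:
    tile_polygons = json.load(jf)      # lists of UTM coordinates per tile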
4. HD_PointCloud_Tiles
The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.
After reading, respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.
import numpy as np
import pptk

file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
pc_dict = {}
key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
# The source snippet is truncated here; these dtypes are an assumption.
type_list = [np.float32, np.float32, np.float32, np.uint8, np.bool_]
with open(file_path, 'rb') as f:
    num_points = np.fromfile(f, dtype=np.int32, count=1)[0]  # first 4 bytes
    for key, dtype in zip(key_list, type_list):
        pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)
pptk.viewer(np.stack([pc_dict['x'], pc_dict['y'], pc_dict['z']], axis=-1))
5. Trajectories
We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.
These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.
- OP1 – OP5 cover the majority of the map with 5 trajectories.
- RH1 – RH10 cover the majority of the map with 10 trajectories.
Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.
The goal of the USGS 3D Elevation Program (3DEP) is to collect elevation data in the form of light detection and ranging (LiDAR) data over the conterminous United States, Hawaii, and the U.S. territories, with data acquired over an 8-year period. This dataset provides two realizations of the 3DEP point cloud data. The first resource is a public access organization provided in Entwine Point Tiles format, which is a lossless, full-density, streamable octree based on LASzip (LAZ) encoding. The second resource is a Requester Pays copy of the original, raw LAZ (compressed LAS) 1.4 3DEP format, which is more complete in coverage, as sources with incomplete or missing CRS will not have an EPT tile generated. Resource names in both buckets correspond to the USGS project names.
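As a hedged sketch of reading from the public Entwine Point Tiles resource (assuming the PDAL Python bindings are installed; the project name and bounds are placeholders to be replaced with a real USGS project name and area of interest):

import json
import pdal

pipeline = {
    "pipeline": [
        {
            "type": "readers.ept",
            "filename": "https://s3-us-west-2.amazonaws.com/usgs-lidar-public/<project-name>/ept.json",
            "bounds": "([-10425171, -10423171], [5164494, 5166494])"
        },
        {"type": "writers.las", "filename": "subset.laz"}
    ]
}
pdal.Pipeline(json.dumps(pipeline)).execute()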
https://www.neonscience.org/data-samples/data-policies-citation
Unclassified three-dimensional point cloud by flightline and classified point cloud by 1 km tile, provided in LAZ format. Classifications follow standard ASPRS definitions. All point coordinates are provided in meters. Horizontal coordinates are referenced in the appropriate UTM zone and the ITRF00 datum. Elevations are referenced to Geoid12A.
This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philip Lenz and Raquel Urtasun in the Proceedings of CVPR 2012, "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". This kernel contains the object detection part of their different datasets published for autonomous driving. It contains a set of images with their bounding box labels and velodyne point clouds. For more information visit the website they published the data on (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).
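As a hedged sketch (the file name is a placeholder), a single KITTI velodyne scan is commonly read as a flat float32 array of (x, y, z, reflectance) tuples:

import numpy as np

scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
xyz, reflectance = scan[:, :3], scan[:, 3]
print(scan.shape)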
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a point cloud sample dataset which was collected by a mobile LiDAR system (MLS).
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Work in progress: data might be changed
The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.
Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.
The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.
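A minimal sketch of this step, assuming scikit-learn's DBSCAN as the clustering method (the project's actual implementation and thresholds may differ; the input array is a placeholder):

import numpy as np
from sklearn.cluster import DBSCAN

car_points = np.random.rand(500, 3) * 20        # placeholder for car-labelled points
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(car_points)
boxes = []
for cluster_id in set(labels) - {-1}:           # -1 marks noise points
    pts = car_points[labels == cluster_id]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    size = hi - lo
    if 1.0 < size[0] < 7.0 and 1.0 < size[1] < 7.0:  # assumed plausibility limits
        boxes.append((lo, hi))                  # axis-aligned box per car instance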
https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/807618b6-5c38-4456-88a1-cb47500081ff/download/detection_map.png
Figure 1: Overview map of detected vehicles
The public parking areas were digitized manually using aerial images and the detected vehicles in order to exclude irregular parking spaces as far as possible. They were also tagged as to whether they were aligned parallel to the road and assigned to a use at the time of recording, as some are used for construction sites or outdoor catering, for example. Depending on the intended use, they can be filtered individually.
https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/16b14c61-d1d6-4eda-891d-176bdd787bf5/download/parking_area_example.png
Figure 2: Visualization of example parking areas on top of an aerial image [by LGLN]
For modelling the parking occupancy, single slots are sampled as center points every 5 m along the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023). Custom representations can be generated from the parking areas and vehicle detections. These parking points were intersected with the vehicle boxes to identify occupancy at the respective epochs, as sketched below.
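A minimal sketch of this sampling and intersection, assuming the shapely package (geometry values are placeholders):

from shapely.geometry import LineString, box

parking_lane = LineString([(0, 0), (40, 0)])            # digitized parking area axis
vehicle_boxes = [box(3, -1, 8, 1), box(21, -1, 26, 1)]  # detected vehicle footprints
slots = [parking_lane.interpolate(d) for d in range(0, int(parking_lane.length) + 1, 5)]
occupied = [any(b.contains(p) for b in vehicle_boxes) for p in slots]
print(list(zip(range(0, 41, 5), occupied)))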
https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/ca0b97c8-2542-479e-83d7-74adb2fc47c0/download/datenpub-bays.png
Figure 3: Overview map of average parking lot load
However, unoccupied spaces cannot be determined as trivially the other way around, since the absence of a detected vehicle may equally result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in its neighborhood on the same parking lane, so that a measurement can be assumed.
To close temporal gaps, interpolations were made by hour for each parking slot, assuming that between two consecutive observations with an occupancy the space was also occupied in between, or, if free at both times, also free in between. If there was a change, this is indicated by a proportional value. To close spatial gaps, unobserved spaces in the area are drawn randomly from the ten closest occupancy patterns.
This results in an exemplary occupancy pattern for a synthetic day. Depending on the application, the value can be interpreted as an occupancy probability or an occupancy share.
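A minimal sketch of the hourly gap filling described above (numpy only; observation times and states are placeholders). Linear interpolation carries an unchanged state forward and yields a proportional value across a change:

import numpy as np

obs_hours = np.array([8, 11, 15, 20])        # hours with an observation
obs_state = np.array([1.0, 1.0, 0.0, 0.0])   # 1 = occupied, 0 = free
hours = np.arange(8, 21)
occupancy = np.interp(hours, obs_hours, obs_state)
print(dict(zip(hours.tolist(), occupancy.round(2).tolist())))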
https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/184a1f75-79ab-4d0e-bb1b-8ed170678280/download/occupation_example.png
Figure 4: Example parking area occupation pattern
This dataset contains 3D LiDAR scans representative of 0.5 ha permanent sample plots in the Caatinga, Brazil. Two plots were located in the Serra das Almas Reserve (SDA) and one plot in Petrolina (PET). The dataset also includes scans, completed inside and outside the plots, for 10 individual trees. Scans were taken between July 2017 and May 2019.
PLEASE NOTE: This dataset has been retired. A new version of the data is available here: https://environment.data.gov.uk/dataset/09ea3b37-df3a-4e8b-ac69-fb0842227b04
The LIDAR Composite DTM (Digital Terrain Model) is a raster elevation model covering >93% of England at 2m spatial resolution.
Produced by the Environment Agency in 2020, this dataset is derived from a combination of our Time Stamped archive and National LIDAR Programme, which has been merged and re-sampled to give the best possible coverage. Where repeat surveys have been undertaken the newest, best resolution data is used. Where data was resampled a bilinear interpolation was used before being merged.
The 2020 LIDAR Composite contains surveys undertaken between 6th June 2000 and 1st September 2020. Please refer to the survey index files, which show, for any location, which Time Stamped survey or National LIDAR Programme block went into the production of the LIDAR composite.
The DTM (Digital Terrain Model) is produced from the last return LIDAR signal. We remove surface objects from the Digital Surface Model (DSM), using bespoke algorithms and manual editing of the data, to produce a terrain model of just the surface. Available to download as GeoTiff files in 5km grids, data is presented in metres, referenced to Ordnance Survey Newlyn, using the OSTN’15 transformation. All LIDAR data has a vertical accuracy of +/-15cm RMSE.
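As a hedged sketch (assuming the rasterio package; the tile name is a placeholder), one 5km GeoTiff grid can be inspected as follows:

import rasterio

with rasterio.open("LIDAR_Composite_DTM_tile.tif") as src:
    dtm = src.read(1)                    # 2D array of elevations in metres
    print(src.crs, src.res, dtm.shape)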
Light Detection and Ranging (LIDAR) is an airborne mapping technique, which uses a laser to measure the distance between the aircraft and the ground. Up to 500,000 measurements per second are made of the ground, allowing highly detailed terrain models to be generated at spatial resolutions of between 25cm and 2 metres. The Environment Agency’s open data LIDAR archives includes the Point Cloud data, and derived raster surface models of survey specific areas dating back to 1998 and composites of the best data available in any location.
This metadata record is for Approval for Access product AfA458.
Attribution statement: © Environment Agency copyright and/or database right 2021. All rights reserved. Attribution statement: © Environment Agency copyright and/or database right 2015. All rights reserved.
LiDAR data is made available on the Hong Kong Common Spatial Data Infrastructure (CSDI) Portal.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation
This is the implementation for the paper "A Graph Convolutional Neural Network-based Method for Predicting Computational Intensity of Geocomputation". The framework is the Learning-based Computing Framework for Geospatial data (LCF-G). The paper includes three case studies, each corresponding to a folder. Each folder contains four subfolders: data, CIPrediction, ParallelComputation and SampleGeneration.
- The data folder contains geospatial data.
- The CIPrediction folder contains model training code.
- The ParallelComputation folder contains geographic computation code.
- The SampleGeneration folder contains code for sample generation.
Case: Generation of DEM from point cloud data
Step 1: Data download
Dataset 1 has been uploaded to the directory 1point2dem/data. The other two datasets, Dataset 2 and Dataset 3, can be downloaded from OpenTopography: https://opentopography.org/
Below are the steps for downloading Dataset 2 and Dataset 3, along with the query parameters.
Dataset 2:
- Visit the OpenTopography website and go to the Dataset 2 download link: https://portal.opentopography.org/lidarDataset?opentopoID=OTLAS.112018.2193.1
- Coordinates & Classification: in the section "1. Coordinates & Classification", select the option "Manually enter selection coordinates" and set the coordinates as follows: Xmin = 1372495.692761, Ymin = 5076006.86821, Xmax = 1378779.529766, Ymax = 5085586.39531
- Point Cloud Data Download:
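Independent of the portal steps above, a minimal sketch of the point-cloud-to-DEM step itself (numpy only; a simple per-cell mean of point elevations, not necessarily the repository's method):

import numpy as np

def points_to_dem(x, y, z, cell=1.0):
    # Grid scattered points into a DEM holding the mean elevation per cell.
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    sums = np.zeros((rows.max() + 1, cols.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (rows, cols), z)
    np.add.at(counts, (rows, cols), 1)
    dem = np.full_like(sums, np.nan)    # cells without points stay NaN
    mask = counts > 0
    dem[mask] = sums[mask] / counts[mask]
    return dem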
ARROWSMITH, Ramón, School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287, CROSBY, Christopher J., UNAVCO, 6350 Nautilus Drive, Boulder, CO 80301 and NANDIGAM, Viswanath, San Diego Supercomputer Center, University of California, San Diego, MC 0505, 9500 Gilman Drive, La Jolla, CA 92093-0505
OpenTopography (OT) democratizes access to topographic data, services, and knowledge, enabling fundamental discoveries and innovative applications. OT focuses on improved data access using best practices in cyberinfrastructure and geoinformatics. We deliver topographic data (laser, radar, and photogrammetry) at a range of scales. We enable efficient access to global raster data (30-100 m/pix), but our emphasis has always been high-resolution topography (HRT; <1m/pix or >1 sample/sq. meter). OT currently holds 274 lidar point cloud datasets covering ~217,000 km2. More than a trillion points are available for on-demand processing and download. This is a considerable investment in HRT, valued at greater than $30 million, and represents the efforts of the NSF research community, public agencies, and international partners. OT distributes these data at various product levels at no cost to users, saving time and driving scientific innovation and broader impacts. OT has over 22,000 unique visitors per month, and almost 75,000 unique users have accessed data and processing services via OT. In 2017, 66,061 browser-based jobs were run with another 33,344 jobs via API calls. These computations and analyses support substantial academic, educational, and applied use and reuse of the valuable OT data holdings. OT exemplifies domain cyberinfrastructure evolving to become a production data facility upon which researchers, educators, and many others rely. Our partners depend on OT for data management because of our efficient distribution of data to a wide and diverse community of users, thus increasing the impact and return on investment of the data. OT supports tiered data access, including lightweight network-linked KMZ hillshades all the way to custom derived topographic products such as drainage network properties and solar insolation distributions (for global datasets). Newly implemented browser-based visualization of point cloud datasets enables rich 3D interactions without the need to download data or additional software tools. OT is built on open source software, and actively contributes to such efforts.
Open Government Licence: http://reference.data.gov.uk/id/open-government-licence
The LIDAR Composite DSM (Digital Surface Model) is a raster elevation model covering >60% of England at 1m spatial resolution. Produced by the Environment Agency, this dataset is derived from a combination of our full time stamped archive, which has been merged and re-sampled to give the best possible coverage. Where repeat surveys have been undertaken the newest, best resolution data is used. The composite is updated on an annual basis to include the latest surveys.
The DSM (Digital Surface Model) is produced from the last return LIDAR signal and includes heights of objects, such as vehicles, buildings and vegetation, as well as the terrain surface. Available to download as ASCII files in 5km grids, data is presented in metres, referenced to Ordnance Survey Newlyn, using the OSTN’15 transformation. All LIDAR data has a vertical accuracy of +/-15cm RMSE. A tinted shaded relief, which is an image showing what LIDAR looks like when loaded into specialist software, is also available as a WMS feed. You can also download survey index files, which show, for any location, which Time Stamped survey went into the production of the LIDAR composite.
Light Detection and Ranging (LIDAR) is an airborne mapping technique, which uses a laser to measure the distance between the aircraft and the ground. Up to 500,000 measurements per second are made of the ground, allowing highly detailed terrain models to be generated at spatial resolutions of between 25cm and 2 metres. The Environment Agency’s open data LIDAR archives includes the Point Cloud data, and derived raster surface models of survey specific areas and composites of the best data available in any location.
To find out more about LIDAR and the various surface models we produce please read our story map
This metadata record is for Approval for Access product AfA458. Attribution statement: © Environment Agency copyright and/or database right 2019. All rights reserved.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A terrain surface dataset that represents the height value of all natural and built features of the surface of the city. Each pixel within the image contains an elevation value in accordance with the Australian Height Datum (AHD).
The data was captured in May 2018 as GeoTiff files and covers the entire municipality.
A KML tile index file can be found in the attachments to indicate the location of each tile, along with a sample image.
Capture Information:
- Capture Pixel Resolution: 0.1 metres
Limitations:
Whilst every effort is made to provide the data as accurately as possible, the content may not be free from errors, omissions or defects.
Preview:
Download:
A zip file containing all relevant files representing the Digital Surface Model.
Download Digital Surface Model data (12.0GB)
This dataset is no longer available on the Data Services Platform. New version of this dataset, published in June 2020 is available here: https://environment.data.gov.uk/dataset/73c25700-052a-4d3e-87cf-71326fe2d73a and on Survey Data Catalogue.
The LIDAR Composite DTM (Digital Terrain Model) is a raster elevation model covering >85% of England at 2m spatial resolution. Produced by the Environment Agency in 2019, this dataset is derived from a combination of our Time Stamped archive and National LIDAR Programme, which has been merged and re-sampled to give the best possible coverage. Where repeat surveys have been undertaken the newest, best resolution data is used. Where data was resampled a bilinear interpolation was used before being merged.
The 2019 LIDAR Composite contains surveys undertaken between 12th March 1998 and 1st September 2019. Please refer to the survey index files, which show, for any location, which Time Stamped survey or National LIDAR Programme block went into the production of the LIDAR composite.
The DTM (Digital Terrain Model) is produced from the last return LIDAR signal. We remove surface objects from the Digital Surface Model (DSM), using bespoke algorithms and manual editing of the data, to produce a terrain model of just the surface. Available to download as GeoTiff files in 5km grids, data is presented in metres, referenced to Ordnance Survey Newlyn, using the OSTN’15 transformation. All LIDAR data has a vertical accuracy of +/-15cm RMSE.
Light Detection and Ranging (LIDAR) is an airborne mapping technique, which uses a laser to measure the distance between the aircraft and the ground. Up to 500,000 measurements per second are made of the ground, allowing highly detailed terrain models to be generated at spatial resolutions of between 25cm and 2 metres. The Environment Agency’s open data LIDAR archives includes the Point Cloud data, and derived raster surface models of survey specific areas dating back to 1998 and composites of the best data available in any location.
This metadata record is for Approval for Access product AfA458.
Attribution statement: © Environment Agency copyright and/or database right 2020. All rights reserved. Attribution statement: © Environment Agency copyright and/or database right 2015. All rights reserved.
This dataset contains forest structure measurements for a wooded area where the liana Hedera helix (common ivy) is present. The dataset comprises two overlapping 3D point clouds for a small wooded area covering nearly 18,000 m2 of Bramcote Hills Park (Nottinghamshire, UK), for a sample of trees infested with the liana as well as a non-infested sample of trees. This dataset was acquired using a terrestrial Light Detection And Ranging (LiDAR) scanner on 13 January 2023, as part of a study assessing the diversity and composition of soil- and litter-dwelling organisms associated with liana infestations within forests. The work was supported by the Natural Environment Research Council (Grant NE/X018083/1).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As rubber trees are an important agricultural cash crop in Hainan Province, more attention should be paid to monitoring their sample plots. We developed a terrestrial photogrammetry system combined with 3D point-cloud reconstruction based on the structure-from-motion with Multiview Stereo method and sample plot survey data. Deviation analyses and accuracy evaluations of quadrat information were performed in the study area, covering different age classes and morphological characteristics, to explore its practical value in monitoring rubber forest sample plots. Furthermore, the relationship between under branch height, diameter at breast height (DBH), and rubber tree volume was explored, and a rubber tree binary volume model was established. Here, we propose a planning scheme for a terrestrial photogrammetry system for the sustainable management of rubber forests, providing a novel solution to the issues faced by current sample plot monitoring practices. In the future, the application of a terrestrial photogrammetry system to other types of forest monitoring will gradually be explored.
The data package of our research comprises:
1. The data table in the folder (Coordinate) contains the point position coordinates of our sample shooting images.
2. The data table in the folder (Data) is the data summary table of our study, which includes the measured rubber forest volumes, the measured rubber forest DBH values, the measured under branch heights, and the specific information of each rubber tree in 19 sample plots.
3. The folder (Figures) contains the figures from the research paper.
Because some data was lost in the saving process, we have supplemented the dataset with the rubber forest 3D point cloud data obtained by the terrestrial photogrammetry system. If you need it, please feel free to contact us. Because the data volume is too large to upload to the website, we will provide a network disk download link.
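As a hedged illustration of fitting a binary (two-predictor) volume model of the common allometric form v = a * DBH^b * H^c by log-linear least squares (numpy only; the sample values are placeholders and the paper's exact model form is not reproduced here):

import numpy as np

dbh = np.array([12.0, 15.5, 18.2, 22.4, 25.1])  # diameter at breast height, cm
h = np.array([6.1, 7.4, 8.9, 10.2, 11.0])       # under branch height, m
v = np.array([0.04, 0.08, 0.14, 0.25, 0.34])    # measured volume, m^3
X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(h)])
coef, *_ = np.linalg.lstsq(X, np.log(v), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"v = {a:.5f} * DBH^{b:.2f} * H^{c:.2f}")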