81 datasets found
  1. NEON Teaching Data: LiDAR Point Cloud (.las) Data

    • figshare.com
    bin
    Updated May 30, 2023
    Cite
    NEON Data Skills Teaching Data Subsets (2023). NEON Teaching Data: LiDAR Point Cloud (.las) Data [Dataset]. http://doi.org/10.6084/m9.figshare.4307750.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    NEON Data Skills Teaching Data Subsets
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This .las file contains sample LiDAR point cloud data collected by the National Ecological Observatory Network's Airborne Observation Platform. The .las format is a commonly used file format for storing LiDAR point cloud data. This teaching data set is used for several tutorials on the NEON website (neonscience.org). The data set is for educational purposes; data for research purposes can be obtained from the NEON Data Portal (data.neonscience.org).
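
    A file like this can be inspected with the laspy package; a minimal sketch, assuming laspy is installed and using a hypothetical filename:

    import laspy

    # Hypothetical filename; substitute the .las file downloaded from figshare.
    las = laspy.read("NEON_lidar_teaching_sample.las")

    print(las.header.point_count)            # total number of points
    print(las.header.point_format.id)        # LAS point record format
    print(las.x[:5], las.y[:5], las.z[:5])   # first few scaled coordinates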

  2. Data from: 3D Point Cloud from Nakadake Sanroku Kiln Site Center, Japan:...

    • heidata.uni-heidelberg.de
    Updated May 11, 2023
    Cite
    Maria Shinoto; Michael Doneus; Hideyuki Haijima; Hannah Weiser; Vivien Zahs; Dominic Kempf; Gwydion Daskalakis; Bernhard Höfle; Naoko Nakamura (2023). 3D Point Cloud from Nakadake Sanroku Kiln Site Center, Japan: Sample Data for the Application of Adaptive Filtering with the AFwizard [Dataset]. http://doi.org/10.11588/DATA/TJNQZG
    Explore at:
    Available download formats: application/geo+json (18842), json (300), pdf (1655163), bin (3156804), json (563), json (312), bin (81458436), bin (2214936), application/geo+json (27071), bin (4220562), bin (2082268)
    Dataset updated
    May 11, 2023
    Dataset provided by
    heiDATA
    Authors
    Maria Shinoto; Michael Doneus; Hideyuki Haijima; Hannah Weiser; Vivien Zahs; Dominic Kempf; Gwydion Daskalakis; Bernhard Höfle; Naoko Nakamura
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Area covered
    Minami-Satsuma City, Japan, Kagoshima, Hanaze (Nakadake-Sanroku Kiln Site Center)
    Dataset funded by
    Japan Society for the Promotion of Science
    Description

    This data set represents 3D point clouds acquired with LiDAR technology, and related files, from a 150 × 436 m subregion of the ancient Nakadake Sanroku Kiln Site Center in southern Japan, a densely vegetated mountainous region with varied topography and vegetation. The data set contains the original point cloud (reduced from a density of 5477 points per square meter to 100 points per square meter), a segmentation of the area based on characteristics of vegetation and topography, filter pipelines for segments with different characteristics, and other necessary data. The data serve to test the AFwizard software (https://github.com/ssciwr/afwizard), which can create a DTM from the point cloud with varying filter and filter-parameter selections based on varying segment characteristics. AFwizard adds flexibility to ground-point filtering of 3D point clouds, a crucial step in a variety of applications of LiDAR technology. Digital Terrain Models (DTMs) derived from filtered 3D point clouds serve various purposes; therefore, rather than creating one representation of the terrain that is supposed to be "true", a variety of models can be derived from the same point cloud according to the intended usage of the DTM. The sample data were acquired during an archaeological research project in a mountainous and densely forested region in South Japan, the Nakadake-Sanroku Kiln Site Center: LiDAR data were acquired in a subregion of 0.5 sqkm, a relatively small area characterized by frequent and sudden changes in topography and vegetation. The point cloud is very dense due to the technology chosen (UAV multicopter GLYPHON DYNAMICS GD-X8-SP; LiDAR scanner RIEGL VUX-1 UAV). Usage of the data is restricted to citation of the article mentioned below. Version 2.01 (2023-05-11): article citation updated. 2022-07-21: documentation (HowTo - Minimal Workflow) updated, data files tagged.

  3. 3D point cloud library

    • ieee-dataport.org
    Updated Jun 17, 2025
    Cite
    shimaa ali (2025). 3D point cloud library [Dataset]. https://ieee-dataport.org/documents/3d-point-cloud-library
    Explore at:
    Dataset updated
    Jun 17, 2025
    Authors
    shimaa ali
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    # .PCD v0.7 - Point Cloud Data file format
    VERSION 0.7
    FIELDS x y z rgb
    SIZE 4 4 4 4
    TYPE F F F F
    COUNT 1 1 1 1
    WIDTH 2057209
    HEIGHT 1
    VIEWPOINT 0 0 0 1 0 0 0
    POINTS 2057209
    DATA ascii

    # .PCD v0.7 - Point Cloud Data file format
    VERSION 0.7
    FIELDS x y z rgb
    SIZE 4 4 4 4
    TYPE F F F F
    COUNT 1 1 1 1
    WIDTH 921568
    HEIGHT 1
    VIEWPOINT 0 0 0 1 0 0 0
    POINTS 921568
    DATA ascii
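
    The header shown above is plain ASCII, so it is easy to inspect programmatically; a minimal sketch of a header parser in Python (the filename is hypothetical):

    def read_pcd_header(path):
        """Parse the ASCII header of a .pcd file into a dict (FIELDS, WIDTH, POINTS, ...)."""
        header = {}
        with open(path, "rb") as f:
            for raw in f:
                line = raw.decode("ascii", errors="ignore").strip()
                if line.startswith("#") or not line:
                    continue  # skip comments such as the "# .PCD v0.7 ..." banner
                key, _, value = line.partition(" ")
                header[key] = value
                if key == "DATA":
                    break  # the header ends at the DATA line; point records follow
        return header

    hdr = read_pcd_header("cloud.pcd")  # hypothetical filename
    print(hdr["FIELDS"], hdr["POINTS"], hdr["DATA"])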

  4. City of Melbourne 3D Point Cloud 2018

    • data.melbourne.vic.gov.au
    • researchdata.edu.au
    Updated Nov 19, 2019
    + more versions
    Cite
    (2019). City of Melbourne 3D Point Cloud 2018 [Dataset]. https://data.melbourne.vic.gov.au/explore/dataset/city-of-melbourne-3d-point-cloud-2018/
    Explore at:
    Dataset updated
    Nov 19, 2019
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Melbourne, Melbourne
    Description

    3D point cloud representing all physical features (e.g. buildings, trees and terrain) across City of Melbourne. The data has been encoded into a .las file format containing geospatial coordinates and RGB values for each point. The download is a zip file containing compressed .las files for tiles across the city area.

    The geospatial data has been captured in Map Grid of Australia (MGA) Zone 55 projection and is reflected in the xyz coordinates within each .las file. Also included are RGB (Red, Green, Blue) attributes to indicate the colour of each point.

    Capture Information:

    • Capture Date: May 2018
    • Capture Pixel Size: 7.5cm ground sample distance
    • Map Projection: MGA Zone 55 (MGA55)
    • Vertical Datum: Australian Height Datum (AHD)
    • Spatial Accuracy (XYZ): Supplied survey control used for control (Madigan Surveying), 25 cm absolute accuracy

    Limitations: While every effort is made to provide the data as accurately as possible, the content may not be free from errors, omissions or defects.

    Sample Data: For an interactive sample of the data, see https://cityofmelbourne.maps.arcgis.com/apps/webappviewer3d/index.html?id=b3dc1147ceda46ffb8229117a2dac56d

    Download: A zip file containing the .las files representing tiles of point cloud data across the City of Melbourne area. Download Point Cloud Data (4GB)

  5. Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection

    • data.usgs.gov
    • s.cnmilf.com
    • +1 more
    Updated Feb 14, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection [Dataset]. https://data.usgs.gov/datacatalog/data/USGS:b7e353d2-325f-4fc6-8d95-01254705638a
    Explore at:
    Dataset updated
    Feb 14, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    U.S. Geological Survey
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (Light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point Cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point ...
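
    That lossless LAZ/LAS round trip can be reproduced with common tooling; a sketch using laspy (filenames are hypothetical, and laspy's optional LAZ backend, lazrs or laszip, is assumed to be installed):

    import laspy

    las = laspy.read("USGS_LPC_tile.laz")   # decompress LAZ
    las.write("USGS_LPC_tile.las")          # write uncompressed LAS

    las2 = laspy.read("USGS_LPC_tile.las")
    las2.write("USGS_LPC_tile.laz")         # recompress; the round trip loses no information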

  6. Point clouds of Tram bridge in Schipluiden, The Netherlands

    • data.4tu.nl
    zip
    Updated Jan 28, 2021
    Cite
    Linh Truong; Annie Papalexiou; Ullas Rajvanshi; Anastasia Dagla; Timo Bisschop; Roderik Lindenbergh (2021). Point clouds of Tram bridge in Schipluiden, The Netherlands [Dataset]. http://doi.org/10.4121/13626368.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 28, 2021
    Dataset provided by
    4TU.ResearchData
    Authors
    Linh Truong; Annie Papalexiou; Ullas Rajvanshi; Anastasia Dagla; Timo Bisschop; Roderik Lindenbergh
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    European Commission
    Description

    Point clouds of the Tram bridge in Schipluiden, Zuid-Holland were acquired for the project “Laser Scanning for Automatic Bridge Assessment” (“BridgeScan”), funded through an H2020 Marie Curie Individual Fellowship, and for a major assignment in the course “3D Surveying of Civil and Offshore Infrastructure” of a Master program at the TU Delft Dept. of Geoscience and Remote Sensing. The Tram bridge is a steel truss bridge that originally carried a tram used to transport vegetables; it is now used for light traffic (mainly pedestrians, bikes and motorbikes). The data points of the bridge were acquired using a Leica ScanStation P40 from 14 stations, with a sampling step of 6.3 mm at a measurement range of 10.0 m. The point clouds from the different scanning stations were registered using artificial targets in the Leica Cyclone software. The point cloud of the bridge was used to reconstruct a 3D model and identify surface damage. The data set was cleaned of irrelevant points and down-sampled with a sampling step of 5 mm.

  7. Data from: Caerbannog Point Clouds

    • catalog.data.gov
    • data.openei.org
    • +2 more
    Updated Jan 22, 2025
    Cite
    National Renewable Energy Laboratory (2025). Caerbannog Point Clouds [Dataset]. https://catalog.data.gov/dataset/caerbannog-point-clouds-df20a
    Explore at:
    Dataset updated
    Jan 22, 2025
    Dataset provided by
    National Renewable Energy Laboratory
    Description

    The Caerbannog Point Clouds provide point-sampled 3D models occluded in clouds of points. We synthesized the 3D point clouds from polygonal models, point-sampling the models and surrounding them in a point cloud such that the shape of the model is occluded in any 2D projection. We obscure our model-of-interest by repeatedly surrounding it with an amorphous cloud of points, giving the overall point cloud an organic structure, like that of shrubbery. In a user study, participants were significantly better at identifying the models when visualized as 3D scatterplots under rotation than in axis-aligned 2D scatterplots. We provide three point clouds, each in occluded and unoccluded form: the Stanford Bunny, the Utah Teapot, and the OSG Cow.

  8. Data from: Detailed point cloud data on stem size and shape of Scots pine...

    • zenodo.org
    pdf, zip
    Updated Jul 22, 2024
    Cite
    Ninni Saarinen; Ville Kankare; Tuomas Yrttimaa; Niko Viljanen; Eija Honkavaara; Markus Holopainen; Juha Hyyppä; Saija Huuskonen; Jari Hynynen; Mikko Vastaranta (2024). Detailed point cloud data on stem size and shape of Scots pine trees [Dataset]. http://doi.org/10.5281/zenodo.3701271
    Explore at:
    Available download formats: zip, pdf
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ninni Saarinen; Ville Kankare; Tuomas Yrttimaa; Niko Viljanen; Eija Honkavaara; Markus Holopainen; Juha Hyyppä; Saija Huuskonen; Jari Hynynen; Mikko Vastaranta
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set comprises three zip files containing text files of 3D information from terrestrial laser scanning (TLS) and aerial imagery from an unmanned aerial vehicle (UAV) for individual Scots pine trees within 27 sample plots at three test sites located in southern Finland.

    TLS data acquisition was carried out with a Trimble TX5 3D laser scanner (Trimble Navigation Limited, USA) for all three study sites between September and October 2018. Eight scans were placed in each sample plot, and a scan resolution corresponding to a point distance of approximately 6.3 mm at a 10-m range was used. Artificial constant-sized spheres (diameter 198 mm) were placed around the sample plots and used as reference objects for registering the eight scans onto a single, aligned coordinate system. The registration was carried out with FARO Scene software (version 2018). Aerial images were obtained using a UAV with a Gryphon Dynamics quadcopter frame. Two Sony A7R II digital cameras were mounted on the UAV at +15° and -15° angles. Images were acquired every two seconds, and image locations were recorded for each image. The flights were carried out on October 2, 2018. For each study site, eight ground control points (GCPs) were placed and measured. A flying height of 140 m and a flying speed of 5 m/s were selected for all the flights, resulting in a 1.6 cm ground sampling distance. A total of 639, 614, and 663 images were captured for study sites 1, 2, and 3, respectively, resulting in 93% forward and 75% side overlaps. Photogrammetric processing of the aerial images was carried out following the workflow presented in Viljanen et al. (2018). The processing produced photogrammetric point clouds for each study site with point densities of 804 points/m2, 976 points/m2, and 1030 points/m2 for study sites 1, 2, and 3, respectively.

    The sample plots within the three test sites have been managed with different thinning treatments in either 2005 or 2006. The experimental design of the sample plots includes two levels of thinning intensity and three thinning types resulting in six different thinning treatments, namely i) moderate thinning from below, ii) moderate thinning from above, iii) moderate systematic thinning, iv) intensive thinning from below, v) intensive thinning from above, and vi) intensive systematic thinning, as well as a control plot where no thinning has been carried out since the establishment. More information about the study sites and samples plots as well as the thinning treatments can be found in Saarinen et al. (2020a).

    The data set includes stem points of individual Scots pine trees extracted from the point clouds. More about the extraction method can be found in Saarinen et al. (2020a, 2020b) and Yrttimaa et al. (2020). The name of each zip file refers to study sites 1, 2, and 3. The name of each text file encodes the test site, the plot within the test site, and the tree within the plot. The text files contain stem points extracted from the TLS point clouds. The columns “x” and “y” contain x- and y-coordinates in a local coordinate system (in meters), column “h” is the height of each point in meters above ground, and “treeID” is the tree identification number. The columns are separated by spaces.
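
    Given the space-separated columns described above, a single stem file can be loaded with pandas; a minimal sketch with a hypothetical filename:

    import pandas as pd

    # Hypothetical filename following the site/plot/tree naming described above.
    # If the files carry no header row, pass header=None and
    # names=["x", "y", "h", "treeID"] instead.
    stem = pd.read_csv("site1_plot02_tree11.txt", sep=" ")

    print(stem.columns.tolist())   # expected: ['x', 'y', 'h', 'treeID']
    print(stem["h"].max())         # height of the topmost stem point (m above ground)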

    Based on the study site and plot number, files from different thinning treatments can be identified by using the information in Table 1 in Saarinen et al. (2020b).

    References

    Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020a. Assessing the effects of stand dynamics on stem growth allocation of individual Scots pines. bioRxiv 2020.03.02.972521. https://doi.org/10.1101/2020.03.02.972521

    Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020b. Detailed point cloud data on stem size and shape of Scots pine trees. bioRxiv 2020.03.09.983973. https://doi.org/10.1101/2020.03.09.983973

    Viljanen, N., Honkavaara, E., Näsi, R., Hakala, T., Niemeläinen, O., Kaivosoja, J. 2018. A Novel Machine Learning Method for Estimating Biomass of Grass Swards Using a Photogrammetric Canopy Height Model, Images and Vegetation Indices Captured by a Drone. Agriculture 8: 70. https://doi.org/10.3390/agriculture8050070

    Yrttimaa, T., Saarinen, N., Kankare, V., Hynynen, J., Huuskonen, S., Holopainen, M., Hyyppä, J., Vastaranta, M. 2020. Performance of terrestrial laser scanning to characterize managed Scots pine (Pinus sylvestris L.) stands is dependent on forest structural variation. EarthArXiv. March 5. https://doi.org/10.31223/osf.io/ybs7c

  9. Data from: 3DHD CityScenes: High-Definition Maps in High-Density Point...

    • zenodo.org
    bin, pdf
    Updated Jul 16, 2024
    Cite
    Christopher Plachetka; Benjamin Sertolli; Jenny Fricke; Marvin Klingner; Tim Fingscheidt (2024). 3DHD CityScenes: High-Definition Maps in High-Density Point Clouds [Dataset]. http://doi.org/10.5281/zenodo.7085090
    Explore at:
    Available download formats: bin, pdf
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christopher Plachetka; Benjamin Sertolli; Jenny Fricke; Marvin Klingner; Tim Fingscheidt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections in the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.

    Our corresponding paper (published at ITSC 2022) is available here.
    Further, we have applied 3DHD CityScenes to map deviation detection here.

    Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:

    • Python tools to read, generate, and visualize the dataset,
    • 3DHDNet deep learning pipeline (training, inference, evaluation) for
      map deviation detection and 3D object detection.

    The DevKit is available here:

    https://github.com/volkswagen/3DHD_devkit.

    The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.

    When using our dataset, you are welcome to cite:

    @INPROCEEDINGS{9921866,
      author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and 
      Fingscheidt, Tim},
      booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)}, 
      title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds}, 
      year={2022},
      pages={627-634}}

    Acknowledgements

    We thank the following interns for their exceptional contributions to our work.

    • Benjamin Sertolli: Major contributions to our DevKit during his master thesis
    • Niels Maier: Measurement campaign for data collection and data preparation

    The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.

    The Dataset

    After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

    1. Dataset

    This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.

    During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.

    To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

    import json
    
    json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
    with open(json_path) as jf:
      data = json.load(jf)
    print(data)

    2. HD_Map

    Map items are stored as lists of items in JSON format. In particular, we provide:

    • traffic signs,
    • traffic lights,
    • pole-like objects,
    • construction site locations,
    • construction site obstacles (point-like such as cones, and line-like such as fences),
    • line-shaped markings (solid, dashed, etc.),
    • polygon-shaped markings (arrows, stop lines, symbols, etc.),
    • lanes (ordinary and temporary),
    • relations between elements (only for construction sites, e.g., sign to lane association).

    3. HD_Map_MetaData

    Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation for each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.

    Files with the endings .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as a “shape file” (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information in a different format, used in our Python API.

    4. HD_PointCloud_Tiles

    The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.

    • x-coordinates: 4 byte integer
    • y-coordinates: 4 byte integer
    • z-coordinates: 4 byte integer
    • intensity of reflected beams: 2 byte unsigned integer
    • ground classification flag: 1 byte unsigned integer

    After reading, the respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.

    import numpy as np
    import pptk

    file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
    pc_dict = {}
    key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
    type_list = ['<i4', '<i4', '<i4', '<u2', '<u1']  # dtypes per the format above; little-endian byte order is an assumption

    with open(file_path, 'rb') as f:
        num_points = np.fromfile(f, dtype='<i4', count=1)[0]  # first 4 bytes: point count
        for key, dtype in zip(key_list, type_list):
            pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)

    # Unnormalize the values as described above, then visualize, e.g. with pptk.viewer().

    5. Trajectories

    We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.

    These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.

    • OP1 – OP5 cover the majority of the map with 5 trajectories.
    • RH1 – RH10 cover the majority of the map with 10 trajectories.

    Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.

  10. Dataset of point cloud data obtained from indoor experiment with two...

    • ieee-dataport.org
    Updated May 18, 2022
    + more versions
    Cite
    Masamichi Oka (2022). Dataset of point cloud data obtained from indoor experiment with two 3D-image sensors [Dataset]. https://ieee-dataport.org/documents/dataset-point-cloud-data-obtained-indoor-experiment-two-3d-image-sensors
    Explore at:
    Dataset updated
    May 18, 2022
    Authors
    Masamichi Oka
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    a 3-D image sensor

  11. Point clouds of Constructie bridge on Westlandseweg, Delft, The Netherlands

    • data.4tu.nl
    • figshare.com
    bin
    Updated Feb 2, 2021
    Cite
    Linh Truong; Roderik Lindenbergh; Qian Bai; Harm Kathman; Owen O'Driscoll (2021). Point clouds of Constructie bridge on Westlandseweg, Delft, The Netherlands [Dataset]. http://doi.org/10.4121/13626188.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 2, 2021
    Dataset provided by
    4TU.ResearchData
    Authors
    Linh Truong; Roderik Lindenbergh; Qian Bai; Harm Kathman; Owen O'Driscoll
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Delft, Netherlands
    Description

    Point clouds of the Constructie bridge on Westlandseweg in Delft were acquired for the project “Laser Scanning for Automatic Bridge Assessment” (“BridgeScan”), funded through an H2020 Marie Curie Individual Fellowship, and for a major assignment in the course “3D Surveying of Civil and Offshore Infrastructure” of a Master program at the TU Delft Dept. of Geoscience and Remote Sensing. The Constructie bridge is a concrete mixed-vehicle and tram bridge with one span over a canal. The data points of the bridge were acquired using a Leica ScanStation P40 from 9 stations (5 below, 2 on top and 1 on each side), with a sampling step of 6.3 mm at a measurement range of 10.0 m. The point clouds from the different scanning stations were registered using artificial targets in the Leica Cyclone software. The point cloud of the bridge was used to reconstruct a 3D model and identify surface damage. The data set was down-sampled with a sampling step of 5 mm.

  12. Data from: Topographic point cloud for the intertidal zone at Post Point,...

    • catalog.data.gov
    • data.usgs.gov
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Topographic point cloud for the intertidal zone at Post Point, Bellingham Bay, WA, 2019-06-06 [Dataset]. https://catalog.data.gov/dataset/topographic-point-cloud-for-the-intertidal-zone-at-post-point-bellingham-bay-wa-2019-06-06
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Bellingham Bay, Post Point, Washington
    Description

    This portion of the data release presents topographic point clouds of the intertidal zone at Post Point, Bellingham Bay, WA. The point clouds were derived from structure-from-motion (SfM) processing of aerial imagery collected with an unmanned aerial system (UAS) on 2019-06-06. Two point clouds are presented with different resolutions: one point cloud (PostPoint_2019-06-06_pointcloud.zip) covers the entire survey area and has 145,653,2221 points with an average point density of 1,057 points per square meter; the other point cloud (PostPointHighRes_2019-06-06_pointcloud.zip) has 139,427,055 points with an average point density of 3,487 points per square meter and was derived from a lower-altitude flight covering an inset area within the main survey area. The point clouds are tiled to reduce individual file sizes and grouped within zip files for downloading. Each point in the point clouds contains an explicit horizontal and vertical coordinate, color, intensity, and classification. Water portions of the point cloud were classified using a polygon digitized from the orthomosaic imagery derived from these surveys (also available in this data release). No other classifications were performed. The raw imagery used to create these point clouds was acquired using a UAS fitted with a Ricoh GR II digital camera featuring a global shutter. The UAS was flown on pre-programmed autonomous flight lines spaced to provide approximately 70 percent overlap between images from adjacent lines. The camera was triggered at 1 Hz using a built-in intervalometer. For the main survey area point cloud, the UAS was flown at an approximate altitude of 70 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 1.8 centimeters per pixel. For the higher-resolution point cloud, the UAS was flown at an approximate altitude of 35 meters AGL, resulting in a nominal GSD of 0.9 centimeters per pixel. The raw imagery was geotagged using positions from the UAS onboard single-frequency autonomous GPS. Nineteen temporary ground control points (GCPs) were distributed throughout each survey area to establish survey control. The GCPs consisted of a combination of small square tarps with black-and-white cross patterns and "X" marks placed on the ground using temporary chalk. The GCP positions were measured using post-processed kinematic (PPK) GPS, using corrections from a GPS base station located approximately 5 kilometers from the study area. The point clouds are formatted in LAZ format (LAS 1.2 specification).

  13. Classification of raw point clouds using deep learning & generating 3d...

    • gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com
    Updated Sep 14, 2020
    Cite
    Python API Test (2020). Classification of raw point clouds using deep learning & generating 3d building models [Dataset]. https://gemelo-digital-en-arcgis-gemelodigital.hub.arcgis.com/datasets/geosaurus::classification-of-raw-point-clouds-using-deep-learning-generating-3d-building-models-1
    Explore at:
    Dataset updated
    Sep 14, 2020
    Dataset authored and provided by
    Python API Test
    Description

    Results for "classification of raw point clouds using deep learning & generating 3d building models"

  14. RACECAR-multislow_poli

    • huggingface.co
    Updated Dec 25, 2024
    Cite
    Suwesh Sah (2024). RACECAR-multislow_poli [Dataset]. https://huggingface.co/datasets/suwesh/RACECAR-multislow_poli
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 25, 2024
    Authors
    Suwesh Sah
    License

    https://choosealicense.com/licenses/odbl/

    Description

    This dataset contains the 4D point cloud data from LiDAR sensors collected from fully autonomous and self-driving Indy race cars which raced in the Indy autonomous challenge. The dataset is in nuScenes format and is divided into 7,150 sweeps and 1,199 samples which contain fused sensor data from 3 LiDARs equipped by the vehicle. This dataset's scenario is PoliMove team’s Multi-Agent Slow on LVMS racetrack. Each .pcd file contains 4 dimensional data: (x,y,z) coordinates of the 3D space and… See the full description on the dataset page: https://huggingface.co/datasets/suwesh/RACECAR-multislow_poli.
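
    The .pcd sweeps can be loaded with a library such as Open3D; a minimal sketch, assuming Open3D is installed and with a hypothetical sweep filename:

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("sweep_0001.pcd")  # hypothetical sweep filename
    print(pcd)                            # prints a summary including the point count

    xyz = np.asarray(pcd.points)          # N x 3 array of x, y, z coordinates
    # Any additional per-point channel beyond x, y, z may require custom parsing.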

  15. USGS 3DEP LiDAR Point Clouds

    • registry.opendata.aws
    Updated Jan 22, 2019
    Cite
    USGS 3DEP LiDAR Point Clouds [Dataset]. https://registry.opendata.aws/usgs-lidar/
    Explore at:
    Dataset updated
    Jan 22, 2019
    Dataset provided by
    Hobu, Inc. (https://hobu.co)
    Description

    The goal of the USGS 3D Elevation Program (3DEP) is to collect elevation data in the form of light detection and ranging (LiDAR) data over the conterminous United States, Hawaii, and the U.S. territories, with data acquired over an 8-year period. This dataset provides two realizations of the 3DEP point cloud data. The first resource is a public access organization provided in Entwine Point Tiles (EPT) format, which is a lossless, full-density, streamable octree based on LASzip (LAZ) encoding. The second resource is a Requester Pays bucket of the original, raw LAZ (compressed LAS) 1.4 3DEP data; it is more complete in coverage, as sources with an incomplete or missing CRS do not have an EPT tile generated. Resource names in both buckets correspond to the USGS project names.
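
    A spatial window can be streamed from the Entwine Point Tiles bucket without downloading a whole project; a sketch using PDAL's Python bindings, where the resource name and bounds are hypothetical:

    import json
    import pdal

    # Hypothetical resource name and bounds; EPT resources live under
    # s3://usgs-lidar-public/<USGS project name>/ept.json, and EPT data is in EPSG:3857.
    pipeline_def = {
        "pipeline": [
            {
                "type": "readers.ept",
                "filename": "https://s3-us-west-2.amazonaws.com/usgs-lidar-public/USGS_LPC_Example/ept.json",
                "bounds": "([-10425171, -10423171], [5164494, 5166494])",
            },
            {"type": "writers.las", "filename": "subset.laz"},
        ]
    }

    pipeline = pdal.Pipeline(json.dumps(pipeline_def))
    count = pipeline.execute()  # fetches only the requested window, then writes it out
    print(count, "points written")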

  16. STPLS3D Dataset

    • paperswithcode.com
    Updated Mar 2, 2022
    Cite
    Meida Chen; Qingyong Hu; Zifan Yu; Hugues Thomas; Andrew Feng; Yu Hou; Kyle McCullough; Fengbo Ren; Lucio Soibelman (2022). STPLS3D Dataset [Dataset]. https://paperswithcode.com/dataset/stpls3d
    Explore at:
    Dataset updated
    Mar 2, 2022
    Authors
    Meida Chen; Qingyong Hu; Zifan Yu; Hugues Thomas; Andrew Feng; Yu Hou; Kyle McCullough; Fengbo Ren; Lucio Soibelman
    Description

    Our project (STPLS3D) aims to provide a large-scale aerial photogrammetry dataset with synthetic and real annotated 3D point clouds for semantic and instance segmentation tasks.

    Although various 3D datasets with different functions and scales have been proposed recently, it remains challenging for individuals to complete the whole pipeline of large-scale data collection, sanitization, and annotation (e.g., semantic and instance labels). Moreover, the created datasets usually suffer from extremely imbalanced class distribution or partial low-quality data samples. Motivated by this, we explore the procedurally synthetic 3D data generation paradigm to equip individuals with the full capability of creating large-scale annotated photogrammetry point clouds. Specifically, we introduce a synthetic aerial photogrammetry point clouds generation pipeline that takes full advantage of open geospatial data sources and off-the-shelf commercial packages. Unlike generating synthetic data in virtual games, where the simulated data usually have limited gaming environments created by artists, the proposed pipeline simulates the reconstruction process of the real environment by following the same UAV flight pattern on a wide variety of synthetic terrain shapes and building densities, which ensure similar quality, noise pattern, and diversity with real data. In addition, the precise semantic and instance annotations can be generated fully automatically, avoiding the expensive and time-consuming manual annotation process. Based on the proposed pipeline, we present a richly-annotated synthetic 3D aerial photogrammetry point cloud dataset, termed STPLS3D, with more than 16 km^2 of landscapes and up to 18 fine-grained semantic categories. For verification purposes, we also provide a parallel dataset collected from four areas in the real environment.

  17. Data from: Topographic point cloud from UAS survey of the debris flow at...

    • s.cnmilf.com
    • data.usgs.gov
    • +1 more
    Updated Dec 25, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Topographic point cloud from UAS survey of the debris flow at South Fork Campground, Sequoia National Park, CA [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/topographic-point-cloud-from-uas-survey-of-the-debris-flow-at-south-fork-campground-sequoi
    Explore at:
    Dataset updated
    Dec 25, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    California
    Description

    This portion of the data release presents a topographic point cloud of the debris flow at South Fork Campground in Sequoia National Park. The point cloud was derived from structure-from-motion (SfM) photogrammetry using aerial imagery acquired during an uncrewed aerial systems (UAS) survey on 30 April 2024, conducted under authorization from the National Park Service. The raw imagery was acquired with a Ricoh GR II digital camera featuring a global shutter. The UAS was flown on pre-programmed autonomous flight lines spaced to provide approximately 70 percent overlap between images from adjacent lines, from an approximate altitude of 110 meters above ground level (AGL), resulting in a nominal ground-sample-distance (GSD) of 2.9 centimeters per pixel. The imagery was geotagged using positions from the UAS onboard single-frequency autonomous GPS. Survey control was established using temporary ground control points (GCPs) consisting of a combination of small square tarps with black-and-white cross patterns and temporary chalk marks placed on the ground. The GCP positions were measured using dual-frequency real-time kinematic (RTK) GPS with corrections referenced to a static base station operating nearby. The images and GCP positions were used for SfM photogrammetric processing to create a topographic point cloud, a high-resolution orthomosaic image, and a DSM. The point cloud contains 284,906,970 points with an average point spacing of one point every three centimeters. The point cloud has not been classified; however, points with confidence less than three (a measure of the number of depth maps used to generate a point) have been assigned a classification value of 7, which represents low noise. The point cloud is provided in a cloud-optimized LAZ format to facilitate cloud-based queries and display.
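
    Because the low-noise points carry classification value 7, they can be dropped before analysis; a minimal laspy sketch with a hypothetical filename:

    import laspy

    # Hypothetical filename; reading LAZ requires a laspy LAZ backend (lazrs or laszip).
    las = laspy.read("SouthForkCampground_pointcloud.laz")

    keep = las.classification != 7          # drop class 7 (low noise), per the description
    denoised = laspy.LasData(las.header)
    denoised.points = las.points[keep]
    denoised.write("SouthForkCampground_denoised.laz")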

  18. Detroit Street View Terrestrial LiDAR (2020-2022)

    • detroitdata.org
    • data.detroitmi.gov
    • +1 more
    Updated Apr 18, 2023
    Cite
    City of Detroit (2023). Detroit Street View Terrestrial LiDAR (2020-2022) [Dataset]. https://detroitdata.org/dataset/detroit-street-view-terrestrial-lidar-2020-2022
    Explore at:
    Available download formats: arcgis geoservices rest api, zip, csv, gdb, gpkg, txt, html, geojson, kml, xlsx
    Dataset updated
    Apr 18, 2023
    Dataset provided by
    City of Detroit
    Area covered
    Detroit
    Description

    Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.

    Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.

    Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.

    LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.

    Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3


  19. Tree Point Classification

    • hub.arcgis.com
    • cacgeoportal.com
    • +1 more
    Updated Oct 8, 2020
    Cite
    Esri (2020). Tree Point Classification [Dataset]. https://hub.arcgis.com/content/58d77b24469d4f30b5f68973deb65599
    Explore at:
    Dataset updated
    Oct 8, 2020
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Classifying trees from point cloud data is useful in applications such as high-quality 3D basemap creation, urban planning, and forestry workflows. Trees have a complex geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results.

    Using the model

    Follow the guide to use the model. The model can be used with the 3D Basemaps solution and ArcGIS Pro's Classify Point Cloud Using Trained Model tool; a scripted example follows this description. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Input

    The model accepts unclassified point clouds with the attributes X, Y, Z, and Number of Returns. Note: this model is trained to work on unclassified point clouds that are in a projected coordinate system, where the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The provided deep learning model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification. This model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block, and extra attributes should match those of the data originally used for training this model (see the Training data section below).

    Output

    The model will classify the point cloud into the following 2 classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

    • 0 Background
    • 5 Trees / High-vegetation

    Applicable geographies

    This model is expected to work well in all regions globally, with the exception of mountainous regions. However, results can vary for datasets that are statistically dissimilar to the training data.

    Model architecture

    This model uses the PointCNN model architecture implemented in the ArcGIS API for Python.

    Accuracy metrics

    The table below summarizes the accuracy of the predictions on the validation dataset.

    Class                        Precision   Recall     F1-score
    Trees / High-vegetation (5)  0.975374    0.965929   0.970628

    Training data

    This model is trained on a subset of the UK Environment Agency's open dataset. The training data used has the following characteristics:

    X, Y and Z linear unit      meter
    Z range                     -19.29 m to 314.23 m
    Number of Returns           1 to 5
    Intensity                   1 to 4092
    Point spacing               0.6 ± 0.3
    Scan angle                  -23 to +23
    Maximum points per block    8192
    Extra attributes            Number of Returns
    Class structure             [0, 5]

    Sample results

    Here are a few results from the model.
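
    The same geoprocessing tool can also be scripted with arcpy; a rough sketch, assuming an ArcGIS Pro installation with the 3D Analyst extension (paths are hypothetical and parameter names should be verified against the tool's documentation):

    import arcpy

    arcpy.CheckOutExtension("3D")  # the tool is part of the 3D Analyst toolbox

    # Hypothetical paths: a LAS dataset and the downloaded .dlpk deep learning package.
    arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
        in_point_cloud=r"C:\data\city_lidar.lasd",
        in_trained_model=r"C:\models\TreePointClassification.dlpk",
        output_classes="5",  # Trees / High-vegetation, per the Output section above
    )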

  20. Building Point Classification - New Zealand

    • pacificgeoportal.com
    • hub.arcgis.com
    Updated Sep 18, 2023
    Cite
    Eagle Technology Group Ltd (2023). Building Point Classification - New Zealand [Dataset]. https://www.pacificgeoportal.com/datasets/eaglegis::building-point-classification-new-zealand
    Explore at:
    Dataset updated
    Sep 18, 2023
    Dataset authored and provided by
    Eagle Technology Group Ltd
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New Zealand
    Description

    This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into building and background classes. The model is optimized to work with New Zealand aerial LiDAR data. Classifying point cloud datasets to identify buildings is useful in applications such as high-quality 3D basemap creation, urban planning, and climate change response planning. Buildings can have complex, irregular geometrical structures that are hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract buildings in both urban and rural areas in New Zealand. The training/testing/validation datasets are taken within New Zealand, resulting in high reliability in recognizing the patterns of common NZ building architecture.

    Licensing requirements

    ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

    Using the model

    The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS. Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets. The model is trained with classified LiDAR, using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the class of interest versus background points. It is recommended to use the 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and over scenarios with false positives. The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block and extra attributes should match those of the data originally used for training this model (see the Training data section below).

    Output

    The model will classify the point cloud into the following classes, with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

    • 0 Background
    • 6 Building

    Applicable geographies

    The model is expected to work well in New Zealand. It has been seen to produce favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to the training data.

    Dataset                        City
    Training                       Auckland, Christchurch, Kapiti, Wellington
    Testing                        Auckland, Wellington
    Validation/Evaluation          Hutt City

    Model architecture

    This model uses the SemanticQueryNetwork model architecture implemented in ArcGIS Pro.

    Accuracy metrics

    The table below summarizes the accuracy of the predictions on the validation dataset.

    Class             Precision   Recall     F1-score
    Never Classified  0.984921    0.975853   0.979762
    Building          0.951285    0.967563   0.9584

    Training data

    This model is trained on a classified dataset originally provided by OpenTopography, with < 1% manual labelling and correction. Train-test split percentage: {Train: 75%, Test: 25%}; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:

    X, Y, and Z linear unit     meter
    Z range                     -137.74 m to 410.50 m
    Number of Returns           1 to 5
    Intensity                   16 to 65520
    Point spacing               0.2 ± 0.1
    Scan angle                  -17 to +17
    Maximum points per block    8192
    Block size                  50 meters
    Class structure             [0, 6]

    Sample results

    A sample run classified the Wellington city dataset with a density of 23 pts/m². The model's performance is directly proportional to the dataset's point density and the exclusion of noise points. To learn how to use this model, see this story.
