19 datasets found
  1. Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection

    • catalog.data.gov
    • data.usgs.gov
    Updated Mar 11, 2025
    Cite
    U.S. Geological Survey (2025). Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection [Dataset]. https://catalog.data.gov/dataset/lidar-point-cloud-usgs-national-map-3dep-downloadable-data-collection
    Dataset updated
    Mar 11, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point attributes along with header information and variable length records specific to the data. Millions of data points are stored as a 3-dimensional data cloud as a series of geo-referenced x, y coordinates and z (elevation), as well as other attributes for each point. Additional information about the LAS file format can be found here: https://www.asprs.org/divisions-committees/lidar-division/laser-las-file-format-exchange-activities. All 3DEP products are public domain.
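
    Since the LAZ/LAS round trip is lossless, it can be scripted in a few lines. Below is a minimal sketch using the laspy library (an illustrative choice, not part of this listing); it assumes laspy is installed with a LAZ backend (e.g. pip install "laspy[lazrs]"), and the tile name is a placeholder.

    import laspy

    # Placeholder name for any downloaded 3DEP LPC tile
    las = laspy.read("USGS_LPC_tile.laz")

    # Header fields, point records, and variable length records survive the round trip
    print(las.header.version, las.header.point_count, len(las.vlrs))

    # Writing with a .las extension produces the uncompressed equivalent
    las.write("USGS_LPC_tile.las")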

  2. USGS 3DEP LiDAR Point Clouds

    • data.subak.org
    • registry.opendata.aws
    Updated Feb 16, 2023
    Cite
    Hobu, Inc. (2023). USGS 3DEP LiDAR Point Clouds [Dataset]. https://data.subak.org/dataset/usgs-3dep-lidar-point-clouds
    Dataset updated
    Feb 16, 2023
    Dataset provided by
    Hobu, Inc.
    Description

    The goal of the USGS 3D Elevation Program (3DEP) is to collect elevation data in the form of light detection and ranging (LiDAR) data over the conterminous United States, Hawaii, and the U.S. territories, with data acquired over an 8-year period. This dataset provides two realizations of the 3DEP point cloud data. The first resource is a public-access bucket organized in Entwine Point Tiles (EPT) format, a lossless, full-density, streamable octree based on LASzip (LAZ) encoding. The second resource is a Requester Pays bucket holding the original raw LAZ (compressed LAS) 1.4 3DEP data; it is more complete in coverage, because sources with an incomplete or missing CRS do not have an EPT tile generated. Resource names in both buckets correspond to the USGS project names.
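
    The EPT realization can be queried over HTTP without downloading whole projects. Below is a minimal PDAL pipeline sketch; the bucket path and resource name are illustrative placeholders (resource names correspond to USGS project names, per the description above), and the bounds are placeholder EPSG:3857 coordinates.

    import json
    import pdal

    pipeline_def = [
        {
            "type": "readers.ept",
            # Illustrative resource URL; substitute a real project name
            "filename": "https://s3-us-west-2.amazonaws.com/usgs-lidar-public/Example_Project/ept.json",
            # Placeholder query window in EPSG:3857
            "bounds": "([-10425171, -10423171], [5164494, 5166494])"
        },
        {"type": "writers.las", "filename": "subset.las"}
    ]

    pipeline = pdal.Pipeline(json.dumps(pipeline_def))
    count = pipeline.execute()
    print(f"{count} points retrieved")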

    Documentation

    https://github.com/hobu/usgs-lidar/

    Update Frequency

    Periodically

    License

    US Government Public Domain https://www.usgs.gov/faqs/what-are-terms-uselicensing-map-services-and-data-national-map

  3. 3D Point Cloud Processing Software market Will Grow at a CAGR of 7.00% from...

    • cognitivemarketresearch.com
    pdf, excel, csv, ppt
    Updated Jan 27, 2025
    Cite
    Cognitive Market Research (2025). 3D Point Cloud Processing Software market Will Grow at a CAGR of 7.00% from 2024 to 2031. [Dataset]. https://www.cognitivemarketresearch.com/3d-point-cloud-processing-software-market-report
    Available download formats: pdf, excel, csv, ppt
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    Decipher Market Research
    Authors
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global 3D Point Cloud Processing Software market size is USD 338.2 million in 2024 and will expand at a compound annual growth rate (CAGR) of 7.00% from 2024 to 2031.

    • North America held the largest share, more than 40% of global revenue, with a market size of USD 135.28 million in 2024, and will grow at a CAGR of 5.2% from 2024 to 2031.

    • Europe accounted for over 30% of the global market, with a market size of USD 101.46 million.

    • Asia Pacific held around 23% of global revenue, with a market size of USD 77.79 million in 2024, and will grow at a CAGR of 9.0% from 2024 to 2031.

    • Latin America held more than 5% of global revenue, with a market size of USD 16.91 million in 2024, and will grow at a CAGR of 6.4% from 2024 to 2031.

    • Middle East and Africa held around 2% of global revenue, with a market size of USD 6.76 million in 2024, and will grow at a CAGR of 6.7% from 2024 to 2031.

    • Local Deployment held the highest 3D Point Cloud Processing Software market revenue share in 2024.

    Market Dynamics of 3D Point Cloud Processing Software Market

    Key Drivers for 3D Point Cloud Processing Software Market

    Increasing Use of LiDAR and Photogrammetry to Increase the Demand Globally

    Large-scale 3D point cloud data creation is being fueled by the rising efficiency and affordability of LiDAR and photogrammetry technologies. Processing software is essential for efficiently utilizing this data since it converts unprocessed data into formats that may be used in a variety of applications. These technologies provide increased scalability and precision, making a variety of activities easier, from infrastructure building to environmental monitoring and urban planning. In addition to streamlining data collection, the combination of LiDAR and photogrammetry allows for precise and in-depth spatial analysis, which propels progress in disciplines like forestry, civil engineering, and archaeology. The interplay between data processing and acquisition software continues to spur innovation across industries even as demand rises and technology advances.

    Growth of BIM (Building Information Modeling) to Propel Market Growth

    The broad adoption of Building Information Modeling (BIM) is transforming the building sector. BIM offers a full digital depiction of buildings and infrastructure, which improves design, planning, and construction workflows. The incorporation of 3D point cloud processing software, which is essential to combining point cloud data with BIM models, is at the center of this change. Thanks to this connection, construction experts can now use accurate and comprehensive as-built data for maintenance, retrofitting, and refurbishment projects. Throughout the course of a project, BIM improves cooperation, lowers errors, and maximizes resource usage by fusing real-world data into virtual models. As demand for sustainable and efficient construction processes grows, the synergy between BIM and point cloud processing tools continues to drive innovation in the industry.

    Restraint Factor for the 3D Point Cloud Processing Software Market

    High Cost of Software Licenses to Limit the Sales

    Indeed, the high cost of 3D point cloud processing software licensing can be a major obstacle, particularly for smaller companies or individual users with tighter budgets. Programs that are feature-rich or complex sometimes have high price tags, which prevents people without significant financial means from using them. Due to their inability to compete with larger companies that can afford more expensive software solutions, smaller businesses may find it difficult to innovate and remain competitive in the industry as a result of the unequal access to cutting-edge software tools. In response to this difficulty, more flexible and affordable alternative pricing models have surfaced, such as pay-as-you-go or subscription-based services. Furthermore, accessible options are offered by open-source software projects and community-driven development activities to consumers looking for affordable solutions that don't sacrifice quality or functionality.

    Impact of Covid-19 on the 3D Point Cloud Processing Software Market

    Both good and n...

  4. 3D Point Cloud Data for LiDAR-based Mobile Robot

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 12, 2022
    Cite
    Mohd Romlay, Muhammad Rabani (2022). 3D Point Cloud Data for LiDAR-based Mobile Robot [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_5839708
    Dataset updated
    Jan 12, 2022
    Dataset provided by
    Mohd Ibrahim, Azhar
    Toha, Siti Fauziah
    Mohd Romlay, Muhammad Rabani
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    LiDAR point cloud data serve as a machine-vision alternative to images. Compared with image and video, their advantages include depth estimation and distance measurement. Low-density LiDAR point cloud data can be used for navigation, obstacle detection, and obstacle avoidance in mobile robots, autonomous vehicles, and drones. In this dataset, we scanned over 1,400 objects and classified them into 6 object groups: human, cars, motorcyclist, signboard, road divider, and others.

  5. Kentucky LiDAR Point Cloud Data

    • kyfromabove.ky.gov
    • kyfromabove-kygeonet.opendata.arcgis.com
    • +2 more
    Updated Aug 30, 2016
    Cite
    KyGovMaps (2016). Kentucky LiDAR Point Cloud Data [Dataset]. https://kyfromabove.ky.gov/maps/b5ff91df6309491090c20333c8f58f52
    Dataset updated
    Aug 30, 2016
    Dataset authored and provided by
    KyGovMaps
    Description

    This web map allows for the download of KyFromAbove LiDAR data by 5k tile in LAZ format. This point cloud data was acquired during the typical leaf-off acquisition period (winter-spring) over a period of several years and may be provided as LAS version 1.1, 1.2, or 1.4 depending upon the acquisition period. Users will need to download laszip.exe in order to decompress each tile. LiDAR data specifications adopted by the KyFromAbove Technical Advisory Committee can be found here. This is the source data used to create the Commonwealth's 5-foot digital elevation model (DEM) and its associated derivatives. More information regarding this data resource can be found on the KyGeoPortal.

  6. Tree Point Classification

    • cacgeoportal.com
    • community-climatesolutions.hub.arcgis.com
    • +1 more
    Updated Oct 8, 2020
    Cite
    Esri (2020). Tree Point Classification [Dataset]. https://www.cacgeoportal.com/datasets/esri::tree-point-classification
    Dataset updated
    Oct 8, 2020
    Dataset authored and provided by
    Esri (http://esri.com/)
    Description

    Classifying trees from point cloud data is useful in applications such as high-quality 3D basemap creation, urban planning, and forestry workflows. Trees have a complex geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results.

    Using the model

    Follow the guide to use the model. The model can be used with the 3D Basemaps solution and ArcGIS Pro's Classify Point Cloud Using Trained Model tool (a scripted sketch follows this description). Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.

    Input

    The model accepts unclassified point clouds with the attributes: X, Y, Z, and Number of Returns.
    Note: This model is trained to work on unclassified point clouds that are in a projected coordinate system, where the units of X, Y, and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The provided deep learning model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification.
    This model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time and compute resources while improving accuracy. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block, and extra attributes should match those of the data originally used for training this model (see the Training data section below).

    Output

    The model will classify the point cloud into the following 2 classes with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

    0 Background
    5 Trees / High-vegetation

    Applicable geographies

    This model is expected to work well in all regions globally, with the exception of mountainous regions. However, results can vary for datasets that are statistically dissimilar to training data.

    Model architecture

    This model uses the PointCNN model architecture implemented in ArcGIS API for Python.

    Accuracy metrics

    The table below summarizes the accuracy of the predictions on the validation dataset.

    Class                        Precision  Recall    F1-score
    Trees / High-vegetation (5)  0.975374   0.965929  0.970628

    Training data

    This model is trained on a subset of the UK Environment Agency's open dataset. The training data used has the following characteristics:

    X, Y and Z linear unit: meter
    Z range: -19.29 m to 314.23 m
    Number of Returns: 1 to 5
    Intensity: 1 to 4092
    Point spacing: 0.6 ± 0.3
    Scan angle: -23 to +23
    Maximum points per block: 8192
    Extra attributes: Number of Returns
    Class structure: [0, 5]

    Sample results

    Here are a few results from the model.
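
    As a scripted counterpart to the tool described above, the sketch below drives it through arcpy's 3D Analyst toolbox. This is a hedged example: the paths are placeholders and the argument list is an assumption to verify against the tool's documentation.

    import arcpy

    # The tool requires the 3D Analyst extension to be checked out
    arcpy.CheckOutExtension("3D")

    # Placeholder inputs; limiting output to class 5 (trees) mirrors the
    # 'selective/target classification' advice above
    arcpy.ddd.ClassifyPointCloudUsingTrainedModel(
        in_point_cloud="C:/data/lidar_tiles.lasd",
        in_trained_model="C:/models/TreePointClassification.dlpk",
        output_classes="5")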

  7. Sila National Park - 3D Point cloud data

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 2, 2020
    Cite
    Puletti, Nicola (2020). Sila National Park - 3D Point cloud data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3633628
    Dataset updated
    Feb 2, 2020
    Dataset authored and provided by
    Puletti, Nicola
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains 3 types of data.

    GPS data (files starting with "GPS"): sampling plot centers collected with a Trimble GPS and post-processed to ensure positioning errors lower than 2 meters.

    TLS data (files starting with "ID_"): collected at the end of August 2019 with a mobile terrestrial laser scanner (mobile ZEB TLS) in a squared area of approximately 30 x 30 m. Data have been normalized using the TreeLS package in R.

    ALS data: collected at the end of July 2019. For the entire study area, we upload 2 different ALS data files: "merged.las" is the original point cloud; "myLas_norm_lt22.las" is the normalized point cloud, cut at 22 meters from the ground in order to perform specific analysis (i.e. a paper under submission).

    Data collection was funded by the AGRIDIGIT Selvicoltura project.

  8. Data from: 3DHD CityScenes: High-Definition Maps in High-Density Point...

    • zenodo.org
    • data.niaid.nih.gov
    bin, pdf
    Updated Jul 16, 2024
    Cite
    Christopher Plachetka; Benjamin Sertolli; Jenny Fricke; Marvin Klingner; Tim Fingscheidt (2024). 3DHD CityScenes: High-Definition Maps in High-Density Point Clouds [Dataset]. http://doi.org/10.5281/zenodo.7085090
    Available download formats: bin, pdf
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christopher Plachetka; Benjamin Sertolli; Jenny Fricke; Marvin Klingner; Tim Fingscheidt
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    3DHD CityScenes is the most comprehensive, large-scale high-definition (HD) map dataset to date, annotated in the three spatial dimensions of globally referenced, high-density LiDAR point clouds collected in urban domains. Our HD map covers 127 km of road sections of the inner city of Hamburg, Germany, including 467 km of individual lanes. In total, our map comprises 266,762 individual items.

    Our corresponding paper (published at ITSC 2022) is available here.
    Further, we have applied 3DHD CityScenes to map deviation detection here.

    Moreover, we release code to facilitate the application of our dataset and the reproducibility of our research. Specifically, our 3DHD_DevKit comprises:

    • Python tools to read, generate, and visualize the dataset,
    • 3DHDNet deep learning pipeline (training, inference, evaluation) for
      map deviation detection and 3D object detection.

    The DevKit is available here:

    https://github.com/volkswagen/3DHD_devkit.

    The dataset and DevKit have been created by Christopher Plachetka as project lead during his PhD period at Volkswagen Group, Germany.

    When using our dataset, you are welcome to cite:

    @INPROCEEDINGS{9921866,
      author={Plachetka, Christopher and Sertolli, Benjamin and Fricke, Jenny and Klingner, Marvin and 
      Fingscheidt, Tim},
      booktitle={2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)}, 
      title={3DHD CityScenes: High-Definition Maps in High-Density Point Clouds}, 
      year={2022},
      pages={627-634}}

    Acknowledgements

    We thank the following interns for their exceptional contributions to our work.

    • Benjamin Sertolli: Major contributions to our DevKit during his master thesis
    • Niels Maier: Measurement campaign for data collection and data preparation

    The European large-scale project Hi-Drive (www.Hi-Drive.eu) supports the publication of 3DHD CityScenes and encourages the general publication of information and databases facilitating the development of automated driving technologies.

    The Dataset

    After downloading, the 3DHD_CityScenes folder provides five subdirectories, which are explained briefly in the following.

    1. Dataset

    This directory contains the training, validation, and test set definition (train.json, val.json, test.json) used in our publications. Respective files contain samples that define a geolocation and the orientation of the ego vehicle in global coordinates on the map.

    During dataset generation (done by our DevKit), samples are used to take crops from the larger point cloud. Also, map elements in reach of a sample are collected. Both modalities can then be used, e.g., as input to a neural network such as our 3DHDNet.

    To read any JSON-encoded data provided by 3DHD CityScenes in Python, you can use the following code snippet as an example.

    import json
    
    json_path = r"E:\3DHD_CityScenes\Dataset\train.json"
    with open(json_path) as jf:
      data = json.load(jf)
    print(data)

    2. HD_Map

    Map items are stored as lists of items in JSON format. In particular, we provide:

    • traffic signs,
    • traffic lights,
    • pole-like objects,
    • construction site locations,
    • construction site obstacles (point-like such as cones, and line-like such as fences),
    • line-shaped markings (solid, dashed, etc.),
    • polygon-shaped markings (arrows, stop lines, symbols, etc.),
    • lanes (ordinary and temporary),
    • relations between elements (only for construction sites, e.g., sign to lane association).

    3. HD_Map_MetaData

    Our high-density point cloud used as the basis for annotating the HD map is split into 648 tiles. This directory contains the geolocation of each tile as a polygon on the map. You can view the respective tile definitions using QGIS. Alternatively, we also provide the respective polygons as lists of UTM coordinates in JSON.

    Files with the ending .dbf, .prj, .qpj, .shp, and .shx belong to the tile definition as “shape file” (commonly used in geodesy) that can be viewed using QGIS. The JSON file contains the same information provided in a different format used in our Python API.

    4. HD_PointCloud_Tiles

    The high-density point cloud tiles are provided in global UTM32N coordinates and are encoded in a proprietary binary format. The first 4 bytes (integer) encode the number of points contained in that file. Subsequently, all point cloud values are provided as arrays. First all x-values, then all y-values, and so on. Specifically, the arrays are encoded as follows.

    • x-coordinates: 4 byte integer
    • y-coordinates: 4 byte integer
    • z-coordinates: 4 byte integer
    • intensity of reflected beams: 2 byte unsigned integer
    • ground classification flag: 1 byte unsigned integer

    After reading, respective values have to be unnormalized. As an example, you can use the following code snippet to read the point cloud data. For visualization, you can use the pptk package, for instance.

    import numpy as np
    import pptk

    file_path = r"E:\3DHD_CityScenes\HD_PointCloud_Tiles\HH_001.bin"
    pc_dict = {}
    key_list = ['x', 'y', 'z', 'intensity', 'is_ground']
    # Completed from the layout above (little-endian assumed): 4-byte ints for
    # x, y, z; 2-byte unsigned int for intensity; 1-byte unsigned int for the flag
    type_list = ['<i4', '<i4', '<i4', '<u2', '<u1']

    with open(file_path, 'rb') as f:
        num_points = np.fromfile(f, dtype='<i4', count=1)[0]
        for key, dtype in zip(key_list, type_list):
            pc_dict[key] = np.fromfile(f, dtype=dtype, count=num_points)

    5. Trajectories

    We provide 15 real-world trajectories recorded during a measurement campaign covering the whole HD map. Trajectory samples are provided at approximately 30 Hz and are encoded in JSON.

    These trajectories were used to provide the samples in train.json, val.json, and test.json with realistic geolocations and orientations of the ego vehicle.

    • OP1 – OP5 cover the majority of the map with 5 trajectories.
    • RH1 – RH10 cover the majority of the map with 10 trajectories.

    Note that OP5 is split into three separate parts, a-c. RH9 is split into two parts, a-b. Moreover, OP4 mostly equals OP1 (thus, we speak of 14 trajectories in our paper). For completeness, however, we provide all recorded trajectories here.

  9. 2017 Countywide LiDAR Point Cloud

    • hub.arcgis.com
    • datasets.ai
    • +2 more
    Updated Jan 12, 2021
    Cite
    Lake County Illinois GIS (2021). 2017 Countywide LiDAR Point Cloud [Dataset]. https://hub.arcgis.com/documents/lakecountyil::2017-countywide-lidar-point-cloud
    Dataset updated
    Jan 12, 2021
    Dataset authored and provided by
    Lake County Illinois GIS
    License

    https://www.arcgis.com/sharing/rest/content/items/89679671cfa64832ac2399a0ef52e414/data

    Description

    Click here to access the data directly from the Illinois State Geospatial Data Clearinghouse. These lidar data are processed Classified LAS 1.4 files, formatted to 2,117 individual 2500 ft x 2500 ft tiles, used to create Reflectance Images, 3D breaklines and hydro-flattened DEMs as necessary.

    Geographic Extent: Lake County, Illinois, covering approximately 466 square miles.

    Dataset Description: The WI Kenosha-Racine Counties and IL 4 County QL1 Lidar project called for the planning, acquisition, processing, and derivative products of lidar data to be collected at a derived nominal pulse spacing (NPS) of 1 point every 0.35 meters. Project specifications are based on the U.S. Geological Survey National Geospatial Program Base Lidar Specification, Version 1.2. The data was developed based on a horizontal projection/datum of NAD83 (2011), State Plane, U.S. Survey Feet and a vertical datum of NAVD88 (GEOID12B), U.S. Survey Feet. Lidar data was delivered as processed Classified LAS 1.4 files, formatted to 2,117 individual 2500 ft x 2500 ft tiles, as tiled Reflectance Imagery, and as tiled bare-earth DEMs; all tiled to the same 2500 ft x 2500 ft schema.

    Ground Conditions: Lidar was collected April-May 2017, while no snow was on the ground and rivers were at or below normal levels. In order to post-process the lidar data to meet task order specifications and meet ASPRS vertical accuracy guidelines, Ayers established a total of 66 ground control points that were used to calibrate the lidar to known ground locations established throughout the WI Kenosha-Racine Counties and IL 4 County QL1 project area. An additional 195 independent accuracy checkpoints, 116 in Bare Earth and Urban landcovers (116 NVA points) and 79 in Tall Grass and Brushland/Low Trees categories (79 VVA points), were used to assess the vertical accuracy of the data. These checkpoints were not used to calibrate or post-process the data.

    Users should be aware that temporal changes may have occurred since this dataset was collected and that some parts of these data may no longer represent actual surface conditions. Users should not use these data for critical applications without a full awareness of its limitations. Acknowledgement of the U.S. Geological Survey would be appreciated for products derived from these data.

    These LAS data files include all data points collected. No points have been removed or excluded. A visual qualitative assessment was performed to ensure data completeness. No void areas or missing data exist. The raw point cloud is of good quality and the data passes Non-Vegetated Vertical Accuracy specifications. Link Source: Illinois Geospatial Data Clearinghouse

  10. Tree Point Classification - New Zealand

    • pacificgeoportal.com
    • geoportal-pacificcore.hub.arcgis.com
    • +1 more
    Updated Jul 25, 2022
    Cite
    Eagle Technology Group Ltd (2022). Tree Point Classification - New Zealand [Dataset]. https://www.pacificgeoportal.com/content/0e2e3d0d0ef843e690169cac2f5620f9
    Dataset updated
    Jul 25, 2022
    Dataset authored and provided by
    Eagle Technology Group Ltd
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This New Zealand Point Cloud Classification Deep Learning Package will classify point clouds into tree and background classes. This model is optimized to work with New Zealand aerial LiDAR data. The classification of point cloud datasets to identify trees is useful in applications such as high-quality 3D basemap creation, urban planning, forestry workflows, and planning climate change response. Trees can have a complex, irregular geometrical structure that is hard to capture using traditional means. Deep learning models are highly capable of learning these complex structures and giving superior results. This model is designed to extract trees in both urban and rural areas in New Zealand. The training/testing/validation datasets were taken within New Zealand, resulting in high reliability in recognizing the patterns of common New Zealand building architecture.

    Licensing requirements

    ArcGIS Desktop - ArcGIS 3D Analyst extension for ArcGIS Pro

    Using the model

    The model can be used in ArcGIS Pro's Classify Point Cloud Using Trained Model tool. Before using this model, ensure that the supported deep learning frameworks libraries are installed. For more details, check Deep Learning Libraries Installer for ArcGIS.
    Note: Deep learning is computationally intensive, and a powerful GPU is recommended to process large datasets.

    Input

    The model is trained with classified LiDAR that follows the LINZ base specification. The input data should be similar to this specification.
    Note: The model is dependent on additional attributes such as Intensity, Number of Returns, etc., similar to the LINZ base specification. This model is trained to work on classified and unclassified point clouds that are in a projected coordinate system, in which the units of X, Y and Z are based on the metric system of measurement. If the dataset is in degrees or feet, it needs to be re-projected accordingly. The model was trained using a training dataset with the full set of points. Therefore, it is important to make the full set of points available to the neural network while predicting, allowing it to better discriminate points of the 'class of interest' versus background points. It is recommended to use 'selective/target classification' and 'class preservation' functionalities during prediction to have better control over the classification and over scenarios with false positives.
    The model was trained on airborne lidar datasets and is expected to perform best with similar datasets. Classification of terrestrial point cloud datasets may work but has not been validated. For such cases, this pre-trained model may be fine-tuned to save on cost, time, and compute resources while improving accuracy. Another example where fine-tuning this model can be useful is when the object of interest is tram wires, railway wires, etc., which are geometrically similar to electricity wires. When fine-tuning this model, the target training data characteristics such as class structure, maximum number of points per block and extra attributes should match those of the data originally used for training this model (see the Training data section below).

    Output

    The model will classify the point cloud into the following classes with their meaning as defined by the American Society for Photogrammetry and Remote Sensing (ASPRS):

    0 Background
    5 Trees / High-vegetation

    Applicable geographies

    The model is expected to work well in New Zealand and has produced favorable results in many regions. However, results can vary for datasets that are statistically dissimilar to training data.

    Dataset     City
    Training    Wellington
    Testing     Tawa
    Validating  Christchurch

    Model architecture

    This model uses the PointCNN model architecture implemented in ArcGIS API for Python.

    Accuracy metrics

    The table below summarizes the accuracy of the predictions on the validation dataset.

    Class             Precision  Recall    F1-score
    Never Classified  0.991200   0.975404  0.983239
    High Vegetation   0.933569   0.975559  0.954102

    Training data

    This model is trained on a classified dataset originally provided by OpenTopography, with less than 1% manual labelling and correction. The train-test split is {Train: 80%, Test: 20%}; this ratio was chosen based on analysis of previous epoch statistics, which showed a decent improvement. The training data used has the following characteristics:

    X, Y and Z linear unit: meter
    Z range: -121.69 m to 26.84 m
    Number of Returns: 1 to 5
    Intensity: 16 to 65520
    Point spacing: 0.2 ± 0.1
    Scan angle: -15 to +15
    Maximum points per block: 8192
    Block size: 20 meters
    Class structure: [0, 5]

    Sample results

    A sample run classified a 5 pts/m density dataset from Christchurch city. The model's performance is directly proportional to the dataset's point density and to how well noise has been excluded from the point clouds. To learn how to use this model, see this story.

  11. Data from: 3D point cloud data from laser scanning along the 2014 South Napa...

    • catalog.data.gov
    • data.usgs.gov
    • +3 more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). 3D point cloud data from laser scanning along the 2014 South Napa Earthquake surface rupture, California, USA [Dataset]. https://catalog.data.gov/dataset/3d-point-cloud-data-from-laser-scanning-along-the-2014-south-napa-earthquake-surface-ruptu
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Napa, United States, California
    Description

    Point cloud data collected along a 500 meter portion of the 2014 South Napa Earthquake surface rupture near Cuttings Wharf Road, Napa, CA, USA. The data include 7 point cloud files (.laz). The files are named with the location and date of collection and either ALSM for airborne laser scanner data or TLS for terrestrial laser scanner data. The ALSM data were previously released but are included here because they have been precisely aligned with the TLS data as described in the processing section of this metadata. The included files are:

    Napa_CuttingsWharf_TLS_31082015_utm.laz
    Napa_CuttingsWharf_TLS_27022015_utm.laz
    Napa_CuttingsWharf_TLS_26082014_utm.laz
    Napa_CuttingsWharf_TLS_22102014_utm.laz
    Napa_CuttingsWharf_TLS_15092014_utm.laz
    Napa_CuttingsWharf_ALSM_09092014_utm.laz
    Napa_CuttingsWharf_ALSM_xx052003_utm.laz

  12. Lidar point cloud data for Cabeza Prieta National Wildlife Refuge (CPNWR),...

    • catalog.data.gov
    Updated Oct 26, 2024
    Cite
    U.S. Geological Survey (2024). Lidar point cloud data for Cabeza Prieta National Wildlife Refuge (CPNWR), Arizona, February 2022 [Dataset]. https://catalog.data.gov/dataset/lidar-point-cloud-data-for-cabeza-prieta-national-wildlife-refuge-cpnwr-arizona-february-2
    Dataset updated
    Oct 26, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Arizona
    Description

    These data were compiled for Cabeza Prieta National Wildlife Refuge (CPNWR) in southern Arizona to support management efforts for water resources and wildlife conservation. The objectives of our study were to 1) measure water storage capacity at select stage heights in three tanks (also termed tinajas), 2) build a stage-storage model to help CPNWR staff accurately estimate water volumes throughout the year, and 3) collect topographic data adjacent to the tanks as a means to help connect these survey data to past or future work. These data represent high-resolution (sub-meter) ground-based lidar measurements used to meet these objectives and are provided as processed lidar files (point clouds), rasters (digital elevation models), and vectors (shapefiles). These data were collected at Buckhorn, Eagle, and Senita tanks in the CPNWR from February 13-18, 2022, by U.S. Geological Survey - Southwest Biological Science Center - Grand Canyon Monitoring and Research Center (GCMRC) staff using a Riegl VZ-1000 ground-based lidar, which produces ground elevation models georeferenced using control target coordinates collected by a Trimble real-time kinematic (RTK) rover and base station. These data, which represent maximum water storage capacity at Buckhorn, Eagle, and Senita tanks following sediment removal by CPNWR staff less than one month prior, can be used to support management efforts for water resources at these tanks and wildlife conservation in the CPNWR. Additionally, these data can be used as baseline conditions for evaluating changes in water storage and water storage capacity.

  13. InLUT3D Dataset

    • paperswithcode.com
    Updated Jul 12, 2024
    Cite
    (2024). InLUT3D Dataset [Dataset]. https://paperswithcode.com/dataset/inlut3d
    Dataset updated
    Jul 12, 2024
    Description

    This dataset, the Indoor Lodz University of Technology Point Cloud Dataset (InLUT3D), is a point cloud set tailored for real-object classification and for both semantic and instance segmentation tasks. It comprises 321 scans, with some areas covered by multiple scans, all captured using the Leica BLK360 scanner.

    The points are divided into 18 distinct categories outlined in the label.yaml file along with their respective codes and colors. Among categories you will find:

    ceiling, floor, wall, stairs, column, chair, sofa, table, storage, door, window, plant, dish, wallmounted, device, radiator, lighting, other.

    Several challenges are intrinsic to the presented dataset:

    • Extremely non-uniform category distribution across the dataset.
    • Presence of virtual images, particularly in reflective surfaces, and of data exterior to windows and doors.
    • Missing data due to scanning shadows (certain areas were inaccessible to the scanner's laser beam).
    • High point density throughout the dataset.

    Each PTS file contains 8 columns:

    Column ID  Description
    1          X Cartesian coordinate
    2          Y Cartesian coordinate
    3          Z Cartesian coordinate
    4          Red colour in RGB space in the range [0, 255]
    5          Green colour in RGB space in the range [0, 255]
    6          Blue colour in RGB space in the range [0, 255]
    7          Category code
    8          Instance ID
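
    A minimal loading sketch for one of these PTS files, assuming a plain whitespace-separated table with the eight columns above (the file name is a placeholder; if a PTS export carries a point-count header line, add skiprows=1):

    import numpy as np

    pts = np.loadtxt("scan_001.pts")      # placeholder file name

    xyz = pts[:, 0:3]                     # X, Y, Z Cartesian coordinates
    rgb = pts[:, 3:6].astype(np.uint8)    # red, green, blue in [0, 255]
    category = pts[:, 6].astype(np.int64) # one of the 18 category codes
    instance = pts[:, 7].astype(np.int64) # instance ID

    print(xyz.shape, np.unique(category))
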
  14. Point cloud data sets of real and virtual Chenopodium alba

    • explore.openaire.eu
    • data.niaid.nih.gov
    Updated Aug 4, 2022
    Cite
    Katia Mirande; Christophe Godin; Marie Tisserand; Julie Charlaix; Fabrice Besnard; Franck Hetroy-Wheeler (2022). Point cloud data sets of real and virtual Chenopodium alba [Dataset]. http://doi.org/10.5281/zenodo.6962993
    Dataset updated
    Aug 4, 2022
    Authors
    Katia Mirande; Christophe Godin; Marie Tisserand; Julie Charlaix; Fabrice Besnard; Franck Hetroy-Wheeler
    Description

    This data set contains:

    • 5 annotated point clouds of real Chenopodium alba plants obtained from multi-view 2D camera imaging. Annotations consist of 5 classes: leaf blade, petiole, apex, main stem, branch. The .txt files contain both 3D coordinates and annotations; .ply files are also provided with raw 3D point data without annotations.

    • 24 annotated point clouds of virtual Chenopodium alba generated by an L-system simulation program. Annotations consist of 3 classes: leaf blade, petiole, stem. 3D coordinates and annotations are in separate .txt files.

    These files have been used in a companion paper.

  15. Making Space for Water Point Cloud - Mellons Bay

    • data-aucklandcouncil.opendata.arcgis.com
    • hub.arcgis.com
    • +1 more
    Updated Dec 3, 2024
    Cite
    Auckland Council (2024). Making Space for Water Point Cloud - Mellons Bay [Dataset]. https://data-aucklandcouncil.opendata.arcgis.com/maps/aucklandcouncil::making-space-for-water-point-cloud-mellons-bay
    Dataset updated
    Dec 3, 2024
    Dataset authored and provided by
    Auckland Council
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Auckland Urban Stream Survey – conducted by the Waterways Centre between April 8 and 9, 2024 – delivered high-resolution, high-fidelity terrain models to support Auckland Council's Making Space for Water initiative. The Waterways Centre flew a bidirectional gridded acquisition flight plan with the VUX-240 sensor to capture LiDAR data over the Pakuranga Creek and Cockle Bay catchments, as well as the eastern portion of the Whau catchment (Blockhouse Bay). A dense colourised 3D point cloud was generated, along with push-button orthophotography at 0.1 m resolution.

  16. Three-dimensional point cloud data collected with a scanning total station...

    • data.usgs.gov
    • datasets.ai
    • +2 more
    Updated Jul 21, 2024
    Cite
    William Capurso; Michael Noll; Anthony Chu (2024). Three-dimensional point cloud data collected with a scanning total station on the western shoreline of the Shinnecock Nation Tribal Lands, Suffolk County, New York, 2022 [Dataset]. http://doi.org/10.5066/P9OG0AAO
    Dataset updated
    Jul 21, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Authors
    William Capurso; Michael Noll; Anthony Chu
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Time period covered
    Jul 26, 2022 - Oct 17, 2022
    Area covered
    Suffolk County, New York
    Description

    This data release contains about 60 million point cloud data points collected during 27 scans of a section of the western shoreline of the Shinnecock Peninsula in Suffolk County, New York. Data were collected during July and October of 2022. Data are provided as .las files with points classified as either bare earth (GROUND_SN_BRIC_BL_2022_v03.0 (2).las), vegetation (VEGETATION_SN_BRIC_BL_2022_v03.0.las) or unclassified (DEFAULT_SN_BRIC_BL_2022_v03.0.las). Users are encouraged to read the metadata and Noll and others (2024) to understand how the data were collected, registered, and classified.

  17. Data from: Detection of Structural Components in Point Clouds of Existing RC...

    • zenodo.org
    bin
    Updated Jan 24, 2020
    Cite
    Ruodan LU; Ioannis Brilakis; Campbell R. Middleton (2020). Detection of Structural Components in Point Clouds of Existing RC Bridges [Dataset]. http://doi.org/10.5281/zenodo.1240534
    Available download formats: bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ruodan LU; Ioannis Brilakis; Campbell R. Middleton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The cost and effort of modelling existing bridges from point clouds currently outweighs the perceived benefits of the resulting model. There is a pressing need to automate this process. Previous research has achieved the automatic generation of surface primitives combined with rule-based classification to create labelled cuboids and cylinders from point clouds. While these methods work well in synthetic datasets or idealized cases, they encounter huge challenges when dealing with real-world bridge point clouds, which are often unevenly distributed and suffer from occlusions. In addition, real bridge geometries are complicated. In this paper, we propose a novel top-down method to tackle these challenges for detecting slab, pier, pier cap, and girder components in reinforced concrete bridges. This method uses a slicing algorithm to separate the deck assembly from pier assemblies. It then detects and segments pier caps using their surface normal, and girders using oriented bounding boxes and density histograms. Finally, our method merges over-segments into individually labelled point clusters. The results of 10 real-world bridge point cloud experiments indicate that our method achieves an average detection precision of 98.8%. This is the first method of its kind to achieve robust detection performance for the four component types in reinforced concrete bridges and to directly produce labelled point clusters. Our work provides a solid foundation for future work in generating rich Industry Foundation Classes models from the labelled point clusters.

  18. Data from: Lidar Point Cloud Data of Nogahabara Sand Dunes, Alaska;...

    • catalog.data.gov
    • search.dataone.org
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Lidar Point Cloud Data of Nogahabara Sand Dunes, Alaska; September 2015 [Dataset]. https://catalog.data.gov/dataset/lidar-point-cloud-data-of-nogahabara-sand-dunes-alaska-september-2015
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Alaska
    Description

    This dataset provides information needed to reproduce a digital model of the Nogahabara Dune Field located in interior Alaska. The Nogahabara Dunes represent one of three active inland dune fields found in Alaska today. In an effort to update geospatial coverage of the dunes, lidar data were collected over the Nogahabara Sand Dunes in September 2015 using a 1955 Cessna 180 aircraft equipped with a Riegl LMS-Q240i laser scanner. The scanner was set to 10,000 laser shots per second and a +/- 30 degree beam sweep and flown over the Koyukuk National Wildlife Refuge's Nogahabara sand dunes. The flight pattern was designed for 50 percent overlap between each adjacent swath to achieve 2 points per square meter over the entire coverage. The lidar scanner was rigidly attached to an OXTS Inertial+2 GPS/IMU unit, which was fed by a Trimble R7 GPS receiver. The resultant survey achieved 1.4 points per square meter. Discrete-return point cloud data are available in the LAS format.

  19. Data from: Topographic point cloud from UAS survey of the debris flow at...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Dec 25, 2024
    Cite
    U.S. Geological Survey (2024). Topographic point cloud from UAS survey of the debris flow at South Fork Campground, Sequoia National Park, CA [Dataset]. https://catalog.data.gov/dataset/topographic-point-cloud-from-uas-survey-of-the-debris-flow-at-south-fork-campground-sequoi
    Dataset updated
    Dec 25, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    California
    Description

    This portion of the data release presents a topographic point cloud of the debris flow at South Fork Campground in Sequoia National Park. The point cloud was derived from structure-from-motion (SfM) photogrammetry using aerial imagery acquired during an uncrewed aerial systems (UAS) survey on 30 April 2024, conducted under authorization from the National Park Service. The raw imagery was acquired with a Ricoh GR II digital camera featuring a global shutter. The UAS was flown on pre-programmed autonomous flight lines spaced to provide approximately 70 percent overlap between images from adjacent lines, from an approximate altitude of 110 meters above ground level (AGL), resulting in a nominal ground-sample distance (GSD) of 2.9 centimeters per pixel. The imagery was geotagged using positions from the UAS onboard single-frequency autonomous GPS. Survey control was established using temporary ground control points (GCPs) consisting of a combination of small square tarps with black-and-white cross patterns and temporary chalk marks placed on the ground. The GCP positions were measured using dual-frequency real-time kinematic (RTK) GPS with corrections referenced to a static base station operating nearby. The images and GCP positions were used for SfM photogrammetric processing to create a topographic point cloud, a high-resolution orthomosaic image, and a DSM. The point cloud contains 284,906,970 points with an average point spacing of one point every three centimeters. The point cloud has not been classified; however, points with confidence less than three (a measure of the number of depth maps used to generate a point) have been assigned a classification value of 7, which represents low noise. The point cloud is provided in a cloud-optimized LAZ format to facilitate cloud-based queries and display.
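
    Because the file is cloud-optimized, a spatial window can be read remotely instead of downloading the whole tile. Below is a minimal PDAL sketch (readers.copc requires PDAL 2.4 or later; the file name and UTM bounds are placeholders):

    import json
    import pdal

    pipeline_def = [
        {
            "type": "readers.copc",
            "filename": "south_fork_point_cloud.laz",  # placeholder name
            # Placeholder crop window; the COPC octree makes such queries cheap
            "bounds": "([346000, 346100], [4037000, 4037100])"
        },
        {"type": "writers.las", "filename": "crop.las"}
    ]

    count = pdal.Pipeline(json.dumps(pipeline_def)).execute()
    print(f"{count} points in crop")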
