100+ datasets found
  1. NEON Teaching Data: LiDAR Point Cloud (.las) Data

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    Updated May 30, 2023
    Cite
    NEON Data Skills Teaching Data Subsets (2023). NEON Teaching Data: LiDAR Point Cloud (.las) Data [Dataset]. http://doi.org/10.6084/m9.figshare.4307750.v1
    Explore at:
bin (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    NEON Data Skills Teaching Data Subsets
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This .las file contains sample LiDAR point cloud data collected by the National Ecological Observatory Network's Airborne Observation Platform. The .las format is a commonly used format for storing LiDAR point cloud data. This teaching data set is used for several tutorials on the NEON website (neonscience.org). The dataset is for educational purposes only; data for research purposes can be obtained from the NEON Data Portal (data.neonscience.org).
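
    The .las format is binary, so a point-reading library is the easiest way to inspect it. A minimal sketch with the laspy Python library (the file name is a placeholder for the downloaded teaching file):

        # Inspect a .las point cloud with laspy (pip install laspy).
        import laspy

        las = laspy.read("NEON_sample.las")        # hypothetical file name
        print(las.header.point_count)              # number of points in the file
        print(las.header.mins, las.header.maxs)    # bounding box in the file's CRS
        print(las.x[:5], las.y[:5], las.z[:5])     # scaled coordinates of the first points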

  2. 2020 LiDAR - Classified LAS

    • opendata.dc.gov
    • trees.dc.gov
    • +3more
    Updated Dec 21, 2020
    Cite
    City of Washington, DC (2020). 2020 LiDAR - Classified LAS [Dataset]. https://opendata.dc.gov/datasets/2020-lidar-classified-las/about
    Explore at:
    Dataset updated
    Dec 21, 2020
    Dataset authored and provided by
    City of Washington, DC
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Description

    These lidar data are processed, classified LAS 1.4 files at USGS QL2 covering the District of Columbia. Voids exist in the data due to redaction conducted under the guidance of the United States Secret Service. This dataset is provided as an ArcGIS Image Service. Please note that the download feature for this image service in Open Data DC provides a compressed PNG, JPEG or TIFF. The individual LAS point cloud datasets are available under additional options when viewing downloads.
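
    Working with the classified tiles is straightforward once downloaded; a minimal sketch of pulling out ground returns with the laspy library (file name hypothetical; class 2 is the ASPRS standard code for ground):

        # Extract ground returns (ASPRS class 2) from a classified LAS tile.
        import laspy

        las = laspy.read("dc_2020_tile.las")                       # hypothetical tile
        ground = laspy.LasData(las.header)
        ground.points = las.points[las.classification == 2].copy()
        ground.write("dc_2020_tile_ground.las")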

  3. Lidar Dataset

    • kaggle.com
    Updated Oct 22, 2020
    + more versions
    Cite
    Karim Cossentini (2020). Lidar Dataset [Dataset]. https://www.kaggle.com/datasets/karimcossentini/velodyne-point-cloud-dataset
    Explore at:
zip (0 bytes; available download formats)
    Dataset updated
    Oct 22, 2020
    Authors
    Karim Cossentini
    Description

    This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philip Lenz and Raquel Urtasun and presented in the CVPR 2012 paper "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". It contains the object detection part of the datasets they published for autonomous driving: a set of images with bounding-box labels and Velodyne point clouds. For more information, visit the website where the data was published (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).
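
    Each Velodyne scan in KITTI is a flat binary file of float32 records (x, y, z, reflectance), so it can be loaded with plain NumPy; a minimal sketch (the scan name is illustrative):

        # Load one KITTI Velodyne scan from its .bin file.
        import numpy as np

        scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
        xyz, reflectance = scan[:, :3], scan[:, 3]
        print(scan.shape)   # (N, 4): N points, each with x, y, z, reflectance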

  4. Detroit Street View Terrestrial LiDAR (2020-2022)

    • detroitdata.org
    • data.detroitmi.gov
    • +1more
    Updated Apr 18, 2023
    Cite
    City of Detroit (2023). Detroit Street View Terrestrial LiDAR (2020-2022) [Dataset]. https://detroitdata.org/dataset/detroit-street-view-terrestrial-lidar-2020-2022
    Explore at:
    csv, geojson, zip, gpkg, gdb, arcgis geoservices rest api, kml, xlsx, html, txtAvailable download formats
    Dataset updated
    Apr 18, 2023
    Dataset provided by
    City of Detroit
    Area covered
    Detroit
    Description

    Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.

    Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.

    Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.
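
    Once a requested LAZ extract arrives, it can be read like any LAS file; a minimal sketch with laspy, assuming a LAZ backend is installed (pip install "laspy[lazrs]") and a hypothetical file name:

        # Stream a compressed LAZ point cloud in chunks to limit memory use.
        import laspy

        with laspy.open("dsv_extract.laz") as reader:
            print(reader.header.point_count)
            for points in reader.chunk_iterator(1_000_000):   # 1M points at a time
                print(points.x.min(), points.x.max())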

    LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.

    Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3


  5. Fusion of LiDAR and Hyperspectral Data

    • figshare.com
    Updated Jan 20, 2016
    Cite
    Pedram Ghamisi; Stuart Phinn (2016). Fusion of LiDAR and Hyperspectral Data [Dataset]. http://doi.org/10.6084/m9.figshare.2007723.v4
    Explore at:
zip (available download formats)
    Dataset updated
    Jan 20, 2016
    Dataset provided by
Figshare (http://figshare.com/)
    Authors
    Pedram Ghamisi; Stuart Phinn
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset was captured over the Samford Ecological Research Facility (SERF), located within the Samford valley in southeast Queensland, Australia. The central point of the dataset is at coordinates 27.38572°S, 152.877098°E. The Vegetation Management Act 1999 protects the vegetation on this property, as it provides a refuge to native flora and fauna that are under increasing pressure caused by urbanization.

    The hyperspectral image was acquired by the SPECIM AsiaEAGLE II sensor on the second of February, 2013. This sensor captures 252 spectral channels ranging from 400.7 nm to 999.2 nm. The last five channels, i.e., channels 248 to 252, are corrupted and can be excluded. The spatial resolution of the hyperspectral data was set to 1 m.

    The airborne light detection and ranging (LiDAR) data were captured by the ALTM Leica ALS50-II sensor in 2009, comprising a total of 3,716,157 points in the study area: 2,133,050 first return points, 1,213,712 second return points, 345,736 third return points, and 23,659 fourth return points. The average flight height was 1700 meters and the average point density is two points per square meter. The laser pulse wavelength is 1064 nm, with a repetition rate of 126 kHz, an average sample spacing of 0.8 m and a footprint of 0.34 m. The data were collected with up to four returns per pulse, and intensity records were supplied on all pulse returns. The nominal vertical accuracy was ±0.15 m at 1 sigma and the measured vertical accuracy was ±0.05 m at 1 sigma; these values were determined from check points on open, clear ground. The measured horizontal accuracy was ±0.31 m at 1 sigma.

    The ground LiDAR returns were interpolated and rasterized into a 1 m × 1 m digital elevation model (DEM) provided by the LiDAR contractor, produced from the LiDAR ground points and interpolated coastal boundaries. The first returns of the airborne LiDAR sensor were used to produce the normalized digital surface model (nDSM) at 1 m spatial resolution using Las2dem, which interpolates the points using triangulated irregular networks (TIN); the TINs were then rasterized into the nDSM and the intensity image with a pixel size of 1 m. The 1 m spatial resolution intensity image was also produced using Las2dem.

    The LiDAR data were classified into "ground" and "non-ground" by the data contractor using algorithms tailored especially for the project area. In areas covered by dense vegetation, less laser pulse reaches the ground, so fewer ground points were available for interpolating the DEM and nDSM surfaces; the DEM and the nDSM therefore tend to be less accurate in these areas.

    In order to use the datasets, please fulfill the following three requirements:

    1) Giving an acknowledgement as follows:

    The authors gratefully acknowledge TERN AusCover and the Remote Sensing Centre, Department of Science, Information Technology, Innovation and the Arts, QLD for providing the hyperspectral and LiDAR data, respectively. Airborne lidar data are from http://www.auscover.org.au/xwiki/bin/view/Product+pages/Airborne+Lidar and airborne hyperspectral data are from http://www.auscover.org.au/xwiki/bin/view/Product+pages/Airborne+Hyperspectral

    2) Using the following license for LiDAR and hyperspectral data:

    http://creativecommons.org/licenses/by/3.0/

    3) This dataset was made public by Dr. Pedram Ghamisi from the German Aerospace Center (DLR) and Prof. Stuart Phinn from the University of Queensland. Please cite:

    In Word: Pedram Ghamisi and Stuart Phinn, Fusion of LiDAR and Hyperspectral Data, Figshare, December 2015, https://dx.doi.org/10.6084/m9.figshare.2007723.v3

    In LaTeX:

        @article{Ghamisi2015,
          author  = "Pedram Ghamisi and Stuart Phinn",
          title   = "{Fusion of LiDAR and Hyperspectral Data}",
          journal = {Figshare},
          year    = {2015},
          month   = {12},
          url     = "10.6084/m9.figshare.2007723.v3",
        }
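
    As noted in the description above, channels 248 to 252 of the hyperspectral cube are corrupted; a minimal sketch of excluding them, assuming the cube loads as a (rows, cols, 252) NumPy array (file name and layout are assumptions):

        # Drop the five corrupted hyperspectral channels (248-252, 1-based).
        import numpy as np

        cube = np.load("serf_hyperspectral.npy")   # hypothetical export of the cube
        clean = cube[:, :, :247]                   # keep channels 1..247
        print(cube.shape, "->", clean.shape)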

  6. NOAA Coastal Lidar Data

    • registry.opendata.aws
    Updated Feb 24, 2021
    Cite
    NOAA (2021). NOAA Coastal Lidar Data [Dataset]. https://registry.opendata.aws/noaa-coastal-lidar/
    Explore at:
    Dataset updated
    Feb 24, 2021
    Dataset provided by
National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Description

    Lidar (light detection and ranging) is a technology that can measure the 3-dimensional location of objects, including the solid earth surface. The data consist of a point cloud of the positions of solid objects that reflected a laser pulse, typically from an airborne platform. In addition to the position, each point may also be attributed with the type of object it reflected from, the intensity of the reflection, and other system-dependent metadata. The NOAA Coastal Lidar Data is a collection of lidar projects from many different sources and agencies, geographically focused on the coastal areas of the United States of America. The data are provided in Entwine Point Tiles (EPT; https://entwine.io) format, which is a lossless streamable octree of the point cloud, and in LAZ format. Datasets are maintained as their original projects, and care should be taken when merging projects. The coordinate reference system for the data is the NAD83(2011) UTM zone appropriate for the center of each data set for EPT, and geographic coordinates for LAZ. Vertically the data are in the orthometric datum appropriate for the area (for example, NAVD88 in the mainland United States, PRVD02 in Puerto Rico, or GUVD03 in Guam). The geoid model used is reflected in the data set resource name.
    The data are organized under directories entwine and laz for the EPT and LAZ versions, respectively. Some datasets are not in EPT format, either because the dataset is already available in EPT on the USGS public lidar site, because it failed to build, or because its content does not work well in EPT format. Topobathy lidar datasets using the topobathy domain profile do not translate well to EPT format.
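
    EPT resources can be streamed with PDAL's readers.ept, which fetches only the octree nodes intersecting a query window; a minimal sketch with the PDAL Python bindings (the ept.json URL and bounds are placeholders for a real project listed in the registry):

        # Read a spatial subset of one Entwine Point Tiles project via PDAL.
        import json
        import pdal

        pipeline = pdal.Pipeline(json.dumps([{
            "type": "readers.ept",
            "filename": "https://example-bucket.s3.amazonaws.com/entwine/project/ept.json",
            "bounds": "([500000, 501000], [4200000, 4201000])"   # crop window, data CRS
        }]))
        pipeline.execute()
        points = pipeline.arrays[0]   # structured NumPy array (X, Y, Z, ...)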

  7. LIDAR Composite Digital Terrain Model (DTM) 1m - WMS

    • data.catchmentbasedapproach.org
    Updated Dec 22, 2023
    + more versions
    Cite
    Environment Agency (2023). LIDAR Composite Digital Terrain Model (DTM) 1m - WMS [Dataset]. https://data.catchmentbasedapproach.org/maps/a0eb5fe3e2d142f2a3c30626a3db4d7f
    Explore at:
    Dataset updated
    Dec 22, 2023
    Dataset authored and provided by
    Environment Agency
    Area covered
    Description

    The LIDAR Composite DTM (Digital Terrain Model) is a raster elevation model covering ~99% of England at 1m spatial resolution. The DTM is produced from the last or only laser pulse returned to the sensor. Surface objects are removed from the Digital Surface Model (DSM), using bespoke algorithms and manual editing of the data, to produce a model of just the terrain surface. Produced by the Environment Agency in 2022, the DTM is derived from a combination of our Time Stamped archive and National LIDAR Programme surveys, which have been merged and re-sampled to give the best possible coverage. Where repeat surveys have been undertaken, the newest, best-resolution data is used. Where data was resampled, a bilinear interpolation was used before merging. The 2022 LIDAR Composite contains surveys undertaken between 6th June 2000 and 2nd April 2022. Please refer to the metadata index catalogues, which show for any location which survey was used in the production of the LIDAR composite.

    DEFRA Data Services Platform Metadata URL
    Defra Network WMS server provided by the Environment Agency
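
    As a WMS layer, the DTM is fetched as rendered map tiles rather than raw rasters; a minimal sketch with OWSLib (service URL, layer name, and bounding box are placeholders to be taken from the DEFRA metadata page):

        # Request a 1 km x 1 km DTM rendering from the WMS endpoint.
        from owslib.wms import WebMapService

        wms = WebMapService("https://example.environment.data.gov.uk/wms", version="1.3.0")
        layer = list(wms.contents)[0]        # inspect wms.contents for the DTM layer
        img = wms.getmap(layers=[layer],
                         srs="EPSG:27700",   # British National Grid
                         bbox=(530000, 180000, 531000, 181000),
                         size=(1000, 1000),
                         format="image/png")
        open("dtm_tile.png", "wb").write(img.read())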

  8. Data from: Airborne Lidar Sampling Strategies to Enhance Forest Aboveground Biomass Estimation from Landsat Imagery

    • ckan.americaview.org
    Updated Sep 16, 2021
    Cite
    ckan.americaview.org (2021). Airborne Lidar Sampling Strategies to Enhance Forest Aboveground Biomass Estimation from Landsat Imagery [Dataset]. https://ckan.americaview.org/dataset/airborne-lidar-sampling-strategies-to-enhance-forest-aboveground-biomass-estimation
    Explore at:
    Dataset updated
    Sep 16, 2021
    Dataset provided by
CKAN (https://ckan.org/)
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accurately estimating aboveground biomass (AGB) is important in many applications, including monitoring carbon stocks, investigating deforestation and forest degradation, and designing sustainable forest management strategies. Although lidar provides critical three-dimensional forest structure information for estimating AGB, acquiring comprehensive lidar coverage is often cost prohibitive. This research focused on developing a lidar sampling framework to support AGB estimation from Landsat images. Two sampling strategies, systematic and classification-based, were tested and compared. The proposed strategies were implemented over a temperate forest study site in northern New York State and the processes were then validated at a similar site located in central New York State. Our results demonstrated that while the inclusion of lidar data using systematic or classification-based sampling supports AGB estimation, the systematic sampling selection method was highly dependent on site conditions and had higher accuracy variability. Of the 12 systematic sampling plans, R2 values ranged from 0.14 to 0.41 and plot root mean square error (RMSE) ranged from 84.2 to 93.9 Mg ha−1. The classification-based sampling outperformed 75% of the systematic sampling strategies at the primary site with R2 of 0.26 and RMSE of 70.1 Mg ha−1. The classification-based lidar sampling strategy was relatively easy to apply and was readily transferable to a new study site. Adopting this method at the validation site, the classification-based sampling also worked effectively, with an R2 of 0.40 and an RMSE of 108.2 Mg ha−1 compared to the full lidar coverage model with an R2 of 0.58 and an RMSE of 96.0 Mg ha−1. This study evaluated different lidar sample selection methods to identify an efficient and effective approach to reduce the volume and cost of lidar acquisitions. The forest type classification-based sampling method described in this study could facilitate cost-effective lidar data collection in future studies.
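
    The accuracy figures quoted above are standard regression diagnostics; a minimal sketch of computing them for a set of AGB predictions (the values are illustrative, not from the study):

        # R² and RMSE (Mg/ha) for hypothetical AGB estimates.
        import numpy as np
        from sklearn.metrics import mean_squared_error, r2_score

        agb_observed = np.array([120.0, 85.5, 240.1, 160.7])    # field-measured, Mg/ha
        agb_predicted = np.array([110.2, 95.0, 210.8, 170.3])   # model output, Mg/ha

        r2 = r2_score(agb_observed, agb_predicted)
        rmse = np.sqrt(mean_squared_error(agb_observed, agb_predicted))
        print(f"R2 = {r2:.2f}, RMSE = {rmse:.1f} Mg/ha")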

  9. Change Detection of Urban Vegetation Based on Airborne LiDAR: Datasets and Supplementary Materials

    • data.mendeley.com
    Updated Jan 4, 2021
    Cite
    Anett Fekete (2021). Change Detection of Urban Vegetation Based on Airborne LiDAR: Datasets and Supplementary Materials [Dataset]. http://doi.org/10.17632/9thyzzwd5d.1
    Explore at:
    Dataset updated
    Jan 4, 2021
    Authors
    Anett Fekete
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Dataset of sample territories for the article entitled "Change Detection of Urban Vegetation Based on Airborne LiDAR". These sample datasets are extracts of the original AHN datasets available at https://www.pdok.nl/datasets.

  10. Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys

    • data.uni-hannover.de
    Updated Apr 17, 2024
    + more versions
    Cite
    Institut für Kartographie und Geoinformatik (2024). Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys [Dataset]. https://data.uni-hannover.de/dataset/parking-locations-and-utilization-from-lidar-mobile-mapping-surveys
    Explore at:
png, geojson (available download formats)
    Dataset updated
    Apr 17, 2024
    Dataset authored and provided by
    Institut für Kartographie und Geoinformatik
    License

Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
Linden-Nord, Hanover
    Description

    Work in progress: data might change

    The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.

    Vehicle Detections

    Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.

    The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.

    Figure 1: Overview map of detected vehicles

    Parking Areas

    The public parking areas were digitized manually using aerial images and the detected vehicles in order to exclude irregular parking spaces as far as possible. They were also tagged as to whether they were aligned parallel to the road and assigned to a use at the time of recording, as some are used for construction sites or outdoor catering, for example. Depending on the intended use, they can be filtered individually.

    Figure 2: Visualization of example parking areas on top of an aerial image [by LGLN]

    Parking Occupancy

    For modelling the parking occupancy, single slots are sampled as center points every 5 m from the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023). Custom representations can be generated from the parking areas and vehicle detections. These parking points were intersected with the vehicle boxes to identify occupancy at the respective epochs.
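
    A minimal sketch of that slot-sampling step with Shapely (geometries are illustrative, not taken from the dataset):

        # Sample slot centers every 5 m along a lane, then test occupancy
        # against detected vehicle bounding boxes.
        from shapely.geometry import LineString, box

        lane = LineString([(0, 0), (42, 0)])                    # one parking lane
        slots = [lane.interpolate(d) for d in range(0, int(lane.length) + 1, 5)]

        vehicle_boxes = [box(3, -1, 8, 1), box(19, -1, 24, 1)]  # car footprints
        occupied = [any(b.contains(p) for b in vehicle_boxes) for p in slots]
        print(occupied)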

    Figure 3: Overview map of average parking slot load

    However, unoccupied spaces cannot be determined quite as trivially the other way around, since the absence of a detected vehicle can just as well result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in the neighborhood on the same parking lane, so that it can be assumed that a measurement exists.

    To close temporal gaps, interpolations were made by hour for each parking slot, assuming that between two consecutive observations with an occupancy the space was also occupied in between - or, if free at both times, also free in between. If there was a change, this is indicated by a proportional value. To close spatial gaps, unobserved spaces in the area are drawn randomly from the ten closest occupation patterns around them.

    This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.

    Figure 4: Example parking area occupation pattern

    References

    • F. Bock, D. Eggert and M. Sester (2015): On-street Parking Statistics Using LiDAR Mobile Mapping, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 2015, pp. 2812-2818. https://doi.org/10.1109/ITSC.2015.452
    • A. Leichter, U. Feuerhake, and M. Sester (2021): Determination of Parking Space and its Concurrent Usage Over Time Using Semantically Segmented Mobile Mapping Data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 185–192. https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-185-2021
    • O. Wage, M. Heumann, and L. Bienzeisler (2023): Modeling and Calibration of Last-Mile Logistics to Study Smart-City Dynamic Space Management Scenarios. In 1st ACM SIGSPATIAL International Workshop on Sustainable Mobility (SuMob ’23), November 13, 2023, Hamburg, Germany. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3615899.3627930
  11. Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection

    • data.usgs.gov
    • s.cnmilf.com
    • +1more
    Updated Sep 18, 2014
    + more versions
    Cite
    U.S. Geological Survey (2014). Lidar Point Cloud - USGS National Map 3DEP Downloadable Data Collection [Dataset]. https://data.usgs.gov/datacatalog/data/USGS:b7e353d2-325f-4fc6-8d95-01254705638a
    Explore at:
    Dataset updated
    Sep 18, 2014
    Dataset provided by
United States Geological Survey (http://www.usgs.gov/)
    Authors
    U.S. Geological Survey
    License

U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    This data collection of the 3D Elevation Program (3DEP) consists of Lidar Point Cloud (LPC) projects as provided to the USGS. These point cloud files contain all the original lidar points collected, with the original spatial reference and units preserved. These data may have been used as the source of updates to the 1/3-arcsecond, 1-arcsecond, and 2-arcsecond seamless 3DEP Digital Elevation Models (DEMs). The 3DEP data holdings serve as the elevation layer of The National Map, and provide foundational elevation information for earth science studies and mapping applications in the United States. Lidar (Light detection and ranging) discrete-return point cloud data are available in LAZ format. The LAZ format is a lossless compressed version of the American Society for Photogrammetry and Remote Sensing (ASPRS) LAS format. Point Cloud data can be converted from LAZ to LAS or LAS to LAZ without the loss of any information. Either format stores 3-dimensional point cloud data and point ...
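
    Because LAZ is a lossless compression of LAS, converting between the two preserves every point; a minimal sketch with laspy and a LAZ backend installed (pip install "laspy[lazrs]"; tile name hypothetical):

        # Lossless LAZ -> LAS conversion; the file extension selects the output format.
        import laspy

        las = laspy.read("USGS_LPC_tile.laz")
        las.write("USGS_LPC_tile.las")   # same points, now uncompressed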

  12. Example data for 'lacunr: Efficient 3-D lacunarity for voxelized LiDAR data from forested ecosystems'

    • explore.openaire.eu
    • zenodo.org
    Updated Jul 26, 2025
    Cite
    Smeds, Elliott A.; Cooper, Zane; Bentley, Lisa Patrick (2025). Example data for 'lacunr: Efficient 3-D lacunarity for voxelized LiDAR data from forested ecosystems' [Dataset]. http://doi.org/10.5281/zenodo.16116069
    Explore at:
    Dataset updated
    Jul 26, 2025
    Authors
    Smeds, Elliott A.; Cooper, Zane; Bentley, Lisa Patrick
    Description

    This repository contains data and code for reproducing the results of the worked example in Smeds et al. 2025, "lacunr: Efficient 3-D lacunarity for voxelized LiDAR data from forested ecosystems", currently in press for publication in Methods in Ecology and Evolution (doi: TBA). The worked example analyzes terrestrial LiDAR scans of two forest stands at the Saddle Mountain Open Space Preserve in Sonoma County, California, which were burned in the 2020 Glass Fire. The first of these scans, Plot 1, is included in the 'lacunr' software package as an example dataset, while the second plot is archived here. The included R script allows users to reproduce the lacunarity curves displayed in Figure 2 and Figure S1 of Smeds et al. The deposited zip archive, lacunr_example.zip, can be decompressed into a corresponding folder with the following contents:

    • data/ - a data folder containing two height-normalized terrestrial LiDAR point clouds in .laz format: c6_tls_p6_prefire.laz (a 24 x 24 m forest stand before wildfire) and c10_tls_p6_postfire.laz (the same forest stand after wildfire)
    • lacunr_example_data.R - R script for replicating the lacunarity curves presented in the publication
    • lacunr_example.Rproj - the R project file which allows users to open the R script in a self-contained environment; this file should be opened in RStudio prior to executing the R script
    • output/ - the folder where external image files are exported by the R script
    • README.txt - a text file containing instructions for opening the R project and running the associated code

  13. Full-waveform pulsed LiDAR dataset

    • zenodo.org
    Updated Apr 3, 2023
    Cite
    Daniel Bastos; Daniel Bastos; Alexandre Brandão; Abel Lorences-Riesgo; Paulo Monteiro; Arnaldo Oliveira; Dionísio Pereira; Hadi Olyaei; Miguel Drummond; Miguel Drummond; Alexandre Brandão; Abel Lorences-Riesgo; Paulo Monteiro; Arnaldo Oliveira; Dionísio Pereira; Hadi Olyaei (2023). Full-waveform pulsed LiDAR dataset [Dataset]. http://doi.org/10.5281/zenodo.7075871
    Explore at:
bin (available download formats)
    Dataset updated
    Apr 3, 2023
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Daniel Bastos; Daniel Bastos; Alexandre Brandão; Abel Lorences-Riesgo; Paulo Monteiro; Arnaldo Oliveira; Dionísio Pereira; Hadi Olyaei; Miguel Drummond; Miguel Drummond; Alexandre Brandão; Abel Lorences-Riesgo; Paulo Monteiro; Arnaldo Oliveira; Dionísio Pereira; Hadi Olyaei
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A dataset comprised of full-waveform (periodically sampled) pulses emulating LiDAR signals is provided. The waveforms were captured at a sampling rate of 20 Gsample/s and over a dynamic range of 45 dB. A simple Python notebook is also given, showing how to properly load the waveforms and capture parameters from the stored .mat file.
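
    A minimal sketch of loading the stored .mat file with SciPy, in the spirit of the bundled notebook (the file and variable names are assumptions; list the keys first to find the real ones):

        # Load the pulsed-LiDAR waveforms from the .mat file.
        from scipy.io import loadmat

        data = loadmat("lidar_waveforms.mat")                 # hypothetical file name
        print([k for k in data if not k.startswith("__")])    # variables in the file
        waveforms = data["waveforms"]                         # hypothetical key
        print(waveforms.shape)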

    The dataset was used to demonstrate a new time-frequency estimation method for pulsed LiDAR systems [1].

    The dataset was also used to build efficient machine learning (ML) models capable of accurate and precise time-of-flight estimations [2].

    For full details, please check the Experimental Setup in [1].

    Please contact Daniel Bastos (d.bastos@ua.pt) for any further questions.

    References:

    [1] – D. Bastos, A. Brandão, A. Lorences-Riesgo, P. P. Monteiro, A. S. R. Oliveira, D. Pereira, H. Z. Olyaei and M. V. Drummond, "Time-Frequency Range Estimation Method for Pulsed LiDAR," in IEEE Transactions on Vehicular Technology, vol. 72, no. 2, pp. 1429-1437, Feb. 2023, doi: 10.1109/TVT.2022.3207588.

    [2] – Daniel Bastos, Bruno Faria, Paulo P. Monteiro, Arnaldo S. R. Oliveira, and Miguel V. Drummond, "Machine learning-aided LiDAR range estimation," Opt. Lett. 48, 1962-1965 (2023), doi: 10.1364/OL.487000.

  14. L3A - Airborne LiDAR transects summary collected by EBA in the Brazilian Amazon

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 18, 2022
    + more versions
    Cite
    Assis, Mauro (2022). L3A - Airborne LiDAR transects summary collected by EBA in the Brazilian Amazon [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4552698
    Explore at:
    Dataset updated
    Jul 18, 2022
    Dataset provided by
    Pereira, Francisca Rocha de Souza
    Ometto, Jean Pierre
    Cantinho, Roberta Zecchini
    Gorgens, Bastos Gorgens
    Assis, Mauro
    Sato, Luciane Yumie
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Amazon Rainforest
    Description

    The shapefile provides the exact location and the attributes of the LiDAR transects, including the sampling method, sampling criteria, whether field data are available, who owns the field data, the average metrics, and other relevant information. The metrics were extracted from the original point cloud, including basic outlier cleaning. The EBA project collected the LiDAR transects between 2016 and 2018. Each transect covered 375 ha (12.5 km × 300 m) by emitting full-waveform laser pulses from a Trimble Harrier 68i airborne sensor (Trimble; Sunnyvale, CA) aboard a Cessna model 206 aircraft. The average point density was set at four returns per square meter, the field of view was 30°, the flying altitude was 600 m, and the transect width on the ground was approximately 494 m. Global Navigation Satellite System (GNSS) data were collected on a dual-frequency receiver (L1/L2). The pulse footprint was set to be below 30 cm, based on a divergence angle between 0.1 and 0.3 milliradians. Horizontal and vertical accuracy were controlled to be under 1 m and under 0.5 m, respectively.
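
    The Elev_p* fields listed below are height percentiles; a minimal sketch of reproducing such metrics from point heights with NumPy (the height array is simulated, not from the transects):

        # Percentile-style height metrics as in the shapefile attributes.
        import numpy as np

        heights = np.random.gamma(shape=2.0, scale=8.0, size=10_000)  # fake heights, m

        metrics = {f"Elev_p{p:02d}": np.percentile(heights, p)
                   for p in (5, 20, 25, 30, 40, 50, 60, 70, 75, 80, 90, 95, 99)}
        metrics["Elev_mean"] = heights.mean()
        metrics["Elev_iq"] = np.percentile(heights, 75) - np.percentile(heights, 25)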

    Metadata included in the shapefile:

    Transect = Transect id (unique identification).

    Field_data = Researcher or project who owns the field data.

    Hyperspect = presence of hyperspectral data.

    Obs = Contact person that requested the transect.

    Campaign = 1 between 2016 and 2017. 2 between 2017 and 2018.

    Criteria = Study criteria to orient allocation.

    Datafile = name of datafile.

    Elev_maxim = Maximum height in meters.

    Elev_mean = mean height in meters.

    Elev_mode = Height mode in meters.

    Elev_stdde = Height standard deviation in meters.

    Elev_varia = Height variance.

    Elev_cv = Coefficient of variation for height.

    Elev_iq = 75th percentile minus 25th percentile for height in meters.

    Elev_skewn = Skewness for height.

    Elev_kurto = Kurtosis for height.

    Elev_aad = Height average absolute deviation in meters.

    Elev_mad_m = Mean absolute deviation for height in meters.

    Elev_mad_1 = not clear yet.

    Elev_l1 = L-moment 1 distance for height.

    Elev_l2 = L-moment 2 distance for height.

    Elev_l3 = L-moment 3 distance for height.

    Elev_l4 = L-moment 4 distance for height.

    Elev_l_cv = Coefficient of variation for L-moments.

    Elev_l_ske = Skewness for L-moment.

    Elev_l_kur = Kurtosis for L-moment.

    Elev_p05 = Percentile 5 for height in meters.

    Elev_p20 = Percentile 20 for height in meters.

    Elev_p25 = Percentile 25 for height in meters.

    Elev_p30 = Percentile 30 for height in meters.

    Elev_p40 = Percentile 40 for height in meters.

    Elev_p50 = Percentile 50 for height in meters.

    Elev_p60 = Percentile 60 for height in meters.

    Elev_p70 = Percentile 70 for height in meters.

    Elev_p75 = Percentile 75 for height in meters.

    Elev_p80 = Percentile 80 for height in meters.

    Elev_p90 = Percentile 90 for height in meters.

    Elev_p95 = Percentile 95 for height in meters.

    Elev_p99 = Percentile 99 for height in meters.

    Canopy_rel = not clear yet.

    Elev_sqrt = Quadratic mean for height.

    Elev_curt = Cubic mean for height.

    Profile_ar = not clear yet.

    Random = Yes, if transect was random selected inside the criteria.

    Epsg = Coordinate system.

    eba_map = transects included in the biomass map for the 4th National Communication.

    overlap = transects overlapping more than 45%.

  15. 2010 Channel Islands Lidar Collection

    • catalog.data.gov
    • portal.opentopography.org
    • +5more
    Updated Nov 12, 2020
    + more versions
    Cite
    The Nature Conservancy (Originator); U.S. Geological Survey (Originator); null (Originator); Dewberry (Originator); National Park Service (Originator) (2020). 2010 Channel Islands Lidar Collection [Dataset]. https://catalog.data.gov/dataset/2010-channel-islands-lidar-collection
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    The Nature Conservancy (Originator); U.S. Geological Survey (Originator); null (Originator); Dewberry (Originator); National Park Service (Originator)
    Description

    Lidar dataset collected for the five islands comprising Channel Islands National Park: San Miguel, Santa Rosa, Anacapa, Santa Barbara, and Santa Cruz. The project totals about 288 square miles (747 square kilometers). Elevation data were delivered in 2,000 m x 2,000 m tiles, with 280 tiles produced. Deliverables include LAS 1.2 files (classified and raw), a digital elevation model, a digital surface model, control data, and metadata. Full waveform data were also collected over the entire project area. Data were collected using a Riegl LMS-Q560 laser scanner flown from a helicopter. Helicopters were also used to establish ground control on all five islands. Personnel from the National Park Service and The Nature Conservancy (which owns most of Santa Cruz Island) accompanied the survey team to ensure minimal impact to habitat. To assist with positional accuracy, continuously operating reference stations (CORS) on each island were modified by UNAVCO from a 15-second to a one-second sampling rate during the collection period. Nominal single-swath point density was three points per square meter (seven points with 50% overlap).

  16. AHN Netherlands 0.5m DEM, Raw Samples

    • developers.google.com
    Updated Jan 1, 2012
    Cite
    AHN (2012). AHN Netherlands 0.5m DEM, Raw Samples [Dataset]. https://developers.google.com/earth-engine/datasets/catalog/AHN_AHN2_05M_RUW
    Explore at:
    Dataset updated
    Jan 1, 2012
    Dataset provided by
    AHN
    Time period covered
    Jan 1, 2012
    Area covered
    Description

    The AHN DEM is a 0.5m DEM covering the Netherlands. It was generated from LIDAR data collected in the spring between 2007 and 2012. This version contains both ground-level samples and items above ground level (such as buildings, bridges, trees, etc.). The point cloud was converted to a 0.5m …
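
    The catalog URL above corresponds to the Earth Engine asset AHN/AHN2_05M_RUW; a minimal sketch of sampling it with the Earth Engine Python API (the 'elevation' band name and the sample point are assumptions to verify against the catalog page):

        # Sample the AHN raw DEM at a point via Earth Engine.
        import ee

        ee.Initialize()                          # requires prior authentication
        dem = ee.Image("AHN/AHN2_05M_RUW").select("elevation")  # band name assumed
        point = ee.Geometry.Point(4.89, 52.37)   # illustrative location
        print(dem.sample(point, scale=0.5).first().getInfo())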

  17. ULS and conpy height model,30*30m sample plots

    • figshare.com
    Updated Aug 8, 2024
    Cite
    Ma Ye (2024). ULS and conpy height model,30*30m sample plots [Dataset]. http://doi.org/10.6084/m9.figshare.26520301.v2
    Explore at:
tiff (available download formats)
    Dataset updated
    Aug 8, 2024
    Dataset provided by
    figshare
    Authors
    Ma Ye
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned aerial vehicle (UAV) LiDAR data, denoised, normalized, etc.

  18. Lidar - ND Halo Scanning Doppler, Boardman - Raw Data

    • catalog.data.gov
    • data.openei.org
    • +2more
    Updated Apr 26, 2022
    + more versions
    Cite
    Wind Energy Technologies Office (WETO) (2022). Lidar - ND Halo Scanning Doppler, Boardman - Raw Data [Dataset]. https://catalog.data.gov/dataset/lidar-hilflows-llnl-zephir300-mop-processed-data
    Explore at:
    Dataset updated
    Apr 26, 2022
    Dataset provided by
    Wind Energy Technologies Office (WETO)
    Description

    Overview

    The University of Notre Dame (ND) scanning lidar dataset used for the WFIP2 Campaign is provided. The raw dataset contains the radial velocity and backscatter measurements along with the beam location and other lidar parameters in the header.

    Data Details

    1) A Halo Photonics scanning lidar, owned by ND, was deployed and operated from 12/17/2015 to 02/09/2016. On 02/09/2016, this lidar was replaced by a Halo Photonics scanning lidar owned by the Army Research Lab (ARL).
    2) For information on the scanning patterns, refer to the attached "ReadMe" file.
    3) Data period from 12/15/2015 to 02/09/2016: one data file per day (24 hours). The file name of each daily data file has {boardman} as {optionalfields}. For example: lidar.z07.00.20150414.143000.boardman.csm.
    4) Data period after 02/09/2016: one scan file every 15 minutes, one stare file, and one background file every hour. File names have the following {optionalfields}: {background_boardman} for background files; {scan_boardman} for scan files; and {stare_boardman} for stare files. For example:
    - lidar.z07.00.20150414.143000.background_boardman
    - lidar.z07.00.20150414.143000.scan_boardman
    - lidar.z07.00.20150414.143000.stare_boardman
    5) Site information:
    - Site: Boardman, OR
    - Latitude: 45.816185° N
    - Longitude: 119.811766° W
    - Elevation (meters): 112.0

    Data Quality

    Raw data: no quality control (QC) is applied.

    Uncertainty

    The lidar measurements' uncertainty varies with the range of the measurements. Please refer to Pearson et al. (2009) for more details.

    Constraints

    1) Because of the change of lidars, the data were downloaded in different formats. Hence, the raw (unfiltered) data are primarily in two formats: .csm and .hpl.
    2) The data were downloaded every hour or every 15 minutes. Hence, the datasets are not concatenated for continuous scans.
    3) A lidar offset of +195 deg (to True North) was added to the azimuthal angles from the ND scanning lidars, spanning 12/17/2015 until 02/09/2016. This was corrected for the data from 02/09/2016 onward, as the lidar was aligned to True North.
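
    The file names above embed the acquisition timestamp, so an archive can be indexed without opening any files; a minimal sketch:

        # Recover the timestamp from a daily file name.
        from datetime import datetime

        name = "lidar.z07.00.20150414.143000.boardman.csm"
        parts = name.split(".")
        stamp = datetime.strptime(parts[3] + parts[4], "%Y%m%d%H%M%S")
        print(stamp.isoformat())   # 2015-04-14T14:30:00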

  19. Data from: Flying high: Sampling savanna vegetation with UAV-lidar

    • zenodo.org
    • search.dataone.org
    • +2more
    Updated Jul 12, 2024
    Cite
    Peter Boucher; Peter Boucher; Evan Hockridge; Jenia Singh; Andrew Davies; Evan Hockridge; Jenia Singh; Andrew Davies (2024). Data from: Flying high: Sampling savanna vegetation with UAV-lidar [Dataset]. http://doi.org/10.5061/dryad.15dv41p24
    Explore at:
tiff, bin (available download formats)
    Dataset updated
    Jul 12, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Peter Boucher; Peter Boucher; Evan Hockridge; Jenia Singh; Andrew Davies; Evan Hockridge; Jenia Singh; Andrew Davies
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The flexibility of UAV-lidar remote sensing offers a myriad of new opportunities for savanna ecology, enabling researchers to measure vegetation structure at a variety of temporal and spatial scales. However, this flexibility also increases the number of customizable variables, such as flight altitude, pattern, and sensor parameters, that, when adjusted, can impact data quality as well as the applicability of a dataset to a specific research interest.
    To better understand the impacts that UAV flight patterns and sensor parameters have on vegetation metrics, we compared 7 lidar point clouds collected with a Riegl VUX-1LR over a 300 x 300 m area in the Kruger National Park, South Africa. We varied the altitude (60 m above ground, 100 m, 180 m, and 300 m) and sampling pattern (slowing the flight speed, increasing the overlap between flightlines, and flying a crosshatch pattern), and compared a variety of vertical vegetation metrics related to height and fractional cover.
    Comparing vegetation metrics from acquisitions with different flight patterns and sensor parameters, we found that both flight altitude and pattern had significant impacts on derived structure metrics, with variation in altitude causing the largest impacts. Flying higher resulted in lower point cloud heights, leading to a consistent downward trend in percentile height metrics and fractional cover. The magnitude and direction of these trends also varied depending on the vegetation type sampled (trees, shrubs, or grasses), showing that the structure and composition of savanna vegetation can interact with the lidar signal and alter derived metrics. While there were statistically significant differences in metrics among acquisitions, the average differences were often on the order of a few centimeters or less, which shows great promise for future comparison studies.
    We discuss how these results apply in practice, explaining the potential trade-offs of flying at higher altitudes and altering the flight pattern. We highlight how flight and sensor parameters can be geared toward specific ecological applications and vegetation types, and we explore future opportunities for optimizing UAV-lidar sampling designs in savannas.

  20. Data from: Detailed point cloud data on stem size and shape of Scots pine trees

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jul 22, 2024
    Cite
    Ninni Saarinen; Ninni Saarinen; Ville Kankare; Ville Kankare; Tuomas Yrttimaa; Tuomas Yrttimaa; Niko Viljanen; Niko Viljanen; Eija Honkavaara; Eija Honkavaara; Markus Holopainen; Markus Holopainen; Juha Hyyppä; Juha Hyyppä; Saija Huuskonen; Jari Hynynen; Jari Hynynen; Mikko Vastaranta; Mikko Vastaranta; Saija Huuskonen (2024). Detailed point cloud data on stem size and shape of Scots pine trees [Dataset]. http://doi.org/10.5281/zenodo.3701271
    Explore at:
zip, pdf (available download formats)
    Dataset updated
    Jul 22, 2024
    Dataset provided by
Zenodo (http://zenodo.org/)
    Authors
    Ninni Saarinen; Ninni Saarinen; Ville Kankare; Ville Kankare; Tuomas Yrttimaa; Tuomas Yrttimaa; Niko Viljanen; Niko Viljanen; Eija Honkavaara; Eija Honkavaara; Markus Holopainen; Markus Holopainen; Juha Hyyppä; Juha Hyyppä; Saija Huuskonen; Jari Hynynen; Jari Hynynen; Mikko Vastaranta; Mikko Vastaranta; Saija Huuskonen
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set comprises three zip files containing text files of 3D information from terrestrial laser scanning (TLS) and aerial imagery from an unmanned aerial vehicle (UAV) for individual Scots pine trees within 27 sample plots at three test sites located in southern Finland.

    TLS data acquisition was carried out with a Trimble TX5 3D laser scanner (Trimble Navigation Limited, USA) for all three study sites between September and October 2018. Eight scans were placed in each sample plot, and a scan resolution corresponding to a point distance of approximately 6.3 mm at a 10-m distance was used. Artificial constant-sized spheres (diameter of 198 mm) were placed around the sample plots and used as reference objects for registering the eight scans onto a single, aligned coordinate system. The registration was carried out with FARO Scene software (version 2018). Aerial images were obtained using a UAV with a Gryphon Dynamics quadcopter frame. Two Sony A7R II digital cameras were mounted on the UAV at +15° and -15° angles. Images were acquired every two seconds, and image locations were recorded for each image. The flights were carried out on October 2, 2018. For each study site, eight ground control points (GCPs) were placed and measured. A flying height of 140 m and a flying speed of 5 m/s were selected for all the flights, resulting in a 1.6 cm ground sampling distance. A total of 639, 614, and 663 images were captured for study sites 1, 2, and 3, respectively, resulting in 93% forward and 75% side overlaps. Photogrammetric processing of the aerial images was carried out following the workflow presented in Viljanen et al. (2018). The processing produced photogrammetric point clouds for each study site with point densities of 804 points/m², 976 points/m², and 1030 points/m² for study sites 1, 2, and 3, respectively.

    The sample plots within the three test sites have been managed with different thinning treatments in either 2005 or 2006. The experimental design of the sample plots includes two levels of thinning intensity and three thinning types resulting in six different thinning treatments, namely i) moderate thinning from below, ii) moderate thinning from above, iii) moderate systematic thinning, iv) intensive thinning from below, v) intensive thinning from above, and vi) intensive systematic thinning, as well as a control plot where no thinning has been carried out since the establishment. More information about the study sites and samples plots as well as the thinning treatments can be found in Saarinen et al. (2020a).

    The data set includes stem points of individual Scots pine trees extracted from the point clouds. More about the extraction method can be found in Saarinen et al. (2020a, 2020b) and Yrttimaa et al. (2020). The title of each zip file refers to study sites 1, 2, and 3. The title of each text file includes the information on the test site, the plot within the test site, and the tree within the plot. The text files contain stem points extracted from the TLS point clouds. The columns "x" and "y" contain x- and y-coordinates in a local coordinate system (in meters), column "h" is the height of each point in meters above ground, and treeID is the tree identification number. The columns are separated by spaces.
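
    A minimal sketch of reading one of the stem-point text files with pandas (the file name is hypothetical; a header row with the column names described above is assumed):

        # Load a space-separated stem-point file (columns: x, y, h, treeID).
        import pandas as pd

        stem = pd.read_csv("site1_plot3_tree12.txt", sep=r"\s+")
        print(stem.columns.tolist())   # expected: ['x', 'y', 'h', 'treeID']
        print(stem["h"].max())         # top stem point height of this tree, m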

    Based on the study site and plot number, files from different thinning treatments can be identified by using the information in Table 1 in Saarinen et al. (2020b).

    References

    Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020a. Assessing the effects of stand dynamics on stem growth allocation of individual Scots pines. bioRxiv 2020.03.02.972521. https://doi.org/10.1101/2020.03.02.972521

    Saarinen, N., Kankare, V., Yrttimaa, T., Viljanen, N., Honkavaara, E., Holopainen, M., Hyyppä, J., Huuskonen, S., Hynynen, J., Vastaranta, M. 2020b. Detailed point cloud data on stem size and shape of Scots pine trees. bioRxiv 2020.03.09.983973. https://doi.org/10.1101/2020.03.09.983973

    Viljanen, N., Honkavaara, E., Näsi, R., Hakala, T., Niemeläinen, O., Kaivosoja, J. 2018. A Novel Machine Learning Method for Estimating Biomass of Grass Swards Using a Photogrammetric Canopy Height Model, Images and Vegetation Indices Captured by a Drone. Agriculture 8: 70. https://doi.org/10.3390/agriculture8050070

    Yrttimaa, T., Saarinen, N., Kankare, V., Hynynen, J., Huuskonen, S., Holopainen, M., Hyyppä, J., Vastaranta, M. 2020. Performance of terrestrial laser scanning to characterize managed Scots pine (Pinus sylvestris L.) stands is dependent on forest structural variation. EarthArXiv. March 5. https://doi.org/10.31223/osf.io/ybs7c
