100+ datasets found
  1. Lidar Dataset

    • kaggle.com
    zip
    Updated Oct 22, 2020
    + more versions
    Cite
    Karim Cossentini (2020). Lidar Dataset [Dataset]. https://www.kaggle.com/datasets/karimcossentini/velodyne-point-cloud-dataset
    Explore at:
    Available download formats: zip (0 bytes)
    Dataset updated
    Oct 22, 2020
    Authors
    Karim Cossentini
    Description

    This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philip Lenz and Raquel Urtasun and presented at CVPR 2012 in "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". This kernel contains the object detection part of the different datasets they published for autonomous driving. It contains a set of images with their bounding box labels and Velodyne point clouds. For more information, visit the website where they published the data (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).

  2. Data from: Extended Evaluation of SnowPole Detection for Machine-Perceivable...

    • data.mendeley.com
    Updated Jun 30, 2025
    Cite
    Durga Prasad Bavirisetti (2025). Extended Evaluation of SnowPole Detection for Machine-Perceivable Infrastructure for Nordic Winter Conditions: A Comparative Study of Object Detection Models [Dataset]. http://doi.org/10.17632/tt6rbx7s3h.3
    Explore at:
    Dataset updated
    Jun 30, 2025
    Authors
    Durga Prasad Bavirisetti
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this study, we present an extensive evaluation of state-of-the-art YOLO object detection architectures for identifying snow poles in LiDAR-derived imagery captured under challenging Nordic conditions. Building upon our previous work on the SnowPole Detection dataset [1] and our LiDAR-GNSS-based localization framework [2], we expand the benchmark to include six YOLO models (YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, YOLOv10n, and YOLOv11n) evaluated across multiple input modalities. Specifically, we assess single-channel modalities (Reflectance, Signal, Near-Infrared) and six pseudo-color combinations derived by mapping these channels to RGB representations. Each model's performance is quantified using Precision, Recall, mAP@50, mAP@50-95, and GPU inference latency. To facilitate systematic comparison, we define a composite Rank Score that integrates detection accuracy and real-time performance in a weighted formulation. Experimental results show that YOLOv9t consistently achieves the highest detection accuracy, while YOLOv11n provides the best trade-off between accuracy and inference speed, making it a promising candidate for real-time applications on embedded platforms. Among input modalities, pseudo-color combinations, particularly those fusing Near-Infrared, Signal, and Reflectance channels, outperformed single modalities across most configurations, achieving the highest Rank Scores and mAP metrics. We therefore recommend using multimodal LiDAR representations such as Combination 4 and Combination 5 to maximize detection robustness in practical deployments. All datasets, benchmarking code, and trained models are publicly available to support reproducibility and further research through our GitHub repository (a).

    References:
    [1] Durga Prasad Bavirisetti, Gabriel Hanssen Kiss, Petter Arnesen, Hanne Seter, Shaira Tabassum, and Frank Lindseth. Snowpole detection: A comprehensive dataset for detection and localization using lidar imaging in Nordic winter conditions. Data in Brief, 59:111403, 2025.
    [2] Durga Prasad Bavirisetti, Gabriel Hanssen Kiss, and Frank Lindseth. A pole detection and geospatial localization framework using lidar-gnss data fusion. In 2024 27th International Conference on Information Fusion (FUSION), pages 1-8. IEEE, 2024.
    (a) https://github.com/MuhammadIbneRafiq/Extended-evaluation-snowpole-lidar-dataset
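
    The paper defines the exact Rank Score formulation; as an illustration only, here is a minimal Python sketch of one plausible weighted combination of accuracy and speed. The weights and latency budget below are hypothetical, not the authors' values.

    def rank_score(map_50_95: float, latency_ms: float,
                   w_acc: float = 0.7, w_speed: float = 0.3,
                   latency_budget_ms: float = 50.0) -> float:
        """Hypothetical composite score: accuracy (mAP@50-95 in [0, 1])
        plus a speed term normalized against a real-time latency budget."""
        speed_term = max(0.0, 1.0 - latency_ms / latency_budget_ms)
        return w_acc * map_50_95 + w_speed * speed_term

    # Example: an accurate-but-slow model vs. a slightly less accurate, fast one.
    print(rank_score(map_50_95=0.65, latency_ms=48.0))
    print(rank_score(map_50_95=0.62, latency_ms=12.0))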

  3. NCAR REAL Lidar Imagery [NCAR/EOL]

    • data.ucar.edu
    image
    Updated Aug 1, 2025
    Cite
    (2025). NCAR REAL Lidar Imagery [NCAR/EOL] [Dataset]. http://doi.org/10.5065/D6TT4P9G
    Explore at:
    Available download formats: image
    Dataset updated
    Aug 1, 2025
    Time period covered
    Mar 14, 2006 - May 1, 2006
    Area covered
    Description

    This dataset contains PNG images from the NCAR REAL lidar. It covers the TREX period from 2006-03-14 to 2006-05-01 and contains both RHI and PPI images. When ordering or browsing data, be aware of the following data gap: no data are available for 2006-04-24 through 2006-04-28.

  4. i.c.sens Visual-Inertial-LiDAR Dataset

    • data.uni-hannover.de
    bag, jpeg, pdf, png +2
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://data.uni-hannover.de/dataset/i-c-sens-visual-inertial-lidar-dataset
    Explore at:
    Available download formats (sizes in bytes): txt(1049), jpeg(556618), jpeg(129333), rviz(6412), png(650007), jpeg(153522), txt(285), pdf(21788288), bag(9982003259), bag(9980268682), bag(9969171093), bag(9971699339), bag(9939783847), bag(9896857478), bag(9960305979), bag(7419679751)
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a dataset for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and their usage can be found in the provided documentation file.

    Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg

    Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.

    Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg

    Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.

    Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz
    

    Afterwards, start playing a rosbag with

    rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
    
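    The bags can also be read programmatically. Below is a minimal Python sketch using the ROS1 rosbag API; the topic names are not listed here, so they should be checked first (e.g. with rosbag info), and the bag filename follows the pattern above.

    import rosbag

    # List the topics contained in the first bag, then peek at one message.
    with rosbag.Bag("icsens-visual-inertial-lidar-dataset-1.bag") as bag:
        info = bag.get_type_and_topic_info()
        print(list(info.topics.keys()))       # actual topic names in the bag
        for topic, msg, t in bag.read_messages():
            print(t.to_sec(), topic)          # timestamp and topic of first message
            break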

    Below we provide some exemplary images and their corresponding point clouds.

    Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg

    Related publications:

    • R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.

    • R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.

    • R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.

    • R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.

    • R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.

    • R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.

    • R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.

    • R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.

  5. Open Topographic Lidar Data

    • data.gov.ie
    Updated Oct 22, 2021
    Cite
    data.gov.ie (2021). Open Topographic Lidar Data - Dataset - data.gov.ie [Dataset]. https://data.gov.ie/dataset/open-topographic-lidar-data
    Explore at:
    Dataset updated
    Oct 22, 2021
    Dataset provided by
    data.gov.ie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data was collected by Geological Survey Ireland, the Department of Culture, Heritage and the Gaeltacht, the Discovery Programme, the Heritage Council, Transport Infrastructure Ireland, New York University, the Office of Public Works and Westmeath County Council. All data formats are provided as GeoTIFF rasters, but at different resolutions depending on survey requirements. Resolutions for each organisation are as follows:

    • GSI: 1 m
    • DCHG/DP/HC: 0.13 m, 0.14 m, 1 m
    • NY: 1 m
    • TII: 2 m
    • OPW: 2 m
    • WMCC: 0.25 m

    Both the DTM and DSM are raster data. Raster data is another name for gridded data: it stores information in pixels (grid cells), and each raster grid makes up a matrix of cells (pixels) organised into rows and columns. The grid cell size varies depending on the organisation that collected the data. GSI data has a grid cell size of 1 metre by 1 metre, meaning each cell (pixel) represents an area of 1 metre squared.
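
    Because the GeoTIFF rasters come at different resolutions, it is worth checking the grid cell size before combining tiles. A minimal sketch using rasterio; the filename is hypothetical.

    import rasterio

    # Inspect cell size, CRS and dimensions of one raster tile.
    with rasterio.open("dtm_tile.tif") as src:
        xres, yres = src.res                  # cell size in CRS units (metres here)
        print(f"Grid cell size: {xres} x {yres}")
        print(f"CRS: {src.crs}, {src.height} rows x {src.width} cols")
        dem = src.read(1)                     # band 1: elevation values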

  6. OC 2017 LiDAR Image Service

    • detroitdata.org
    • portal.datadrivendetroit.org
    • +5 more
    Updated May 18, 2021
    Cite
    Oakland County, Michigan (2021). OC 2017 LiDAR Image Service [Dataset]. https://detroitdata.org/dataset/oc-2017-lidar-image-service1
    Explore at:
    Available download formats: html, ArcGIS GeoServices REST API
    Dataset updated
    May 18, 2021
    Dataset provided by
    Oakland County, Michigan
    Description

    BY USING THIS WEBSITE OR THE CONTENT THEREIN, YOU AGREE TO THE TERMS OF USE.

    The classified point cloud (LAS) for the 2017 Michigan LiDAR project covers approximately 907 square miles, encompassing Oakland County. LAS data products are suitable for 1-foot contour generation and meet USGS LiDAR Base Specification 1.2, QL2, with 19.6 cm NVA.

    This data is for planning purposes only and should not be used for legal or cadastral purposes. Any conclusions drawn from analysis of this information are not the responsibility of Sanborn Map Company. Users should be aware that temporal changes may have occurred since this dataset was collected and some parts of this dataset may no longer represent actual surface conditions. Users should not use these data for critical applications without a full awareness of its limitations.

    This service is best used directly within ArcMap or ArcGIS Pro. If the raw LiDAR points are needed, use these clients to extract project-area-sized portions. Due to the density of the data, downloading the entire county from this service is not possible. For further questions, contact the Oakland County Service Center at 248-858-8812, servicecenter@oakgov.com.

  7. Data from: 30-m Hillshaded relief image produced from swath interferometric,...

    • catalog.data.gov
    • datadiscoverystudio.org
    • +5 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). 30-m Hillshaded relief image produced from swath interferometric, multibeam, and lidar datasets (navd_bath_30m.tif GeoTIFF Image; UTM, Zone 19N, WGS 84) [Dataset]. https://catalog.data.gov/dataset/30-m-hillshaded-relief-image-produced-from-swath-interferometric-multibeam-and-lidar-datas
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    These data are qualitatively derived interpretive polygon shapefiles and selected source raster data defining surficial geology, sediment type and distribution, and physiographic zones of the sea floor from Nahant to Northern Cape Cod Bay. Much of the geophysical data used to create the interpretive layers were collected under a cooperative agreement among the Massachusetts Office of Coastal Zone Management (CZM), the U.S. Geological Survey (USGS), Coastal and Marine Geology Program, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Army Corps of Engineers (USACE). This program, initiated in 2003, has the primary objective of developing regional geologic framework information for the management of coastal and marine resources. Accurate data and maps of seafloor geology are important first steps toward protecting fish habitat, delineating marine resources, and assessing environmental changes due to natural or human effects. The project is focused on the inshore waters of coastal Massachusetts. Data collected during the mapping cooperative involving the USGS have been released in a series of USGS Open-File Reports (http://woodshole.er.usgs.gov/project-pages/coastal_mass/html/current_map.html). The interpretations released in this study are for an area extending from the southern tip of Nahant to Northern Cape Cod Bay, Massachusetts. A combination of geophysical and sample data including high resolution bathymetry and lidar, acoustic-backscatter intensity, seismic-reflection profiles, bottom photographs, and sediment samples are used to create the data interpretations. Most of the nearshore geophysical and sample data (including the bottom photographs) were collected during several cruises between 2000 and 2008. More information about the cruises and the data collected can be found at the Geologic Mapping of the Seafloor Offshore of Massachusetts Web page: http://woodshole.er.usgs.gov/project-pages/coastal_mass/.

  8. SynthCity Dataset - Area 1

    • rdr.ucl.ac.uk
    bin
    Updated Sep 11, 2019
    + more versions
    Cite
    David Griffiths; Jan Boehm (2019). SynthCity Dataset - Area 1 [Dataset]. http://doi.org/10.5522/04/8851616.v2
    Explore at:
    Available download formats: bin
    Dataset updated
    Sep 11, 2019
    Dataset provided by
    University College London
    Authors
    David Griffiths; Jan Boehm
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    With deep learning becoming a more prominent approach for automatic classification of three-dimensional point cloud data, a key bottleneck is the amount of high-quality training data, especially when compared to that available for two-dimensional images. One potential solution is the use of synthetic data for pre-training networks; however, the ability of models to generalise from synthetic data to real-world data has been poorly studied for point clouds. Despite this, a huge wealth of 3D virtual environments exists, which, if proven effective, can be exploited. We therefore argue that research in this domain would be hugely useful. In this paper we present SynthCity, an open dataset to help aid research. SynthCity is a 367.9M-point synthetic full-colour Mobile Laser Scanning point cloud. Every point is labelled with one of nine categories. We generate our point cloud in a typical urban/suburban environment using the Blensor plugin for Blender. See our project website http://www.synthcity.xyz or paper https://arxiv.org/abs/1907.04758 for more information.

  9. Camera-LiDAR Datasets

    • figshare.com
    zip
    Updated Aug 14, 2024
    Cite
    Jennifer Leahy (2024). Camera-LiDAR Datasets [Dataset]. http://doi.org/10.6084/m9.figshare.26660863.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 14, 2024
    Dataset provided by
    figshare
    Authors
    Jennifer Leahy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets are original and specifically collected for research aimed at reducing registration errors between camera and LiDAR datasets. Traditional methods often struggle to align 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance the versatility of our approach and improve matching accuracy across both high-feature and low-feature environments.

    Survey-Grade Terrestrial Dataset:
    • Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
    • Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

    Mobile Mapping System Dataset:
    • Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
    • Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.

  10. Lidar-derived imagery and digital elevation model of Monroe County, West...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Lidar-derived imagery and digital elevation model of Monroe County, West Virginia at 3-meter resolution [Dataset]. https://catalog.data.gov/dataset/lidar-derived-imagery-and-digital-elevation-model-of-monroe-county-west-virginia-at-3-mete
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Monroe County, West Virginia
    Description

    These raster datasets are 3-meter lidar-derived images of Monroe County, West Virginia, created using geographic information systems (GIS) software. Lidar-derived elevation data acquired in late December 2016 were used to create a 3-meter resolution working digital elevation model (DEM), from which a hillshade was derived and a topographic position index (TPI) raster was calculated. These two rasters were loaded into GlobalMapper, where the TPI raster was made partially transparent and overlaid on the hillshade DEM. The resulting image was exported to create a 3-meter resolution lidar-derived image. The data are projected in North American Datum (NAD) 1983 UTM Zone 17.
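
    As a rough illustration of the TPI step described above (not the authors' exact workflow), TPI can be computed as the difference between each cell's elevation and the mean elevation of its neighborhood; the window size here is an assumption.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def tpi(dem: np.ndarray, size: int = 9) -> np.ndarray:
        """Topographic position index: cell elevation minus neighborhood mean.
        Positive values indicate ridges; negative values indicate valleys."""
        neighborhood_mean = uniform_filter(dem.astype("float64"), size=size)
        return dem - neighborhood_mean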

  11. KUCL: Korea University Camera-LIDAR Dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 28, 2020
    Cite
    Jaehyeon Kang; Jaehyeon Kang; Nakju Lett Doh; Nakju Lett Doh (2020). KUCL: Korea University Camera-LIDAR Dataset [Dataset]. http://doi.org/10.5281/zenodo.2640062
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 28, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jaehyeon Kang; Jaehyeon Kang; Nakju Lett Doh; Nakju Lett Doh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview

    The Korea University Camera-LIDAR (KUCL) dataset contains images and point clouds acquired in indoor and outdoor environments for various applications (e.g., calibration of rigid-body transformation between camera and LIDAR) in robotics and computer vision communities.

    • Indoor dataset: contains 63 pairs of images and point clouds ('indoor.zip'). We collected the indoor dataset in a static indoor environment with walls, floor, and ceiling.
    • Outdoor dataset: 61 pairs of images and point clouds ('outdoor.zip'). We collected the outdoor dataset in an outdoor environment including buildings and trees.

    Setup

    The images were taken using a Point Grey Ladybug5 (specifications) camera, and the point clouds were acquired with a Velodyne VLP-16 LIDAR (specifications). Both sensors were rigidly mounted on the sensor frame throughout data acquisition. Each pair of images and point clouds was acquired discretely while keeping the sensor system stationary, to reduce time-synchronization problems.

    Description

    Each dataset (zip file) is organized as follows:

    • images/pano: This folder contains spherical panorama images (8000 X 4000) collected using the Ladybug5.
    • images/pinhole/cam0~cam5: These folders contain rectified pinhole images (2448 X 2048) collected using six cameras (cam0~cam5) of the Ladybug5.
    • images/pinhole/mask: This folder contains the mask (BW image) of each camera of the Ladybug5.
    • images/pinhole/cam_param_pinhole.txt: This file contains extrinsic (transformation from the Ladybug5 to each lens) and intrinsic (focal length and center) parameters of each lens of the Ladybug5. For details of Ladybug5 coordinate system, please refer to the technical application note.
    • scans: This folder contains point clouds collected using the VLP-16 LIDAR as text files. The first line of each file is the number of points (N), and the remaining lines are the points and their corresponding reflectivities (N X 4); a minimal parser sketch follows this list.
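
    A minimal Python parser sketch for this scan format, assuming whitespace-separated values; the file path is hypothetical.

    import numpy as np

    def load_kucl_scan(path: str) -> np.ndarray:
        """Read one scan: first line is the point count N, then N rows of
        x, y, z, reflectivity."""
        with open(path) as f:
            n = int(f.readline())             # number of points
            points = np.loadtxt(f)            # remaining rows, shape (N, 4)
        assert points.shape == (n, 4), "unexpected number of points"
        return points

    scan = load_kucl_scan("indoor/scans/000000.txt")  # hypothetical path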

    We also provide MATLAB functions for projecting point clouds onto the spherical panorama and pinhole images. Before running the following functions, please unzip the dataset file ('indoor.zip' or 'outdoor.zip') under the main directory.

    • run_pano_projection.m: This function projects points onto a spherical panorama image. Lines 19-20 select dataset and index of an image and a point cloud.
    • run_pinhole_projection.m: This function projects points onto a pinhole image. Lines 19-21 select dataset, index of an image and a point cloud, and pinhole camera index.

    The rigid-body transformation between the Ladybug5 and the VLP-16 in each function is acquired using our edge-based Camera-LIDAR calibration method with Gaussian Mixture Model (GMM). For the details, please refer to our paper (https://doi.org/10.1002/rob.21893).

    Citation

    Please cite the following paper when using this dataset in your work.

    • Jaehyeon Kang and Nakju L. Doh, "Automatic Targetless Camera-LIDAR Calibration by Aligning Edge with Gaussian Mixture Model," Journal of Field Robotics, vol. 37, no. 1, pp.158-179, 2020.
    @ARTICLE{kang-2020-jfr,
      AUTHOR  = {Jaehyeon Kang and Nakju Lett Doh},
      TITLE   = {Automatic Targetless Camera--{LIDAR} Calibration by Aligning Edge with {Gaussian} Mixture Model},
      JOURNAL = {Journal of Field Robotics},
      YEAR    = {2020},
      VOLUME  = {37},
      NUMBER  = {1},
      PAGES   = {158--179},
    }

    License information

    The KUCL dataset is released under a Creative Commons Attribution 4.0 International License, CC BY 4.0

    Contact Information

    If you have any issues about the KUCL dataset, please contact us at kangjae07@gmail.com.

  12. Lidar point cloud, elevation models, GPS data, and imagery and orthomosaics...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Dec 27, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Lidar point cloud, elevation models, GPS data, and imagery and orthomosaics from multispectral and true-color aerial imagery data, collected during UAS operations at Marsh Island, New Bedford, MA on October 26th, 2023 [Dataset]. https://catalog.data.gov/dataset/lidar-point-cloud-elevation-models-gps-data-and-imagery-and-orthomosaics-from-multispectra
    Explore at:
    Dataset updated
    Dec 27, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    New Bedford, Massachusetts
    Description

    Small Uncrewed Aircraft Systems (sUAS) were used to collect aerial remote sensing data over Marsh Island, a salt marsh restoration site along New Bedford Harbor, Massachusetts. Remediation of the site will involve direct hydrological and geochemical monitoring of the system alongside the UAS remote sensing data. On October 26th, 2023, USGS personnel collected natural (RGB) color images, multispectral images, lidar, and ground control points. These data were processed to produce a high resolution lidar point cloud (LPC), digital elevation models (surface and terrain), and natural-color and multispectral reflectance image mosaics. Data collection is related to USGS Field Activity 2023-025-FA and this release only provides the UAS portion.

  13. Data from: UA_L-DoTT: University of Alabama's Large Dataset of Trains and...

    • data.mendeley.com
    Updated Feb 17, 2022
    + more versions
    Cite
    Maxwell Eastepp (2022). UA_L-DoTT: University of Alabama's Large Dataset of Trains and Trucks [Dataset]. http://doi.org/10.17632/982jbmh5h9.1
    Explore at:
    Dataset updated
    Feb 17, 2022
    Authors
    Maxwell Eastepp
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
    Alabama
    Description

    UA_L-DoTT (University of Alabama's Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low-light conditions were present at one of the data sites, showcasing unique differences between individual sensor data. The final data site utilized a mobile platform, which created a large variety of viewpoints in images and point clouds. The dataset consists of 93,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera. This provides various viewpoints and features of the same targets due to the variance in operational characteristics of the sensors. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or label additional raw data as needed. This large dataset is beneficial to any researcher interested in machine learning using cameras, LiDARs, or both.

    The full dataset is too large (~1 TB) to be uploaded to Mendeley Data. Please see the attached link for access to the full dataset.
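
    Since the timestamp files correlate images to point cloud scans via POSIX time, a common way to pair them is nearest-timestamp matching. A minimal sketch with illustrative values:

    import numpy as np

    image_times = np.array([1616161616.10, 1616161616.35])  # POSIX seconds (illustrative)
    scan_times = np.array([1616161616.05, 1616161616.20, 1616161616.40])

    # For each image, the index of the scan with the closest timestamp.
    nearest = np.abs(scan_times[None, :] - image_times[:, None]).argmin(axis=1)
    print(nearest)  # [0 2]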

  14. 2017 Countywide LiDAR Point Cloud

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Sep 1, 2022
    + more versions
    Cite
    Lake County Illinois GIS (2022). 2017 Countywide LiDAR Point Cloud [Dataset]. https://catalog.data.gov/dataset/2017-countywide-lidar-point-cloud-638f8
    Explore at:
    Dataset updated
    Sep 1, 2022
    Dataset provided by
    Lake County Illinois GIS
    Description

    The data can be accessed directly from the Illinois State Geospatial Data Clearinghouse. These lidar data are processed Classified LAS 1.4 files, formatted to 2,117 individual 2500 ft x 2500 ft tiles, and were used to create reflectance images, 3D breaklines and hydro-flattened DEMs as necessary.

    Geographic Extent: Lake County, Illinois, covering approximately 466 square miles.

    Dataset Description: The WI Kenosha-Racine Counties and IL 4 County QL1 Lidar project called for the planning, acquisition, processing and derivative products of lidar data collected at a derived nominal pulse spacing (NPS) of 1 point every 0.35 meters. Project specifications are based on the U.S. Geological Survey National Geospatial Program Base Lidar Specification, Version 1.2. The data was developed based on a horizontal projection/datum of NAD83 (2011), State Plane, U.S. Survey Feet and a vertical datum of NAVD88 (GEOID12B), U.S. Survey Feet. Lidar data was delivered as processed Classified LAS 1.4 files formatted to 2,117 individual 2500 ft x 2500 ft tiles, as tiled reflectance imagery, and as tiled bare-earth DEMs, all tiled to the same 2500 ft x 2500 ft schema.

    Ground Conditions: Lidar was collected in April-May 2017, while no snow was on the ground and rivers were at or below normal levels. In order to post-process the lidar data to meet task order specifications and ASPRS vertical accuracy guidelines, Ayers established a total of 66 ground control points that were used to calibrate the lidar to known ground locations established throughout the WI Kenosha-Racine Counties and IL 4 County QL1 project area. An additional 195 independent accuracy checkpoints, 116 in bare earth and urban landcovers (116 NVA points) and 79 in tall grass and brushland/low trees categories (79 VVA points), were used to assess the vertical accuracy of the data. These checkpoints were not used to calibrate or post-process the data.

    Users should be aware that temporal changes may have occurred since this dataset was collected and that some parts of these data may no longer represent actual surface conditions. Users should not use these data for critical applications without a full awareness of their limitations. Acknowledgement of the U.S. Geological Survey would be appreciated for products derived from these data. These LAS data files include all data points collected; no points have been removed or excluded. A visual qualitative assessment was performed to ensure data completeness; no void areas or missing data exist. The raw point cloud is of good quality and the data passes Non-Vegetated Vertical Accuracy specifications.

    Link Source: Illinois Geospatial Data Clearinghouse
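
    A minimal sketch for reading one of the classified LAS 1.4 tiles with laspy; the tile filename is hypothetical.

    import laspy
    import numpy as np

    las = laspy.read("lake_county_tile_0001.las")   # hypothetical tile name
    print(las.header.version, las.header.point_count, "points")

    # ASPRS class 2 marks ground returns in classified LAS files.
    ground_mask = np.asarray(las.classification) == 2
    ground_xyz = las.xyz[ground_mask]               # (N, 3) scaled coordinates
    print(ground_xyz.shape[0], "ground returns")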

  15. Data from: Lidar Data Dataset

    • universe.roboflow.com
    zip
    Updated Dec 3, 2023
    Cite
    SSG (2023). Lidar Data Dataset [Dataset]. https://universe.roboflow.com/ssg-gjpvf/lidar-data-jda8f/model/4
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 3, 2023
    Dataset authored and provided by
    SSG
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cars Bounding Boxes
    Description

    LIDAR DATA

    ## Overview

    LIDAR DATA is a dataset for object detection tasks - it contains Cars annotations for 200 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
  16. 2020 LiDAR - Classified LAS

    • catalog.data.gov
    • opendata.dc.gov
    • +6 more
    Updated May 7, 2025
    + more versions
    Cite
    Office of the Chief Technology Officer (2025). 2020 LiDAR - Classified LAS [Dataset]. https://catalog.data.gov/dataset/2020-lidar-classified-las
    Explore at:
    Dataset updated
    May 7, 2025
    Dataset provided by
    Office of the Chief Technology Officer
    Description

    These lidar data are processed classified LAS 1.4 files at USGS QL2 covering the District of Columbia. Voids exist in the data due to data redaction conducted under the guidance of the United States Secret Service. This dataset is provided as an ArcGIS Image service. Please note that the download feature for this image service in Open Data DC provides a compressed PNG, JPEG or TIFF. The individual LAS point cloud datasets are available under additional options when viewing downloads.

  17. KITTI LiDAR Based 2D Depth Images

    • kaggle.com
    Updated Jul 24, 2020
    Cite
    Ahmed Fawzy Elaraby (2020). KITTI LiDAR Based 2D Depth Images [Dataset]. https://www.kaggle.com/ahmedfawzyelaraby/kitti-lidar-based-2d-depth-images/activity
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 24, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ahmed Fawzy Elaraby
    License

    CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Introduction

    Light Detection and Ranging (LiDAR) is a sensor used to measure distances between the sensor and its surroundings. It works by emitting multiple laser beams and sensing them after they are reflected back, calculating the distance between the sensor and the objects they were reflected from. Since the rise of research in the field of self-driving cars, LiDAR has been widely used and has even been developed at lower cost than before.

    The KITTI dataset is one of the most famous datasets targeting the field of self-driving cars. It contains data recorded from a camera, LiDAR and other sensors mounted on top of a car that moves through many streets with many different scenes and scenarios.

    This dataset contains the LiDAR frames of the KITTI dataset converted to 2D depth images; the conversion was done using this code. These 2D depth images represent the same scene as the corresponding LiDAR frame, but in an easier-to-process format.

    Content

    This dataset contains 2D depth images, like the one shown below. The 360° LiDAR frames such as those in the KITTI dataset are in a cylindrical format around the sensor itself. The 2D depth images in this dataset can be thought of as the LiDAR cylinder cut open and straightened into a 2D plane. The pixels of these 2D depth images represent the distance of the reflecting object from the LiDAR sensor. The vertical resolution of the 2D depth image (64 in our case) corresponds to the number of laser beams the LiDAR sensor uses to scan the surroundings. These 2D depth images can be used for segmentation, detection, recognition and other tasks, and can make use of the huge computer vision literature on 2D images.

    Image: https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3283916%2F71fcde75b3e94ab78896aa75d7efea09%2F0000000077.png?generation=1595578603898080&alt=media
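
    The projection described above can be sketched as follows: each point's azimuth selects a column, its elevation angle selects one of the 64 rows, and the pixel stores range. The image width and the vertical field-of-view limits below are assumptions; the dataset's own conversion code is authoritative.

    import numpy as np

    def lidar_to_depth_image(points: np.ndarray, rows: int = 64, cols: int = 1024,
                             fov_up: float = 3.0, fov_down: float = -25.0) -> np.ndarray:
        """Project an (N, 3) point cloud to a cylindrical 2D depth image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x**2 + y**2 + z**2)                      # range per point
        azimuth = np.arctan2(y, x)                           # [-pi, pi] -> column
        elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))
        u = ((azimuth + np.pi) / (2 * np.pi) * cols).astype(int) % cols
        v = ((fov_up - elevation) / (fov_up - fov_down) * (rows - 1)).astype(int)
        image = np.zeros((rows, cols), dtype=np.float32)
        keep = (v >= 0) & (v < rows)                         # drop out-of-FOV points
        image[v[keep], u[keep]] = r[keep]                    # pixel value = distance
        return image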

  18. UAV LiDAR, UAV Imagery, Tree Segmentations and Ground Mesurements for...

    • frdr-dfdr.ca
    Updated Jul 29, 2024
    Cite
    Lefebvre, Isabelle; Laliberté, Etienne (2024). UAV LiDAR, UAV Imagery, Tree Segmentations and Ground Mesurements for Estimating Tree Biomass in Canadian (Quebec) Plantations [Dataset]. http://doi.org/10.20383/103.0979
    Explore at:
    Dataset updated
    Jul 29, 2024
    Dataset provided by
    Federated Research Data Repository / dépôt fédéré de données de recherche
    Authors
    Lefebvre, Isabelle; Laliberté, Etienne
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Canada, Quebec
    Description

    This dataset contains UAV photogrammetric products, UAV LiDAR point clouds, hand-made tree crown segmentation polygons, and georeferenced tree height and stem diameter measurements in carbon sequestration plantations in Quebec, Canada. The products included in this dataset are RGB orthomosaics, LiDAR and photogrammetry point clouds, digital surface models and processing reports. The tree measurements include species, height, stem diameter and location.

  19. Tree detection in UAV LiDAR and RGB image data

    • kaggle.com
    Updated Aug 3, 2024
    Cite
    Ivan D. (2024). Tree detection in UAV LiDAR and RGB image data [Dataset]. https://www.kaggle.com/datasets/sentinel3734/tree-detection-lidar-rgb
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 3, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ivan D.
    Description

    A dataset for the development and evaluation of methods for automatic tree detection in UAV LiDAR point clouds and RGB orthophotos. It consists of 3600 trees across 10 ground plots, each covered by a LiDAR point cloud and an RGB orthophoto. Each tree is represented as a point in UTM 40N (EPSG:32640) coordinates with a species label and diameter at breast height. Some trees also have height and age information.
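
    Since the tree points are stored in UTM 40N (EPSG:32640), converting them to WGS 84 latitude/longitude is straightforward with pyproj; the coordinates below are illustrative, not taken from the dataset.

    from pyproj import Transformer

    to_wgs84 = Transformer.from_crs("EPSG:32640", "EPSG:4326", always_xy=True)
    easting, northing = 400000.0, 6200000.0    # illustrative tree location
    lon, lat = to_wgs84.transform(easting, northing)
    print(f"lat={lat:.6f}, lon={lon:.6f}")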

    A paper describing the dataset and the accompanying algorithms is published in Scientific Reports: https://rdcu.be/dUwTR. If you find this dataset useful in your work, we would appreciate a citation.

  20. DIDLM

    • scidb.cn
    Updated Jul 30, 2024
    + more versions
    Cite
    anonymous (2024). DIDLM [Dataset]. http://doi.org/10.57760/sciencedb.10199
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 30, 2024
    Dataset provided by
    Science Data Bank
    Authors
    anonymous
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Adverse weather conditions, low-light environments, and bumpy road surfaces pose significant challenges to SLAM in robotic navigation and autonomous driving. Existing datasets in this field predominantly rely on single sensors or combinations of LiDAR, cameras, and IMUs. However, 4D millimeter-wave radar demonstrates robustness in adverse weather, infrared cameras excel at capturing details under low-light conditions, and depth images provide richer spatial information. Multi-sensor fusion methods also show potential for better adaptation to bumpy roads. Despite some SLAM studies incorporating these sensors and conditions, there remains a lack of comprehensive datasets addressing low-light environments and bumpy road conditions, or featuring a sufficiently diverse range of sensor data. In this study, we introduce a multi-sensor dataset covering challenging scenarios such as snowy weather, rainy weather, nighttime conditions, speed bumps, and rough terrains. The dataset includes rarely utilized sensors for extreme conditions, such as 4D millimeter-wave radar, infrared cameras, and depth cameras, alongside 3D LiDAR, RGB cameras, GPS, and IMU. It supports both autonomous driving and ground robot applications and provides reliable GPS/INS ground truth data, covering structured and semi-structured terrains. We evaluated various SLAM algorithms using this dataset, including those based on RGB images, infrared images, depth images, LiDAR, and 4D millimeter-wave radar. The dataset spans a total of 18.5 km, 69 minutes, and approximately 660 GB, offering a valuable resource for advancing SLAM research under complex and extreme conditions.
