This dataset contains the KITTI Object Detection Benchmark, created by Andreas Geiger, Philip Lenz, and Raquel Urtasun and presented in the Proceedings of CVPR 2012, "Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite". This kernel contains the object detection part of the different datasets they published for autonomous driving: a set of images with their bounding-box labels and Velodyne point clouds. For more information, visit the website where they published the data (http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d).
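As a quick way to get started with the Velodyne scans: each KITTI `.bin` file is a flat binary array of float32 values, four per point (x, y, z, reflectance). A minimal loading sketch (the file path is a placeholder):

```python
import numpy as np

def load_velodyne_scan(path):
    """Load a KITTI Velodyne scan: a flat float32 binary file,
    four values per point (x, y, z, reflectance)."""
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, 4)  # shape: (num_points, 4)
```

For example, `load_velodyne_scan("training/velodyne/000000.bin")` returns an `(N, 4)` array ready for filtering or projection.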
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Lidar Human Detection is a dataset for object detection tasks - it contains Humans annotations for 1,343 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
A proposal-level point-cloud self-supervised learning (SSL) framework for 3D object detection, learning robust 3D representations by contrasting region proposals.
This dataset was created by cubeai.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
LiDAR Object Detection is a dataset for instance segmentation tasks - it contains Buildings Roads Veg Parkinglot annotations for 558 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
This dataset was created by Neoskye.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SERC Subjective Quality LiDAR Image Dataset contains 342 reconstructed images and their ground-truth equivalents, as well as the MOS values and statistics obtained during subjective experiments conducted within the joint IMPRESS-U project entitled “EAGER IMPRESS-U: Exploratory Research on Generative Compression for Compressive Lidar”, funded in part by US National Science Foundation NSF under Grant No. 2404740, Science Technology Center in Ukraine (STCU) Agreement No. 7116, and National Science Centre, Poland (NCN), Grant no. 2023/05/Y/ST6/00197.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
Lidar is a dataset for object detection tasks - it contains Vehicles annotations for 1,905 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset provides object detection results from five different LiDAR-based object detection algorithms: PointRCNN, SECOND, Part-A², PointPillars, and PV-RCNN. The experiments aim to determine the optimal angular resolution for LiDAR-based object detection. The point cloud data was generated in the CARLA simulator in a suburban scenario featuring 30 vehicles, 13 bicycles, and 40 pedestrians. The angular resolution ranges from 0.1° × 0.1° (H × V) to 1.0° × 1.0°, in increments of 0.1° in each direction.
For each angular resolution, over 2000 frames of point clouds were collected; 1600 of these frames were labeled across three object classes (vehicles, pedestrians, and cyclists) for algorithm training. The dataset includes detection results from evaluating 1000 frames, recorded for each angular resolution.
Each file in the dataset contains five sheets, corresponding to the five different algorithms evaluated. The data structure includes the following columns:
Frame Index: Indicates the frame number, ranging from 1 to 1000.
Object Classification: Labels objects as 1 (Vehicle), 2 (Pedestrian), or 3 (Cyclist).
Confidence Score: Represents the confidence level of the detected object in its bounding box.
Number of LiDAR Points: Indicates the count of LiDAR points within the bounding box.
Bounding Box Distance: Specifies the distance of the bounding box from the LiDAR sensor.
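A minimal sketch of working with rows in this structure, assuming each row carries the five columns in the order listed above (`decode_row` and `CLASS_NAMES` are illustrative names, not part of the dataset):

```python
# Class labels as defined in the dataset's Object Classification column.
CLASS_NAMES = {1: "Vehicle", 2: "Pedestrian", 3: "Cyclist"}

def decode_row(row):
    """Convert one result row
    (frame_index, class_id, confidence, n_points, distance)
    into a labeled dictionary."""
    frame, cls, conf, n_pts, dist = row
    return {
        "frame": int(frame),              # 1..1000
        "class": CLASS_NAMES[int(cls)],   # 1=Vehicle, 2=Pedestrian, 3=Cyclist
        "confidence": float(conf),        # detector confidence score
        "lidar_points": int(n_pts),       # LiDAR points inside the box
        "distance_m": float(dist),        # box distance from the sensor
    }
```

For example, `decode_row((12, 1, 0.91, 85, 23.4))` yields a vehicle detection in frame 12 with 85 supporting LiDAR points.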
This dataset was created in the context of the Leibniz Young Investigator Grants programme of Leibniz University Hannover and is funded by the Ministry of Science and Culture of Lower Saxony (MWK), Grant No. 11-76251-114/2022.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
LIDAR Detection is a dataset for object detection tasks - it contains LIDARs Rkfw annotations for 665 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
A LiDAR-based 3D object detection dataset.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
UA_L-DoTT (University of Alabama’s Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low light conditions were present at one of the data sites showcasing unique differences between individual sensor data. The final data site utilized a mobile platform which created a large variety of view points in images and point clouds. The dataset consists of 93,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera. This provides various viewpoints and features of the same targets due to the variance in operational characteristics of the sensors. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or label additional raw data as needed. This large dataset is beneficial to any researcher interested in machine learning using cameras, LiDARs, or both.
The full dataset is too large (~1 TB) to be uploaded to Mendeley Data; please see the attached link for access to the full dataset.
Discover the booming Lidar Object Processing Software market! This in-depth analysis reveals key trends, growth drivers, and market segmentation from 2019-2033, featuring key players like Hexagon and Velodyne. Explore regional market share and forecast data for informed business decisions.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Bill Basener
Released under MIT
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Object Detection Lidar YOLOv5 is a dataset for object detection tasks - it contains Car Person Tree annotations for 3,930 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Keywords: LiDAR point cloud corruption, sensor phenomena, anomaly, autonomous vehicle, contamination, dataset, object detection benchmark, perception robustness testing, sensor.
This dataset is the 5m dataset, which is part of the larger LIDAROC dataset.
The experiment was conducted in two environments: the first was a subterranean narrow hallway with the target approximately 5 meters away (referred to as the 5m dataset), simulating a complex urban driving scenario; the second was a spacious outdoor area with two distance variations (10 and 20 meters).
For the 10m and 20m datasets, please refer to the link below:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
LiDAR+Camera Dataset is a dataset for object detection tasks - it contains Auto Car HV LCV MTW Others annotations for 2,187 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is part of the larger data collection, “Aerial imagery object identification dataset for building and road detection, and building height estimation”, linked to in the references below and can be accessed here: https://dx.doi.org/10.6084/m9.figshare.c.3290519. For a full description of the data, please see the metadata: https://dx.doi.org/10.6084/m9.figshare.3504413.
Imagery data are from the United States Geological Survey (USGS); building and road shapefiles are from OpenStreetMap (OSM) (these OSM data are made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/); and the LiDAR data are from the U.S. National Oceanic and Atmospheric Administration (NOAA) and the Texas Natural Resources Information System (TNRIS).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Keywords: LiDAR point cloud corruption, sensor phenomena, anomaly, autonomous vehicle, contamination, dataset, object detection benchmark, perception robustness testing, sensor.
This dataset is the 20m dataset, which is part of the larger LIDAROC dataset.
The experiment was conducted in two environments: the first was a subterranean narrow hallway with the target approximately 5 meters away (referred to as the 5m dataset), simulating a complex urban driving scenario; the second was a spacious outdoor area with two distance variations (10 and 20 meters).
For the 5m and 10m datasets, please refer to the link below:
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created with a focus on real-time 3D object detection and tracking for autonomous vehicles. Processing raw LiDAR point clouds on edge devices like the NVIDIA Xavier is computationally expensive, so we adopted a more efficient approach: converting 3D LiDAR data into 2D Bird's Eye View (BEV) images.
Autonomous driving systems rely heavily on LiDAR point clouds (x, y, z, intensity) for perception. However, running deep learning models directly on raw LiDAR data is challenging due to high computational costs. This dataset provides pre-processed BEV images that significantly reduce complexity while preserving critical spatial information for object detection.
We follow the approach from the research paper "Complex-YOLO: An Euler-Region-Proposal for Real-time 3D Object Detection on Point Clouds" to transform LiDAR point clouds into RGB BEV images. The transformation encodes the point cloud into per-cell height, intensity, and density maps. These maps are combined into the channels of an RGB image that can be efficiently processed by CNNs.
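A rough sketch of this kind of BEV rasterization, not this dataset's exact pipeline: the grid extents, cell size, and per-channel encodings below are illustrative assumptions.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0, 50), y_range=(-25, 25), cell=0.1):
    """Rasterize an (N, 4) LiDAR cloud (x, y, z, intensity) into a
    3-channel BEV image: max height, max intensity, log-scaled density."""
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    bev = np.zeros((3, h, w), dtype=np.float32)

    # Keep only points that fall inside the grid.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    rows = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    cols = ((pts[:, 1] - y_range[0]) / cell).astype(int)

    for r, c, z, i in zip(rows, cols, pts[:, 2], pts[:, 3]):
        bev[0, r, c] = max(bev[0, r, c], z)   # height channel
        bev[1, r, c] = max(bev[1, r, c], i)   # intensity channel
        bev[2, r, c] += 1.0                   # raw point count
    bev[2] = np.log1p(bev[2])                 # density channel
    return bev
```

The three channels can then be stacked or scaled into an RGB image for a 2D detector such as YOLO.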
✅ Data Reduction – 29GB of raw KITTI LiDAR data compressed into just 600MB of BEV images.
✅ Efficient Object Detection – Supports YOLO OBB bounding box format for lightweight real-time inference.
✅ Edge Device Optimization – Designed for fast inference on low-power hardware like NVIDIA Jetson Xavier.
🔹 Real-time 3D object detection for autonomous vehicles
🔹 Sensor fusion and multi-modal perception research
🔹 Efficient LiDAR data processing for embedded AI applications