Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation. Virtual KITTI contains 21,260 images generated from five different virtual worlds in urban settings under different imaging and weather conditions. These photo-realistic synthetic images are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels.
Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
License information was derived automatically
The depth prediction evaluation is related to the work published in Sparsity Invariant CNNs (3DV 2017). It contains over 93,000 depth maps with corresponding raw LiDAR scans and RGB images, aligned with the "raw data" of the KITTI dataset. Given the large amount of training data, this dataset should allow the training of complex deep learning models for depth completion and single-image depth prediction. In addition, manually selected images with withheld ground-truth depth maps are provided to serve as a benchmark for these two challenging tasks.
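For reference, here is a minimal sketch of reading one of the benchmark's depth maps, assuming the standard KITTI depth-completion encoding (16-bit PNGs where depth in metres is the pixel value divided by 256, and zero marks missing measurements); the file path is a placeholder.

```python
# Minimal sketch: load a KITTI depth-completion ground-truth depth map.
# Assumes the benchmark's 16-bit PNG encoding; the path below is a placeholder.
import numpy as np
from PIL import Image

def load_kitti_depth(png_path):
    depth_png = np.asarray(Image.open(png_path), dtype=np.uint16)
    depth = depth_png.astype(np.float32) / 256.0  # pixel value -> metres
    depth[depth_png == 0] = -1.0                  # mark pixels with no LiDAR measurement
    return depth

depth = load_kitti_depth("groundtruth_depth_example.png")
print(depth.shape, float(depth.max()))
```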
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Cars Kitti is a dataset for object detection tasks - it contains Cars annotations for 383 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
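As an example, here is a minimal sketch using the roboflow Python package; the API key and the workspace, project, and version identifiers below are placeholders, not the actual identifiers of this dataset.

```python
# Minimal sketch: download a Roboflow-hosted dataset for local use.
# API key and workspace/project/version identifiers are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("cars-kitti")
dataset = project.version(1).download("coco")  # fetch images + annotations in COCO format
print(dataset.location)                        # local folder containing the export
```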
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
https://academictorrents.com/nolicensespecified
From: This is the file: 2011_09_29_drive_0071 (4.1 GB) [synced+rectified data]. This page contains our raw data recordings, sorted by category (see menu above). So far, we have included only sequences for which we either have 3D object labels or which occur in our odometry benchmark training set. The dataset comprises the following information, captured and synchronized at 10 Hz:
- Raw (unsynced+unrectified) and processed (synced+rectified) grayscale stereo sequences (0.5 megapixels, stored in PNG format)
- Raw (unsynced+unrectified) and processed (synced+rectified) color stereo sequences (0.5 megapixels, stored in PNG format)
- 3D Velodyne point clouds (100k points per frame, stored as a binary float matrix)
- 3D GPS/IMU data (location, speed, acceleration, meta information, stored as a text file)
- Calibration (camera, camera-to-GPS/IMU, camera-to-Velodyne, stored as text files)
- 3D object tracklet labels (cars, trucks, trams, pedestrians, cyclists, stored as an XML file)
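As a rough illustration, here is a minimal sketch of reading one processed Velodyne frame, assuming the usual KITTI raw layout (flat binary float32 files with four values per point: x, y, z, reflectance); the file path is a placeholder.

```python
# Minimal sketch: read one Velodyne point cloud from a KITTI raw recording.
# Each frame is a flat float32 array with 4 values per point; path is a placeholder.
import numpy as np

points = np.fromfile("velodyne_points/data/0000000000.bin", dtype=np.float32)
points = points.reshape(-1, 4)                # columns: x, y, z (metres), reflectance
xyz, reflectance = points[:, :3], points[:, 3]
print(points.shape)                           # on the order of 100k points per frame
```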
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
KITTI is a dataset for object detection tasks - it contains P annotations for 3,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The KITTI val split is a subset of the KITTI dataset, used for validation and testing.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset downloaded from the official KITTI website was used in eGAC3D's evaluation. It includes the raw images (data_object_image_2.zip) and the corresponding ground-truth labels (data_object_label_2.zip).
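For orientation, here is a minimal sketch of parsing one line of a KITTI object label file, following the field order documented in the official devkit; the example line is illustrative only.

```python
# Minimal sketch: parse one line of a KITTI object-detection label file.
# Field order per the devkit: type, truncation, occlusion, alpha,
# 2D bbox (left, top, right, bottom), 3D dimensions (h, w, l),
# 3D location (x, y, z), rotation_y.
def parse_kitti_label_line(line):
    f = line.split()
    return {
        "type": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "alpha": float(f[3]),
        "bbox": [float(v) for v in f[4:8]],         # pixels
        "dimensions": [float(v) for v in f[8:11]],  # metres
        "location": [float(v) for v in f[11:14]],   # camera coordinates, metres
        "rotation_y": float(f[14]),
    }

# Illustrative example line, not taken from the dataset itself.
example = "Car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
print(parse_kitti_label_line(example)["type"])  # -> "Car"
```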
https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt
Yearly citation counts for the publication titled "Vision meets robotics: The KITTI dataset".
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created by MonoTTA: Fully Test-Time Adaptation for Monocular 3D Object Detection, based on KITTI. See this link for more details: https://arxiv.org/abs/2405.19682v1 and access the code at https://github.com/Hongbin98/MonoTTA. Please review KITTI's license terms before downloading this dataset and comply with them.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
KITTI+NuScene is a dataset for object detection tasks - it contains CARS PEDESTRIAN SIGNS annotations for 452 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
https://creativecommons.org/publicdomain/zero/1.0/
Light Detection And Ranging (LiDAR) is a sensor used to measure distances between the sensor and its surroundings. It works by emitting multiple laser beams and sensing them after reflection, then computing the distance between the sensor and the objects they were reflected from. Since the rise of research on self-driving cars, LiDAR has been widely used and has even become cheaper than before.
The KITTI dataset is one of the best-known datasets in the field of self-driving cars. It contains recordings from cameras, LiDAR, and other sensors mounted on top of a car driven through many streets, covering a wide variety of scenes and scenarios.
This dataset contains the LiDAR frames of the KITTI dataset converted to 2D depth images; the conversion was done using this code. These 2D depth images represent the same scene as the corresponding LiDAR frame, but in an easier-to-process format.
This dataset contains 2D depth images like the one shown below. The 360° LiDAR frames in KITTI are captured in a cylindrical sweep around the sensor itself. Each 2D depth image can be thought of as the result of cutting that cylinder open and flattening it onto a 2D plane. The pixel values of these 2D depth images encode the distance of the reflecting object from the LiDAR sensor. The vertical resolution of the 2D depth image (64 in our case) corresponds to the number of laser beams the LiDAR sensor uses to scan the surroundings. These 2D depth images can be used for segmentation, detection, recognition, and other tasks, and can take advantage of the extensive computer vision literature on 2D images.
(Example 2D depth image rendered from a KITTI LiDAR frame.)
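Here is a minimal sketch of such a projection, assuming a 64-beam sensor and typical HDL-64E vertical field-of-view limits; this is not the exact conversion code linked above.

```python
# Minimal sketch: project a LiDAR point cloud (N x 4: x, y, z, reflectance)
# onto a 64 x W depth image. Field-of-view values are typical HDL-64E
# assumptions and may need adjustment.
import numpy as np

def lidar_to_depth_image(points, H=64, W=1024, fov_up=2.0, fov_down=-24.8):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)          # range to each point

    yaw = np.arctan2(y, x)                                  # horizontal angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))          # vertical angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)

    # Normalise angles to [0, 1], then scale to image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * H

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    image = np.zeros((H, W), dtype=np.float32)
    image[v, u] = depth                                     # pixel value = distance
    return image
```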
This dataset was created by Luan Huynh
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Mono KITTI is a dataset of high-quality monocular images with precise distance measurements, derived from the KITTI dataset. It is ideal for monocular distance estimation, autonomous driving, and advancing computer vision research.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
fenfenda/KITTI dataset hosted on Hugging Face and contributed by the HF Datasets community
The KITTI object detection benchmark dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The .hkl weights for training PredNet on the KITTI dataset, created with an updated hickle version.
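A minimal sketch of loading one of these files with the hickle package follows; the filename is a placeholder, not necessarily a file included in this upload.

```python
# Minimal sketch: load an .hkl file with hickle; filename is a placeholder.
import hickle as hkl

data = hkl.load("X_train.hkl")   # returns the stored object (e.g. a NumPy array)
print(type(data), getattr(data, "shape", None))
```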
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Kitti Train is a dataset for object detection tasks - it contains Objects annotations for 7,481 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
From: David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS 2014, pages 2366–2374.
- Testing: 28 scenes
- Training and validation: 33 scenes
- Training: 29,000 images
- Validation: 1,159 images
- Test: 12,223 images

- Testing: 29 scenes
- Training and validation: 32 scenes
- Training: 22,600 images
- Validation: 888 images
- Test: 697 images
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Kitti Test is a dataset for object detection tasks - it contains Cars Vehicles Pedestrians annotations for 585 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Koshti10/Kitti-Images dataset hosted on Hugging Face and contributed by the HF Datasets community