Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Work in progress: data might be changed
The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.
Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.
The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.
Figure 1: Overview map of detected vehicles (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/807618b6-5c38-4456-88a1-cb47500081ff/download/detection_map.png)
The public parking areas were digitized manually using aerial images and the detected vehicles in order to exclude irregular parking spaces as far as possible. They were also tagged as to whether they were aligned parallel to the road and assigned to a use at the time of recording, as some are used for construction sites or outdoor catering, for example. Depending on the intended use, they can be filtered individually.
Figure 2: Visualization of example parking areas on top of an aerial image [by LGLN] (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/16b14c61-d1d6-4eda-891d-176bdd787bf5/download/parking_area_example.png)
For modelling the parking occupancy, single slots are sampled as center points every 5 m along the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023). Custom representations can also be generated from the parking areas and vehicle detections. These parking points were intersected with the vehicle boxes to identify occupancy at the respective epochs.
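The following minimal sketch illustrates this sampling-and-intersection idea; it is not the project's actual implementation, and the shapely-based approach, geometries, and values are assumptions for illustration only.

```python
from shapely.geometry import LineString, box

# Illustrative geometries only: one parking-area centerline and two detected
# vehicle bounding boxes from a single observation epoch.
parking_lane = LineString([(0, 0), (60, 0)])
vehicle_boxes = [box(3, -1, 8, 1), box(22, -1, 27, 1)]

# Sample a candidate slot center every 5 m along the parking area ...
slot_centers = [parking_lane.interpolate(d)
                for d in range(0, int(parking_lane.length) + 1, 5)]

# ... and mark a slot as occupied if any vehicle box contains its center point.
occupied = [any(b.contains(p) for b in vehicle_boxes) for p in slot_centers]
print([(p.x, occ) for p, occ in zip(slot_centers, occupied)])
```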
Figure 3: Overview map of the parking slots' average load (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/ca0b97c8-2542-479e-83d7-74adb2fc47c0/download/datenpub-bays.png)
However, unoccupied spaces cannot be determined quite as trivially the other way around, since the absence of a detected vehicle may equally result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in its neighborhood on the same parking lane, so that a measurement can be assumed to exist.
To close temporal gaps, values were interpolated by hour for each parking slot, assuming that between two consecutive observations of an occupied space the slot was also occupied in between, and likewise free if both observations were free. If there was a change, this is indicated by a proportional value. To close spatial gaps, unobserved spaces in the area are drawn randomly from the ten closest observed occupation patterns.
This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.
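A minimal sketch of one possible reading of this hourly gap-filling rule follows; in particular, interpreting the "proportional value" as a linear blend across the gap is an assumption, not necessarily the exact rule used for the dataset.

```python
def interpolate_hours(obs_a: int, obs_b: int, hour_a: int, hour_b: int) -> dict:
    """Fill the hours strictly between two observations (0 = free, 1 = occupied)."""
    hours = range(hour_a + 1, hour_b)
    if obs_a == obs_b:
        # Both observations agree: carry the value through the gap.
        return {h: float(obs_a) for h in hours}
    # Occupancy changed: assign a proportional value across the gap (assumption).
    span = hour_b - hour_a
    return {h: obs_a + (obs_b - obs_a) * (h - hour_a) / span for h in hours}

# Example: observed occupied at 9:00 and free at 13:00.
print(interpolate_hours(1, 0, 9, 13))   # {10: 0.75, 11: 0.5, 12: 0.25}
```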
Figure 4: Example parking area occupation pattern (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/184a1f75-79ab-4d0e-bb1b-8ed170678280/download/occupation_example.png)
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Roads negatively affect wildlife, from direct mortality to habitat fragmentation. Mortality caused by collisions with vehicles on roads is a major threat to many species. Monitoring animal road-kills is essential to establish correct road mitigation measures. Many countries have national monitoring systems for identifying mortality hotspots. We present here an improved version of the mobile mapping system (MMS2) for detecting road-kills, not only of amphibians but of small birds as well. It is composed of two stereo multi-spectral and high-definition cameras (ZED), a high-power processing laptop, a GPS device connected to the laptop, and a small support device attachable to the back of any vehicle. The system is controlled by several applications that manage all the video recording steps as well as the GPS acquisition, merging everything into a single final file, ready to be examined afterwards by an algorithm. We used a state-of-the-art machine learning computer vision algorithm (CNN: Convolutional Neural Network) to automatically detect animals on roads. This self-learning algorithm needs a large number of images of live animals, road-killed animals, and any objects likely to be found on roads (e.g. garbage thrown away by drivers) in order to be trained. The greater the image database, the greater the detection efficiency. This improved version of the mobile mapping system presents very good results. The algorithm is effective at detecting small birds and amphibians.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.
Survey-Grade Terrestrial Dataset:
Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.
Mobile Mapping System Dataset:
Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15 minutes drive, which is split into 8 rosbags of 2 minutes (10 GB) each. Besides, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
Image of the sensor platform: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg
Image credit: Sören Vogel
The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.
Image of the measurement vehicle: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg
Image credit: Sören Vogel
The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.
Overview of the driven route (Google Earth): https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png
The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:
To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:
roscore & rosrun rviz rviz -d icsens_data.rviz
Afterwards, start playing a rosbag with
rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
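For programmatic access, a minimal sketch using the ROS 1 rosbag Python API is shown below; the bag filename follows the pattern above, and the available topic names are not listed here, so check them first with rosbag info.

```python
import rosbag

# Peek at the first message of one of the provided bags (ROS 1 Python API).
with rosbag.Bag("icsens-visual-inertial-lidar-dataset-1.bag") as bag:
    for topic, msg, t in bag.read_messages():
        print(t.to_sec(), topic, type(msg).__name__)
        break
```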
Below we provide some exemplary images and their corresponding point clouds.
Example camera images and their corresponding point clouds: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg
R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.
R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.
R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.
R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.
R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.
R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.
R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.
R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.
https://doi.org/10.17026/fp39-0x58
These files support the published journal article and thesis on IMU and LiDAR SLAM for indoor mapping. They include the datasets and functions used for point cloud generation. Date Submitted: 2022-02-21
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as an LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes, respectively. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements. Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.
Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this website. There you will find updated files and annotations as well as update notes.
Source of the LOD2 city model: Auszug aus den Geodaten des Landesamtes für Geoinformation und Landesvermessung Niedersachsen, ©2023, www.lgln.de
LGLN logo: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/541747ed-3d6e-41c4-9046-15bba3702e3b/download/lgln_logo.png
Sensor setup of the three measurement vehicles: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/d141d4f1-49b0-40e6-b8d9-e49f420e3627/download/vans_with_redgreen_cs_vehicle.png
3D map point cloud: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/5b6b37cf-a991-4dc4-8828-ad12755203ca/download/map_point_cloud.png
Measurement scenario: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/6c61d297-8544-4788-bccf-7a28ccfa702a/download/scenario_with_osm_reference.png
Number of annotations per class: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/8b0262b9-6769-4a5d-a37e-8fcb201720ef/download/annotations.png
Data structure: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/7358ed31-9886-4c74-bec2-6868d577a880/download/data_structure.png
Data format: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/fc795ec2-f920-4415-aac6-6ad3be3df0a9/download/data_format.png
Measurement vehicles: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/a1974957-5ce2-456c-9f44-9d05c5a14b16/download/vans_merged.png
Additional figure: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/53a58500-8847-4b3c-acd4-a3ac27fc8575/download/ts_uwb_mms.png
This measurement campaign could not have been carried out without the help of many contributors. At this point, we thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Wien), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The dataset is an annotated point cloud in ASPRS LAS v1.2 format, annotated with classification numbers representing six different road-marking types: lane markings (1), pedestrian crosswalk and text (2), bike markings (3), left arrow (4), right arrow (5), straight arrow (6), and others (0). The point cloud dataset was obtained using the Oregon Department of Transportation's current mobile lidar system (Leica Pegasus:Two). The data were georeferenced in the supporting software for the Leica Pegasus:Two by Oregon DOT. The authors processed the data to extract the road markings using the road marking extraction tool (Rome2) developed in this PacTrans research.
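As a hedged sketch of how such a file could be filtered by these classification numbers (using the laspy library; the filename is a placeholder, not part of the dataset):

```python
import laspy
import numpy as np

# Placeholder filename; select the arrow classes (4, 5, 6) listed above.
las = laspy.read("road_markings.las")
codes = np.asarray(las.classification)

arrows = np.isin(codes, [4, 5, 6])
xyz = np.vstack((las.x[arrows], las.y[arrows], las.z[arrows])).T
print(f"{arrows.sum()} arrow points out of {len(codes)} total")
```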
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a point cloud sample data set collected by a mobile LiDAR system (MLS).
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
These sample LiDAR datasets were collected by the Hungarian State Railways with a Riegl VMX-450 high-density mobile mapping system (MMS) mounted on a railroad vehicle. The sensor was capable of recording 1.1 million points/sec with an average three-dimensional range precision of 3 mm and a maximum threshold of 7 mm. Average positional accuracy was 3 cm with a maximum threshold of 5 cm. The acquired point clouds contain the georeferenced spatial information (3D coordinates) with intensity and RGB data attached to the points. The applied reference system is the Hungarian national spatial reference system (EPSG:23700).
Three datasets covering different topographical regions of Hungary were selected:
1) mav_szabadszallas_csengod_665500_162600_665900_163200.laz: a curved rail track segment on flat terrain between the city of Szabadszállás and the town of Csengőd; the selected segment is ca. 600 m long and contains 51.8 million points.
2) mav_sztg_szh_439040_183444_440377_183863.laz: a curved rail track segment with varied terrain and slopes between the cities of Szentgotthárd and Szombathely; the selected segment is ca. 1500 m long and contains 58.6 million points.
3) mav_szabadszallas_csengod_666285_159100_666436_159200.laz: a curved rail track segment on flat terrain between Szabadszállás and Csengőd, 100 m long, containing 7.3 million points.
Manually annotated ground truth data for cable and rail track recognition are also attached.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Roads have multiple effects on wildlife, from animal mortality and habitat and population fragmentation to modification of animal reproductive behavior. Amphibians in particular, due to their activity patterns, population structure, and preferred habitats, are strongly affected by traffic intensity and road density. On the other hand, road-kill studies and conservation measures have been applied extensively on highways, although amphibians die massively on country roads, where conservation measures are not applied. Many countries (e.g. Portugal) do not have any national program for monitoring road-kills, a common practice in other European countries (e.g. the UK and the Netherlands). Such monitoring is necessary to identify road-kill hotspots in order to implement conservation measures correctly. However, monitoring road-kills is expensive and time-consuming, and depends mainly on volunteers. Therefore, cheap, easy-to-implement, and automatic methods for detecting road-kills over larger areas (broad monitoring) and over time (continuous monitoring) are necessary. We present here the preliminary results from a research project which aims to build a cheap and efficient system for detecting amphibian road-kills using computer-vision techniques from robotics. We propose two different solutions: 1) a Mobile Mapping System to automatically detect amphibian road-kills on roads, and 2) a Fixed Detection System to automatically monitor road-kills at a particular road location over a long time. The first methodology will detect and locate road-kills through the automatic classification of road surface images taken from a car with a digital camera linked to a GPS. Road-kill casualties will be detected automatically in the images through a classification algorithm developed specifically for this purpose. The second methodology will detect amphibians crossing a particular road point and determine whether they survive or not. Both the Fixed and Mobile systems will use similar programs. The algorithm is trained with existing data. For now, we can only present some results for the Mobile Mapping System. We are performing different tests with different cameras, namely a linear camera, used in different industrial quality-control solutions, and an outdoor GoPro camera, very popular in sports such as biking. Our results prove that we can detect different road-killed and live animals at an acceptable car speed and at a high spatial resolution. Both mapping systems will provide the capacity to detect road-kill casualties automatically. With these data, it will be possible to analyze the distribution of road-kills and hotspots, to identify the main migration routes, to count the total number of amphibians crossing a road, to determine how many of those individuals are effectively road-killed, and to define where conservation measures should be implemented. All these objectives will be achieved more easily and at a lower cost in funds, time, and personnel resources.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains Bluetooth Low Energy signal strengths measured in a fully furnished flat. The dataset was originally used in a study concerning RSS-fingerprinting-based indoor positioning systems. The data were gathered using a hybrid BLE-UWB localization system installed in the apartment and a mobile robotic platform equipped with a LiDAR. The dataset comprises power measurement results and LiDAR scans performed at 4104 points. The scans used for initial environment mapping and the power levels registered in two test scenarios are also attached.
The set contains both raw and preprocessed measurement data. The Python code for raw data loading is supplied.
The detailed dataset description can be found in the dataset_description.pdf file.
When using the dataset, please consider citing the original paper, in which the data were used:
M. Kolakowski, “Automated Calibration of RSS Fingerprinting Based Systems Using a Mobile Robot and Machine Learning”, Sensors, vol. 21, 6270, Sep. 2021, https://doi.org/10.3390/s21186270
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
✨ Introduction The SemanticRail3D dataset is a 3D point cloud collection tailored for railway infrastructure semantic and instance segmentation. Originally, the dataset comprises 438 point clouds covering approximately 200 meters of track each, with a total of around 2.8 billion points annotated into 11 semantic classes. Collected using high-resolution LiDAR via a LYNX Mobile Mapper (≈980 points/m² with 5 mm precision), this dataset serves as an excellent benchmark for state-of-the-art AI models.
🚀 Key Enhancements & Processing To further enrich its utility for machine learning applications, the dataset has undergone several advanced preprocessing steps and quality assurance measures:
🔍 Data Standardization via PCA Targeted Features: • Linear elements, including rails and all associated wires.
PCA Application: • Extracts the principal orientation of these elements by identifying the axis of maximum variance.
Reorientation: • Aligns the extracted principal axis with the x-axis, ensuring consistency and simplifying downstream analysis.
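A minimal numpy sketch of this PCA-driven reorientation idea is shown below; it is an illustration under stated assumptions (principal axis via SVD, alignment through a Rodrigues rotation), not the dataset's exact preprocessing code.

```python
import numpy as np

def reorient_to_x_axis(points: np.ndarray) -> np.ndarray:
    """Rotate an (N, 3) point cloud of a linear element so its principal axis lies along x."""
    centered = points - points.mean(axis=0)
    # Principal orientation = direction of maximum variance (first right-singular vector).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    a, b = vt[0], np.array([1.0, 0.0, 0.0])
    v, c = np.cross(a, b), float(np.dot(a, b))
    if np.isclose(c, -1.0):            # anti-parallel: rotate 180 degrees about z
        rot = np.diag([-1.0, -1.0, 1.0])
    elif np.allclose(v, 0.0):          # already aligned with x
        rot = np.eye(3)
    else:                              # Rodrigues rotation aligning a with b
        k = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
        rot = np.eye(3) + k + (k @ k) / (1.0 + c)
    return centered @ rot.T
```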
📸 Multi-Perspective Visualizations Each point cloud in the dataset is accompanied by four rendered images, generated from distinct camera viewpoints to enhance interpretability and usability. These views are designed to showcase the spatial structure of the railway environment from meaningful angles, aiding both visual inspection and AI model training.
The saved camera views are based on spherical coordinates and include:
🔹 Front View • A head-on perspective with a slight downward angle (azimuth = 50°, elevation = 35°) to give a balanced overview of the scene structure.
🔹 Side View • A lateral perspective (azimuth = 130°, elevation = 55°) that highlights the side profile of rail and overhead wire structures.
🔹 Diagonal View • An oblique angle (azimuth = -40°, elevation = 55°) providing depth perception and a richer understanding of the 3D layout.
🔹 Overhead View • A top-down (bird’s-eye) perspective (azimuth = -140°, elevation = 35°) showing the full track arrangement and spatial alignment.
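For reference, a small sketch converting these azimuth/elevation pairs into unit viewing directions is given below; the axis convention (azimuth in the x-y plane from +x, elevation above that plane) is an assumption and may differ from the renderer actually used for the dataset images.

```python
import numpy as np

def view_direction(azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    # Spherical angles to a unit direction vector (assumed convention).
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

# The four saved viewpoints listed above.
views = {"front": (50, 35), "side": (130, 55), "diagonal": (-40, 55), "overhead": (-140, 35)}
directions = {name: view_direction(*ang) for name, ang in views.items()}
```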
🎨 Visual Color Coding
Color Code Mapping: The points in the images are colorized based on a standardized mapping to clearly differentiate between semantic classes:
| Class | Color |
|---|---|
| Unclassified | 🔘 Gray |
| Rail | 🟫 Brown |
| Catenary | 🔵 Blue |
| Contact | 🔴 Red |
| Droppers | 🟣 Purple |
| Other Wires | 🟦 Cyan |
| Masts | 🟢 Green |
| Signs | 🟧 Orange |
| Traffic Lights | 🟡 Yellow |
| Marks | 🩷 Pink |
| Signs in Masts | 🟪 Magenta |
| Lights | ⚫ Black |
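A hypothetical Python mapping of these class names to RGB display colors is sketched below; the dataset's numeric class IDs and exact RGB values are not specified here, so the tuples are illustrative only.

```python
# Illustrative RGB values for the standardized color names above (assumptions).
CLASS_COLORS = {
    "Unclassified":   (128, 128, 128),  # gray
    "Rail":           (139,  69,  19),  # brown
    "Catenary":       (  0,   0, 255),  # blue
    "Contact":        (255,   0,   0),  # red
    "Droppers":       (128,   0, 128),  # purple
    "Other Wires":    (  0, 255, 255),  # cyan
    "Masts":          (  0, 128,   0),  # green
    "Signs":          (255, 165,   0),  # orange
    "Traffic Lights": (255, 255,   0),  # yellow
    "Marks":          (255, 192, 203),  # pink
    "Signs in Masts": (255,   0, 255),  # magenta
    "Lights":         (  0,   0,   0),  # black
}
```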
✅ Quality Assurance through Human Evaluation
Detailed Review: • Each point cloud undergoes a rigorous expert review to ensure accurate and consistent labeling.
Rating System: • Files are rated on a scale from 1 (needs improvement) to 5 (excellent quality). • The ratings are compiled in a separate CSV file for ease of reference.
Label Error Codes: Within the CSV file, objects with labeling mistakes are flagged using the following codes: • R: Rails • W: Any kind of wires and cables • M: Masts • TS: Traffic signs • Noise: Miscellaneous errors or irrelevant data
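A short sketch of how the ratings CSV could be queried is given below; the file and column names ("quality_ratings.csv", "file", "rating", "error_codes") are assumptions, so check the actual headers in the provided CSV.

```python
import pandas as pd

ratings = pd.read_csv("quality_ratings.csv")        # placeholder filename

# Point clouds rated 4 or 5 (high quality) ...
good = ratings[ratings["rating"] >= 4]["file"].tolist()

# ... and files flagged with rail labeling errors (code "R").
rail_issues = ratings[ratings["error_codes"].fillna("").str.contains("R")]["file"].tolist()
print(len(good), "high-quality files;", len(rail_issues), "files with rail label issues")
```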
🎯 Dataset Highlights Comprehensive Coverage: • 438 point clouds covering ~200 meters each • Approximately 2.8 billion points annotated into 11 semantic classes
High-Quality LiDAR Acquisition: • Dual LiDAR sensors on a Mobile Mapping System • Point density of ~980 points/m² and a precision of 5 mm
Consistent Data Alignment: • PCA is applied to linear elements (rails and wires) for reorientation along the x-axis
Enhanced Visualizations: • Four images per point cloud provide multiple viewpoints • Points are colorized based on the standardized color code for immediate visual clarity
Robust Quality Control: • Expert human evaluation rates each point cloud (1 to 5) • A separate CSV file holds the quality ratings along with detailed error codes for any mislabeling
🔗 Summary The enhanced SemanticRail3D dataset builds on a robust collection of 3D railway point clouds with advanced preprocessing techniques and comprehensive quality assurance. Through PCA-driven alignment, multi-perspective image generation, and an intuitive color coding system, the dataset standardizes data for efficient model training. Furthermore, the additional CSV file detailing human evaluation ratings and specific label error codes provides users with clear insights into the reliability and accuracy of the annotations. This complete solution sets a new benchmark for railway infrastructure analysis, empowering researchers and practitioners to develop more precise and reliable AI solutions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Robot-at-Home dataset (Robot@Home) is a collection of raw and processed data from five domestic settings, compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.
This dataset is unique in three aspects:
The provided data were captured with a rig of 4 RGB-D sensors with an overall field of view of 180°H. and 58°V., and with a 2D laser scanner.
It comprises diverse and numerous data: sequences of RGB-D images and laser scans from the rooms of five apartments (87,000+ observations were collected), topological information about the connectivity of these rooms, and 3D reconstructions and 2D geometric maps of the visited rooms.
The provided ground truth is dense, including per-point annotations of the categories of the objects and rooms appearing in the reconstructed scenarios, and per-pixel annotations of each RGB-D image within the recorded sequences.
During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information of objects and rooms. This is a valuable feature, missing in most of the state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like pillows are usually on beds or ovens are not in bathrooms.
Robot@Home2
Robot@Home2 is an enhanced version aimed at improving usability and functionality for developing and testing mobile robotics and computer vision algorithms. It consists of three main components. Firstly, a relational database that states the contextual information and data links, compatible with the Structured Query Language (SQL). Secondly, a Python package for managing the database, including downloading, querying, and interfacing functions. Finally, learning resources in the form of Jupyter notebooks, runnable locally or on the Google Colab platform, enabling users to explore the dataset without local installations. These freely available tools are expected to enhance the ease of exploiting the Robot@Home dataset and accelerate research in computer vision and robotics.
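Since the database is SQL-compatible, a minimal, schema-agnostic sketch of inspecting it with Python's standard sqlite3 module is shown below; the database filename is a placeholder, and the dedicated Python package and notebooks mentioned above are the recommended entry points.

```python
import sqlite3

# Placeholder filename; list the tables before querying any data.
conn = sqlite3.connect("robotathome.db")
for (table_name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(table_name)
conn.close()
```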
If you use Robot@Home2, please cite the following paper:
Gregorio Ambrosio-Cestero, Jose-Raul Ruiz-Sarmiento, Javier Gonzalez-Jimenez, The Robot@Home2 dataset: A new release with improved usability tools, in SoftwareX, Volume 23, 2023, 101490, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101490.
@article{ambrosio2023robotathome2,
  title    = {The Robot@Home2 dataset: A new release with improved usability tools},
  author   = {Gregorio Ambrosio-Cestero and Jose-Raul Ruiz-Sarmiento and Javier Gonzalez-Jimenez},
  journal  = {SoftwareX},
  volume   = {23},
  pages    = {101490},
  year     = {2023},
  issn     = {2352-7110},
  doi      = {https://doi.org/10.1016/j.softx.2023.101490},
  url      = {https://www.sciencedirect.com/science/article/pii/S2352711023001863},
  keywords = {Dataset, Mobile robotics, Relational database, Python, Jupyter, Google Colab}
}
Version history
v1.0.1 Fixed minor bugs.
v1.0.2 Fixed some inconsistencies in some directory names. Fixes were necessary to automate the generation of the next version.
v2.0.0 SQL-based dataset. Robot@Home v1.0.2 has been packed into a SQLite database along with RGB-D and scene files, which have been assembled into a hierarchical structured directory free of redundancies. Path tables are also provided to reference files in both the v1.0.2 and v2.0.0 directory hierarchies. This version has been automatically generated from version 1.0.2 through the toolbox.
v2.0.1 A forgotten foreign key pair has been added.
v2.0.2 The views have been consolidated as tables, which allows a considerable improvement in access time.
v2.0.3 The previous version did not include the database. In this version the database has been uploaded.
v2.1.0 Depth images have been updated to 16-bit. Additionally, both the RGB images and the depth images are oriented in the original camera format, i.e. landscape.
Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.
Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.
Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.
LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.
Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Pavement planar coefficients are critical for a wide range of civil engineering applications, including 3D city modeling, extraction of pavement design parameters, and assessment of pavement conditions. Existing plane fitting methods, however, often struggle to maintain accuracy and stability in complex road environments, particularly when the point cloud is affected by non-pavement objects such as trees, curbstones, pedestrians, and vehicles.
REoPC is proposed as a robust two-stage estimation method based on road point clouds acquired using a hybrid solid-state LiDAR. The method consists of two main parts: coarse estimation and refined estimation. The first stage employs a dual-plane sliding window to remove major outliers and extract an initial surface. The second stage introduces a new cost function based on the Geman-McClure estimator to further suppress residual noise and reduce fitting instability caused by outlier influence and algorithmic randomness.
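As a generic illustration of the robust-estimation idea (iteratively reweighted least squares with a Geman-McClure-style weight), a numpy sketch of a weighted plane fit is given below; it is not the REoPC implementation, and the sigma value, iteration count, and weight scaling are assumptions.

```python
import numpy as np

def robust_plane_fit(points: np.ndarray, sigma: float = 0.05, iters: int = 20):
    """Fit a plane n.x + d = 0 to (N, 3) points via IRLS with Geman-McClure-style weights."""
    w = np.ones(len(points))
    for _ in range(iters):
        centroid = (w[:, None] * points).sum(axis=0) / w.sum()
        centered = points - centroid
        # Weighted covariance; the eigenvector of the smallest eigenvalue is the plane normal.
        cov = (centered * w[:, None]).T @ centered
        _, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        d = -float(normal @ centroid)
        r = points @ normal + d                      # signed point-to-plane residuals
        w = 1.0 / (sigma ** 2 + r ** 2) ** 2         # Geman-McClure-style IRLS weight
    return normal, d
```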
Evaluation is conducted on both synthetic and real-world datasets collected using a custom mobile LiDAR scanning system across three urban road scenarios—flat, crowned, and traffic-interfered segments. Each scenario includes 100 frames of road point clouds, with approximately 35,000 points per frame, offering a diverse and challenging benchmark. REoPC consistently outperforms existing methods in terms of accuracy and robustness and exhibits low sensitivity to parameter tuning, demonstrating strong applicability in varied real-world conditions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Technical specifications of the Mobile LiDAR System.
Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres and approximately 449 miles of shoreline are at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data was combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table.
Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas that are shallower than 2.5 meters deep (the practical limit of reasonable and safe data collection with the MBES).
Two bathymetric datasets collected during the October 2018 survey include the gridded bathymetric point data (BeaverLake2018_bathy.zip) computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) was quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma delimited text files that have been compressed into zip archives. The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. The data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be useable.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Roads have multiple effects on wildlife, from animal mortality, habitat and population fragmentation, to modification of animal reproductive behaviour. Monitoring roadkill is expensive and time-consuming, and depends mainly on volunteers. Thus, cheap, easy to implement, and automatic methods for detecting roadkill over larger areas and over time are necessary. We present results from the research project Life LINES, in which we developed a cheap and efficient system for detecting amphibian and small bird roadkill using computer vision techniques. We present here the Mobile Mapping System 2, an improved version of the Mobile Mapping System 1 developed during the Roadkill project and presented at previous IENE congresses. We have successfully reduced the size and energy consumption of the MMS, so now the device can be attached directly to the back of any car. The MMS2 is composed of several cameras (multi-spectral, visual with 3D laser technology, and high definition). The algorithms were trained with previously collected pictures of road-killed amphibians and small birds. We tested all images using the Haar Cascade algorithm from the OpenCV library, which provided high classification rates. We tested the MMS2 in three conditions: a control test with plastic models of amphibians and birds on a small road; a control test with collection specimens of amphibians and birds; and a real test on a 30 km road survey in Southern Portugal. The MMS2 has been developed using low-cost components with the idea of saving funds, time, and personnel resources for wildlife preservation.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Net-Income-From-Continuing-Operations Time Series for Shanghai Huace Navigation Technology Ltd. Shanghai Huace Navigation Technology Ltd. engages in the research and development, manufacturing, and integration of high-precision satellite navigation and positioning technologies in China and internationally. The company offers global navigation satellite system (GNSS) smart antennas and antennas, controllers and tablets, surveying and mapping software, GNSS sensors, total stations, and data links; handheld laser scanners, airborne LiDAR and mobile mapping systems, and UAV platforms and cameras; USV platforms and hydrographic sensors; SAR systems; and GNSS corrections for use in survey and engineering, 3D mobile mapping, marine surveying, monitoring and infrastructure, and positioning services. It also provides machine control systems for excavators, graders, and dozers; GNSS+INS and IMU sensors; and auto steering, manual guidance, land leveling, and GNSS systems. The company serves the geospatial, machine control, navigation, and agriculture industries. It also engages in property management, investing, and research and development activities. Shanghai Huace Navigation Technology Ltd. was founded in 2003 and is headquartered in Shanghai, China.