Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations, and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, split into 8 rosbags of roughly 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg
Image credit: Sören Vogel
The data set was acquired in the context of the measurement campaign described in Schoen2018. For this campaign, the vehicle shown below was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. The latter consists of two laser scanners, a camera system, and a localization unit containing a highly accurate GNSS/IMU system.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg
Image credit: Sören Vogel
The data acquisition took place on a sunny day in May 2019 in the Nordstadt district of Hannover (coordinates: 52.388598, 9.716389). The route, shown below, was driven three times in total, amounting to a total driving time of 15 minutes.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png
The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:
- Velodyne HDL-64 LiDAR
- LORD MicroStrain 3DM-GQ4-45 GNSS-aided IMU
- Pointgrey GS3-U3-23S6C-C RGB camera
To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:
roscore & rosrun rviz rviz -d icsens_data.rviz
Afterwards, start playing a rosbag with
rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
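The rosbags can also be read programmatically via the ROS 1 Python API. The following is a minimal sketch; the IMU topic name is a placeholder, as the actual topic names are listed in the provided documentation file:

# Minimal sketch: iterate over the messages in one of the dataset's rosbags.
# Requires a ROS 1 environment with the rosbag Python package.
# The IMU topic name below is an assumption; check the documentation file.
import rosbag

with rosbag.Bag('icsens-visual-inertial-lidar-dataset-1.bag') as bag:
    # List the topics contained in the bag together with message counts.
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print(topic, topic_info.msg_type, topic_info.message_count)
    # Iterate over the IMU stream (assumed topic name).
    for topic, msg, t in bag.read_messages(topics=['/imu/data']):
        print(t.to_sec(), msg.angular_velocity, msg.linear_acceleration)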
Below we provide some exemplary images and their corresponding point clouds.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg
R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.
R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.
R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.
R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.
R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.
R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.
R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.
R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

Survey-Grade Terrestrial Dataset:
- Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
- Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

Mobile Mapping System Dataset:
- Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
- Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.
Atlantic was contracted to acquire high resolution topographic LiDAR (Light Detection and Ranging) data located in Mobile County, Alabama. The intent was to collect one (1) Area of Interest (AOI) that encompasses Mobile County. The total client-defined AOI was 1,402 square miles (3,361 square kilometers). The data were collected from January 12-22, 2014. Classifications of data available from NOAA OCM are: 1 (Unclassified), 2 (Ground), 3 (Low Vegetation), 7 (Low Noise), 8 (Model Key Points), 9 (Water), and 10 (Ignored Ground, Breakline Proximity). Low vegetation points were removed as they were incorrect and not required for delivery. Digital Elevation Models (DEMs) created from this lidar data are available for download at: https://coast.noaa.gov/dataviewer/#/lidar/search/where:ID=5169 . Breaklines are available at: https://noaa-nos-coastal-lidar-pds.s3.amazonaws.com/laz/geoid18/4966/supplemental/breaklines Original contact information: Contact Name: Scott Kearney Contact Org: City of Mobile Phone: (251) 208-7942 Email: kearney@cityofmobile.org
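For users working with the classified point data directly, a minimal laspy sketch for extracting a single class is given below; the tile name is a placeholder:

# Minimal sketch: extract ground returns (class 2) from one classified
# LAZ tile using laspy. The file name is a placeholder.
import laspy

las = laspy.read('mobile_county_tile.laz')
# Classification codes per the delivery: 1 = Unclassified, 2 = Ground,
# 7 = Low Noise, 8 = Model Key Points, 9 = Water, 10 = Ignored Ground.
ground = las.points[las.classification == 2]
print(len(ground), 'of', len(las.points), 'points are classified as ground')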
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset contains point cloud data of two stretches of railway track in the Netherlands. One stretch covers the trajectory between Deventer and Twello; the other stretch is located around Dronten. The data have been captured using a Velodyne VLP-16 mobile laser scanner mounted on a dedicated measurement train by Strukton Rail. For both stretches, the EPSG:32631 coordinate reference system (CRS) was used for the captured point clouds, and timestamps are given in Coordinated Universal Time (UTC). The provided files are LAZ files. The trajectory of the measurement train is also logged using the ETRS89 CRS.
The following four classes are annotated in both stretches:
Label values are stored in conjunction with the data as a 'Scalar Field' attribute. Each annotated object also carries a unique identifier (uid), likewise provided as a 'Scalar Field' attribute within the data.
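Assuming the Scalar Fields are exported as LAZ extra dimensions (the usual convention when saving such attributes to LAZ), they can be accessed along the following lines; the file and dimension names should be verified against the actual data:

# Minimal sketch: inspect the per-point annotation attributes of one of
# the LAZ files, assuming the Scalar Fields are stored as LAZ extra
# dimensions named 'label' and 'uid'. The file name is a placeholder.
import laspy
import numpy as np

las = laspy.read('deventer_twello.laz')
print(list(las.point_format.extra_dimension_names))  # verify names first
labels = np.asarray(las['label'])
uids = np.asarray(las['uid'])
print('label values present:', np.unique(labels))
print('number of annotated objects:', len(np.unique(uids)))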
** Deventer-Twello **
Length: 6.5 km
Acquisition date: 2021-06-14
Trajectory log time zone: Central European Summer Time (CEST, UTC+02:00)
Additional notes:
The measurement train drove back and forth four times on the same stretch of track. Rather than turning at the end of the section (reversing a train is not straightforward), it simply drove backwards. This implies that objects are not captured from both directions of travel. Each trip of the measurement train is referred to as a run. All four runs are merged into one file, and each run can be distinguished by the `run` Scalar Field attribute.
This dataset has partial labels for signs (label=6) and lamp posts (label=2).
** Dronten **
Length: 2.9 km
Acquisition date: 2021-11-16
Trajectory log time zone: Central European Time (CET, UTC+01:00)
Additional notes:
This dataset has one additional label, the arms of catenary masts have also been labelled (label=7). The arm carries the same uid as the mast.
The timestamps between the point cloud data and the trajectory log are not synchronised.
** Early access **
The dataset is available under an embargo. If you would like to obtain the dataset before the embargo period expires, send an e-mail with a short motivation to Corne.vandekraats@strukton.com.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This benchmark dataset was acquired during the SilviLaser 2021 conference in Vienna. The benchmark aims to demonstrate the capabilities of different terrestrial systems for capturing 3D scenes in various forest conditions. A number of universities, institutes, and companies participated and contributed their outputs to this dataset, compiled from terrestrial laser scanning (TLS), mobile laser scanning (MLS), and terrestrial photogrammetric systems (TPS). Along with the terrestrial data, airborne laser scanning (ALS) data was provided as a reference.
Eight forest plots were set up for the terrestrial challenge. Each plot is a circular area with a 25-meter radius and differs in tree species (i.e. spruce, pine, beech, white fir), forest structure (i.e. one layer, multi-layer, natural regeneration, deadwood), and age class (~50-120 years). The 3D point clouds acquired by each participant cover the eight plots. In addition to point clouds, traditional in-situ data (tree position, tree species, DBH) were recorded by the organization team.
All point clouds provided by participants were processed in the following steps: co-registration with geo-referenced data, setting a uniform coordinate reference system (CRS), and removing data located outside the plot. This work was performed with OPALS, a laser scanning data processing software developed by the Photogrammetry Group of the TU Wien Department of Geodesy and Geoinformation. Please note that some point clouds are not archived due to problems encountered during pre-processing. The final products consist of one metadata file, 3D point clouds, ALS data for reference, and corresponding digital terrain models (DTMs) derived from the ALS data using the OPALS software. Point clouds are in LAZ 1.4 format, and DTMs are raster models in GeoTIFF format. Furthermore, all geo-data use the CRS WGS84 / UTM zone 33N (EPSG:32633). More information (e.g. instrument, point density, and extra attributes) can be found in the file "SL21BM_TER_metadata.csv".
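Because the point clouds and the ALS-derived DTMs share the same CRS, point heights above ground can be obtained by sampling the DTM underneath each point. A rough sketch follows; the file names are placeholders:

# Minimal sketch: compute height above ground for one benchmark plot by
# sampling the ALS-derived DTM (GeoTIFF) under each point of a LAZ 1.4
# point cloud. Both products use EPSG:32633. File names are placeholders.
import laspy
import numpy as np
import rasterio

las = laspy.read('SL21BM_plot01.laz')
with rasterio.open('SL21BM_plot01_dtm.tif') as dtm:
    ground = np.array([v[0] for v in dtm.sample(zip(las.x, las.y))])
height_above_ground = np.asarray(las.z) - ground
print('maximum height above ground [m]:', height_above_ground.max())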
This dataset is available to the community for a wide variety of scientific studies. These unique data sets will also form the basis for an international benchmark for parameter retrieval from different 3D recording methods.
This dataset was contributed by the universities/institutes/companies (alphabetical order):
Paris-Lille-3D is a benchmark for point cloud classification. The point cloud has been labeled entirely by hand with 50 different classes. The dataset consists of around 2 km of Mobile Laser System point clouds acquired in two cities in France (Paris and Lille).
Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres and approximately 449 miles of shoreline are at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data was combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table.

Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas that are shallower than 2.5 meters deep (the practical limit of reasonable and safe data collection with the MBES).

Two bathymetric datasets collected during the October 2018 survey include the gridded bathymetric point data (BeaverLake2018_bathy.zip) computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) was quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma delimited text files that have been compressed into zip archives. The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. The data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be useable.
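Since the bathymetric points are delivered as comma-delimited text inside zip archives, they can be loaded without unpacking. A short sketch follows; the archive member and column layout are assumptions, so inspect the archive contents first:

# Minimal sketch: read the gridded bathymetric point data directly from
# the delivered zip archive. The member file and its column layout are
# assumptions; list the archive contents before relying on them.
import zipfile
import pandas as pd

with zipfile.ZipFile('BeaverLake2018_bathy.zip') as zf:
    print(zf.namelist())              # check the actual member names
    with zf.open(zf.namelist()[0]) as f:
        points = pd.read_csv(f)       # comma-delimited x, y, depth (assumed)
print(points.head())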
ConSLAM is a real-world dataset collected periodically on a construction site to measure the accuracy of mobile scanners' SLAM algorithms.
The dataset contains time-synchronized and spatially registered RGB and NIR images, 360-deg LiDAR scans, 9-axis IMU measurements, and professional ground-truth terrestrial laser scans. This dataset reflects the periodic need to scan construction sites with the aim of accurately monitoring progress using a hand-held scanner. The sensors used for data acquisition are:
- LiDAR: Velodyne VLP-16
- RGB camera: Alvium U-319c, 3.2 MP
- NIR camera: Alvium 1800 U-501, 5.0 MP
- 9-axis IMU: Xsens MTi-610
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The site is situated south of Obergurgl (Ötztal Alps, Austria) above the Zirbenwald. The local field name of the site is "Bruggboden". It contains a selected group of arolla pine trees (Pinus cembra). The data set encompasses a range of techniques and tools, including Mobile Laser Scanning (MLS) using GeoSLAM and iPhone LiDAR, Terrestrial Laser Scanning (TLS) with Trimble and Riegl VZ2000-i, Airborne Laser Scanning (ALS) by RICOPTER with RIEGL VUX-1LR, ALS 2020, and ICESat-2, as well as photogrammetric reconstructions using iPhone video and Sony Alpha cameras. Additionally, ground truth data was collected using a Leica total station, GNSS PPK, mobile phone GNSS, and caliper measurements. This data set was acquired at the 4th edition of the Sensing Mountains 2022 - Innsbruck Summer School of Alpine Research – Close-range Sensing Techniques in Alpine Terrain.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner-city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory, enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station, resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as an LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements. Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.
Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this page. There you will find updated files and annotations as well as update notes.
Source LOD2 city model: excerpt from the geodata of the Landesamt für Geoinformation und Landesvermessung Niedersachsen (LGLN), ©2023, www.lgln.de
Image: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/541747ed-3d6e-41c4-9046-15bba3702e3b/download/lgln_logo.png (LGLN logo)
Image: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/d141d4f1-49b0-40e6-b8d9-e49f420e3627/download/vans_with_redgreen_cs_vehicle.png (Sensor setup of the three measurement vehicles)
Image: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/5b6b37cf-a991-4dc4-8828-ad12755203ca/download/map_point_cloud.png (3D map point cloud)
Image: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/6c61d297-8544-4788-bccf-7a28ccfa702a/download/scenario_with_osm_reference.png (Measurement scenario)
Image: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/8b0262b9-6769-4a5d-a37e-8fcb201720ef/download/annotations.png (Number of annotations per class)
Image: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/7358ed31-9886-4c74-bec2-6868d577a880/download/data_structure.png (Data structure)
Image: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/fc795ec2-f920-4415-aac6-6ad3be3df0a9/download/data_format.png (Data format)
Image: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/a1974957-5ce2-456c-9f44-9d05c5a14b16/download/vans_merged.png (Measurement vehicles)
Image: https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/53a58500-8847-4b3c-acd4-a3ac27fc8575/download/ts_uwb_mms.png
This measurement campaign could not have been carried out without the help of many contributors. We thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Wien), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data Organization
Under the root directory for the whole acquisition, there is a positions.csv file and 3 subdirectories: img, dense, and sparse. The mobile mapping 3D dataset was generated by walking around an indoor space, and each capture corresponds to a unique pose along the trajectory of this motion. This version of the dataset contains a total of 99 unique poses, with a separation of 1 meter between adjacent poses.
root
├── positions.csv
├── img
├── dense
└── sparse
The img directory contains a set of equirectangular panoramic images taken with a 360° color camera at 1920x960 resolution; the images follow the same trajectory.
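A small sketch of how the poses might be linked to the panoramic images is shown below; the positions.csv column names and the image naming scheme are assumptions, so check the file header and the img folder:

# Minimal sketch: load the per-pose positions and pair them with the
# panoramic images. The columns ('id', 'x', 'y', 'z') and the image
# naming scheme are hypothetical; verify against positions.csv and img/.
import pandas as pd
from pathlib import Path

root = Path('root')
poses = pd.read_csv(root / 'positions.csv')
print(len(poses), 'poses')  # expected: 99, spaced about 1 m apart
for _, row in poses.iterrows():
    image = root / 'img' / ('%04d.jpg' % int(row['id']))  # hypothetical name
    print(image, row['x'], row['y'], row['z'])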
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
LoD3 (Level of Detail 3) Road Space Models is a CityGML dataset containing road space models (over 50 building models) in the area of Ingolstadt.
There are several approaches to modeling Building objects in CityGML 2.0 (e.g. see Biljecki et al.). In our case, due to the acquisition geometry of MLS point clouds, the building objects have a very detailed representation of facade elements but may lack roof elements and entities located in a Building's backyard. We therefore encourage readers to consult the list below for a detailed description of the Building objects in our Ingolstadt LoD3 dataset:
The Building objects consist of:
The Building objects do NOT include:
The terminology follows SIG3D.
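To get a quick overview of which of these elements are present, the CityGML file can be inspected with a few lines of XML parsing. The following is a sketch assuming the standard CityGML 2.0 building namespace; the file name is a placeholder:

# Minimal sketch: count CityGML 2.0 building elements in the LoD3 dataset
# using the standard building module namespace. File name is a placeholder.
from collections import Counter
from lxml import etree

BLDG = 'http://www.opengis.net/citygml/building/2.0'
ELEMENTS = ['Building', 'WallSurface', 'RoofSurface', 'GroundSurface',
            'Door', 'Window']

tree = etree.parse('ingolstadt_lod3.gml')
counts = Counter({name: len(tree.findall('.//{%s}%s' % (BLDG, name)))
                  for name in ELEMENTS})
print(counts)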
To ensure the highest geometric and semantic accuracy, the dataset was manually modeled based on mobile laser scanning (MLS) point clouds provided by the company 3D Mapping Solutions GmbH (relative accuracy in the range of 1-3 cm). Moreover, a complementary OpenDRIVE dataset is available, which includes the road network, traffic lights, fences, vegetation, and so on:
Further Information:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Tracking people has many applications, such as security or the safe use of robots. Many onboard systems are based on Laser Imaging Detection and Ranging (LIDAR) sensors. Tracking people's legs using only information from a 2D LIDAR scanner on a mobile robot is a challenging problem: many legs can be present in an indoor environment, there are frequent occlusions and self-occlusions, and many items in the environment, such as table legs or columns, can resemble legs given the limited information provided by a two-dimensional LIDAR usually mounted at knee height on mobile robots. On the other hand, LIDAR sensors are affordable in terms of acquisition price and processing requirements. In this article, we describe a tool named PeTra, based on an offline-trained fully convolutional neural network, capable of tracking pairs of legs in a cluttered environment. We describe the characteristics of the proposed system and evaluate its accuracy using a dataset from a public repository. Results show that PeTra provides better accuracy than Leg Detector (LD), the standard solution for Robot Operating System (ROS)-based robots.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Matlab implementation to generate 3D point clouds from data acquired with a VLP-16 and a GNSS GPS1200+. This project is a Matlab implementation to generate 3D point clouds from data acquired with a mobile terrestrial laser scanner (MTLS) comprised of a Velodyne VLP-16 LiDAR sensor (Velodyne LIDAR Inc., San Jose, CA, USA) and a GNSS position sensor GPS1200+ (Leica Geosystems AG, Heerbrugg, Switzerland). This implementation was used to generate the point clouds provided in the LFuji-air dataset, which contains 3D LiDAR data of 11 Fuji apple trees with the corresponding fruit position annotations.
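The core step in such an MTLS pipeline is mapping each scanner-frame point into the world frame using the pose interpolated from the GNSS trajectory. Below is a schematic numpy sketch of that idea, illustrating the general technique rather than the repository's actual Matlab code:

# Schematic sketch: rotate and translate scanner-frame points into the
# world frame using a sensor pose derived from the GNSS trajectory.
# Illustrates the general technique, not the repository's Matlab code.
import numpy as np

def georeference(points_scanner, R_world_scanner, t_world):
    # Map an Nx3 array of scanner-frame points into the world frame.
    return points_scanner @ R_world_scanner.T + t_world

# Toy pose: 90-degree rotation about z, translation (10, 0, 1.5).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 0.0, 1.5])
pts = np.array([[1.0, 0.0, 0.0]])
print(georeference(pts, R, t))  # -> [[10.  1.  1.5]]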
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As part of the Rhizome 2.0 project—an initiative investigating the development of human habitats in Martian lava tubes—we conducted an extensive robotic mapping mission inside the Grotta di Monte Intraleo, a terrestrial lava tube in Sicily serving as an analogue site. This dataset supports the exploration of construction and habitation strategies in similarly structured Martian environments.
The dataset includes multi-modal mapping and environmental data collected using manual and robotic scanning techniques. Specifically, the data comprises .obj files for rapid spatial documentation.
Data collection adhered to all local regulations, and care was taken to minimize impact on the natural environment of the cave. This dataset is intended to support reproducible research in robotic mapping, autonomous navigation, and extraterrestrial habitat design.
A dataset acquired in FEUP's garden with an OAK-D mounted on a mobile terrestrial robot to perform tree trunk mapping.
These topographic data were collected for the U.S. Army Corps of Engineers by a helicopter-mounted LiDAR sensor over the New Orleans Hurricane Protection Levee System in Louisiana.
Original contact information: Contact Org: NOAA Office for Coastal Management Phone: 843-740-1202 Email: coastal.info@noaa.gov
These data provide an accurate high-resolution shoreline compiled from imagery of WESTERN MOBILE BAY, AL. This vector shoreline data is based on an office interpretation of imagery and may be suitable as a geographic information system (GIS) data layer. This metadata describes information for both the line and point shapefiles. The NGS attribution scheme 'Coastal Cartographic Object Attribu...
PixelHelp includes 187 multi-step instructions of 4 task categories defined in https://support.google.com/pixelphone and annotated by humans. This dataset includes 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos-related tasks. This dataset is an updated open-source version of the original PixelHelp dataset, which was used for testing the end-to-end grounding quality of the model in the paper "Mapping Natural Language Instructions to Mobile UI Action Sequences". Similar accuracy is achieved on this version of the dataset.