Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

Survey-Grade Terrestrial Dataset:
Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

Mobile Mapping System Dataset:
Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.
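As background for the 2D-3D alignment problem these datasets target, the sketch below projects LiDAR points into a camera image with a pinhole model; the intrinsic matrix, rotation, and translation are illustrative placeholders, not calibration values from these datasets, and the snippet is not the authors' registration method.

import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates.

    K: 3x3 camera intrinsic matrix; R, t: assumed rotation and translation
    from the LiDAR frame to the camera frame. Registration methods estimate
    these quantities; the values used below are placeholders.
    """
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of the camera
    uvw = pts_cam @ K.T                       # apply the pinhole projection
    return uvw[:, :2] / uvw[:, 2:3]           # normalize by depth to get pixels

# Hypothetical parameters, for illustration only
K = np.array([[1400.0, 0.0, 960.0],
              [0.0, 1400.0, 540.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.05, -0.02, 0.10])
uv = project_lidar_to_image(np.random.rand(1000, 3) * 10.0, K, R, t)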
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, which is split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg
Image credit: Sören Vogel
The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg
Image credit: Sören Vogel
The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png
The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:
Velodyne HDL-64 LiDAR
LORD MicroStrain 3DM-GQ4-45 GNSS-aided IMU
Pointgrey GS3-U3-23S6C-C RGB camera
To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:
roscore & rosrun rviz rviz -d icsens_data.rviz
Afterwards, start playing a rosbag with
rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
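The bags can also be read programmatically. Below is a minimal sketch using the ROS 1 rosbag Python API; the topic names are placeholders, and the actual topic names are listed in the provided documentation file.

import rosbag

# Topic names below are placeholders; consult the dataset documentation
# for the actual image, IMU and point cloud topics.
with rosbag.Bag('icsens-visual-inertial-lidar-dataset-1.bag') as bag:
    for topic, msg, t in bag.read_messages(topics=['/camera/image_raw', '/imu/data']):
        print(topic, t.to_sec())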
Below we provide some exemplary images and their corresponding point clouds.
Image: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg
R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.
R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.
R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.
R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.
R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.
R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.
R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.
R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is a point cloud sample dataset collected by a mobile LiDAR system (MLS).
Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.
Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies depending on line-of-sight.
Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.
LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.
Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Work in progress: data might be changed.

The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis, and in particular as input for simulation studies.

Vehicle Detections
Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined from vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00. The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances (a rough sketch of this clustering step is given below). The boxes serve both to filter out unrealistically small and large objects and to rudimentarily complete vehicle footprints that may not have been fully captured from all sides.

Figure 1: Overview map of detected vehicles

Parking Areas
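As an illustration of the clustering and bounding-box step described above, the following sketch clusters car-labeled points of one observation epoch and keeps boxes of plausible car size. DBSCAN and the size thresholds are stand-ins, not the exact procedure of Bock et al. (2015) or Leichter et al. (2021).

import numpy as np
from sklearn.cluster import DBSCAN

def car_boxes(points_xy, eps=0.7, min_samples=20,
              min_len=2.0, max_len=7.0, min_wid=1.2, max_wid=2.6):
    """Cluster car-labeled 2D points of one epoch and return plausible
    axis-aligned bounding boxes (size thresholds in metres are illustrative)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy)
    boxes = []
    for lab in set(labels) - {-1}:               # label -1 marks noise points
        cluster = points_xy[labels == lab]
        mn, mx = cluster.min(axis=0), cluster.max(axis=0)
        length, width = np.sort(mx - mn)[::-1]   # longer side treated as length
        if min_len <= length <= max_len and min_wid <= width <= max_wid:
            boxes.append((mn, mx))
    return boxes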
This project aims to improve the position estimation of mobile mapping platforms. Mobile Mapping (MM) is a technique to obtain geo-information on a large scale using sensors mounted on a car or another vehicle. Under normal conditions, accurate positioning is provided by the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS). However, especially in urban areas, where building structures impede a direct line-of-sight to navigation satellites or lead to multipath effects, MM-derived products, such as laser point clouds or images, lack the expected reliability and contain an unknown positioning error. This issue has been addressed by many researchers, whose approaches to mitigating these effects mainly concentrate on utilising tertiary data, such as digital maps, ortho images or airborne LiDAR. These data serve as a reliable source of orientation and are used subsidiarily or as the basis for adjustment. However, these approaches show limitations regarding the achieved accuracy, the correction of errors in height, the availability of tertiary data and their feasibility in difficult areas. This project addresses the aforementioned problem by employing high-resolution aerial nadir and oblique imagery as reference data. By exploiting the MM platform's approximate orientation parameters, very accurate matching techniques can be realised to extract the MM platform's positioning error. In the form of constraints, these errors serve as a corrective for an orientation update, which is conducted by an estimation or adjustment technique.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data collected using an Ouster OS1-32 LiDAR sensor mounted on a Hunter 2.0 UGV (by AgileX Robotics). The robot was manually driven through the Computer Science building at the University of Málaga (Spain), covering approximately 1 km.
The dataset includes the following files:
etsii_rosbag: A 2.2 GiB ROS 2 bag file containing sensor data, including RGB images, IMU, and GPS (when available). Due to size constraints, raw 3D point cloud data from the LiDAR was not included. However, it can be reconstructed using the official Ouster utilities, such as ouster-ros-extras.
metadata_rosbag: A lightweight ROS bag that includes the metadata necessary to decode and reconstruct the 3D point cloud from the sensor.
play.sh: A shell script that sequentially plays both rosbag files to facilitate data reproduction and use.
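play.sh is described here only as playing both bag files one after the other. A rough Python equivalent under that assumption (using the ros2 bag play command and the file names listed above) could look like:

import subprocess

# Assumed order: the metadata bag first so the point-cloud decoder can be
# configured, then the sensor data bag. File names follow the listing above.
for bag in ('metadata_rosbag', 'etsii_rosbag'):
    subprocess.run(['ros2', 'bag', 'play', bag], check=True)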
The understory plays a critical role in the disturbance dynamics of forest ecosystems, as it can influence wildfire behavior. Unfortunately, the 3D structure of understory fuels is often difficult to quantify and model due to vegetation and substrate heterogeneity. LiDAR remote sensing can measure changes in 3D forest structure more rapidly, comprehensively, and accurately than manual approaches, but a remote sensing approach is more frequently applied to the overstory than to the understory. Here we evaluated the use of handheld mobile laser scanning (HMLS) to measure and detect changes in fine-scale surface fuels following wildfire and timber harvest in Northern Californian forests, USA. First, the ability of HMLS to quantify surface fuels was validated by destructively sampling vegetation below 1 m with a known occupied volume within a 3D frame and comparing destructive-based volumes with HMLS-based occupied volume estimates. There was a positive linear relationship (R2 = 0.72) b...

Data were collected in a few different ways. 3D frame data were collected by scanning a 3D frame with a handheld mobile laser scanner (HMLS) and then destructively sampling the vegetation inside. The scans were processed by the scanner's software (GeoSLAM SLAM algorithm), and the vegetation samples were oven-dried to obtain dry mass measurements. Plot-level data were collected at 11.3 m radius circular plots at 2 locations across 3 time periods; lidar scans were taken with the HMLS and Brown's data were collected using the standard Brown's transect protocol. Brown's data were processed to extract estimates of fuel mass per area for each plot. All of the lidar scans taken with the HMLS (both frame and plot scans) were further processed in Lidar360, CloudCompare, and R with the lidR package to clip scans to the frame/plot boundary, height normalize, and voxelize the scans. Frame scans were voxelized at 4 different voxel sizes (1, 5, 10, and 25 cm), while plot scans were all voxelized at 1 ...

Data from: Using handheld mobile laser scanning to quantify fine-scale surface fuels and detect changes post-disturbance in Northern California forests
https://doi.org/10.5061/dryad.sxksn038g
The dataset includes processed handheld lidar data and dry mass, from 3D frame and plot sampling. The lidar system used is a handheld mobile laser scanner (GeoSLAM's Zeb-REVO).
Sheets within the Excel file are separated based on manuscript sections. '3D Frame' includes the data collected from lidar scans and destructive sampling which was collected to validate the use of handheld lidar for vegetation monitoring. 'Plot-level' contains the total occupied voxels from the processed plot scans taken in each survey/campaign. 'Brown's' is the mass per area calculated from Brown's transects collected at the plots and the predicted mass in grams as calculated from the voxelized plot scans. 'Point Density' contains...
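To illustrate the occupied-voxel metric described above (clip, height-normalize, voxelize, count occupied voxels), here is a minimal numpy sketch. The 5 cm voxel size and 1 m height cutoff are illustrative choices; the published processing was performed in Lidar360, CloudCompare, and lidR rather than with this code.

import numpy as np

def occupied_voxels(points, voxel_size=0.05, max_height=1.0):
    """Count occupied voxels below a height cutoff (metres).

    points: Nx3 array of height-normalized coordinates. The 5 cm voxel and
    1 m cutoff roughly mirror the surface-fuel stratum described above but
    are not the study's exact settings.
    """
    pts = points[points[:, 2] <= max_height]
    idx = np.floor(pts / voxel_size).astype(np.int64)
    return np.unique(idx, axis=0).shape[0]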
Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The IILABS 3D dataset is a rigorously designed benchmark intended to advance research in 3D LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithms within indoor environments. It provides a robust and diverse foundation for evaluating and enhancing SLAM techniques in complex indoor settings. The dataset was recorded at the Industry and Innovation Laboratory (iiLab) and comprises synchronized data from a suite of sensors, including four distinct 3D LiDAR sensors, a 2D LiDAR, an Inertial Measurement Unit (IMU), and wheel odometry, complemented by high-precision ground truth obtained via a Motion Capture (MoCap) system.

Project Webpage: https://jorgedfr.github.io/3d_lidar_slam_benchmark_at_iilab/
Dataset Toolkit: https://github.com/JorgeDFR/iilabs3d-toolkit

Data Collection Method: Sensor data was captured using the Robot Operating System (ROS) framework's rosbag record tool on a LattePanda 3 Delta embedded computer. Post-processing involved timestamp correction for the Xsens MTi-630 IMU via custom Python scripts. Ground-truth data was captured using an OptiTrack MoCap system featuring 24 high-resolution PrimeX 22 cameras. These cameras were connected via Ethernet to a primary Windows computer running the Motive software (https://optitrack.com/software/motive), which processed the camera data. This Windows computer was then connected via Ethernet to a secondary Ubuntu machine running the NatNet 4 ROS driver (https://github.com/L2S-lab/natnet_ros_cpp). The driver published the data as ROS topics, which were recorded into rosbag files. Additionally, temporal synchronization between the robot platform and the ground-truth system was achieved using the Network Time Protocol (NTP). Finally, the bag files were processed using the EVO open-source Python library (https://github.com/MichaelGrupp/evo) to convert the data into TUM format and adjust the initial position offsets for accurate SLAM odometry benchmarking.

Type of Instrument:
Mobile Robot Platform: INESC TEC MRDT Modified Hangfa Discovery Q2 Platform (https://sousarbarb.github.io/inesctec_mrdt_hangfa_discovery_q2/). R.B. Sousa, H.M. Sobreira, J.G. Martins, P.G. Costa, M.F. Silva and A.P. Moreira, "Integrating Multimodal Perception into Ground Mobile Robots," 2025 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC 2025), Madeira, Portugal, 2025, pp. TBD, doi: TBD [manuscript accepted for publication].
Sensor data: Livox Mid-360, Ouster OS1-64 RevC, RoboSense RS-HELIOS-5515, and Velodyne VLP-16 (3D LiDARs); Hokuyo UST-10LX-H01 (2D LiDAR); Xsens MTi-630 (IMU); and Faulhaber 2342 wheel encoders (64:1 gear ratio, 12 Counts Per Revolution (CPR)).
Ground truth data: OptiTrack Motion Capture System with 24 PrimeX 22 cameras installed in Room A, Floor 0 at iiLab.
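For reference, the TUM trajectory format mentioned above is a plain text file with one pose per line: timestamp tx ty tz qx qy qz qw. Below is a minimal writer sketch with the pose source left open; it is not the dataset's actual conversion pipeline, which relies on the EVO library.

def write_tum(path, poses):
    """Write poses to a TUM-format trajectory file.

    poses: iterable of (timestamp, tx, ty, tz, qx, qy, qz, qw) tuples, e.g.
    extracted from the MoCap ground-truth topic (the extraction itself is
    not shown here).
    """
    with open(path, 'w') as f:
        for p in poses:
            f.write(' '.join(f'{v:.9f}' for v in p) + '\n')

write_tum('ground_truth_tum.txt', [(0.0, 0, 0, 0, 0, 0, 0, 1)])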
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The dataset contains LiDAR, IMU and wheel odometry measurements collected using an all-electric 4-wheel robotic vehicle (Gator) in a forest environment at the Queensland Centre for Advanced Technologies (QCAT - CSIRO) in Brisbane, Australia. The dataset also contains a heightmap image constructed from aerial LiDAR data of the same forest. This dataset allows users to run the Forest Localisation software and evaluate the results of the presented localisation method. Lineage: The ground view data was collected utilising an all-electric 4-wheel robotic vehicle equipped with a Velodyne VLP-16 laser mounted on a servo-motor, with a 45 degree inclination, spinning around the vertical axis at 0.5 Hz. In addition to the LiDAR scans, IMU and wheel odometry measurements were also recorded. The above-canopy map (heightmap) was constructed from aerial LiDAR data captured using a drone also equipped with a spinning mobile LiDAR sensor.
https://doi.org/10.17026/fp39-0x58
These files are to support the published journal and thesis about the IMU and LIDAR SLAM for indoor mapping. They include datasets and functions used for point clouds generation. Date Submitted: 2022-02-21
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This benchmark dataset was acquired during the SilviLaser conference 2021 in Vienna. The benchmark aims to demonstrate the capabilities of different terrestrial systems for capturing 3D scenes in various forest conditions. A number of universities, institutes, and companies participated and contributed their outputs to this dataset, which was compiled from terrestrial laser scanning (TLS), mobile laser scanning (MLS), and terrestrial photogrammetric systems (TPS). Along with the terrestrial data, one airborne laser scanning (ALS) dataset was provided as a reference.
Eight forest plots were installed for the terrestrial challenge. Each plot is a circular area with a 25-meter radius and covers different tree species (i.e. spruce, pine, beech, white fir), forest structures (i.e. one layer, multi-layer, natural regeneration, deadwood), and age classes (~50–120 years). The 3D point clouds acquired by each participant cover the eight plots. In addition to point clouds, traditional in-situ data (tree position, tree species, DBH) were recorded by the organization team.
All point clouds provided by participants were processed in the following steps: co-registration with geo-referenced data, setting a uniform coordinate reference system (CRS), and removing data located outside the plot. This work was performed with OPALS, a laser scanning data processing software developed by the Photogrammetry Group of the TU Wien Department of Geodesy and Geoinformation. Please note that some point clouds are not archived due to problems encountered during pre-processing. The final products consist of one metadata file, 3D point clouds, ALS data for reference, and corresponding digital terrain models (DTMs) derived from the ALS data using OPALS. Point clouds are in LAZ 1.4 format, and DTMs are raster models in GeoTIFF format. Furthermore, all geo-data use the CRS WGS84 / UTM zone 33N (EPSG:32633). More information (e.g. instrument, point density, and extra attributes) can be found in the file "SL21BM_TER_metadata.csv".
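As an example of the plot-clipping step (removing points outside the 25-meter circular plot), the sketch below uses the laspy library. The plot centre coordinates and file names are placeholders; the real plot locations come from the metadata file, not from this code.

import laspy

def clip_to_plot(in_path, out_path, cx, cy, radius=25.0):
    """Keep only points within `radius` metres of the plot centre (cx, cy),
    given in the dataset CRS (WGS84 / UTM zone 33N, EPSG:32633).
    Writing .laz output requires a LAZ backend such as lazrs or laszip."""
    las = laspy.read(in_path)
    mask = (las.x - cx) ** 2 + (las.y - cy) ** 2 <= radius ** 2
    las.points = las.points[mask]
    las.write(out_path)

# Placeholder centre coordinates; actual plot centres are listed in
# SL21BM_TER_metadata.csv.
clip_to_plot('plot.laz', 'plot_clipped.laz', cx=600000.0, cy=5340000.0)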
This dataset is available to the community for a wide variety of scientific studies. These unique data sets will also form the basis for an international benchmark for parameter retrieval from different 3D recording methods.
This dataset was contributed by the universities/institutes/companies (alphabetical order):
Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres and approximately 449 miles of shoreline are at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data was combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table. Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas that are shallower than 2.5 meters deep (the practical limit of reasonable and safe data collection with the MBES). Two bathymetric datasets collected during the October 2018 survey include the gridded bathymetric point data (BeaverLake2018_bathy.zip) computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) was quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma delimited text files that have been compressed into zip archives. The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. The data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be useable.
A digital elevation model (DEM) of a portion of the Mobile-Tensaw Delta region and Three Mile Creek in Alabama was produced from remotely sensed, geographically referenced elevation measurements by the U.S. Geological Survey (USGS). Elevation measurements were collected over the area (bathymetry was irresolvable) using the Experimental Advanced Airborne Research Lidar (EAARL), a pulsed laser ranging system mounted onboard an aircraft to measure ground elevation, vegetation canopy, and coastal topography. The system uses high-frequency laser beams directed at the Earth's surface through an opening in the bottom of the aircraft's fuselage. The laser system records the time difference between emission of the laser beam and the reception of the reflected laser signal in the aircraft. The plane travels over the target area at approximately 50 meters per second at an elevation of approximately 300 meters, resulting in a laser swath of approximately 240 meters with an average point spacing of 2-3 meters. The EAARL, developed originally by the National Aeronautics and Space Administration (NASA) at Wallops Flight Facility in Virginia, measures ground elevation with a vertical resolution of +/-15 centimeters. A sampling rate of 3 kilohertz or higher results in an extremely dense spatial elevation dataset. Over 100 kilometers of coastline can be surveyed easily within a 3- to 4-hour mission. When resultant elevation maps for an area are analyzed, they provide a useful tool to make management decisions regarding land development. For more information on Lidar science and the Experimental Advanced Airborne Research Lidar (EAARL) system and surveys, see http://ngom.usgs.gov/dsp/overview/index.php and http://ngom.usgs.gov/dsp/tech/eaarl/index.php .
https://doi.org/10.17026/fp39-0x58
These files support the published journal paper about an indoor backpack mobile mapping system. They include point cloud conversion, segmentation and SLAM code, as well as the code for the evaluation method published in the journal paper. The resulting laser point cloud file is also uploaded.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains Bluetooth Low Energy signal strengths measured in a fully furnished flat. The dataset was originally used in a study concerning RSS-fingerprinting-based indoor positioning systems. The data were gathered using a hybrid BLE-UWB localization system installed in the apartment and a mobile robotic platform equipped with a LiDAR. The dataset comprises power measurement results and LiDAR scans performed at 4104 points. The scans used for initial environment mapping and the power levels registered in two test scenarios are also attached.
The set contains both raw and preprocessed measurement data. The Python code for raw data loading is supplied.
The detailed dataset description can be found in the dataset_description.pdf file.
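Since the structure of the supplied loader is documented in dataset_description.pdf rather than here, the following is only a generic, hypothetical pandas sketch for loading RSS measurements from a CSV; the file name and column names are invented placeholders, not the dataset's actual layout.

import pandas as pd

# Hypothetical file name and columns; see dataset_description.pdf and the
# supplied Python loader for the actual structure of the measurement files.
rss = pd.read_csv('rss_measurements.csv')
print(rss.groupby('anchor_id')['rss_dbm'].describe())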
When using the dataset, please consider citing the original paper, in which the data were used:
M. Kolakowski, “Automated Calibration of RSS Fingerprinting Based Systems Using a Mobile Robot and Machine Learning,” Sensors, vol. 21, 6270, Sep. 2021. https://doi.org/10.3390/s21186270
This dataset contains Lidar Mobile Integrated Profiling System (MIPS) Ceilometer data from the University of Alabama in Huntsville.