Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is point cloud sample data collected by a mobile LiDAR system (MLS).
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, which is split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg
Image credit: Sören Vogel
The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.
https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg
Image credit: Sören Vogel
The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.
https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png
The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:
To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:
roscore & rosrun rviz rviz -d icsens_data.rviz
Afterwards, start playing a rosbag with
rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
Below we provide some exemplary images and their corresponding point clouds.
https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg
R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.
R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.
R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.
R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.
R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.
R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.
R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.
R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The datasets are original and specifically collected for research aimed at reducing registration errors between camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

Survey-Grade Terrestrial Dataset:
- Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
- Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

Mobile Mapping System Dataset:
- Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
- Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.
Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.
Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies depending on line of sight.
Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.
LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.
Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Work in progress: data might be changed.

The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th, 2022 as a basis for further analysis and in particular as input for simulation studies.

Vehicle Detections

Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00. The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, the points with car labels were clustered by observation epoch, and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.

Figure 1: Overview map of detected vehicles

Parking Areas
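The cluster-to-bounding-box step described above can be sketched in a few lines. This is an illustrative sketch, not the project's actual code; the car-size thresholds below are assumptions, not values from the dataset.

```python
# Illustrative sketch: axis-aligned bounding boxes for point clusters
# labeled as "car", plus a plausibility filter that discards
# unrealistically small or large objects. Thresholds are assumptions.

def bounding_box(points):
    """Axis-aligned bounding box of an iterable of (x, y, z) points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def plausible_car(box, min_len=2.5, max_len=6.5, min_wid=1.4, max_wid=2.6):
    """True if the box footprint is within assumed car dimensions (meters)."""
    (x0, y0, _), (x1, y1, _) = box
    length, width = sorted([x1 - x0, y1 - y0], reverse=True)
    return min_len <= length <= max_len and min_wid <= width <= max_wid

cluster = [(0.0, 0.0, 0.0), (4.2, 1.8, 1.5), (2.0, 0.9, 0.7)]
print(plausible_car(bounding_box(cluster)))  # a 4.2 m x 1.8 m footprint passes
```

A real pipeline would additionally handle per-epoch clustering and oriented (rather than axis-aligned) boxes.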
The Newer College Dataset is a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, the authors used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing ∼290 million points).
Using the map, the authors inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. The dataset combines built environments, open spaces and vegetated areas so as to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition.
The understory plays a critical role in the disturbance dynamics of forest ecosystems, as it can influence wildfire behavior. Unfortunately, the 3D structure of understory fuels is often difficult to quantify and model due to vegetation and substrate heterogeneity. LiDAR remote sensing can measure changes in 3D forest structure more rapidly, comprehensively, and accurately than manual approaches, but remote sensing approaches are more frequently applied to the overstory than to the understory. Here we evaluated the use of handheld mobile laser scanning (HMLS) to measure and detect changes in fine-scale surface fuels following wildfire and timber harvest in Northern Californian forests, USA. First, the ability of HMLS to quantify surface fuels was validated by destructively sampling vegetation below 1 m with a known occupied volume within a 3D frame and comparing destructive-based volumes with HMLS-based occupied volume estimates. There was a positive linear relationship (R2 = 0.72) b...

Data were collected in a few different ways. 3D frame data were collected by scanning a 3D frame with a handheld mobile laser scanner (HMLS) and then destructively sampling the vegetation inside. The scans were processed by the scanner's software (GeoSLAM SLAM algorithm), and the vegetation samples were oven-dried to obtain dry mass measurements. Plot-level data were collected at 11.3 m radius circular plots at 2 locations across 3 time periods; lidar scans were taken with the HMLS, and Brown's data were collected using the standard Brown's transect protocol. Brown's data were processed to extract estimates of fuel mass per area for each plot. All of the lidar scans taken with the HMLS (both frame and plot scans) were further processed in Lidar360, CloudCompare, and R with the lidR package to clip scans to the frame/plot boundary, height normalize, and voxelize the scans.
Frame scans were voxelized at 4 different voxel sizes (1, 5, 10, and 25 cm), while plot scans were all voxelized at 1 ...

# Data from: Using handheld mobile laser scanning to quantify fine-scale surface fuels and detect changes post-disturbance in Northern California forests
https://doi.org/10.5061/dryad.sxksn038g
The dataset includes processed handheld lidar data and dry mass, from 3D frame and plot sampling. The lidar system used is a handheld mobile laser scanner (GeoSLAM's Zeb-REVO).
Sheets within the Excel file are separated based on manuscript sections. '3D Frame' includes the data collected from lidar scans and destructive sampling, which was gathered to validate the use of handheld lidar for vegetation monitoring. 'Plot-level' contains the total occupied voxels from the processed plot scans taken in each survey/campaign. 'Brown's' is the mass per area calculated from Brown's transects collected at the plots and the predicted mass in grams as calculated from the voxelized plot scans. 'Point Density' contains...
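The occupied-voxel counting behind the frame and plot metrics above can be sketched simply. This is an illustrative stand-in for the Lidar360/lidR processing, not the authors' code:

```python
# Minimal voxelization sketch: count voxels of a given edge length that
# contain at least one point, as in the 1/5/10/25 cm voxelizations above.
import math

def occupied_voxels(points, voxel_size):
    """Number of voxels of edge `voxel_size` (same units as the points)
    containing at least one (x, y, z) point."""
    return len({tuple(math.floor(c / voxel_size) for c in p) for p in points})

points = [(0.01, 0.02, 0.03), (0.02, 0.02, 0.04), (0.30, 0.10, 0.00)]
print(occupied_voxels(points, 0.25))  # 2: the first two points share a voxel
```

At coarser voxel sizes, nearby points collapse into the same voxel, which is why the choice of voxel size matters for the fuel-volume estimates.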
https://doi.org/10.17026/fp39-0x58
These files support the published journal paper and thesis on IMU and LiDAR SLAM for indoor mapping. They include the datasets and functions used for point cloud generation. Date Submitted: 2022-02-21
https://doi.org/10.17026/fp39-0x58
These files support the published journal paper about an indoor backpack mobile mapping system. They include point cloud conversion, segmentation and SLAM code, as well as the code for the evaluation method published in the journal paper. The resulting laser point cloud file is also uploaded.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This benchmark dataset was acquired during the SilviLaser conference 2021 in Vienna. The benchmark aims to demonstrate the capabilities of different terrestrial systems for capturing 3D scenes in various forest conditions. A number of universities, institutes, and companies participated and contributed their outputs to this dataset, compiled from terrestrial laser scanning (TLS), mobile laser scanning (MLS), and terrestrial photogrammetric systems (TPS). Along with the terrestrial data, one airborne laser scanning (ALS) dataset was provided as a reference.
Eight forest plots were set up for the terrestrial challenge. Each plot is a circular area with a 25-meter radius, covering different tree species (i.e. spruce, pine, beech, white fir), forest structures (i.e. one layer, multi-layer, natural regeneration, deadwood), and age classes (~50 – 120 years). The 3D point clouds acquired by each participant cover the eight plots. In addition to the point clouds, traditional in-situ data (tree position, tree species, DBH) were recorded by the organization team.
All point clouds provided by participants were processed in the following steps: co-registration with geo-referenced data, setting a uniform coordinate reference system (CRS), and removing data located outside the plot. This work was performed with OPALS, a laser scanning data processing software developed by the Photogrammetry Group of the TU Wien Department of Geodesy and Geoinformation. Please note that some point clouds are not archived due to problems encountered during pre-processing. The final products consist of a metadata file, 3D point clouds, ALS data for reference, and corresponding digital terrain models (DTMs) derived from the ALS data using OPALS software. Point clouds are in LAZ 1.4 format, and DTMs are raster models in GeoTIFF format. Furthermore, all geo-data use the CRS WGS84 / UTM zone 33N (EPSG:32633). More information (e.g. instrument, point density, and extra attributes) can be found in the file "SL21BM_TER_metadata.csv".
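The "removing data located outside the plot" step amounts to a 2D distance test against the 25 m plot radius. A minimal sketch under that assumption (the actual processing was done in OPALS):

```python
# Illustrative plot clipping: keep only points within the circular plot.
# Assumes points are (x, y, z) in the same projected CRS as the plot center.
import math

PLOT_RADIUS = 25.0  # meters, from the plot definition above

def clip_to_plot(points, center):
    """Return the points whose horizontal distance to `center` is <= 25 m."""
    cx, cy = center
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= PLOT_RADIUS]

pts = [(10.0, 5.0, 301.2), (40.0, 0.0, 300.8)]
print(clip_to_plot(pts, (0.0, 0.0)))  # only the first point lies inside
```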
This dataset is available to the community for a wide variety of scientific studies. These unique data sets will also form the basis for an international benchmark for parameter retrieval from different 3D recording methods.
This dataset was contributed by the universities/institutes/companies (alphabetical order):
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Robot-at-Home dataset (Robot@Home, paper here) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.
This dataset is unique in three aspects:
During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information of objects and rooms. This is a valuable feature, missing in most of the state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like pillows are usually on beds or ovens are not in bathrooms.
Robot@Home Toolbox
The dataset has a toolbox written in Python that facilitates queries to the database and the extraction of RGB-D images, 3D scenes, and scanner data, as well as the application of computer vision and machine learning algorithms, among other tasks.
Version history
v1.0.1 Fixed minor bugs.
v1.0.2 Fixed some inconsistencies in some directory names. Fixes were necessary to automate the generation of the next version.
v2.0.0 SQL based dataset. Robot@Home v1.0.2 has been packed into a sqlite database along with RGB-D and scene files which have been assembled into a hierarchical structured directory free of redundancies. Path tables are also provided to reference files in both v1.0.2 and v2.0.0 directory hierarchies. This version has been automatically generated from version 1.0.2 through the toolbox.
v2.0.1 A forgotten foreign key pair has been added.
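Since v2.0.0 packs the dataset into a sqlite database, it can be queried with the standard `sqlite3` module. The table and column names below are invented for illustration only; consult the Robot@Home toolbox documentation for the real schema:

```python
# Hypothetical sketch of querying a Robot@Home-style sqlite database.
# An in-memory database with a made-up schema stands in for the real file.
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the downloaded .db file
con.execute("CREATE TABLE rooms (id INTEGER PRIMARY KEY, home TEXT, name TEXT)")
con.executemany("INSERT INTO rooms VALUES (?, ?, ?)",
                [(1, "home1", "kitchen"), (2, "home1", "bathroom")])
rows = con.execute("SELECT name FROM rooms WHERE home = ? ORDER BY id",
                   ("home1",)).fetchall()
print(rows)  # [('kitchen',), ('bathroom',)]
```

The same pattern (connect, parameterized SELECT, fetchall) applies to the path tables mentioned above for resolving v1.0.2 versus v2.0.0 file locations.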
Paris-Lille-3D is a benchmark for point cloud classification. The point cloud has been labeled entirely by hand with 50 different classes. The dataset consists of around 2 km of Mobile Laser System point clouds acquired in two cities in France (Paris and Lille).
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Overview

The dataset includes data collected during the ATMO-ACCESS Trans-National Access project "Industrial Pollution Sensing with synergic techniques (IPOS TNA)", conducted from June 8 to June 24, 2024 at the Cabauw Experimental Site for Atmospheric Research (CESAR, 51°58'03''N, 4°55'47"E, 3 m a.s.l.) of the Royal Netherlands Meteorological Institute (KNMI). The IPOS TNA supported the 3rd Intercomparison Campaign of UV-VIS DOAS Instruments (CINDI-3).

The observations were taken with three instruments:

ESA Mobile Raman Lidar (EMORAL). The lidar emits pulses at three fixed wavelengths (355, 532 and 1064 nm) simultaneously, with a pulse repetition rate of 10 Hz and a pulse duration of 5-7 ns. The backscattered laser pulses are detected in 5 narrow-band Mie channels (355p,s, 532p,s and 1064 nm), 3 narrow-band Raman channels (N2 at 387 and 607 nm, H2O at 408 nm), and a broad-band fluorescence channel (470 nm). The temporal resolution was set to 1 min and the spatial resolution to 3.75 m. The overlap between the laser beam and the full field of view of the telescope is reached at ~250 m a.g.l. The EMORAL lidar is a state-of-the-art lidar system developed through a collaborative effort involving the University of Warsaw (UW, Poland; leader and operator), Ludwig Maximilian University of Munich (LMU, Germany), National Observatory of Athens (NOA, Greece), Poznan University of Life Sciences (PULS, Poland), and the companies Raymetrics (Greece; core manufacturer), Licel (Germany), and InnoLas Laser (Germany). This complex instrument, part of ESA's Opto-Electronics section (TEC-MME) at the European Space Research and Technology Centre (ESA-ESTEC, The Netherlands), is designed to perform precise atmospheric measurements.
The EMORAL lidar was validated by the ACTRIS Centre for Aerosol Remote Sensing (CARS) at the Măgurele Center for Atmosphere and Radiation Studies (MARS) of the National Institute of R&D for Optoelectronics (INOE, Romania).

PM counter GrayWolf PC-3500, GrayWolf Sensing Solutions (USA): https://graywolfsensing.com/wp-content/pdf/GrayWolfPC-3500Brochure-818.pdf (last access 25/2/2025)

Model 540 Microtops II® Sunphotometer, Solar Light Company, LLC (USA): https://www.solarlight.com/product/microtops-ii-sunphotometer (last access 25/2/2025)

The dataset contains the following items:

1) EMORAL lidar data files

The data consist of two files, LiLi_IPOS.zip and LiLi_IPOS_quicklooks.zip, both described in detail below. The LiLi_IPOS.zip file is a folder that contains the high-resolution data obtained using the Lidar, Radar, Microwave radiometer algorithm (LiRaMi; more in Wang et al., 2020). The results were obtained from the lidar data only (referred to as Limited LiRaMi, i.e. the LiLi algorithm version). The folder contains files in netCDF4 format for each day of observations.
The data products are calculated from the analog channels only. Each .nc file contains the following variables:
- Location (string)
- Latitude (size: 1x1, [deg])
- Longitude (size: 1x1, [deg])
- Altitude (size: 1x1, [m a.g.l.])
- time vector (size: 1 x time, [UTC])
- range vector (size: range x 1, [m])
- RCS532p matrix (size: range x time, [V m2]): range-corrected signal at 532 nm, parallel polarization
- RCS532s matrix (size: range x time, [V m2]): range-corrected signal at 532 nm, perpendicular polarization
- RCS1064 matrix (size: range x time, [V m2]): range-corrected signal at 1064 nm
- SR532 matrix (size: range x time, [unitless]): scattering ratio at 532 nm
- ATT_BETA532 matrix (size: range x time, [m2/sr]): attenuated backscatter coefficient at 532 nm, parallel polarization
- C532 constant (size: 1x1, [V sr]): instrumental factor for 532 nm
- SR1064 matrix (size: range x time, [au]): scattering ratio at 1064 nm
- ATT_BETA1064 matrix (size: range x time, [m2/sr]): attenuated backscatter coefficient at 1064 nm
- C1064 constant (size: 1x1, [V sr]): instrumental factor for 1064 nm
- COLOR_RATIO matrix (size: range x time, [au]): color ratio of 532 nm and 1064 nm
- PARTICLE_DEPOLARIZATIO_RATIO matrix (size: range x time, [au]): particle depolarization ratio at 532 nm
- C constant (size: 1x1, [au]): depolarization constant for 532 nm

The LiLi_IPOS_quicklooks.zip file contains high-resolution figures representing the data as quicklooks of the following parameters:
- Range-corrected signal at 1064 nm
- Scattering ratio at 532 nm
- Color ratio of 532 and 1064 nm
- Particle depolarization ratio at 532 nm
- Aerosol target classification from the LiLi algorithm

Wang, D., Stachlewska, I.
S., Delanoë, J., Ene, D., Song, X., and Schüttemeyer, D. (2020). Spatio-temporal discrimination of molecular, aerosol and cloud scattering and polarization using a combination of a Raman lidar, Doppler cloud radar and microwave radiometer, Opt. Express 28, 20117-20134.

2) PM counter

The PM_counter.zip file contains a folder with data from measurements of atmospheric particulate matter collected using the GrayWolf PC-3500 particle counter from June 15 (16:16:21 CEST) to June 20 (07:06:21 CEST), 2024, at the CESAR station (51°58'04.0"N, 4°55'46.4"E). The data were processed using WolfSense PC software for validation and analysis. The final dataset, provided in XLSX format, covers the temporal evolution of particle concentrations from 0.3 to 10.0 µm (6 size ranges). The data are divided into three levels:

[1] Level 0: Raw data in XLSX format with measurement data in 4 units (µg/m3, cnts/m3, cnts dif, cnts cum).
File structure:
- Line 1: headers describing columns
- Lines 2-6646: concentration of PM
- Column 1: date and time in format DD-MMM-YY HH:MM:SS AM/PM
- Columns 2-7: concentration for specific PM sizes: 0.3, 0.5, 1.0, 2.5, 5.0, 10.0 µm, respectively
- Column 8: temperature
- Column 9: carbon dioxide (CO2)
- Column 10: total volatile organic compounds (TVOC)
- Column 11: pressure in the measuring chamber
- Missing data (Columns 8-10) are represented as zero (0).

[2] Level 1: Tables with validated data in 4 units (µg/m3, cnts/m3, cnts dif, cnts cum) in XLSX format.
File structure:
- Line 1: headers describing columns
- Lines 2-6646: concentration of PM
- Column 1: date and time in format DD-MMM-YY HH:MM:SS AM/PM
- Columns 2-7: concentration for specific PM sizes: 0.3, 0.5, 1.0, 2.5, 5.0, 10.0 µm, respectively
- Column 8: pressure in the measuring chamber
- Column 9: assembly method, where [1] = measurement at a height of 60 cm during rain (instrument protected by the table), [2] = measurement at a height of 160 cm when there is no rain.

[3] Level 2: Tables with post-processed data in XLSX format, and graphs in
PNG format visualizing the received data.
XLSX file structure:
- "PM counter - level 2 (daily average concentrations)" and "PM counter - level 2 (hourly average concentrations)" sheets: column structure same as in Level 1.
- "PM counter - level 2 (data comparison)" sheet: Column 1: date in format DD.MM.YYYY; Column 2: PM2.5 concentration measured within IPOS; Column 3: PM10.0 concentration measured within IPOS; Column 4: PM2.5 concentration measured at Cabauw-Wielsekade (RIVM); Column 5: PM10.0 concentration measured at Cabauw-Wielsekade (RIVM).
General information for all level files: the decimal separator is a comma (,).

3) Sunphotometer

The MICROTOPS_IPOS.zip file is a folder that contains data from measurements of aerosol optical thickness at wavelengths 380, 500, 675, 870, and 1020 nm taken with the Microtops II hand-held sunphotometer. The final, quality-assured dataset, provided in XLSX format, consists of measurement data for: temperature, pressure, solar zenith angle, signal strength at different wavelengths (340, 380, 500, 936, 1020 nm), standard deviation at specific wavelengths, ratio between signals at two different wavelengths (340/380, 380/500, 500/936, 936/1020), and atmospheric optical thickness at different wavelengths. During the IPOS TNA campaign, 29 measurements were taken in total. Each measurement is composed of 6 scans, of which the first is a dark scan. Measurements took place on 13, 23, 24, and 25 June 2024. Level 0 data are raw data converted from dbf to xlsx file format.
Level 1 data are raw data converted from dbf to xlsx file format, without the dark scans.
File structure:
- Line 1: headers describing columns
- Column 1: serial number of the instrument
- Columns 2-3: date and time in format YYYY-MM-DD; HH:MM:SS
- Columns 4-8: data description of the campaign; location (decimal); latitude; longitude (decimal); altitude
- Columns 9-14: atmospheric pressure; solar zenith angle; air mass; standard deviation correction; temperature; ID of the measurement
- Columns 15-24: signal strength at specific wavelengths and standard deviation
- Columns 25-28: ratio between signals at two different wavelengths
- Columns 29-33: atmospheric optical thickness
- Columns 34-39: columnar water vapour and natural logarithm of voltage
- Columns 40-47: calibration coefficients
- Columns 48-49: pressure offset and pressure scale factor
- READ ME sheet: describes the file content and measurement location.

4) readme file

ATTENTION: We offer free access to this dataset. The user is, however, encouraged to share information on the data use by sending an e-mail to rslab@fuw.edu.pl. In case this dataset is used for a scientific communication (publication, conference contribution, thesis), we kindly ask you to consider acknowledging the data provision by citing this dataset.
------------------------------------
PI of IPOS TNA Iwona Stachlewska and IPOS team members Maciej Karasewicz, Anna Abramowicz, Kinga Wiśniewska, Zuzanna Rykowska, and Afwan Hafiz acknowledge that the published dataset was prepared within the Trans-National Access grant (IPOS TNA no. ATMO-TNA-7-0000000056) within the ATMO-ACCESS grant financed by the European Commission Horizon 2020 program (G.A.
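Two of the derived quantities in this dataset follow standard textbook relations: the RCS* variables are range-corrected lidar signals, and the Microtops aerosol optical thickness follows the Beer-Lambert/Langley law. A minimal sketch of both, with made-up illustrative numbers (this is not the LiLi or Microtops processing code):

```python
# Illustrative formulas for two derived quantities described above.
import math

def range_corrected(signal, ranges, background=0.0):
    """RCS(r) = (P(r) - background) * r^2: the standard range correction
    behind the RCS532p/RCS532s/RCS1064 variables."""
    return [(p - background) * r * r for p, r in zip(signal, ranges)]

def optical_thickness(v, v0, air_mass):
    """Langley/Beer-Lambert relation tau = ln(V0 / V) / m used in
    sunphotometry: V is the measured signal, V0 the extraterrestrial
    calibration constant, m the air mass."""
    return math.log(v0 / v) / air_mass

rcs = range_corrected([0.5, 0.2], [100.0, 200.0], background=0.1)
tau = optical_thickness(v=800.0, v0=1000.0, air_mass=1.5)
print(rcs, round(tau, 4))
```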
Attribution-NonCommercial 3.0 (CC BY-NC 3.0)https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory, enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station, resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as a LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements.
Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.
Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this website. There you will find updated files and annotations as well as update notes.
Source LOD2 City model: Auszug aus den Geodaten des Landesamtes für Geoinformation und Landesvermessung Niedersachsen, ©2023, www.lgln.de
https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/541747ed-3d6e-41c4-9046-15bba3702e3b/download/lgln_logo.png (LGLN logo)
https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/d141d4f1-49b0-40e6-b8d9-e49f420e3627/download/vans_with_redgreen_cs_vehicle.png (Sensor setup of the three measurement vehicles)
https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/5b6b37cf-a991-4dc4-8828-ad12755203ca/download/map_point_cloud.png (3D map point cloud)
https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/6c61d297-8544-4788-bccf-7a28ccfa702a/download/scenario_with_osm_reference.png (Measurement scenario)
Image: Number of annotations per class
Image: Data structure
Image: Data format
Image: Measurement vehicles
Image: ts_uwb_mms.png (untitled)
This measurement campaign could not have been carried out without the help of many contributors. At this point, we thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Vienna), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,
Dataset for performance evaluation of different approaches for detecting and tracking people using lidar sensors. The information contained in the dataset is especially suitable as training data for neural network-based classifiers.
A digital elevation model (DEM) of a portion of the Mobile-Tensaw Delta region and Three Mile Creek in Alabama was produced from remotely sensed, geographically referenced elevation measurements by the U.S. Geological Survey (USGS). Elevation measurements were collected over the area (bathymetry was irresolvable) using the Experimental Advanced Airborne Research Lidar (EAARL), a pulsed laser ranging system mounted onboard an aircraft to measure ground elevation, vegetation canopy, and coastal topography. The system uses high-frequency laser beams directed at the Earth's surface through an opening in the bottom of the aircraft's fuselage. The laser system records the time difference between emission of the laser beam and the reception of the reflected laser signal in the aircraft. The plane travels over the target area at approximately 50 meters per second at an elevation of approximately 300 meters, resulting in a laser swath of approximately 240 meters with an average point spacing of 2-3 meters. The EAARL, developed originally by the National Aeronautics and Space Administration (NASA) at Wallops Flight Facility in Virginia, measures ground elevation with a vertical resolution of +/-15 centimeters. A sampling rate of 3 kilohertz or higher results in an extremely dense spatial elevation dataset. Over 100 kilometers of coastline can be surveyed easily within a 3- to 4-hour mission. When resultant elevation maps for an area are analyzed, they provide a useful tool to make management decisions regarding land development. For more information on Lidar science and the Experimental Advanced Airborne Research Lidar (EAARL) system and surveys, see http://ngom.usgs.gov/dsp/overview/index.php and http://ngom.usgs.gov/dsp/tech/eaarl/index.php .
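The EAARL figures quoted above can be cross-checked against each other: the aircraft speed, swath width, and pulse rate together determine an average point density and spacing. The sketch below uses the lower-bound 3 kHz sampling rate given in the text; higher rates yield proportionally denser coverage.

```python
# Cross-check of the EAARL survey figures quoted above.
import math

SPEED_MS = 50.0     # aircraft ground speed, m/s
SWATH_M = 240.0     # laser swath width, m
PULSE_HZ = 3000.0   # 3 kHz sampling rate (lower bound given in the text)

area_per_second = SPEED_MS * SWATH_M    # m^2 of ground swept per second
density = PULSE_HZ / area_per_second    # average points per m^2
spacing = 1.0 / math.sqrt(density)      # mean point spacing, m

print(f"density ~ {density:.2f} pts/m^2, spacing ~ {spacing:.1f} m")
```

At 3 kHz this gives a mean spacing of about 2 m, consistent with the 2-3 meter average point spacing stated in the description.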
Quantum Spatial (QSI) and PrecisionHawk (PH) collected lidar for test sites within the Grand Bay National Estuarine Research Reserve (NERR) using an unmanned aerial system (UAS). Four sites were flown, covering a total of 177 acres. A fixed-wing PH Lancaster (revision 5) platform was used, carrying a Velodyne Puck VLP-16 based lidar system. The system provides 2 returns (strongest and last) with a pulse rate of 300 kHz using a 903 nm wavelength laser. Flights were conducted from May 9-11, 2017 and were flown at 50 meters above ground level. Specifications for the collection included 30 pulses per square meter and 0.10 meter RMSE vertical accuracy in non-vegetated areas. The average first return density was over 105 points per square meter with an average ground classified density of over 3 points per square meter. Deliverables included a 1-meter resolution bare-earth DEM in UTM zone 16 NAD83(2011). These data were ingested into the Digital Coast Data Access Viewer for custom processing. Original contact information: Contact Org: NOAA Office for Coastal Management Phone: 843-740-1202 Email: coastal.info@noaa.gov
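A similar sanity check applies to the UAS collection: the 300 kHz pulse rate and the achieved first-return density imply an effective area coverage rate, and the ground-classified density gives the fraction of returns reaching bare earth under vegetation. This is illustrative arithmetic on the quoted numbers only, not part of the delivered product.

```python
# Quick sanity arithmetic on the UAS lidar figures quoted above.
PULSE_HZ = 300_000          # pulse rate of the VLP-16-based system
FIRST_RETURN_DENSITY = 105  # achieved first-return points per m^2
GROUND_DENSITY = 3          # ground-classified points per m^2

# Effective coverage rate implied by pulse rate and achieved density
coverage_m2_per_s = PULSE_HZ / FIRST_RETURN_DENSITY
ground_fraction = GROUND_DENSITY / FIRST_RETURN_DENSITY

print(f"~{coverage_m2_per_s:.0f} m^2/s effective coverage")
print(f"~{100 * ground_fraction:.1f}% of first returns classified as ground")
```

The low ground fraction (roughly 3%) is typical for leaf-on vegetated sites and explains why the 30 pulses per square meter specification was set well above the density needed for the 1-meter DEM.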
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Tree summary from dataset 1.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Technical specifications of the Mobile LiDAR System.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is point cloud sample data collected by a mobile LiDAR system (MLS).