24 datasets found
  1. i.c.sens Visual-Inertial-LiDAR Dataset

    • data.uni-hannover.de
    bag, jpeg, pdf, png +2
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://data.uni-hannover.de/dataset/i-c-sens-visual-inertial-lidar-dataset
    Explore at:
    txt(285), png(650007), jpeg(153522), txt(1049), jpeg(129333), rviz(6412), bag(7419679751), bag(9980268682), bag(9982003259), bag(9960305979), pdf(21788288), jpeg(556618), bag(9971699339), bag(9896857478), bag(9939783847), bag(9969171093)
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, which is split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg

    Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg

    Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz
    

    Afterwards, start playing a rosbag with

    rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
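
    If you prefer to inspect the recordings programmatically, the following is a minimal sketch using the standard rosbag Python API; the IMU topic name is an assumption, so list the actual topics first:

    import rosbag

    bag = rosbag.Bag('icsens-visual-inertial-lidar-dataset-1.bag')
    print(bag.get_type_and_topic_info().topics.keys())   # discover the actual topic names
    # '/imu/data' is a hypothetical topic name; replace it with one printed above
    for topic, msg, t in bag.read_messages(topics=['/imu/data']):
        print(t.to_sec(), msg.angular_velocity)
    bag.close()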
    

    Below we provide some exemplary images and their corresponding point clouds.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg

    Related publications:

    • R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.

    • R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.

    • R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.

    • R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.

    • R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.

    • R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.

    • R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.

    • R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.

  2. Camera-LiDAR Datasets

    • figshare.com
    zip
    Updated Aug 14, 2024
    Cite
    Jennifer Leahy (2024). Camera-LiDAR Datasets [Dataset]. http://doi.org/10.6084/m9.figshare.26660863.v1
    Explore at:
    zip
    Dataset updated
    Aug 14, 2024
    Dataset provided by
    figshare
    Authors
    Jennifer Leahy
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

    Survey-Grade Terrestrial Dataset:

    • Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
    • Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

    Mobile Mapping System Dataset:

    • Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
    • Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.

  3. 2014 Mobile County, AL Lidar

    • catalog.data.gov
    • fisheries.noaa.gov
    Updated Oct 31, 2024
    + more versions
    Cite
    NOAA Office for Coastal Management (Point of Contact, Custodian) (2024). 2014 Mobile County, AL Lidar [Dataset]. https://catalog.data.gov/dataset/2014-mobile-county-al-lidar1
    Explore at:
    Dataset updated
    Oct 31, 2024
    Dataset provided by
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    Area covered
    Mobile County, Alabama
    Description

    Atlantic was contracted to acquire high-resolution topographic LiDAR (Light Detection and Ranging) data located in Mobile County, Alabama. The intent was to collect one (1) Area of Interest (AOI) that encompasses Mobile County. The total client-defined AOI was 1,402 square miles (3,361 square kilometers). The data were collected from January 12-22, 2014. Classifications of data available from NOAA OCM are: 1 (Unclassified), 2 (Ground), 3 (Low Vegetation), 7 (Low Noise), 8 (Model Key Points), 9 (Water), 10 (Ignored Ground, Breakline Proximity). Low vegetation points were removed as they were incorrect and not required for delivery. Digital Elevation Models (DEMs) created from this lidar data are available for download at: https://coast.noaa.gov/dataviewer/#/lidar/search/where:ID=5169 . Breaklines are available at: https://noaa-nos-coastal-lidar-pds.s3.amazonaws.com/laz/geoid18/4966/supplemental/breaklines

    Original contact information: Contact Name: Scott Kearney; Contact Org: City of Mobile; Phone: (251) 208-7942; Email: kearney@cityofmobile.org

  4. i.c.sens Visual-Inertial-LiDAR Dataset

    • service.tib.eu
    Updated Aug 19, 2020
    + more versions
    Cite
    (2020). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://service.tib.eu/ldmservice/dataset/luh-i-c-sens-visual-inertial-lidar-dataset
    Explore at:
    Dataset updated
    Aug 19, 2020
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, which is split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file. Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system. Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route was completed three times in total, which amounts to a total driving time of 15 minutes.

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz

  5. Annotated mobile laser scans of the Dutch railway environment

    • data.4tu.nl
    Cite
    Bram Ton; Strukton Rail, Annotated mobile laser scans of the Dutch railway environment [Dataset]. http://doi.org/10.4121/fa259c52-a585-420c-8a0c-af5e91518e29.v1
    Explore at:
    Dataset provided by
    4TU.ResearchData
    Authors
    Bram Ton; Strukton Rail
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    2021
    Area covered
    Netherlands
    Dataset funded by
    NWO
    Tech For Future Grant
    Description

    This dataset contains point cloud data of two stretches of railway track in the Netherlands. One stretch covers the trajectory between Deventer and Twello; the other is located around Dronten. The data have been captured using a Velodyne VLP-16 mobile laser scanner mounted on a dedicated measurement train by Strukton Rail. For both stretches, the EPSG:32631 coordinate reference system (CRS) was used for the captured point clouds, and the timestamps use Coordinated Universal Time (UTC). The provided files are LAZ files. The trajectory of the measurement train is also logged, using the ETRS89 CRS.


    The following classes are annotated in both stretches:

    • Background, label=0
    • Catenary masts, label=1
    • Tension rods, label=3
    • Signals, label=4
    • Relay cabinets, label=5


    Label values are stored in conjunction with the data using a 'Scalar Field' attribute. Each annotated object also carries a unique identifier (uid). This uid is also provided as a 'Scalar Field' attribute within the data.
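
    To access these attributes programmatically, a minimal sketch using the laspy library (the scalar-field names on disk are assumptions; inspect the extra dimensions first):

    import laspy

    las = laspy.read('deventer_twello.laz')                  # hypothetical file name
    print(list(las.point_format.extra_dimension_names))      # lists the stored scalar fields
    masts = las.points[las['label'] == 1]                    # e.g. all catenary-mast points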



    ** Deventer-Twello **

    Length: 6.5 km

    Acquisition date: 2021-06-14

    Trajectory log time zone: Central European Summer Time (CEST, UTC+02:00)


    Additional notes:

    The measurement train drove back and forth four times on the same piece of track. Since turning a train around is not easy, it did not turn at the end of the section but simply drove backwards; as a result, objects are not captured from both directions on the track. Each trip of the measurement train is referred to as a run. All four runs are merged into one file; each run can be distinguished by the `run` Scalar Field attribute.


    This dataset has partial labels for signs (label=6) and lamp posts (label=2).


    ** Dronten **

    Length: 2.9 km

    Acquisition date: 2021-11-16

    Trajectory log time zone: Central European Time (CET, UTC+01:00)


    Additional notes:

    This dataset has one additional label, the arms of catenary masts have also been labelled (label=7). The arm carries the same uid as the mast.

    The timestamps between the point cloud data and the trajectory log are not synchronised.


    ** Early access **

    The dataset is available under an embargo. If you would like to obtain the dataset before the embargo period expires, send an e-mail with a short motivation to Corne.vandekraats@strukton.com.

  6. SilviLaser 2021 Benchmark Dataset - Terrestrial Challenge

    • researchdata.tuwien.ac.at
    • researchdata.tuwien.at
    bin, csv, zip
    Updated Jun 25, 2024
    Cite
    Markus Hollaus; Markus Hollaus; Yi-Chen Chen; Yi-Chen Chen (2024). SilviLaser 2021 Benchmark Dataset - Terrestrial Challenge [Dataset]. http://doi.org/10.48436/kndye-egv02
    Explore at:
    bin, zip, csv
    Dataset updated
    Jun 25, 2024
    Dataset provided by
    TU Wien
    Authors
    Markus Hollaus; Markus Hollaus; Yi-Chen Chen; Yi-Chen Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Sep 27, 2021
    Description

    This benchmark dataset was acquired during the SilviLaser 2021 conference in Vienna. The benchmark aims to demonstrate the capabilities of different terrestrial systems for capturing 3D scenes in various forest conditions. A number of universities, institutes, and companies participated and contributed their outputs to this dataset, acquired with terrestrial laser scanning (TLS), mobile laser scanning (MLS), and terrestrial photogrammetric systems (TPS). Along with the terrestrial data, an airborne laser scanning (ALS) dataset was provided as a reference.

    Eight forest plots were installed for the terrestrial challenge. Each plot is a circular area with a 25-meter radius and differs in tree species (i.e. spruce, pine, beech, white fir), forest structure (i.e. one layer, multi-layer, natural regeneration, deadwood), and age class (~50-120 years). The 3D point clouds acquired by each participant cover the eight plots. In addition to the point clouds, traditional in-situ data (tree position, tree species, DBH) were recorded by the organization team.

    All point clouds provided by participants were processed in the following steps: co-registration with geo-referenced data, setting a uniform coordinate reference system (CRS), and removing data located outside the plot. This work was performed with OPALS, a laser scanning data processing software developed by the Photogrammetry Group of the TU Wien Department of Geodesy and Geoinformation. Please note that some point clouds are not archived due to problems encountered during pre-processing. The final products consist of a metadata file, the 3D point clouds, ALS data for reference, and the corresponding digital terrain models (DTMs) derived from the ALS data using OPALS. Point clouds are in LAZ 1.4 format, and DTMs are raster models in GeoTIFF format. Furthermore, all geo-data use the CRS WGS84 / UTM zone 33N (EPSG:32633). More information (e.g. instrument, point density, and extra attributes) can be found in the file "SL21BM_TER_metadata.csv".
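
    As a quick sanity check of the delivered rasters, a minimal sketch using rasterio (the DTM file name is hypothetical):

    import rasterio

    with rasterio.open('SL21BM_TER_plotA_dtm.tif') as dtm:   # hypothetical file name
        print(dtm.crs)             # expected: EPSG:32633 (WGS84 / UTM zone 33N)
        print(dtm.read(1).shape)   # terrain heights as a 2D array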

    This dataset is available to the community for a wide variety of scientific studies. These unique data sets will also form the basis for an international benchmark for parameter retrieval from different 3D recording methods.

    Acknowledgements

    This dataset was contributed by the universities/institutes/companies (alphabetical order):

    • Czech University of Life Sciences Prague
    • Forest Design
    • Green Valley International
    • RIEGL
    • Silva Tarouca Research Institute
    • Swiss Federal Institute for Forest, Snow and Landscape Research
    • Umweltdata GmbH
    • University of Natural Resources and Life Sciences
    • Wageningen University & Research

    Notes

    1. For details on the in-situ data, please contact Markus Hollaus.
    2. To perform a bulk download, please use this file to get the URL list.

    Changelog

    • v1.0 First release
    • v1.1 Fix the misalignment issue for plot D

  7. Paris-Lille-3D Dataset

    • paperswithcode.com
    Updated Dec 16, 2020
    + more versions
    Cite
    Xavier Roynard; Jean-Emmanuel Deschaud; François Goulette (2020). Paris-Lille-3D Dataset [Dataset]. https://paperswithcode.com/dataset/paris-lille-3d
    Explore at:
    Dataset updated
    Dec 16, 2020
    Authors
    Xavier Roynard; Jean-Emmanuel Deschaud; François Goulette
    Area covered
    Paris, Lille
    Description

    Paris-Lille-3D is a benchmark for point cloud classification. The point cloud has been labeled entirely by hand with 50 different classes. The dataset consists of around 2 km of Mobile Laser System point clouds acquired in two cities in France (Paris and Lille).

  8. Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018

    • catalog.data.gov
    • data.usgs.gov
    • +2more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018 [Dataset]. https://catalog.data.gov/dataset/lidar-derived-shoreline-for-beaver-lake-near-rogers-arkansas-2018
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Beaver Lake, Arkansas, Rogers
    Description

    Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres and approximately 449 miles of shoreline are at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data were combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table.

    Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas that are shallower than 2.5 meters deep (the practical limit of reasonable and safe data collection with the MBES).

    Two bathymetric datasets collected during the October 2018 survey include the gridded bathymetric point data (BeaverLake2018_bathy.zip) computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) was quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma delimited text files that have been compressed into zip archives.

    The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. The data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be useable.
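
    Since the gridded point data are comma-delimited text inside zip archives, they can be loaded directly; a minimal sketch with pandas (assuming each archive contains a single delimited text file, as described above):

    import pandas as pd

    # pandas reads a zip archive directly when it contains a single file
    points = pd.read_csv('BeaverLake2018_bathy.zip')
    print(points.head())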

  9. ConSLAM Dataset

    • paperswithcode.com
    Updated Aug 27, 2024
    Cite
    (2024). ConSLAM Dataset [Dataset]. https://paperswithcode.com/dataset/conslam
    Explore at:
    Dataset updated
    Aug 27, 2024
    Description

    ConSLAM is a real-world dataset collected periodically on a construction site to measure the accuracy of mobile scanners' SLAM algorithms.

    The dataset contains time-synchronized and spatially registered RGB and NIR images, 360-deg LiDAR scans, 9-axis IMU measurements, and professional ground-truth terrestrial laser scans. This dataset reflects the periodic need to scan construction sites with the aim of accurately monitoring progress using a hand-held scanner. The sensors used for data acquisition are:

    • LiDAR: Velodyne VLP-16
    • RGB camera: Alvium U-319c, 3.2 MP
    • NIR camera: Alvium 1800 U-501, 5.0 MP
    • 9-axis IMU: Xsens MTi-610

  10. 3D laser scanning and photogrammetric data sets of arolla pine trees at...

    • doi.pangaea.de
    html, tsv
    Updated Nov 29, 2023
    + more versions
    Cite
    Francesco Pirotti; Martin Rutzinger; Katharina Anders; Thomas Zieher; Bernhard Höfle; Roderik Lindenbergh; Sander Oude Elberink; Martin Mokroš; Marco Scaioni; Caroline Gaevert; Andreas Mayr; Anette Eltner; Magnus Bremer (2023). 3D laser scanning and photogrammetric data sets of arolla pine trees at Bruggboden next to Zirbenwald close to Obergurgl (Austria) acquired during the Sensing Mountains 2022 – Innsbruck Summer School [Dataset]. http://doi.org/10.1594/PANGAEA.961803
    Explore at:
    html, tsv
    Dataset updated
    Nov 29, 2023
    Dataset provided by
    PANGAEA
    Authors
    Francesco Pirotti; Martin Rutzinger; Katharina Anders; Thomas Zieher; Bernhard Höfle; Roderik Lindenbergh; Sander Oude Elberink; Martin Mokroš; Marco Scaioni; Caroline Gaevert; Andreas Mayr; Anette Eltner; Magnus Bremer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Variables measured
    File content, Binary Object
    Description

    The site is situated south of Obergurgl (Ötztal Alps, Austria) above the Zirbenwald. The local field name of the site is "Bruggboden". It contains a selected group of arolla pine trees (Pinus cembra). The data set encompasses a range of techniques and tools, including Mobile Laser Scanning (MLS) using GeoSLAM and iPhone lidar, Terrestrial Laser Scanning (TLS) with Trimble and Riegl VZ2000-i, Airborne Laser Scanning (ALS) by RICOPTER RIEGL VUX-1LR, ALS 2020, and ICESat-2, as well as photogrammetric reconstructions using iPhone video and Sony Alpha cameras. Additionally, ground truth data were collected using a Leica total station, GNSS PPK, mobile phone GNSS, and caliper measurements. This data set was acquired at the 4th edition of the Sensing Mountains 2022 - Innsbruck Summer School of Alpine Research - Close-range Sensing Techniques in Alpine Terrain.

  11. LUCOOP: Leibniz University Cooperative Perception and Urban Navigation...

    • data.uni-hannover.de
    mp4, pdf, png, zip
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset [Dataset]. https://data.uni-hannover.de/es/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe
    Explore at:
    png(25744249), png(918140), png(285246), png(445462), png(69506), png(137903), png(21345), mp4(27883878), zip(26808117), png(10545), png(87949977), pdf(643354), mp4(39029045), png(1157038), png(5957763), mp4(11636909), png(1102491)
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    A real-world multi-vehicle multi-modal V2V and V2X dataset

    Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as an LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes, respectively. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements. Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.

    Data access

    Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this website. There you will find updated files and annotations as well as update notes.

    • The dataset is available here.
    • Additional information is provided and constantly updated in our README.
    • The corresponding paper is available here.
    • Cite this as: J. Axmann et al., "LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset," 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 2023, pp. 1-8, doi: 10.1109/IV55152.2023.10186693.

    Preview

    Watch the video. Source LOD2 city model: excerpt from the geodata of the Landesamt für Geoinformation und Landesvermessung Niedersachsen, ©2023, www.lgln.de. Logo: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/541747ed-3d6e-41c4-9046-15bba3702e3b/download/lgln_logo.png

    Sensor Setup of the three measurement vehicles

    https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/d141d4f1-49b0-40e6-b8d9-e49f420e3627/download/vans_with_redgreen_cs_vehicle.png (Sensor setup of the three measurement vehicles)

    Sensor setup of all the three vehicles: Each vehicle is equipped with a LiDAR sensor (green), a UWB unit (orange), a GNSS antenna (purple), and a Microstrain IMU (red). Additionally, each vehicle has its unique feature: Vehicle 1 has an additional LiDAR at the trailer hitch (green) and a prism for the tracking of the total station (dark red hexagon). Vehicle 2 provides an iMAR iPRENA (yellow) and iMAR FSAS (blue) IMU, where the platform containing the IMUs is mounted inside the car (dashed box). Vehicle 3 carries the RIEGL MMS (pink). Along with the sensors and platforms, the right-handed body frame of each vehicle is also indicated.

    3D map point cloud

    https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/5b6b37cf-a991-4dc4-8828-ad12755203ca/download/map_point_cloud.png (3D map point cloud)

    High resolution 3D map point cloud: Different locations and details along the trajectory. Colors according to reflectance values.

    Measurement scenario

    https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/6c61d297-8544-4788-bccf-7a28ccfa702a/download/scenario_with_osm_reference.png (Measurement scenario)

    Driven trajectory and locations of the static sensors: The blue hexagons indicate the positions of the static UWB sensors, the orange star represents the location of the total station, and the orange shaded area illustrates the coverage of the total station. The route of the three measurement vehicles is shown in purple. Background map: OpenStreetMap copyright


    Number of annotations per class (final)

    https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/8b0262b9-6769-4a5d-a37e-8fcb201720ef/download/annotations.png (Number of annotations per class)


    Data structure

    https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/7358ed31-9886-4c74-bec2-6868d577a880/download/data_structure.png (Data structure)

    Data format

    https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/fc795ec2-f920-4415-aac6-6ad3be3df0a9/download/data_format.png (Data format)

    Gallery

    https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/a1974957-5ce2-456c-9f44-9d05c5a14b16/download/vans_merged.png (Measurement vehicles)

    From left to right: Van 1, van 2, van 3.

    https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/53a58500-8847-4b3c-acd4-a3ac27fc8575/download/ts_uwb_mms.png

    From left to right: Tracking of the prism on van 1 by means of the MS60 total station, the detected prism from the view point of the MS60 total station, PulsON 440 Ultra Wide Band (UWB) sensors, RIEGL VMX-250 Mobile Mapping System.

    Acknowledgement

    This measurement campaign could not have been carried out without the help of many contributors. At this point, we thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Wien), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,

  12. Indoor3Dmapping dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Mar 20, 2022
    Cite
    Armando Arturo Sánchez Alcázar; Giovanni Pintore; Giovanni Pintore; Matteo Sgrenzaroli; Armando Arturo Sánchez Alcázar; Matteo Sgrenzaroli (2022). Indoor3Dmapping dataset [Dataset]. http://doi.org/10.5281/zenodo.6367381
    Explore at:
    zip
    Dataset updated
    Mar 20, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Armando Arturo Sánchez Alcázar; Giovanni Pintore; Giovanni Pintore; Matteo Sgrenzaroli; Armando Arturo Sánchez Alcázar; Matteo Sgrenzaroli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data Organization
    Under the root directory for the whole acquisition, there is a positions.csv file and 3 subdirectories: img, dense, and sparse. The mobile mapping 3D dataset was generated by walking around an indoor space; each record corresponds to a unique pose along the trajectory of this motion. This version of the dataset contains a total of 99 unique poses, with a separation of 1 meter between adjacent poses.

    root
    ├── positions.csv
    ├── img
    │   ├── ...
    ├── dense
    │   ├── ...
    └── sparse
        └── ...

    positions.csv

    • File format: One ASCII file.
    • File structure Rows: Each image is one record.
    • File structure Columns: Comma separated headers, with exact order described below.
      • Filename, column 0: Panorama file name as on disk, without file extension.
      • Timestamps, column 1: Absolute time at which the panorama was captured, Decimal notation, without thousands separator (microseconds).
      • X,Y,Z, columns 2 through 4: Position of the panoramic camera in decimal notation, without thousands separator (meters).
      • w,x,y,z, columns 5 through 8: Rotation of the camera, quaternion.

    sparse

    • Set of equirectangular rendered depth images.
    • 1920x960 resolution
    • 16-bit grayscale PNG
    • White → 0 m
    • Black → ≥ 16 m or absent geometry
    • Occlusions: If a pixel was hit by several rays, only the value of the closest one is represented.

    dense

    • Set of equirectangular rendered depth images.
    • 1920x960 resolution
    • 16-bit grayscale PNG
    • White → 0 m
    • Black → ≥ 16 m or absent geometry
    • Occlusions: If a pixel was hit by several rays, only the value of the closest one is represented.

    img
    A set of equirectangular panoramic images taken with a 360° color camera at 1920x960 resolution. The panoramas follow the same trajectory as the depth renderings.
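
    Putting the conventions above together, a minimal sketch that reads the pose table and converts one rendered depth image to metres (NumPy/Pillow; the linear grey-to-depth mapping over the 0-16 m range and the presence of a header row are assumptions):

    import csv
    import numpy as np
    from PIL import Image

    with open('positions.csv') as f:
        reader = csv.reader(f)
        header = next(reader)   # Filename, Timestamp, X, Y, Z, w, x, y, z
        first = next(reader)    # first pose record

    # Filenames are stored without extension; depth renderings are 16-bit PNGs
    raw = np.asarray(Image.open('dense/' + first[0] + '.png'), dtype=np.float64)
    depth_m = 16.0 * (1.0 - raw / 65535.0)   # white (65535) -> 0 m, black (0) -> >= 16 m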

  13. LoD3 Road Space Models

    • catalog.savenow.de
    citygml
    Updated Dec 4, 2023
    Cite
    Lehrstuhl für Geoinformatik (2023). LoD3 Road Space Models [Dataset]. https://catalog.savenow.de/dataset/lod3-road-space-models
    Explore at:
    citygml
    Dataset updated
    Dec 4, 2023
    Dataset provided by
    Lehrstuhl für Geoinformatik
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    LoD3 (Level of Detail 3) Road Space Models is a CityGML dataset containing road space models (over 50 building models) in the area of Ingolstadt.

    There are several approaches to modeling buildings in CityGML 2.0 (e.g. see Biljecki et al.). In our case, due to the acquisition geometry of MLS point clouds, the building objects contain a very detailed representation of facade elements but may lack roof elements and entities located in the buildings' backyards. Thus, we encourage you to consult the list below for a detailed description of the buildings in our Ingolstadt LoD3 dataset:

    The building consists of:

    • Ground Surfaces
    • Roof Surfaces
    • Wall Surfaces
    • Outer Ceiling Surfaces
    • Outer Floor Surfaces
    • Closure Surfaces
    • Windows modeled in detail
    • Doors modeled in detail
    • Building Installations (Balconies, Passages, Arcades, Loggias, Stairs and Porches, (Some) Dormers)
    • Textures (approximated based on visual inspection)

    Building does NOT consist of:

    • Overhanging Building Elements
    • Roof structure details
    • Objects located in the Building's backyard (not facing the street)
    • Building Installations (Chimneys, Rain Gutters, (Some) Dormers, Real (e.g. orthophoto) textures)

    The terminology follows SIG3D.

    To ensure the highest geometric as well as semantic accuracy, the dataset was manually modeled based on mobile laser scanning (MLS) point clouds provided by the company 3D Mapping Solutions GmbH (relative accuracy in the range of 1-3 cm). Moreover, a complementary OpenDRIVE dataset is available, which includes the road network, traffic lights, fences, vegetation and so on:

    • CityGML & SketchUp
      • Download via the releases section
      • Please note that the 'Download ZIP' button doesn't include the project files due to Git LFS
    • OpenDRIVE
      • Download via the website of 3D Mapping Solutions (initial registration required)
      • Relevant OpenDRIVE dataset is named *Ingolstadt Innercity Halls* and can be found in the demo data area
      • Conversion to CityGML can be carried out using the tool r:trån
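
    For a quick look at the CityGML content, a minimal sketch using lxml (the file name is hypothetical; the namespace is the CityGML 2.0 building module):

    from lxml import etree

    ns = {'bldg': 'http://www.opengis.net/citygml/building/2.0'}
    tree = etree.parse('ingolstadt_lod3.gml')            # hypothetical file name
    print(len(tree.findall('.//bldg:Building', ns)))     # number of Building features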

    Further Information:

    https://raw.githubusercontent.com/savenow/lod3-road-space-models/main/documentation/images/lod3-models-citygml.png

    https://raw.githubusercontent.com/savenow/lod3-road-space-models/main/documentation/images/lod3-models-citygml-with-point-clouds.png

    https://raw.githubusercontent.com/savenow/lod3-road-space-models/main/documentation/images/lod3-models-citygml-overview-3d.png
  14. Data_Sheet_1_Tracking People in a Mobile Robot From 2D LIDAR Scans Using...

    • frontiersin.figshare.com
    pdf
    Updated May 31, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Ángel Manuel Guerrero-Higueras; Claudia Álvarez-Aparicio; María Carmen Calvo Olivera; Francisco J. Rodríguez-Lera; Camino Fernández-Llamas; Francisco Martín Rico; Vicente Matellán (2023). Data_Sheet_1_Tracking People in a Mobile Robot From 2D LIDAR Scans Using Full Convolutional Neural Networks for Security in Cluttered Environments.PDF [Dataset]. http://doi.org/10.3389/fnbot.2018.00085.s001
    Explore at:
    pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Ángel Manuel Guerrero-Higueras; Claudia Álvarez-Aparicio; María Carmen Calvo Olivera; Francisco J. Rodríguez-Lera; Camino Fernández-Llamas; Francisco Martín Rico; Vicente Matellán
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Tracking people has many applications, such as security or the safe use of robots. Many onboard systems are based on Laser Imaging Detection and Ranging (LIDAR) sensors. Tracking people's legs using only information from a 2D LIDAR scanner on a mobile robot is a challenging problem: many legs can be present in an indoor environment, there are frequent occlusions and self-occlusions, and many items in the environment, such as table legs or columns, can resemble legs given the limited information provided by a two-dimensional LIDAR, which is usually mounted at knee height on mobile robots. On the other hand, LIDAR sensors are affordable in terms of acquisition price and processing requirements. In this article, we describe a tool named PeTra, based on an off-line trained full Convolutional Neural Network, capable of tracking pairs of legs in a cluttered environment. We describe the characteristics of the proposed system and evaluate its accuracy using a dataset from a public repository. Results show that PeTra provides better accuracy than Leg Detector (LD), the standard solution for Robot Operating System (ROS)-based robots.

  15. Data from: MTLS_point_cloud_generation

    • dataverse.csuc.cat
    • portalrecerca.udl.cat
    pcap, text/markdown +4
    Updated Jun 6, 2025
    Cite
    Jordi Gené Mola; Jordi Gené Mola; Eduard Gregorio López; Eduard Gregorio López; Fernando Auat Cheein; Fernando Auat Cheein; Javier Guevara; Javier Guevara; Jordi Llorens Calveras; Jordi Llorens Calveras; Ricardo Sanz Cortiella; Ricardo Sanz Cortiella; Alexandre Escolà i Agustí; Alexandre Escolà i Agustí; Joan Ramon Rosell Polo; Joan Ramon Rosell Polo (2025). MTLS_point_cloud_generation [Dataset]. http://doi.org/10.34810/data2336
    Explore at:
    tsv(3400340), tsv(3400222), tsv(3398822), tsv(3397223), tsv(3402107), text/x-matlab(13035), tsv(3397375), text/markdown(4443), tsv(3400720), tsv(3397595), text/x-matlab(8055), pcap(2082118), tsv(3402035), tsv(3412182), tsv(3403138), tsv(3402150), tsv(3396257), txt(1114558), tsv(3179), tsv(3405178), tsv(3397514), tsv(169), tsv(3400970), tsv(3399901), tsv(3400078), text/plain; charset=us-ascii(1068), tsv(3396951), tsv(3403277)
    Dataset updated
    Jun 6, 2025
    Dataset provided by
    CORA.Repositori de Dades de Recerca
    Authors
    Jordi Gené Mola; Jordi Gené Mola; Eduard Gregorio López; Eduard Gregorio López; Fernando Auat Cheein; Fernando Auat Cheein; Javier Guevara; Javier Guevara; Jordi Llorens Calveras; Jordi Llorens Calveras; Ricardo Sanz Cortiella; Ricardo Sanz Cortiella; Alexandre Escolà i Agustí; Alexandre Escolà i Agustí; Joan Ramon Rosell Polo; Joan Ramon Rosell Polo
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Dataset funded by
    Agencia Estatal de Investigación
    European Union
    Description

    A Matlab implementation to generate 3D point clouds from data acquired with a mobile terrestrial laser scanner (MTLS) comprised of a Velodyne VLP-16 LiDAR sensor (Velodyne LIDAR Inc., San Jose, CA, USA) and a GPS1200+ GNSS position sensor (Leica Geosystems AG, Heerbrugg, Switzerland). This implementation was used to generate the point clouds provided in the LFuji-air dataset, which contains 3D LiDAR data of 11 Fuji apple trees with the corresponding fruit position annotations.
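
    Independent of the Matlab code, the core georeferencing step is to interpolate the GNSS trajectory at each LiDAR point's timestamp and shift the scanner-frame points into the mapping frame. A minimal NumPy sketch of that idea (rotation and boresight handling omitted; this is not the authors' implementation):

    import numpy as np

    def georeference(t_points, points_local, t_gnss, gnss_xyz):
        """Translate scanner-frame points into the mapping frame using the GNSS
        trajectory, linearly interpolated at each point's timestamp.
        points_local: (N, 3); t_gnss: (M,); gnss_xyz: (M, 3)."""
        traj = np.column_stack([np.interp(t_points, t_gnss, gnss_xyz[:, i])
                                for i in range(3)])
        return points_local + traj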

  16. Data from: Robotic Lava Tube Mapping and Multimodal Data Collection Using...

    • data.4tu.nl
    zip
    Updated May 13, 2025
    Cite
    Arwin Hidding; Henriette Bier; Luka Peternel; Alexander James Becoy; Francesco Romio; Giuseppe Calabrese (2025). Robotic Lava Tube Mapping and Multimodal Data Collection Using Quadruped and LiDAR [Dataset]. http://doi.org/10.4121/778253cc-3193-4f66-bc13-03b80380424e.v1
    Explore at:
    zipAvailable download formats
    Dataset updated
    May 13, 2025
    Dataset provided by
    4TU.ResearchData
    Authors
    Arwin Hidding; Henriette Bier; Luka Peternel; Alexander James Becoy; Francesco Romio; Giuseppe Calabrese
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Nov 18, 2024 - Nov 23, 2024
    Area covered
    Dataset funded by
    Vertico
    TU Delft Robotics Institute
    Moonshot TU Delft
    Erasmus+
    Description

    As part of the Rhizome 2.0 project—an initiative investigating the development of human habitats in Martian lava tubes—we conducted an extensive robotic mapping mission inside the Grotta di Monte Intraleo, a terrestrial lava tube in Sicily serving as an analogue site. This dataset supports the exploration of construction and habitation strategies in similarly structured Martian environments.

    The dataset includes multi-modal mapping and environmental data collected using manual and robotic scanning techniques. Specifically, the data comprises:

    • High-resolution floor reference images: Manually collected photographs of the lava tube floor were taken at a constant height and under controlled illumination to serve as visual calibration and textural references for scale and surface feature analysis.
    • 3D mesh data from mobile scanning: Phone-based 3D scans were conducted using the Scaniverse app, producing .obj files for rapid spatial documentation.
    • LiDAR scans: High-density LiDAR point clouds of the tube interior provide accurate geometric representations.
    • Robot FPV video footage: A robotic quadruped equipped with a forward-facing camera collected immersive, first-person video while navigating the cave, providing visual context and documenting terrain traversal.
    • SLAM-based navigation maps: Simultaneous Localization and Mapping (SLAM) data recorded during robotic traversal were used to generate autonomous navigation maps.
    • Ambient environmental data: Time-synchronized logs from onboard sensors recorded temperature, humidity, and light levels, contributing to environmental characterization of the site.
    • System interface recordings: Screen captures of the framework’s user interface during data acquisition sessions offer insights into the control, mapping, and visualization tools used throughout the mission.

    Data collection adhered to all local regulations, and care was taken to minimize impact on the natural environment of the cave. This dataset is intended to support reproducible research in robotic mapping, autonomous navigation, and extraterrestrial habitat design.

  17. ROS2 bag dataset for tree trunk detection and mapping using an OAK-D mounted...

    • zenodo.org
    Updated Aug 23, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Daniel Queirós da Silva; Daniel Queirós da Silva (2024). ROS2 bag dataset for tree trunk detection and mapping using an OAK-D mounted on a terrestrial robot in FEUP's garden [Dataset]. http://doi.org/10.5281/zenodo.7371422
    Explore at:
    Dataset updated
    Aug 23, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Daniel Queirós da Silva; Daniel Queirós da Silva
    Description

    A dataset acquired in FEUP's garden with an OAK-D mounted on a mobile terrestrial robot to perform tree trunk mapping.
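
    Since this is a ROS2 bag, it can also be read programmatically; a minimal sketch with rosbag2_py (the bag directory name and storage backend are assumptions):

    import rosbag2_py

    reader = rosbag2_py.SequentialReader()
    reader.open(
        rosbag2_py.StorageOptions(uri='oakd_feup_garden', storage_id='sqlite3'),  # hypothetical path
        rosbag2_py.ConverterOptions(input_serialization_format='cdr',
                                    output_serialization_format='cdr'),
    )
    while reader.has_next():
        topic, data, timestamp = reader.read_next()   # data is the serialized message
        print(topic, timestamp)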

  18. 2005 United States Army Corps of Engineers (USACE) Post-Hurricane Katrina...

    • fisheries.noaa.gov
    • datadiscoverystudio.org
    • +3more
    html
    Updated Oct 15, 2005
    + more versions
    Cite
    OCM Partners (2005). 2005 United States Army Corps of Engineers (USACE) Post-Hurricane Katrina Levee Surveys [Dataset]. https://www.fisheries.noaa.gov/inport/item/50054
    Explore at:
    html
    Dataset updated
    Oct 15, 2005
    Dataset provided by
    OCM Partners, LLC
    Area covered
    Description

    These topographic data were collected for the U.S. Army Corps of Engineers by a helicopter-mounted LiDAR sensor over the New Orleans Hurricane Protection Levee System in Louisiana.

    Original contact information: Contact Org: NOAA Office for Coastal Management; Phone: 843-740-1202; Email: coastal.info@noaa.gov

  19. Shoreline Mapping Program of WESTERN MOBILE BAY, AL, AL0904-CM-N

    • fisheries.noaa.gov
    • catalog.data.gov
    Updated Jan 1, 2020
    + more versions
    Cite
    National Geodetic Survey (2020). Shoreline Mapping Program of WESTERN MOBILE BAY, AL, AL0904-CM-N [Dataset]. https://www.fisheries.noaa.gov/inport/item/61311
    Explore at:
    pdf (Adobe Portable Document Format)
    Dataset updated
    Jan 1, 2020
    Dataset provided by
    U.S. National Geodetic Survey
    Time period covered
    Oct 7, 2012
    Area covered
    Description

    These data provide an accurate high-resolution shoreline compiled from imagery of WESTERN MOBILE BAY, AL. This vector shoreline data is based on an office interpretation of imagery that may be suitable as a geographic information system (GIS) data layer. This metadata describes information for both the line and point shapefiles. The NGS attribution scheme 'Coastal Cartographic Object Attribu...

  20. PixelHelp Dataset

    • paperswithcode.com
    • opendatalab.com
    Cite
    Yang Li; Jiacong He; Xin Zhou; Yuan Zhang; Jason Baldridge, PixelHelp Dataset [Dataset]. https://paperswithcode.com/dataset/pixelhelp
    Explore at:
    Authors
    Yang Li; Jiacong He; Xin Zhou; Yuan Zhang; Jason Baldridge
    Description

    PixelHelp includes 187 multi-step instructions across 4 task categories defined in https://support.google.com/pixelphone and annotated by humans. The dataset includes 88 general tasks (such as configuring accounts), 38 Gmail tasks, 31 Chrome tasks, and 30 Photos-related tasks. This dataset is an updated open-source version of the original PixelHelp dataset, which was used for testing the end-to-end grounding quality of the model in the paper "Mapping Natural Language Instructions to Mobile UI Action Sequences". Similar accuracy is achieved on this version of the dataset.
