15 datasets found
  1. i.c.sens Visual-Inertial-LiDAR Dataset

    • data.uni-hannover.de
    bag, jpeg, pdf, png +2
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://data.uni-hannover.de/dataset/i-c-sens-visual-inertial-lidar-dataset
    Explore at:
    txt(285), png(650007), jpeg(153522), txt(1049), jpeg(129333), rviz(6412), bag(7419679751), bag(9980268682), bag(9982003259), bag(9960305979), pdf(21788288), jpeg(556618), bag(9971699339), bag(9896857478), bag(9939783847), bag(9969171093)
    Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data from a 15-minute drive, split into eight rosbags of about 2 minutes (roughly 10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
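
    Because both the camera intrinsics and the inter-sensor extrinsics are provided, LiDAR points can, for example, be projected into the camera images for visual-LiDAR evaluation. The sketch below illustrates such a projection with placeholder calibration values; it is not the calibration shipped with the dataset.

    import numpy as np

    # Placeholder intrinsics and extrinsics -- use the calibration files provided
    # with the dataset instead of these values.
    K = np.array([[900.0,   0.0, 640.0],
                  [  0.0, 900.0, 360.0],
                  [  0.0,   0.0,   1.0]])
    T_cam_lidar = np.eye(4)                  # LiDAR frame -> camera frame
    T_cam_lidar[:3, 3] = [0.1, -0.05, 0.2]   # placeholder translation

    def project(point_lidar):
        """Return the pixel (u, v) of a 3D LiDAR point, or None if it lies behind the camera."""
        p_cam = T_cam_lidar @ np.append(point_lidar, 1.0)
        if p_cam[2] <= 0:
            return None
        uvw = K @ p_cam[:3]
        return uvw[:2] / uvw[2]

    print(project(np.array([5.0, 1.0, 0.5])))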

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg

    Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg

    Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz
    

    Afterwards, start playing a rosbag with

    rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
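
    For programmatic access, the rosbag Python API can iterate over the recorded messages directly. The sketch below is a minimal example; the bag file name and the topic names are assumptions and should be replaced with the actual names listed in the documentation (or reported by rosbag info).

    import rosbag

    # Placeholder file and topic names -- check `rosbag info` for the real ones.
    bag = rosbag.Bag("icsens-visual-inertial-lidar-dataset-1.bag")
    imu_topic = "/imu/data"            # assumed IMU topic
    lidar_topic = "/velodyne_points"   # assumed LiDAR topic

    for topic, msg, t in bag.read_messages(topics=[imu_topic, lidar_topic]):
        if topic == imu_topic:
            # sensor_msgs/Imu: angular velocity and linear acceleration
            print(t.to_sec(), msg.angular_velocity, msg.linear_acceleration)
        else:
            # sensor_msgs/PointCloud2: one LiDAR sweep
            print(t.to_sec(), "cloud with", msg.width * msg.height, "points")

    bag.close()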
    

    Below we provide some exemplary images and their corresponding point clouds.

    https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg

    Related publications:

    • R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.

    • R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.

    • R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.

    • R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.

    • R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.

    • R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.

    • R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.

    • R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.

  2. IILABS 3D: iilab Indoor LiDAR-based SLAM Dataset

    • rdm.inesctec.pt
    Updated Feb 27, 2025
    Cite
    (2025). IILABS 3D: iilab Indoor LiDAR-based SLAM Dataset [Dataset]. https://rdm.inesctec.pt/dataset/nis-2025-001
    Explore at:
    Dataset updated
    Feb 27, 2025
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The IILABS 3D dataset is a rigorously designed benchmark intended to advance research in 3D LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithms within indoor environments. It provides a robust and diverse foundation for evaluating and enhancing SLAM techniques in complex indoor settings. The dataset was recorded in the Industry and Innovation Laboratory (iiLab) and comprises synchronized data from a suite of sensors—including four distinct 3D LiDAR sensors, a 2D LiDAR, an Inertial Measurement Unit (IMU), and wheel odometry—complemented by high-precision ground truth obtained via a Motion Capture (MoCap) system.

    Project Webpage: https://jorgedfr.github.io/3d_lidar_slam_benchmark_at_iilab/
    Dataset Toolkit: https://github.com/JorgeDFR/iilabs3d-toolkit

    Data Collection Method: Sensor data was captured using the Robot Operating System (ROS) framework's rosbag record tool on a LattePanda 3 Delta embedded computer. Post-processing involved timestamp correction for the Xsens MTi-630 IMU via custom Python scripts. Ground-truth data was captured using an OptiTrack MoCap system featuring 24 high-resolution PrimeX 22 cameras. These cameras were connected via Ethernet to a primary Windows computer running the Motive software (https://optitrack.com/software/motive), which processed the camera data. This Windows computer was in turn connected via Ethernet to a secondary Ubuntu machine running the NatNet 4 ROS driver (https://github.com/L2S-lab/natnet_ros_cpp), which published the data as ROS topics that were recorded into rosbag files. Temporal synchronization between the robot platform and the ground-truth system was achieved using the Network Time Protocol (NTP). Finally, the bag files were processed with the EVO open-source Python library (https://github.com/MichaelGrupp/evo) to convert the data into TUM format and adjust the initial position offsets for accurate SLAM odometry benchmarking.

    Type of Instrument: Mobile robot platform: INESC TEC MRDT Modified Hangfa Discovery Q2 Platform (https://sousarbarb.github.io/inesctec_mrdt_hangfa_discovery_q2/); R.B. Sousa, H.M. Sobreira, J.G. Martins, P.G. Costa, M.F. Silva and A.P. Moreira, "Integrating Multimodal Perception into Ground Mobile Robots," 2025 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC 2025), Madeira, Portugal, 2025, pp. TBD, doi: TBD [manuscript accepted for publication].

    Sensor data: Livox Mid-360, Ouster OS1-64 RevC, RoboSense RS-HELIOS-5515, and Velodyne VLP-16 (3D LiDARs); Hokuyo UST-10LX-H01 (2D LiDAR); Xsens MTi-630 (IMU); and Faulhaber 2342 wheel encoders (64:1 gear ratio, 12 Counts Per Revolution (CPR)).

    Ground truth data: OptiTrack Motion Capture System with 24 PrimeX 22 cameras installed in Room A, Floor 0 at iiLab.
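
    Since the ground truth is distributed in TUM format (timestamp tx ty tz qx qy qz qw), a SLAM estimate exported in the same format can be benchmarked directly. The EVO toolkit referenced above handles association, alignment, and metrics; the sketch below is only a minimal, dependency-light illustration of the translational ATE computation, with placeholder file names and without trajectory alignment.

    import numpy as np

    def load_tum(path):
        """Load a TUM-format trajectory: one 'timestamp tx ty tz qx qy qz qw' row per pose."""
        data = np.loadtxt(path)
        return data[:, 0], data[:, 1:4]   # timestamps, xyz positions

    t_gt, p_gt = load_tum("ground_truth.tum")     # placeholder file name
    t_est, p_est = load_tum("slam_estimate.tum")  # placeholder file name

    # Associate each estimated pose with the nearest ground-truth timestamp.
    idx = np.searchsorted(t_gt, t_est).clip(1, len(t_gt) - 1)
    idx = np.where(np.abs(t_gt[idx] - t_est) < np.abs(t_gt[idx - 1] - t_est), idx, idx - 1)

    # Translational Absolute Trajectory Error (RMSE).
    errors = np.linalg.norm(p_est - p_gt[idx], axis=1)
    print("ATE RMSE [m]:", np.sqrt(np.mean(errors ** 2)))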

  3. SnowXinghu - Lidar inertial data under heavy snow by a handheld device about...

    • zenodo.org
    zip
    Updated Apr 9, 2025
    Cite
    Jianzhu Huai; Jianzhu Huai; Binliang Wang; Binliang Wang (2025). SnowXinghu - Lidar inertial data under heavy snow by a handheld device about Xinghu Building [Dataset]. http://doi.org/10.5281/zenodo.15181513
    Explore at:
    zip
    Available download formats
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jianzhu Huai; Jianzhu Huai; Binliang Wang; Binliang Wang
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 4, 2024
    Description

    The data were collected by Binliang Wang and Jianzhu Huai on a snowy night (Feb 3, 2024) and a snowy morning (Feb 4, 2024) around the Xinghu Building and the basketball court in front of the Xinghu gym at Wuhan University, using the handheld LiDAR/IMU rig of Yu Tengfei.
    The rig consists of a Hesai Pandar XT32 LiDAR, an Xsens MTi-680 IMU, and an AGX Orin 32 GB computer.
    Unfortunately, there is no hardware synchronization between the XT32 and the MTi-680.
    The good news is that the MTi-680 records both host time and sensor time in /imu/time_ref, while the Hesai Pandar XT32 records host timestamps.

    We provide four sequences, but two of them have some missing data. The other two sequences were captured under light snow and heavy snow, respectively.

    These data show the typical behavior of LiDAR in snow: the point clouds are considerably noisier. A robust LIO algorithm can nonetheless process them without significant drift.
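
    Because the two clocks are not hardware-synchronized, a constant offset between the IMU's sensor clock and the host clock can be estimated from /imu/time_ref before feeding the data to a LIO pipeline. The sketch below is a minimal example under the assumption that /imu/time_ref is a standard sensor_msgs/TimeReference topic in a ROS1 bag; the bag file name is a placeholder.

    import rosbag
    import numpy as np

    # Placeholder bag name; replace with one of the provided sequences.
    bag = rosbag.Bag("snowxinghu_heavy_snow.bag")

    offsets = []
    for _, msg, _ in bag.read_messages(topics=["/imu/time_ref"]):
        host_time = msg.header.stamp.to_sec()   # host clock at reception
        sensor_time = msg.time_ref.to_sec()     # MTi-680 sensor clock
        offsets.append(host_time - sensor_time)
    bag.close()

    # The median is robust to occasional transport delays. Adding this offset to
    # sensor-clock timestamps maps them onto the host clock used by the XT32.
    print("host - sensor clock offset [s]:", np.median(offsets))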

  4. Simultaneous Localization and Mapping (SLAM) Market By Mapping Type (2D...

    • verifiedmarketresearch.com
    Updated Mar 20, 2024
    Cite
    VERIFIED MARKET RESEARCH (2024). Simultaneous Localization and Mapping (SLAM) Market By Mapping Type (2D SLAM, 3D SLAM), Product (Filter-Based SLAM, Graph-Based SLAM, Visual SLAM, Deep Learning Based SLAM, LiDAR SLAM), Application (UAVs, Robots, AVs, AR), End-User (Manufacturing & Logistics, Agriculture, Consumer Electronics, Construction), & Region 2024-2031 [Dataset]. https://www.verifiedmarketresearch.com/product/simultaneous-localization-and-mapping-slam-market/
    Explore at:
    Dataset updated
    Mar 20, 2024
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2024 - 2031
    Area covered
    Global
    Description

    Simultaneous Localization And Mapping (SLAM) Market size was valued at USD 262 Million in 2023 and is projected to reach USD 1.8 Billion by 2031, growing at a CAGR of 41.6% from 2024 to 2031.

    The Global SLAM market is driven by several key factors. One significant factor is the escalating demand for autonomous mobile robots and vehicles across diverse industries. These robots and vehicles rely on SLAM technology to navigate and map their surroundings accurately without human intervention.

    As industries such as manufacturing, logistics, and agriculture continue to automate their operations, the demand for robust SLAM solutions continues to grow. Another driver is the escalating popularity of augmented reality (AR) and virtual reality (VR) applications: SLAM technology plays a crucial role in enabling immersive AR experiences by accurately tracking the user's position and surroundings in real time.

    In virtual reality applications, SLAM facilitates the creation of authentic virtual environments by mapping physical spaces and seamlessly integrating digital content. The increasing use cases for AR and VR in gaming, entertainment, education, and enterprise applications are driving demand for advanced SLAM solutions.

    Furthermore, advances in sensor technology, particularly in the fields of LiDAR, camera systems, and inertial sensors, have greatly improved the accuracy and reliability of SLAM algorithms. These technological advances have led to the development of more robust and efficient SLAM systems that are capable of operating in diverse environments and under challenging conditions. Consequently, various industries, such as robotics, automotive, and consumer electronics, are increasingly incorporating SLAM technology into their products and services to enhance performance and functionality.

  5. Table_2_Autonomous Navigation System of Greenhouse Mobile Robot Based on 3D...

    • frontiersin.figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Saike Jiang; Shilin Wang; Zhongyi Yi; Meina Zhang; Xiaolan Lv (2023). Table_2_Autonomous Navigation System of Greenhouse Mobile Robot Based on 3D Lidar and 2D Lidar SLAM.XLSX [Dataset]. http://doi.org/10.3389/fpls.2022.815218.s002
    Explore at:
    xlsx
    Available download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Saike Jiang; Shilin Wang; Zhongyi Yi; Meina Zhang; Xiaolan Lv
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The application of mobile robots is an important link in the development of intelligent greenhouses. In view of the complex environment of a greenhouse, achieving precise positioning and navigation has become the primary problem for such robots to solve. Simultaneous localization and mapping (SLAM) technology has become a hot spot for positioning and navigation in unknown indoor environments in recent years. A SLAM system based on a two-dimensional (2D) Lidar can only collect environmental information at the height of the Lidar, while SLAM based on a 3D Lidar demands a high computational cost and thus places higher requirements on the industrial computer. In this study, the robot navigation control system first filtered the 3D greenhouse environment information collected by a 3D Lidar and fused it into 2D information; then, using the robot's odometry and inertial measurement unit data, the system achieved real-time positioning and mapping of the greenhouse environment with the Cartographer 2D Lidar SLAM algorithm. This method not only ensures the accuracy of the greenhouse environment map but also reduces the performance requirements on the industrial computer. In terms of path planning, the Dijkstra algorithm was used to plan the global navigation path of the robot while the Dynamic Window Approach (DWA) algorithm was used to plan the local navigation path. Through the positioning test, the average position deviation of the robot from the target positioning point is less than 8 cm with a standard deviation (SD) of less than 3 cm; the average course deviation is less than 3° with an SD of less than 1° at a moving speed of 0.4 m/s. With the robot moving at speeds of 0.2, 0.4, and 0.6 m/s, respectively, the average lateral deviation between the actual movement path and the target movement path is less than 10 cm, with an SD of less than 6 cm; the average course deviation is
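
    The 3D-to-2D reduction described above can be approximated by keeping points within a height band of interest and collapsing them into a planar range scan. The sketch below is an illustrative approximation of such a step, not the authors' implementation; all parameter values are placeholders.

    import numpy as np

    def cloud_to_scan(points, z_min=0.1, z_max=1.5, angle_res=np.deg2rad(0.5), max_range=30.0):
        """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
        # 1. Height filter: drop ground returns and overhead structure (e.g. greenhouse roof).
        band = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]

        # 2. Project to the horizontal plane and bin points by bearing angle.
        ranges_xy = np.hypot(band[:, 0], band[:, 1])
        angles = np.arctan2(band[:, 1], band[:, 0])
        n_bins = int(np.ceil(2 * np.pi / angle_res))
        bins = ((angles + np.pi) / angle_res).astype(int) % n_bins

        # 3. Keep the closest return per bearing, as a 2D laser scan would.
        scan = np.full(n_bins, max_range)
        np.minimum.at(scan, bins, ranges_xy)
        return scan

    # Example with a synthetic cloud of 1000 random points.
    print(cloud_to_scan(np.random.uniform(-10.0, 10.0, size=(1000, 3))).shape)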

  6. Dataset of A Multi-Drone System Proof of Concept for Forestry Applications

    • zenodo.org
    bin
    Updated Jan 21, 2025
    Cite
    André Araújo; André Araújo; Carlos Alexandre Pontes Pizzino; Carlos Alexandre Pontes Pizzino; Micael Couceiro; Micael Couceiro; Rui P. Rocha; Rui P. Rocha (2025). Dataset of A Multi-Drone System Proof of Concept for Forestry Applications [Dataset]. http://doi.org/10.5281/zenodo.14701641
    Explore at:
    bin
    Available download formats
    Dataset updated
    Jan 21, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    André Araújo; André Araújo; Carlos Alexandre Pontes Pizzino; Carlos Alexandre Pontes Pizzino; Micael Couceiro; Micael Couceiro; Rui P. Rocha; Rui P. Rocha
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset presented in this study originates from a multi-drone proof of concept designed for efficient forest mapping and autonomous operation, conducted within the framework of the OPENSWARM EU Project. It comprises data collected from field experiments where multiple drones collaboratively navigated and mapped a forest environment. The dataset includes sensor data from state-of-the-art open-source SLAM frameworks, such as LiDAR-Inertial Odometry via Smoothing and Mapping (LIO-SAM) and Distributed Collaborative LiDAR SLAM (DCL-SLAM). These frameworks were integrated within the MRS UAV System and Swarm Formation packages, utilizing Robot Operating System (ROS) middleware. The recorded data consists of LiDAR point clouds, IMU readings, GPS trajectories, and system logs, capturing the drones' performance in complex, real-world forestry conditions. Additionally, flight control parameters optimized using an auto-tuning particle swarm optimization method are included to support reproducibility and further analysis. This dataset aims to facilitate research in autonomous multi-drone systems for forestry applications, offering valuable insights for scalable and sustainable forest management solutions.

  7. SLAM Scanner Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated May 31, 2025
    Cite
    Archive Market Research (2025). SLAM Scanner Report [Dataset]. https://www.archivemarketresearch.com/reports/slam-scanner-453681
    Explore at:
    doc, ppt, pdf
    Available download formats
    Dataset updated
    May 31, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global SLAM (Simultaneous Localization and Mapping) scanner market is experiencing robust growth, driven by increasing adoption across diverse sectors like robotics, autonomous vehicles, and 3D mapping. The market size in 2025 is estimated at $500 million, exhibiting a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033. This growth trajectory is fueled by several key factors. The rising demand for precise and efficient 3D spatial data acquisition is a major catalyst, as SLAM scanners offer a cost-effective and versatile solution compared to traditional surveying methods. Furthermore, advancements in sensor technology, particularly in LiDAR and visual-inertial odometry (VIO), are enhancing the accuracy and reliability of SLAM scanners, expanding their applicability in various applications. The integration of SLAM technology with artificial intelligence (AI) and machine learning (ML) is further accelerating market expansion, enabling sophisticated functionalities like object recognition and autonomous navigation. Despite the considerable growth potential, the market faces certain challenges. High initial investment costs associated with procuring advanced SLAM scanners can be a barrier for small and medium-sized enterprises. Furthermore, technical complexities in data processing and the requirement for skilled personnel can hinder broader adoption. However, ongoing technological advancements are addressing these issues, making SLAM technology more accessible and user-friendly. The market segmentation reveals a diverse landscape, with players like Leica, Stonex, and Autonics leading the way in providing high-quality solutions. The emergence of innovative startups and the expanding geographical reach are contributing to market dynamism and creating opportunities for both established players and new entrants. The forecast period of 2025-2033 suggests a continuously expanding market, with the potential for significant technological disruptions and innovative applications shaping the future of SLAM scanning.

  8. Newer College Dataset

    • paperswithcode.com
    + more versions
    Cite
    Newer College Dataset [Dataset]. https://paperswithcode.com/dataset/newer-college
    Explore at:
    Description

    The Newer College Dataset is a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, the authors used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing ∼290 million points).

    Using the map the authors inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. The dataset combines both built environments, open spaces and vegetated areas so as to test localization and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LIDAR reconstruction and appearance-based place recognition.

  9. Data from: DiTer: Diverse Terrain and Multi-Modal Dataset for Field Robot...

    • academictorrents.com
    bittorrent
    Updated Dec 25, 2024
    Cite
    Jeong, Seokhwan and Kim, Hogyun and Cho, Younggun (2024). DiTer: Diverse Terrain and Multi-Modal Dataset for Field Robot Navigation in Outdoor Environments [Dataset]. https://academictorrents.com/details/c604afddbe24fff0b873acbed370ff89df0c1614
    Explore at:
    bittorrent(755092256494)
    Available download formats
    Dataset updated
    Dec 25, 2024
    Dataset authored and provided by
    Jeong, Seokhwan and Kim, Hogyun and Cho, Younggun
    License

    https://academictorrents.com/nolicensespecified

    Description

    Field robots require autonomy in diverse environments to navigate and map their surroundings efficiently. However, the lack of diverse and comprehensive datasets hinders the evaluation and development of autonomous field robots. To address this challenge, we present a multimodal, multisession, and diverse terrain dataset for the ground mapping of field robots. First of all, we utilize a quadrupedal robot as the base platform to collect the dataset. Also, the dataset includes various terrain types, such as sandy roads, vegetation, and sloping terrain. It comprises an RGB-D camera for the ground, an RGB camera, a thermal camera, light detection and ranging (LiDAR), an inertial measurement unit (IMU), and a global positioning system (GPS). In addition, we provide not only the reference trajectories of each dataset but also the global map by leveraging LiDAR-based simultaneous localization and mapping (SLAM) algorithms. Also, we assess our dataset from a terrain perspective and generate the fusion maps, suc

  10. Data from: Ford Campus Vision and Lidar Data Set Dataset

    • paperswithcode.com
    Updated Mar 6, 2022
    + more versions
    Cite
    (2022). Ford Campus Vision and Lidar Data Set Dataset [Dataset]. https://paperswithcode.com/dataset/ford-campus-vision-and-lidar-data-set
    Explore at:
    Dataset updated
    Mar 6, 2022
    Description

    Ford Campus Vision and Lidar Data Set is a dataset collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system.

    This dataset consists of the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle path trajectory in these datasets contains several large and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and SLAM (Simultaneous Localization and Mapping) algorithms.

    Paper: Ford Campus vision and lidar data set

  11. SLAM Robotics Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 5, 2025
    Cite
    Data Insights Market (2025). SLAM Robotics Report [Dataset]. https://www.datainsightsmarket.com/reports/slam-robotics-667024
    Explore at:
    ppt, pdf, doc
    Available download formats
    Dataset updated
    Jun 5, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The SLAM (Simultaneous Localization and Mapping) robotics market is experiencing robust growth, projected to reach a substantial size, driven by increasing adoption across diverse sectors. The market's Compound Annual Growth Rate (CAGR) of 13.7% from 2019 to 2024 indicates a significant upward trajectory. This expansion is fueled by several key factors. Firstly, the demand for automation in logistics and warehousing is a major catalyst. Companies like Amazon Robotics, Locus Robotics, and Fetch Robotics are leading the charge, deploying SLAM-enabled robots for tasks like order fulfillment and inventory management. Secondly, the growing need for autonomous mobile robots (AMRs) in manufacturing and industrial settings is further boosting market growth. The precision and adaptability offered by SLAM technology are invaluable for navigating complex and dynamic environments, enhancing operational efficiency and safety. Finally, advancements in sensor technology, particularly LiDAR and visual-inertial odometry (VIO), are driving down costs and improving the accuracy and reliability of SLAM-based robots, making them increasingly accessible to a wider range of businesses. Looking ahead to 2033, the market is poised for continued expansion. The ongoing development of more sophisticated algorithms and the integration of artificial intelligence (AI) capabilities are expected to further enhance the functionalities of SLAM robots. This will open up new application areas, including healthcare (surgical robots, delivery robots), agriculture (autonomous tractors and harvesters), and even domestic settings (robotic vacuum cleaners and assistants). While challenges such as regulatory hurdles and the need for robust cybersecurity measures remain, the overall market outlook for SLAM robotics is exceptionally positive, driven by the confluence of technological advancements and rising demand for automation across various industries. The prominent players listed – KUKA, Omron Adept, Clearpath Robotics, and others – are well-positioned to capitalize on this growth, driving innovation and competition within the sector.

  12. Data from: VBR Dataset

    • paperswithcode.com
    Updated May 8, 2024
    Cite
    Leonardo Brizi; Emanuele Giacomini; Luca Di Giammarino; Simone Ferrari; Omar Salem; Lorenzo De Rebotti; Giorgio Grisetti (2024). VBR Dataset [Dataset]. https://paperswithcode.com/dataset/vbr
    Explore at:
    Dataset updated
    May 8, 2024
    Authors
    Leonardo Brizi; Emanuele Giacomini; Luca Di Giammarino; Simone Ferrari; Omar Salem; Lorenzo De Rebotti; Giorgio Grisetti
    Description

    This is a vision and perception research dataset collected in Rome, featuring RGB data, 3D point clouds, IMU, and GPS data. We introduce a new benchmark targeting visual odometry and SLAM, to advance the research in autonomous robotics and computer vision. This work complements existing datasets by simultaneously addressing several issues, such as environment diversity, motion patterns, and sensor frequency. It uses up-to-date devices and presents effective procedures to accurately calibrate the intrinsics and extrinsics of the sensors while addressing temporal synchronization. During recording, we cover multi-floor buildings, gardens, urban and highway scenarios. Combining handheld and car-based data collections, our setup can simulate any robot (quadrupeds, quadrotors, autonomous vehicles). The dataset includes an accurate 6-DoF ground truth based on a novel methodology that refines the RTK-GPS estimate with LiDAR point clouds through Bundle Adjustment. All sequences, divided into training and testing sets, are accessible through our website.

  13. Inertial Navigation Intelligent Sweeping Robot Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 22, 2025
    + more versions
    Cite
    Data Insights Market (2025). Inertial Navigation Intelligent Sweeping Robot Report [Dataset]. https://www.datainsightsmarket.com/reports/inertial-navigation-intelligent-sweeping-robot-1321771
    Explore at:
    doc, ppt, pdf
    Available download formats
    Dataset updated
    Jun 22, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global market for inertial navigation intelligent sweeping robots is experiencing robust growth, driven by increasing consumer demand for automated home cleaning solutions and advancements in robotic navigation technology. The market's expansion is fueled by several key factors: rising disposable incomes in developed and developing economies, a growing preference for convenient and time-saving appliances, and the integration of sophisticated features like smart home compatibility and improved mapping capabilities. The increasing adoption of these robots in both residential and commercial settings is further accelerating market expansion. While the initial investment cost remains a potential barrier for some consumers, the long-term benefits in terms of time savings and improved cleaning efficiency are proving persuasive. Competition among established players like iRobot, Ecovacs, and Xiaomi, along with emerging innovative companies, is driving innovation and affordability, making these robots accessible to a wider range of consumers. We project a continued, albeit slightly moderated, growth rate over the next decade, with the market benefiting from ongoing technological improvements and expanding market penetration. Technological advancements in sensor technology, particularly improved inertial navigation systems coupled with other sensors like LiDAR and visual SLAM, are enhancing the accuracy and efficiency of these robots. The integration of artificial intelligence (AI) and machine learning (ML) algorithms is further refining their cleaning capabilities, allowing for more intelligent navigation and obstacle avoidance. This improved performance is directly impacting consumer satisfaction and driving increased adoption rates. However, challenges remain, including the development of more robust and energy-efficient batteries and addressing concerns related to data privacy and security. The market segmentation reveals a strong preference for high-end models offering advanced features, creating opportunities for premium-priced products. Regional variations exist, with developed markets showing higher adoption rates initially, followed by rapid growth in emerging economies as affordability improves.

  14. Hilti-Oxford Dataset Dataset

    • paperswithcode.com
    Updated Mar 1, 2022
    Cite
    (2022). Hilti-Oxford Dataset Dataset [Dataset]. https://paperswithcode.com/dataset/hilti-oxford-dataset
    Explore at:
    Dataset updated
    Mar 1, 2022
    Description

    This benchmark is based on the HILTI-OXFORD Dataset, which has been collected on construction sites as well as on the famous Sheldonian Theatre in Oxford, providing a large range of difficult problems for SLAM.

    All these sequences are characterized by featureless areas and varying illumination conditions that are typical in real-world scenarios and pose great challenges to SLAM algorithms that have been developed in confined lab environments. Accurate ground truth, at millimeter level, is provided for each sequence. The sensor platform used to record the data includes a number of visual, lidar, and inertial sensors, which are spatially and temporally calibrated.

  15. Ford Campus Vision and Laser dataset

    • academictorrents.com
    bittorrent
    Updated Nov 18, 2015
    Cite
    Gaurav Pandey and James R. McBride and Ryan M. Eustice (2015). Ford Campus Vision and Laser dataset [Dataset]. https://academictorrents.com/details/9aeefe49b754722eb5c051e77bacc5d75eca3ef2
    Explore at:
    bittorrent(216009646228)
    Available download formats
    Dataset updated
    Nov 18, 2015
    Dataset authored and provided by
    Gaurav Pandey and James R. McBride and Ryan M. Eustice
    License

    https://academictorrents.com/nolicensespecified

    Description

    We provide a dataset collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with a professional (Applanix POS LV) and consumer (Xsens MTI-G) Inertial Measuring Unit (IMU), a Velodyne 3D-lidar scanner, two push-broom forward looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research campus and downtown Dearborn, Michigan during November-December 2009. The vehicle path trajectory in these datasets contains several large and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and SLAM (Simultaneous Localization and Mapping) algorithms. The size of the dataset is huge (~100 GB), so make sure that you have sufficient bandwidth before downloading the dataset. Once downloaded you can unzip the dataset.tgz file to ge
