44 datasets found
  1. Parking lot locations and utilization samples in the Hannover Linden-Nord...

    • data.uni-hannover.de
    geojson, png
    Updated Apr 17, 2024
    Cite
    Institut für Kartographie und Geoinformatik (2024). Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys [Dataset]. https://data.uni-hannover.de/dataset/parking-locations-and-utilization-from-lidar-mobile-mapping-surveys
    Explore at:
    png(1288581), geojson(1348252), geojson(4361255), png(10065), geojson(233948), png(445868), png(1370680). Available download formats
    Dataset updated
    Apr 17, 2024
    Dataset authored and provided by
    Institut für Kartographie und Geoinformatik
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
    Hanover, Linden - Nord
    Description

    Work in progress: data might be changed

    The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.

    Vehicle Detections

    Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.

    The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.

    Figure 1: Overview map of detected vehicles (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/807618b6-5c38-4456-88a1-cb47500081ff/download/detection_map.png)

    Parking Areas

    The public parking areas were digitized manually using aerial images and the detected vehicles in order to exclude irregular parking spaces as far as possible. They were also tagged as to whether they were aligned parallel to the road and assigned to a use at the time of recording, as some are used for construction sites or outdoor catering, for example. Depending on the intended use, they can be filtered individually.

    Figure 2: Visualization of example parking areas on top of an aerial image [by LGLN] (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/16b14c61-d1d6-4eda-891d-176bdd787bf5/download/parking_area_example.png)

    Parking Occupancy

    For modelling the parking occupancy, single slots are sampled as center points every 5 m along the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023); custom representations can also be generated from the parking areas and vehicle detections. These parking points were intersected with the vehicle boxes to identify occupancy at the respective epochs.
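
    As a quick illustration of this sampling-and-intersection step, the following is a minimal sketch using GeoPandas/Shapely. It is not the project's own processing code; the file names, the use of the area outlines as sampling lines, and the metric CRS (EPSG:25832) are assumptions.

    import geopandas as gpd
    import numpy as np

    # Load the published GeoJSON layers (names assumed) and switch to a metric CRS
    parking = gpd.read_file("parking_areas.geojson").to_crs(epsg=25832)
    vehicles = gpd.read_file("vehicle_detections.geojson").to_crs(epsg=25832)

    # Sample one slot center point every 5 m along each parking area outline
    slots = []
    for area in parking.geometry:
        outline = area.exterior  # assumes simple Polygon geometries
        slots.extend(outline.interpolate(d) for d in np.arange(2.5, outline.length, 5.0))
    slots_gdf = gpd.GeoDataFrame(geometry=slots, crs=parking.crs)

    # A slot counts as occupied at this epoch if it falls inside any vehicle box
    slots_gdf["occupied"] = [vehicles.intersects(pt).any() for pt in slots_gdf.geometry]
    print(slots_gdf["occupied"].mean())  # share of sampled slots covered by a vehicle box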

    Figure 3: Overview map of average parking slot load (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/ca0b97c8-2542-479e-83d7-74adb2fc47c0/download/datenpub-bays.png)

    However, unoccupied spaces cannot be determined quite as trivially the other way around, since the absence of a detected vehicle may simply result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in its neighborhood on the same parking lane, so that a measurement can be assumed.

    To close temporal gaps, hourly interpolations were made for each parking slot, assuming that a space observed as occupied at two consecutive observations was also occupied in between (and, likewise, free in between if it was free at both observations). If the occupancy changed between observations, this is indicated by a proportional value. To close spatial gaps, unobserved spaces in the area are assigned occupation patterns drawn randomly from the ten closest observed patterns.

    This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.
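
    Read as a rule, the gap filling described above can be sketched in a few lines of Python (this is our own paraphrase of the text, not code shipped with the dataset):

    def fill_hours(t0, occ0, t1, occ1):
        """Occupancy per full hour between two observations at times t0 < t1 (hours, occ in {0, 1})."""
        hours = range(int(t0) + 1, int(t1) + 1)
        if occ0 == occ1:
            # occupied at both observations -> occupied in between; likewise for free
            return {h: float(occ0) for h in hours}
        # occupancy changed -> proportional value depending on position within the gap
        return {h: occ0 + (occ1 - occ0) * (h - t0) / (t1 - t0) for h in hours}

    print(fill_hours(9.5, 1, 13.5, 0))  # e.g. observed occupied at 09:30 and free at 13:30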

    Figure 4: Example parking area occupation pattern (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/184a1f75-79ab-4d0e-bb1b-8ed170678280/download/occupation_example.png)

    References

    • F. Bock, D. Eggert and M. Sester (2015): On-street Parking Statistics Using LiDAR Mobile Mapping, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 2015, pp. 2812-2818. https://doi.org/10.1109/ITSC.2015.452
    • A. Leichter, U. Feuerhake, and M. Sester (2021): Determination of Parking Space and its Concurrent Usage Over Time Using Semantically Segmented Mobile Mapping Data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 185–192. https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-185-2021
    • O. Wage, M. Heumann, and L. Bienzeisler (2023): Modeling and Calibration of Last-Mile Logistics to Study Smart-City Dynamic Space Management Scenarios. In 1st ACM SIGSPATIAL International Workshop on Sustainable Mobility (SuMob ’23), November 13, 2023, Hamburg, Germany. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3615899.3627930
  2. Camera-LiDAR Datasets

    • figshare.com
    zip
    Updated Aug 14, 2024
    Cite
    Jennifer Leahy (2024). Camera-LiDAR Datasets [Dataset]. http://doi.org/10.6084/m9.figshare.26660863.v1
    Explore at:
    zip. Available download formats
    Dataset updated
    Aug 14, 2024
    Dataset provided by
    figshare
    Authors
    Jennifer Leahy
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

    Survey-Grade Terrestrial Dataset:
    • Collection Details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
    • Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

    Mobile Mapping System Dataset:
    • Collection Details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
    • Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.

  3. i.c.sens Visual-Inertial-LiDAR Dataset

    • data.uni-hannover.de
    bag, jpeg, pdf, png +2
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://data.uni-hannover.de/dataset/i-c-sens-visual-inertial-lidar-dataset
    Explore at:
    txt(285), png(650007), jpeg(153522), txt(1049), jpeg(129333), rviz(6412), bag(7419679751), bag(9980268682), bag(9982003259), bag(9960305979), pdf(21788288), jpeg(556618), bag(9971699339), bag(9896857478), bag(9939783847), bag(9969171093). Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15 minutes drive, which is split into 8 rosbags of 2 minutes (10 GB) each. Besides, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.

    Image: sensor platform (https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg)

    Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.

    Image: measurement vehicle (https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg)

    Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.

    Image: overview of the driven route in Google Earth (https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png)

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz
    

    Afterwards, start playing a rosbag with

    rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
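
    The raw messages can also be read programmatically with the rosbag Python API (ROS 1). A minimal sketch, assuming a sourced ROS environment; the topic name below is a placeholder and should be replaced by one of the topics reported by rosbag info:

    import rosbag

    bag = rosbag.Bag("icsens-visual-inertial-lidar-dataset-1.bag")  # {number} as above
    print(list(bag.get_type_and_topic_info().topics.keys()))       # recorded topics
    for topic, msg, t in bag.read_messages(topics=["/imu/data"]):   # placeholder topic
        print(topic, t.to_sec())
        break
    bag.close()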
    

    Below we provide some exemplary images and their corresponding point clouds.

    Image: exemplary camera images and their corresponding point clouds (https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg)

    Related publications:

    • R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.

    • R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.

    • R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.

    • R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.

    • R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.

    • R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.

    • R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.

    • R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.

  4. Mobile mapping system (MMS2) for detecting roadkills. - Dataset - CKAN

    • pre.iepnb.es
    Updated May 23, 2025
    Cite
    (2025). Mobile mapping system (MMS2) for detecting roadkills. - Dataset - CKAN [Dataset]. https://pre.iepnb.es/catalogo/dataset/mobile-mapping-system-mms2-for-detecting-roadkills1
    Explore at:
    Dataset updated
    May 23, 2025
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Roads negatively affect wildlife, from direct mortality to habitat fragmentation. Mortality caused by collisions with vehicles on roads is a major threat to many species. Monitoring animal road-kills is essential to establish appropriate road mitigation measures, and many countries have national monitoring systems for identifying mortality hotspots. We present here an improved version of the mobile mapping system (MMS2) for detecting road-kills, not only of amphibians but of small birds as well. It is composed of two stereo multi-spectral, high-definition cameras (ZED), a high-power processing laptop, a GPS device connected to the laptop, and a small support device attachable to the back of any vehicle. The system is controlled by several applications that manage all the video recording steps as well as the GPS acquisition, merging everything into a single final file ready to be examined later by an algorithm. We used a state-of-the-art machine learning computer vision algorithm (CNN: Convolutional Neural Network) to automatically detect animals on roads. This self-learning algorithm needs a large number of images of live animals, road-killed animals, and any objects likely to be found on roads (e.g. garbage thrown away by drivers) in order to be trained. The greater the image database, the greater the detection efficiency. This improved version of the mobile mapping system presents very good results: the algorithm is effective in detecting small birds and amphibians.
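
    The description above implies that each video frame has to be matched to a GPS fix before detections can be mapped. A minimal sketch of one plausible way to do this, interpolating the GPS track at the frame timestamps (the file name, column layout, frame rate, and the assumption that video and GPS logging start together are all hypothetical, not taken from the dataset):

    import numpy as np

    # GPS log as three columns: UNIX timestamp, latitude, longitude (assumed layout)
    gps_t, gps_lat, gps_lon = np.loadtxt("gps_track.csv", delimiter=",", unpack=True)

    fps = 30.0                                    # assumed video frame rate
    frame_t = gps_t[0] + np.arange(0, 600) / fps  # timestamps of the first 600 frames

    frame_lat = np.interp(frame_t, gps_t, gps_lat)  # vehicle position at each frame
    frame_lon = np.interp(frame_t, gps_t, gps_lon)
    print(frame_lat[0], frame_lon[0])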

  5. Data from: Developing a SLAM-based backpack mobile mapping system for indoor...

    • phys-techsciences.datastations.nl
    bin, exe, zip
    Updated Feb 22, 2022
    Cite
    S. Karam; S. Karam (2022). Developing a SLAM-based backpack mobile mapping system for indoor mapping [Dataset]. http://doi.org/10.17026/DANS-XME-KEPM
    Explore at:
    bin(11456605), zip(21733), exe(17469035), exe(18190303), exe(447), bin(20142672), bin(62579), exe(17513963), bin(45862), exe(17284627), bin(6856377), bin(9279586), exe(17548337), exe(199), exe(17969103), bin(235037), exe(18250973), bin(192189), bin(14741220), bin(3471971), bin(127397), bin(338998), exe(23702808). Available download formats
    Dataset updated
    Feb 22, 2022
    Dataset provided by
    DANS Data Station Physical and Technical Sciences
    Authors
    S. Karam; S. Karam
    License

    https://doi.org/10.17026/fp39-0x58

    Description

    These files support the published journal article and thesis on IMU- and LiDAR-based SLAM for indoor mapping. They include the datasets and functions used for point cloud generation. Date Submitted: 2022-02-21

  6. Newer College Dataset

    • paperswithcode.com
    + more versions
    Cite
    Newer College Dataset [Dataset]. https://paperswithcode.com/dataset/newer-college
    Explore at:
    Description

    The Newer College Dataset is a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, the authors used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing ∼290 million points).

    Using the map, the authors inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. The dataset combines built environments, open spaces and vegetated areas so as to test localisation and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LiDAR reconstruction and appearance-based place recognition.

  7. Detroit Street View Terrestrial LiDAR (2020-2022)

    • detroitdata.org
    • data.ferndalemi.gov
    • +1more
    Updated Apr 18, 2023
    Cite
    City of Detroit (2023). Detroit Street View Terrestrial LiDAR (2020-2022) [Dataset]. https://detroitdata.org/dataset/detroit-street-view-terrestrial-lidar-2020-2022
    Explore at:
    arcgis geoservices rest api, zip, csv, gdb, gpkg, txt, html, geojson, kml, xlsx. Available download formats
    Dataset updated
    Apr 18, 2023
    Dataset provided by
    City of Detroit
    Area covered
    Detroit
    Description

    Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.

    Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.

    Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.

    LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.

    Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3
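
    Once a requested LAZ file has been delivered, it can be inspected with standard point cloud tooling. A minimal sketch using the laspy library with a LAZ backend (pip install "laspy[lazrs]"); the file name is a placeholder:

    import laspy

    las = laspy.read("dsv_sample_tile.laz")   # placeholder file name
    print(las.header.point_count, las.header.point_format)
    print(las.x.min(), las.x.max(), las.y.min(), las.y.max())  # rough horizontal extent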


  8. A road mobile mapping device for supervised classification of amphibians on...

    • pre.iepnb.es
    Updated May 23, 2025
    Cite
    (2025). A road mobile mapping device for supervised classification of amphibians on roads. - Dataset - CKAN [Dataset]. https://pre.iepnb.es/catalogo/dataset/a-road-mobile-mapping-device-for-supervised-classification-of-amphibians-on-roads1
    Explore at:
    Dataset updated
    May 23, 2025
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    We present the classification results of a supervised algorithm applied to road images containing amphibians. We used a prototype mobile mapping system composed of a scanning system attached to a traction vehicle, capable of recording road surface images at speeds up to 30 km/h. We tested the algorithm in three situations (two control and one real): with plastic models of amphibians; with dead specimens of amphibians; and with real specimens of amphibians in a road survey. The classification results changed among tests, but in all cases the algorithm was able to detect more than 80% of the amphibians (more than 90% in the control tests). Unfortunately, the algorithm also presented a high rate of false-positive detections, varying from 80% in the real test to 14% in the control test with dead specimens. The Mobile Mapping System (MMS) is ideal for passive surveys and can work by day or night. This is the first study presenting an automatic solution to detect amphibians on roads. The classification algorithm can be adapted to any animal group. Robotics and computer vision are opening new horizons for wildlife conservation. Keywords: Amphibian

  9. LUCOOP: Leibniz University Cooperative Perception and Urban Navigation...

    • data.uni-hannover.de
    mp4, pdf, png, zip
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset [Dataset]. https://data.uni-hannover.de/es/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe
    Explore at:
    zip(26808117), png(69506), png(10545), png(25744249), png(1157038), png(5957763), mp4(27883878), png(285246), pdf(643354), png(87949977), png(21345), png(445462), mp4(39029045), mp4(11636909), png(137903), png(1102491), png(918140). Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    A real-world multi-vehicle multi-modal V2V and V2X dataset

    Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as an LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes, respectively. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements. Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.

    Data access

    Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this web site. There, you find updated files and annotations as well as update notes.

    • The dataset is available here.
    • Additional information is provided and constantly updated in our README.
    • The corresponding paper is available here.
    • Cite this as: J. Axmann et al., "LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset," 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 2023, pp. 1-8, doi: 10.1109/IV55152.2023.10186693.

    Preview

    Watch the video. Source of the LOD2 city model: excerpt from the geodata of the Landesamt für Geoinformation und Landesvermessung Niedersachsen (LGLN), ©2023, www.lgln.de (LGLN logo: https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/541747ed-3d6e-41c4-9046-15bba3702e3b/download/lgln_logo.png)

    Sensor Setup of the three measurement vehicles

    Image: sensor setup of the three measurement vehicles (https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/d141d4f1-49b0-40e6-b8d9-e49f420e3627/download/vans_with_redgreen_cs_vehicle.png)

    Sensor setup of all the three vehicles: Each vehicle is equipped with a LiDAR sensor (green), a UWB unit (orange), a GNSS antenna (purple), and a Microstrain IMU (red). Additionally, each vehicle has its unique feature: Vehicle 1 has an additional LiDAR at the trailer hitch (green) and a prism for the tracking of the total station (dark red hexagon). Vehicle 2 provides an iMAR iPRENA (yellow) and iMAR FSAS (blue) IMU, where the platform containing the IMUs is mounted inside the car (dashed box). Vehicle 3 carries the RIEGL MMS (pink). Along with the sensors and platforms, the right-handed body frame of each vehicle is also indicated.

    3D map point cloud

    Image: 3D map point cloud (https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/5b6b37cf-a991-4dc4-8828-ad12755203ca/download/map_point_cloud.png)

    High resolution 3D map point cloud: Different locations and details along the trajectory. Colors according to reflectance values.

    Measurement scenario

    Image: measurement scenario (https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/6c61d297-8544-4788-bccf-7a28ccfa702a/download/scenario_with_osm_reference.png)

    Driven trajectory and locations of the static sensors: The blue hexagons indicate the positions of the static UWB sensors, the orange star represents the location of the total station, and the orange shaded area illustrates the coverage of the total station. The route of the three measurement vehicles is shown in purple. Background map: OpenStreetMap copyright


    Number of annotations per class (final)

    Image: number of annotations per class (https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/8b0262b9-6769-4a5d-a37e-8fcb201720ef/download/annotations.png)


    Data structure

    Image: data structure (https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/7358ed31-9886-4c74-bec2-6868d577a880/download/data_structure.png)

    Data format

    Image: data format (https://data.uni-hannover.de/de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/fc795ec2-f920-4415-aac6-6ad3be3df0a9/download/data_format.png)

    Gallery

    Image: the three measurement vans (https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/a1974957-5ce2-456c-9f44-9d05c5a14b16/download/vans_merged.png)

    From left to right: Van 1, van 2, van 3.

    Image: total station tracking, UWB sensors, and mobile mapping system (https://data.uni-hannover.de/dataset/a20cf8fa-f692-40b3-9b9b-d2f7c8a1e3fe/resource/53a58500-8847-4b3c-acd4-a3ac27fc8575/download/ts_uwb_mms.png)

    From left to right: Tracking of the prism on van 1 by means of the MS60 total station, the detected prism from the view point of the MS60 total station, PulsON 440 Ultra Wide Band (UWB) sensors, RIEGL VMX-250 Mobile Mapping System.

    Acknowledgement

    This measurement campaign could not have been carried out without the help of many contributors. At this point, we thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Wien), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,

  10. Parking lot locations and utilization samples in the Hannover Linden-Nord...

    • service.tib.eu
    Updated May 12, 2024
    + more versions
    Cite
    (2024). Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys [Dataset]. https://service.tib.eu/ldmservice/dataset/luh-parking-locations-and-utilization-from-lidar-mobile-mapping-surveys
    Explore at:
    Dataset updated
    May 12, 2024
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0), https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
    Hanover, Linden - Nord
    Description

    Work in progress: data might be changed

    The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.

    Vehicle Detections

    Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.

    The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.

    Figure 1: Overview map of detected vehicles

    Parking Areas

  11. Mobile LiDAR Data

    • figshare.com
    bin
    Updated Jan 22, 2021
    Cite
    Bin Wu (2021). Mobile LiDAR Data [Dataset]. http://doi.org/10.6084/m9.figshare.13625054.v1
    Explore at:
    bin. Available download formats
    Dataset updated
    Jan 22, 2021
    Dataset provided by
    figshare (http://figshare.com/)
    Authors
    Bin Wu
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is a point cloud sample dataset which was collected by a mobile LiDAR system (MLS).

  12. BLE RSS dataset for fingerprinting radio map calibration

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Sep 20, 2021
    Cite
    Marcin Kolakowski (2021). BLE RSS dataset for fingerprinting radio map calibration [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5457590
    Explore at:
    Dataset updated
    Sep 20, 2021
    Dataset authored and provided by
    Marcin Kolakowski
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains Bluetooth Low Energy signal strengths measured in a fully furnished flat. The dataset was originally used in a study on RSS-fingerprinting-based indoor positioning systems. The data were gathered using a hybrid BLE-UWB localization system installed in the apartment and a mobile robotic platform equipped with a LiDAR. The dataset comprises power measurement results and LiDAR scans performed at 4104 points. The scans used for initial environment mapping and the power levels registered in two test scenarios are also attached.

    The set contains both raw and preprocessed measurement data. The Python code for raw data loading is supplied.
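
    As a rough illustration of the fingerprinting workflow this dataset is intended for (this is not the supplied loading code; the array shapes and file names are assumptions), a k-nearest-neighbours position estimate from RSS vectors could look like this:

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    X = np.load("rss_fingerprints.npy")      # placeholder, shape (n_points, n_anchors)
    y = np.load("reference_positions.npy")   # placeholder, shape (n_points, 2) in metres

    model = KNeighborsRegressor(n_neighbors=3, weights="distance").fit(X, y)
    print(model.predict(X[:1]))              # estimated (x, y) for one fingerprint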

    The detailed dataset description can be found in the dataset_description.pdf file.

    When using the dataset, please consider citing the original paper, in which the data were used:

    M. Kolakowski, “Automated Calibration of RSS Fingerprinting Based Systems Using a Mobile Robot and Machine Learning”, Sensors , vol. 21, 6270, Sep. 2021 https://doi.org/10.3390/s21186270

  13. Intelligent systems for mapping amphibian mortality on Portuguese roads. -...

    • iepnb.es
    • pre.iepnb.es
    Updated Jul 15, 2025
    Cite
    (2025). Intelligent systems for mapping amphibian mortality on Portuguese roads. - Dataset - CKAN [Dataset]. https://iepnb.es/catalogo/dataset/intelligent-systems-for-mapping-amphibian-mortality-on-portuguese-roads11
    Explore at:
    Dataset updated
    Jul 15, 2025
    License

    MIT License, https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Roads have multiple effects on wildlife, from animal mortality and habitat and population fragmentation to modification of animal reproductive behavior. Amphibians in particular, due to their activity patterns, population structure, and preferred habitats, are strongly affected by traffic intensity and road density. On the other hand, road-kill studies and conservation measures have been applied extensively on highways, although amphibians die massively on country roads, where conservation measures are not applied. Many countries (e.g. Portugal) do not have a national program for monitoring road-kills, a common practice in other European countries (e.g. the UK and the Netherlands). Such monitoring is necessary to identify road-kill hotspots so that conservation measures can be implemented correctly. However, monitoring road-kills is expensive and time consuming, and depends mainly on volunteers. Therefore, cheap, easy-to-implement, and automatic methods for detecting road-kills over larger areas (broad monitoring) and over time (continuous monitoring) are necessary.

    We present here the preliminary results of a research project which aims to build a cheap and efficient system for detecting amphibian road-kills using computer-vision techniques from robotics. We propose two different solutions: 1) a Mobile Mapping System to automatically detect amphibian road-kills on roads, and 2) a Fixed Detection System to automatically monitor road-kills at a particular road location over a long period. The first methodology will detect and locate road-kills through the automatic classification of road surface images taken from a car with a digital camera linked to a GPS. Road-kill casualties will be detected automatically in the images through a classification algorithm developed specifically for this purpose. The second methodology will detect amphibians crossing a particular road point and determine whether or not they survive. Both the Fixed and the Mobile system will use similar programs. The algorithm is trained with existing data.

    For now, we can only present some results for the Mobile Mapping System. We are performing tests with different cameras, namely a line-scan camera, used in various industrial quality-control solutions, and an outdoor GoPro camera, popular in sports such as biking. Our results show that we can detect different road-killed and live animals at an acceptable car speed and at a high spatial resolution. Both mapping systems will provide the capacity to automatically detect road-kill casualties. With these data, it will be possible to analyze the distribution of road-kills and hotspots, to identify the main migration routes, to count the total number of amphibians crossing a road, to determine how many of those individuals are effectively road-killed, and to define where conservation measures should be implemented. All these objectives will be achieved more easily and at a lower cost in funds, time, and personnel.

  14. Data from: Robot@Home, a robotic dataset for semantic mapping of home...

    • zenodo.org
    application/gzip
    Updated Sep 28, 2023
    + more versions
    Cite
    José Raul Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Gregorio Ambrosio-Cestero; José Raul Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Gregorio Ambrosio-Cestero (2023). Robot@Home, a robotic dataset for semantic mapping of home environments [Dataset]. http://doi.org/10.5281/zenodo.3901564
    Explore at:
    application/gzip. Available download formats
    Dataset updated
    Sep 28, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    José Raul Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Gregorio Ambrosio-Cestero; José Raul Ruiz-Sarmiento; Cipriano Galindo; Javier González-Jiménez; Gregorio Ambrosio-Cestero
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Robot-at-Home dataset (Robot@Home, paper here) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

    This dataset is unique in three aspects:

    • The provided data were captured with a rig of 4 RGB-D sensors with an overall field of view of 180°H. and 58°V., and with a 2D laser scanner.
    • It comprises diverse and numerous data: sequences of RGB-D images and laser scans from the rooms of five apartments (87,000+ observations were collected), topological information about the connectivity of these rooms, and 3D reconstructions and 2D geometric maps of the visited rooms.
    • The provided ground truth is dense, including per-point annotations of the categories of the objects and rooms appearing in the reconstructed scenarios, and per-pixel annotations of each RGB-D image within the recorded sequences

    During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information of objects and rooms. This is a valuable feature, missing in most of the state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like pillows are usually on beds or ovens are not in bathrooms.

  15. Data from: EAARL Topography--Three Mile Creek and Mobile-Tensaw Delta,...

    • catalog.data.gov
    • data.usgs.gov
    • +3more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). EAARL Topography--Three Mile Creek and Mobile-Tensaw Delta, Alabama, 2010 [Dataset]. https://catalog.data.gov/dataset/eaarl-topography-three-mile-creek-and-mobile-tensaw-delta-alabama-2010
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Alabama, Mobile–Tensaw River Delta
    Description

    A digital elevation model (DEM) of a portion of the Mobile-Tensaw Delta region and Three Mile Creek in Alabama was produced from remotely sensed, geographically referenced elevation measurements by the U.S. Geological Survey (USGS). Elevation measurements were collected over the area (bathymetry was irresolvable) using the Experimental Advanced Airborne Research Lidar (EAARL), a pulsed laser ranging system mounted onboard an aircraft to measure ground elevation, vegetation canopy, and coastal topography. The system uses high-frequency laser beams directed at the Earth's surface through an opening in the bottom of the aircraft's fuselage. The laser system records the time difference between emission of the laser beam and the reception of the reflected laser signal in the aircraft. The plane travels over the target area at approximately 50 meters per second at an elevation of approximately 300 meters, resulting in a laser swath of approximately 240 meters with an average point spacing of 2-3 meters. The EAARL, developed originally by the National Aeronautics and Space Administration (NASA) at Wallops Flight Facility in Virginia, measures ground elevation with a vertical resolution of +/-15 centimeters. A sampling rate of 3 kilohertz or higher results in an extremely dense spatial elevation dataset. Over 100 kilometers of coastline can be surveyed easily within a 3- to 4-hour mission. When resultant elevation maps for an area are analyzed, they provide a useful tool to make management decisions regarding land development. For more information on Lidar science and the Experimental Advanced Airborne Research Lidar (EAARL) system and surveys, see http://ngom.usgs.gov/dsp/overview/index.php and http://ngom.usgs.gov/dsp/tech/eaarl/index.php .

  16. Traffic Signs: Berlin, 2020

    • ckan.mobidatalab.eu
    json
    Updated Mar 6, 2023
    Cite
    AIPARK GmbH (2023). Traffic Signs: Berlin, 2020 [Dataset]. https://ckan.mobidatalab.eu/sr/dataset/traffic-signs-berlin-2020
    Explore at:
    json. Available download formats
    Dataset updated
    Mar 6, 2023
    Dataset provided by
    AIPARK GmbH
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Time period covered
    Feb 29, 2020 - Sep 30, 2020
    Area covered
    Berlin
    Description

    1. Introduction
    The main idea of the project is to obtain traffic sign locations by analyzing videos using a combination of artificial intelligence and image recognition methods. Each video file also includes a geolocation file that has the same name as the video file and contains latitude, longitude, and timestamp attributes from the beginning of the video. A total of 3350 videos with a total range of 1040 km are used in the area of the Berlin S-Bahn ring. The result file contains the longitude and latitude (WGS84, EPSG:4326) of traffic sign locations and their types in 43 categories.

    2. Data sets
    To train AI networks, two publicly available data sets are used: for traffic sign recognition the “German Traffic Sign Detection Benchmark Dataset” [1] and for traffic sign classification the “German Traffic Sign Recognition Benchmark Dataset” [1]. You can find more information here: Detection Dataset, Classification Dataset

    3. Methodology and models
    The TensorFlow [2] framework is used to analyze the videos. An object detection [3] model for traffic sign recognition is trained using the transfer learning method [4]. To improve the accuracy of traffic sign classification, a custom image classification [5] model for categorizing traffic sign types is trained. The output of the traffic sign recognition model is used as input to the traffic sign classification model.
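
    A minimal sketch of how such a two-stage pipeline can be chained in TensorFlow 2. This is a generic example rather than the project's released code; the model paths, the 48x48 classifier input size, the 0.5 score threshold, and the output keys (which follow the common TF object-detection SavedModel convention) are assumptions:

    import tensorflow as tf

    detector = tf.saved_model.load("traffic_sign_detector/saved_model")    # placeholder
    classifier = tf.keras.models.load_model("traffic_sign_classifier.h5")  # placeholder

    image = tf.io.decode_jpeg(tf.io.read_file("frame_000001.jpg"))
    detections = detector(tf.expand_dims(image, 0))   # boxes, scores, classes per frame

    h, w = int(image.shape[0]), int(image.shape[1])
    for box, score in zip(detections["detection_boxes"][0].numpy(),
                          detections["detection_scores"][0].numpy()):
        if score < 0.5:
            continue
        y1, x1, y2, x2 = (box * [h, w, h, w]).astype(int)       # normalized -> pixels
        crop = tf.image.resize(image[y1:y2, x1:x2], (48, 48)) / 255.0
        sign_type = int(tf.argmax(classifier(tf.expand_dims(crop, 0))[0]))
        print(score, sign_type)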

    4. Source
    [1] Houben, S., Stallkamp, J., Salmen, J., Schlipsing, M. and Igel, C. (2013). "Detection of traffic signs in real-world images: the German traffic sign detection benchmark", in Proceedings of the International Joint Conference on Neural Networks. doi: 10.1109/IJCNN.2013.6706807

    [2] Martín A., Paul B., Jianmin C., Zhifeng C., Andy D., Jeffrey D., ..., Xiaoqiang Zheng (2016). "TensorFlow: a system for large-scale machine learning", in Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (OSDI'16). USENIX Association, USA, 265–283

    [3] Girshick R, Donahue J, Darrell T, Malik J (2014). "Rich feature hierarchies for accurate object detection and semantic segmentation", 2014 IEEE Conference on Computer Vision and Pattern Recognition. doi:10.1109/cvpr.2014.81

    [4] Tan C, Sun F, Kong T, Zhang W, Yang C and Liu C (2018) “A survey on deep transfer learning”, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11141 LNCS, pp. 270–279. doi: 10.1007/978-3-030-01424-7_27.

    [5] Sultana, F., Sufian, A. and Dutta, P. (2018). “Advancements in image classification using convolutional neural network”, in Proceedings - 2018 4th IEEE International Conference on Research in Computational Intelligence and Communication Networks, ICRCICN 2018, pp. 122–129. doi: 10.1109/ICRCCICN.2018.8718718.

  17. Robot@Home2, a robotic dataset of home environments

    • data.niaid.nih.gov
    Updated Apr 4, 2024
    + more versions
    Cite
    Ambrosio-Cestero, Gregorio (2024). Robot@Home2, a robotic dataset of home environments [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3901563
    Explore at:
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    González-Jiménez, Javier
    Ambrosio-Cestero, Gregorio
    Ruiz-Sarmiento, José Raul
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Robot-at-Home dataset (Robot@Home, paper here) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

    This dataset is unique in three aspects:

    • The provided data were captured with a rig of 4 RGB-D sensors with an overall field of view of 180°H. and 58°V., and with a 2D laser scanner.
    • It comprises diverse and numerous data: sequences of RGB-D images and laser scans from the rooms of five apartments (87,000+ observations were collected), topological information about the connectivity of these rooms, and 3D reconstructions and 2D geometric maps of the visited rooms.
    • The provided ground truth is dense, including per-point annotations of the categories of the objects and rooms appearing in the reconstructed scenarios, and per-pixel annotations of each RGB-D image within the recorded sequences

    During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information of objects and rooms. This is a valuable feature, missing in most of the state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like pillows are usually on beds or ovens are not in bathrooms.

    Robot@Home2

    Robot@Home2 is an enhanced version aimed at improving usability and functionality for developing and testing mobile robotics and computer vision algorithms. It consists of three main components. Firstly, a relational database that stores the contextual information and data links, compatible with the Structured Query Language (SQL). Secondly, a Python package for managing the database, including downloading, querying, and interfacing functions. Finally, learning resources in the form of Jupyter notebooks, runnable locally or on the Google Colab platform, enabling users to explore the dataset without local installations. These freely available tools are expected to enhance the ease of exploiting the Robot@Home dataset and accelerate research in computer vision and robotics.
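
    Because the v2 release is a single SQLite file, it can also be inspected directly without the helper package. A minimal sketch (the database file name is a placeholder; only the standard sqlite_master catalog is queried, so no table names are assumed):

    import sqlite3

    con = sqlite3.connect("rh.db")   # placeholder file name of the downloaded database
    tables = [row[0] for row in
              con.execute("SELECT name FROM sqlite_master WHERE type='table'")]
    print(tables)                    # list the available tables before querying them
    con.close()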

    If you use Robot@Home2, please cite the following paper:

    Gregorio Ambrosio-Cestero, Jose-Raul Ruiz-Sarmiento, Javier Gonzalez-Jimenez, The Robot@Home2 dataset: A new release with improved usability tools, in SoftwareX, Volume 23, 2023, 101490, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101490.

    @article{ambrosio2023robotathome2,
      title = {The Robot@Home2 dataset: A new release with improved usability tools},
      author = {Gregorio Ambrosio-Cestero and Jose-Raul Ruiz-Sarmiento and Javier Gonzalez-Jimenez},
      journal = {SoftwareX},
      volume = {23},
      pages = {101490},
      year = {2023},
      issn = {2352-7110},
      doi = {https://doi.org/10.1016/j.softx.2023.101490},
      url = {https://www.sciencedirect.com/science/article/pii/S2352711023001863},
      keywords = {Dataset, Mobile robotics, Relational database, Python, Jupyter, Google Colab}
    }

    Version history:
    • v1.0.1 Fixed minor bugs.
    • v1.0.2 Fixed some inconsistencies in some directory names. Fixes were necessary to automate the generation of the next version.
    • v2.0.0 SQL-based dataset. Robot@Home v1.0.2 has been packed into a SQLite database along with RGB-D and scene files, which have been assembled into a hierarchically structured directory free of redundancies. Path tables are also provided to reference files in both the v1.0.2 and v2.0.0 directory hierarchies. This version has been generated automatically from version 1.0.2 through the toolbox.
    • v2.0.1 A forgotten foreign key pair has been added.
    • v2.0.2 The views have been consolidated as tables, which allows a considerable improvement in access time.
    • v2.0.3 The previous version did not include the database; in this version the database has been uploaded.
    • v2.1.0 Depth images have been updated to 16-bit. Additionally, both the RGB images and the depth images are oriented in the original camera format, i.e. landscape.

  18. 1:100,000 desert (sand) distribution dataset in China

    • data.tpdc.ac.cn
    • tpdc.ac.cn
    zip
    Updated Apr 19, 2021
    Cite
    Jianhua WANG; Yimou WANG; Changzhen YAN; Yuan QI (2021). 1:100,000 desert (sand) distribution dataset in China [Dataset]. http://doi.org/10.3972/westdc.006.2013.db
    Explore at:
    zip. Available download formats
    Dataset updated
    Apr 19, 2021
    Dataset provided by
    TPDC
    Authors
    Jianhua WANG; Yimou WANG; Changzhen YAN; Yuan QI
    Area covered
    Description

    This dataset is the first 1:100,000 desert spatial database in China, based on the graphic data of desert thematic maps. It mainly reflects the geographical distribution, area size, and mobility of sand dunes in China. According to the system design requirements and relevant standards, the various types of input data were standardized and uniformly converted into a standard format, and a database was built to run in the delivery system. The project used TM images from 2000 as the information source and interpreted, extracted, and edited them together with the 2000 national land use map coverage, applying remote sensing and geographic information system technology to map the deserts, sandy land, and gravel Gobi of China according to the thematic mapping requirements of 1:100,000 scale maps. The nationwide 1:100,000 desert map can save users a great deal of data entry and editing work in resources and environment research, and the digital maps can easily be converted into layout maps.

    The dataset properties are as follows. The data are divided into two folders, e00 and shp; each folder uses the following correspondence between desert map names and provinces: 01 Ahsm Anhui, 02 Bjsm Beijing, 03 Fjsm Fujian, 04 Gdsm Guangdong, 05 Gssm Gansu, 06 Gxsm Guangxi Zhuang Autonomous Region, 07 Gzsm Guizhou, 08 Hebsm Hebei, 09 Hensm Henan, 10 Hljsm Heilongjiang, 11 Hndsm Hainan, 12 Hubsm Hubei, 13 Jlsm Jilin, 14 Jssm Jiangsu, 15 Jxsm Jiangxi, 16 Lnsm Liaoning, 17 Nmsm Inner Mongolia Autonomous Region, 18 Nxsm Ningxia Hui Autonomous Region, 19 Qhsm Qinghai, 20 Scsm Sichuan, 21 Sdsm Shandong, 22 Sxsm Shaanxi, 23 Tjsm Tianjin, 24 Twsm Taiwan, 25 Xjsm Xinjiang Uygur Autonomous Region, 26 Xzsm Tibet Autonomous Region, 27 Zjsm Zhejiang, 28 Shxsm Shanxi.

    1. Data projection: Albers; False_Easting: 0.000000; False_Northing: 0.000000; Central_Meridian: 105.000000; Standard_Parallel_1: 25.000000; Standard_Parallel_2: 47.000000; Latitude_Of_Origin: 0.000000; Linear Unit: Meter (1.000000).
    2. Data attribute table: area, perimeter, ashm_ (sequence code), class (desert code), ashm_id (desert code).
    3. Desert coding: mobile sandy land 2341010; semi-mobile sandy land; semi-fixed sandy land 2341030; Gobi 2342000; saline land 2343000.
    4. File format: national, provincial, and county-level desert map data are vector shapefiles and E00 files.
    5. File naming: data organization follows the National Basic Resources and Environmental Remote Sensing Dynamic Information Service System and is performed on the Windows NT file management layer. File and directory names are compound names of English characters and numbers: province pinyin + SM, e.g. the desert map of Gansu Province is GSSM. Flag- and county-level desert maps are named with the province pinyin + xxxx, where xxxx is the last four digits of the flag/county code. The division of provinces, districts, flags, and counties is based on the administrative division data files in the national basic resources and environmental remote sensing dynamic information service operation system.
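
    A minimal sketch for loading one provincial shapefile and selecting a desert class by the codes listed above (GeoPandas assumed; the field names follow the attribute table as described and may differ slightly in the released files, and the class codes may be stored as strings rather than integers):

    import geopandas as gpd

    gdf = gpd.read_file("shp/GSSM.shp")      # e.g. the Gansu Province desert map
    print(gdf.crs, len(gdf), list(gdf.columns))
    gobi = gdf[gdf["class"].astype(int) == 2342000]   # Gobi polygons, per the coding above
    print(gobi["area"].sum())                # total mapped Gobi area, in map units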

  19. i

    IILABS 3D: iilab Indoor LiDAR-based SLAM Dataset

    • rdm.inesctec.pt
    Updated Feb 27, 2025
    Cite
    (2025). IILABS 3D: iilab Indoor LiDAR-based SLAM Dataset [Dataset]. https://rdm.inesctec.pt/dataset/nis-2025-001
    Explore at:
    Dataset updated
    Feb 27, 2025
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    The IILABS 3D dataset is a rigorously designed benchmark intended to advance research on 3D LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithms in indoor environments. It provides a robust and diverse foundation for evaluating and enhancing SLAM techniques in complex indoor settings. The dataset was recorded in the Industry and Innovation Laboratory (iiLab) and comprises synchronized data from a suite of sensors (four distinct 3D LiDAR sensors, a 2D LiDAR, an Inertial Measurement Unit (IMU), and wheel odometry), complemented by high-precision ground truth obtained via a Motion Capture (MoCap) system.
    Project webpage: https://jorgedfr.github.io/3d_lidar_slam_benchmark_at_iilab/
    Dataset toolkit: https://github.com/JorgeDFR/iilabs3d-toolkit
    Data collection method: Sensor data were captured using the Robot Operating System (ROS) rosbag record tool on a LattePanda 3 Delta embedded computer. Post-processing involved timestamp correction for the Xsens MTi-630 IMU via custom Python scripts. Ground-truth data were captured using an OptiTrack MoCap system featuring 24 high-resolution PrimeX 22 cameras. The cameras were connected via Ethernet to a primary Windows computer running the Motive software (https://optitrack.com/software/motive), which processed the camera data; this computer was in turn connected via Ethernet to a secondary Ubuntu machine running the NatNet 4 ROS driver (https://github.com/L2S-lab/natnet_ros_cpp), which published the data as ROS topics that were recorded into rosbag files. Temporal synchronization between the robot platform and the ground-truth system was achieved using the Network Time Protocol (NTP). Finally, the bag files were processed with the EVO open-source Python library (https://github.com/MichaelGrupp/evo) to convert the data into TUM format and adjust the initial position offsets for accurate SLAM odometry benchmarking.
    Type of instrument:
    Mobile robot platform: INESC TEC MRDT Modified Hangfa Discovery Q2 Platform (https://sousarbarb.github.io/inesctec_mrdt_hangfa_discovery_q2/). R.B. Sousa, H.M. Sobreira, J.G. Martins, P.G. Costa, M.F. Silva and A.P. Moreira, "Integrating Multimodal Perception into Ground Mobile Robots," 2025 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC 2025), Madeira, Portugal, 2025, pp. TBD, doi: TBD [manuscript accepted for publication].
    Sensor data: Livox Mid-360, Ouster OS1-64 RevC, RoboSense RS-HELIOS-5515, and Velodyne VLP-16 (3D LiDARs); Hokuyo UST-10LX-H01 (2D LiDAR); Xsens MTi-630 (IMU); and Faulhaber 2342 wheel encoders (64:1 gear ratio, 12 Counts Per Revolution (CPR)).
    Ground-truth data: OptiTrack motion capture system with 24 PrimeX 22 cameras installed in Room A, Floor 0 at iiLab.
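
    The timestamp correction itself is described only as "custom Python scripts", so the following is merely a sketch of the general pattern for rewriting IMU header stamps while copying a bag with the ROS 1 rosbag API; the topic name, the bag file names, and the use of the bag receive time as the corrected stamp are assumptions, not the authors' actual procedure.

        import rosbag

        IMU_TOPIC = "/imu/data"  # assumed topic name for the Xsens MTi-630

        # Copy every message into a new bag; for IMU messages, replace the header
        # stamp with the rosbag receive time (one simple form of timestamp correction).
        with rosbag.Bag("iilabs3d_raw.bag") as inbag, \
             rosbag.Bag("iilabs3d_fixed.bag", "w") as outbag:
            for topic, msg, t in inbag.read_messages():
                if topic == IMU_TOPIC:
                    msg.header.stamp = t
                outbag.write(topic, msg, t)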

  20. d

    Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Jul 6, 2024
    Cite
    U.S. Geological Survey (2024). Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018 [Dataset]. https://catalog.data.gov/dataset/lidar-derived-shoreline-for-beaver-lake-near-rogers-arkansas-2018
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Beaver Lake, Rogers, Arkansas
    Description

    Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres, and there are approximately 449 miles of shoreline at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data were combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table.
    Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas shallower than 2.5 meters (the practical limit of reasonable and safe data collection with the MBES).
    Two bathymetric datasets were collected during the October 2018 survey: the gridded bathymetric point data (BeaverLake2018_bathy.zip), computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) were quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma-delimited text files that have been compressed into zip archives. The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. These data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be usable.
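
    Because the point datasets are comma-delimited text inside zip archives and the shoreline is an Esri shapefile, they can be inspected with common Python tooling. A minimal sketch follows, assuming the archives were downloaded to the working directory and that each zip holds a single text file; the column names are not documented here and should be checked against the file headers.

        import pandas as pd
        import geopandas as gpd

        # Gridded bathymetric points (comma-delimited text compressed into a zip archive).
        bathy = pd.read_csv("BeaverLake2018_bathy.zip", compression="zip")
        print(bathy.columns.tolist())   # verify the actual column names first
        print(bathy.head())

        # Quality-assurance resurvey points, same format.
        qa = pd.read_csv("BeaverLake2018_QA.zip", compression="zip")

        # Lidar-derived shoreline / 1,135-ft contour: all shapefile sidecar files
        # with the common root name BeaverLake2018_1135-ft must be present.
        shoreline = gpd.read_file("BeaverLake2018_1135-ft.shp")
        print(shoreline.crs, shoreline.geometry.length.sum())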

Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys


Parking Occupancy

For modelling the parking occupancy, individual slots are sampled as center points every 5 m along the parking areas. In this way they can, for example, be integrated into a street/routing graph, as prepared in Wage et al. (2023). Custom representations can also be generated from the parking areas and vehicle detections. The sampled parking points were intersected with the vehicle bounding boxes to identify occupancy at the respective epochs.
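
A rough sketch of this sampling-and-intersection step with geopandas is given below; the file names, the use of each parking area's outline as a stand-in for its centerline, and the choice of a metric CRS are illustrative assumptions rather than the dataset's actual processing chain.

    import geopandas as gpd

    # Hypothetical inputs: the published parking areas and vehicle detections (GeoJSON).
    areas = gpd.read_file("parking_areas.geojson").to_crs(25832)        # assumed metric CRS (UTM 32N)
    cars = gpd.read_file("vehicle_detections.geojson").to_crs(25832)

    # Sample one slot center every 5 m along each parking area.
    # The polygon exterior is used here as a crude stand-in for the centerline.
    records = []
    for area_id, geom in areas.geometry.items():
        line = geom.exterior if geom.geom_type == "Polygon" else geom
        d = 0.0
        while d < line.length:
            records.append({"area_id": area_id, "geometry": line.interpolate(d)})
            d += 5.0
    slots = gpd.GeoDataFrame(records, geometry="geometry", crs=areas.crs)

    # A slot counts as occupied at an epoch if it lies within a detected vehicle box.
    hits = gpd.sjoin(slots, cars, predicate="within", how="inner")
    slots["occupied"] = slots.index.isin(hits.index)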

Figure 3: Overview map of average parking lot load (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/ca0b97c8-2542-479e-83d7-74adb2fc47c0/download/datenpub-bays.png)

However, unoccupied spaces cannot be determined quite as trivially the other way around, since the absence of a detected vehicle may equally result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in its neighborhood on the same parking lane, so that it can be assumed that a measurement exists.

To close temporal gaps, hourly interpolations were made for each parking slot, assuming that between two consecutive observations in which the slot was occupied it was also occupied in between, and that if it was free at both times it was also free in between. If the state changed, this is indicated by a proportional value. To close spatial gaps, unobserved slots in the area are assigned an occupation pattern drawn randomly from the ten closest observed patterns.
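
As a small illustration of the temporal gap filling (with made-up observation hours, not values from the dataset), linear interpolation between consecutive hourly observations reproduces exactly this behavior: constant where both neighboring observations agree, proportional where the state changes.

    import pandas as pd

    # Hypothetical observations for one slot: hour of the synthetic day -> occupied (1.0) / free (0.0).
    obs = pd.Series({8: 1.0, 11: 1.0, 14: 0.0, 19: 0.0})

    # Reindex to a full hourly axis (8-20 h) and interpolate only between observations:
    # hours 8-11 stay 1, 14-19 stay 0, and 11-14 receive proportional in-between values.
    hourly = obs.reindex(range(8, 21)).interpolate(limit_area="inside")
    print(hourly)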

This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.

Figure 4: Example parking area occupation pattern (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/184a1f75-79ab-4d0e-bb1b-8ed170678280/download/occupation_example.png)

References

  • F. Bock, D. Eggert and M. Sester (2015): On-street Parking Statistics Using LiDAR Mobile Mapping, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 2015, pp. 2812-2818. https://doi.org/10.1109/ITSC.2015.452
  • A. Leichter, U. Feuerhake, and M. Sester (2021): Determination of Parking Space and its Concurrent Usage Over Time Using Semantically Segmented Mobile Mapping Data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 185–192. https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-185-2021
  • O. Wage, M. Heumann, and L. Bienzeisler (2023): Modeling and Calibration of Last-Mile Logistics to Study Smart-City Dynamic Space Management Scenarios. In 1st ACM SIGSPATIAL International Workshop on Sustainable Mobility (SuMob ’23), November 13, 2023, Hamburg, Germany. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3615899.3627930