64 datasets found
  1. Parking lot locations and utilization samples in the Hannover Linden-Nord...

    • data.uni-hannover.de
    geojson, png
    Updated Apr 17, 2024
    + more versions
    Cite
    Institut für Kartographie und Geoinformatik (2024). Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys [Dataset]. https://data.uni-hannover.de/dataset/parking-locations-and-utilization-from-lidar-mobile-mapping-surveys
    Explore at:
geojson, png. Available download formats
    Dataset updated
    Apr 17, 2024
    Dataset authored and provided by
    Institut für Kartographie und Geoinformatik
    License

Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Area covered
    Hanover, Linden - Nord
    Description

    Work in progress: data might be changed

    The data set contains the locations of public roadside parking spaces in the northeastern part of Hanover Linden-Nord. As a sample data set, it explicitly does not provide a complete, accurate or correct representation of the conditions! It was collected and processed as part of the 5GAPS research project on September 22nd and October 6th 2022 as a basis for further analysis and in particular as input for simulation studies.

    Vehicle Detections

Based on the mapping methodology of Bock et al. (2015) and the processing of Leichter et al. (2021), the utilization was determined using vehicle detections in segmented 3D point clouds. The corresponding point clouds were collected by driving over the area on two half-days using a LiDAR mobile mapping system, resulting in several hours between observations. Accordingly, these are only a few sample observations. The trips were made in such a way that, combined, they cover a synthetic day from about 8:00 to 20:00.

    The collected point clouds were georeferenced, processed, and automatically segmented semantically (see Leichter et al., 2021). To automatically extract cars, those points with car labels were clustered by observation epoch and bounding boxes were estimated for the clusters as a representation of car instances. The boxes serve both to filter out unrealistically small and large objects, and to rudimentarily complete the vehicle footprint that may not be fully captured from all sides.

Figure 1: Overview map of detected vehicles

    Parking Areas

    The public parking areas were digitized manually using aerial images and the detected vehicles in order to exclude irregular parking spaces as far as possible. They were also tagged as to whether they were aligned parallel to the road and assigned to a use at the time of recording, as some are used for construction sites or outdoor catering, for example. Depending on the intended use, they can be filtered individually.

Figure 2: Visualization of example parking areas on top of an aerial image [by LGLN]

    Parking Occupancy

For modelling the parking occupancy, single slots are sampled as center points every 5 m along the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023). Custom representations can also be generated from the parking areas and vehicle detections. These parking points were intersected with the vehicle boxes to identify occupancy at the respective epochs.
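As an illustration of this intersection step (a minimal sketch, not the project's published code, assuming shapely geometries for the parking-area centerlines and the detected vehicle boxes):

    # Sketch: sample slot centers every 5 m along a parking-area centerline and
    # mark a slot occupied when it falls inside a detected vehicle bounding box.
    from shapely.geometry import LineString, Polygon

    def sample_slots(centerline, spacing=5.0):
        """Slot center points every `spacing` meters along the parking area."""
        return [centerline.interpolate(d)
                for d in range(0, int(centerline.length) + 1, int(spacing))]

    def occupancy(slots, vehicle_boxes):
        """True for each slot lying inside any detected vehicle box of one epoch."""
        return [any(box.contains(slot) for box in vehicle_boxes) for slot in slots]

    # Hypothetical example: a 20 m parking lane and one detected vehicle box
    lane = LineString([(0, 0), (20, 0)])
    boxes = [Polygon([(4, -1), (9, -1), (9, 1), (4, 1)])]
    print(occupancy(sample_slots(lane), boxes))  # [False, True, False, False, False]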

Figure 3: Overview map of average parking slot load

However, unoccupied spaces cannot be determined quite as trivially the other way around, since the absence of a detected vehicle may simply result from the absence of a measurement/observation. Therefore, a parking space is only recorded as unoccupied if a vehicle was detected at the same time in its neighborhood on the same parking lane, so that a measurement can be assumed.

To close temporal gaps, interpolations were made by hour for each parking slot, assuming that between two consecutive observations showing occupancy the space was also occupied in between, and that a space observed as free at both times was also free in between. If there was a change between the two observations, this is indicated by a proportional value. To close spatial gaps, unobserved spaces in the area are drawn randomly from the ten closest occupation patterns around them.

    This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.
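A minimal sketch of that hourly gap filling, under the stated assumption (state kept between agreeing observations, proportional values across a change; illustrative only, not the published processing code):

    def interpolate_hours(obs):
        """obs: {hour: 0.0 (free) or 1.0 (occupied)} sparse observations of one slot.
        Returns {hour: value in [0, 1]} for every hour between the observations."""
        hours = sorted(obs)
        filled = {}
        for h0, h1 in zip(hours, hours[1:]):
            v0, v1 = obs[h0], obs[h1]
            for h in range(h0, h1 + 1):
                t = (h - h0) / (h1 - h0)
                filled[h] = v0 + t * (v1 - v0)   # constant if both ends agree
        return filled

    # Hypothetical slot: occupied at 8:00, free at 12:00, occupied again at 18:00
    print(interpolate_hours({8: 1.0, 12: 0.0, 18: 1.0}))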

Figure 4: Example parking area occupation pattern

    References

    • F. Bock, D. Eggert and M. Sester (2015): On-street Parking Statistics Using LiDAR Mobile Mapping, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 2015, pp. 2812-2818. https://doi.org/10.1109/ITSC.2015.452
    • A. Leichter, U. Feuerhake, and M. Sester (2021): Determination of Parking Space and its Concurrent Usage Over Time Using Semantically Segmented Mobile Mapping Data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 185–192. https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-185-2021
    • O. Wage, M. Heumann, and L. Bienzeisler (2023): Modeling and Calibration of Last-Mile Logistics to Study Smart-City Dynamic Space Management Scenarios. In 1st ACM SIGSPATIAL International Workshop on Sustainable Mobility (SuMob ’23), November 13, 2023, Hamburg, Germany. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3615899.3627930
  2. Mobile mapping system (MMS2) for detecting roadkills. - Dataset - CKAN

    • pre.iepnb.es
    • iepnb.es
    Updated May 23, 2025
    Cite
    (2025). Mobile mapping system (MMS2) for detecting roadkills. - Dataset - CKAN [Dataset]. https://pre.iepnb.es/catalogo/dataset/mobile-mapping-system-mms2-for-detecting-roadkills1
    Explore at:
    Dataset updated
    May 23, 2025
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

Roads negatively affect wildlife, from direct mortality to habitat fragmentation. Mortality caused by collisions with vehicles is a major threat to many species, and monitoring animal road-kills is essential to establish correct road mitigation measures. Many countries have national monitoring systems for identifying mortality hotspots. We present here an improved version of the mobile mapping system (MMS2) for detecting roadkills, not only of amphibians but also of small birds. It is composed of two stereo multi-spectral, high-definition cameras (ZED), a high-power processing laptop, a GPS device connected to the laptop, and a small support device attachable to the back of any vehicle. The system is controlled by several applications that manage all the video recording steps as well as the GPS acquisition, merging everything into a single final file ready to be examined afterwards by an algorithm. We used a state-of-the-art machine learning computer vision algorithm (CNN: Convolutional Neural Network) to automatically detect animals on roads. This self-learning algorithm needs a large number of images of live animals, road-killed animals, and any objects likely to be found on roads (e.g. garbage thrown away by drivers) in order to be trained; the larger the image database, the greater the detection efficiency. This improved version of the mobile mapping system presents very good results, detecting small birds and amphibians effectively.

  3. Camera-LiDAR Datasets

    • figshare.com
    zip
    Updated Aug 14, 2024
    Cite
    Jennifer Leahy (2024). Camera-LiDAR Datasets [Dataset]. http://doi.org/10.6084/m9.figshare.26660863.v1
    Explore at:
zip. Available download formats
    Dataset updated
    Aug 14, 2024
    Dataset provided by
    figshare
Figshare: http://figshare.com/
    Authors
    Jennifer Leahy
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The datasets are original and specifically collected for research aimed at reducing registration errors between Camera-LiDAR datasets. Traditional methods often struggle with aligning 2D-3D data from sources that have different coordinate systems and resolutions. Our collection comprises six datasets from two distinct setups, designed to enhance versatility in our approach and improve matching accuracy across both high-feature and low-feature environments.

Survey-Grade Terrestrial Dataset
• Collection details: Data was gathered across various scenes on the University of New Brunswick campus, including low-feature walls, high-feature laboratory rooms, and outdoor tree environments.
• Equipment: LiDAR data was captured using a Trimble TX5 3D Laser Scanner, while optical images were taken with a Canon EOS 5D Mark III DSLR camera.

Mobile Mapping System Dataset
• Collection details: This dataset was collected using our custom-built Simultaneous Localization and Multi-Sensor Mapping Robot (SLAMM-BOT) in several indoor mobile scenes to validate our methods.
• Equipment: Data was acquired using a Velodyne VLP-16 LiDAR scanner and an Arducam IMX477 Mini camera, controlled via a Raspberry Pi board.

  4. i.c.sens Visual-Inertial-LiDAR Dataset

    • data.uni-hannover.de
    bag, jpeg, pdf, png +2
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). i.c.sens Visual-Inertial-LiDAR Dataset [Dataset]. https://data.uni-hannover.de/dataset/i-c-sens-visual-inertial-lidar-dataset
    Explore at:
bag, jpeg, txt, pdf, png, rviz. Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15 minutes drive, which is split into 8 rosbags of 2 minutes (10 GB) each. Besides, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.

[Figure: sensor platform]

    Image credit: Sören Vogel

    The data set was acquired in the context of the measurement campaign described in Schoen2018. Here, a vehicle, which can be seen below, was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System. This Mobile Mapping System consists of two laser scanners, a camera system and a localization unit containing a highly accurate GNSS/IMU system.

[Figure: measurement vehicle]

    Image credit: Sören Vogel

    The data acquisition took place in May 2019 during a sunny day in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route we took can be seen below. This route was completed three times in total, which amounts to a total driving time of 15 minutes.

[Figure: overview of the driven route]

    The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:

    • Velodyne HDL-64 LiDAR
    • LORD MicroStrain 3DM-GQ4-45 GNSS aided IMU
    • Pointgrey GS3-U3-23S6C-C RGB camera

    To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:

    roscore & rosrun rviz rviz -d icsens_data.rviz
    

    Afterwards, start playing a rosbag with

    rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
    
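For programmatic access, the bags can also be read directly in Python. The snippet below is a sketch assuming a ROS 1 environment with the rosbag Python package; the file name and the /imu/data topic are placeholders, so check the provided documentation file for the actual topic names.

    import rosbag

    with rosbag.Bag("icsens-visual-inertial-lidar-dataset-01.bag") as bag:
        print(list(bag.get_type_and_topic_info().topics))  # list recorded topics
        for topic, msg, t in bag.read_messages(topics=["/imu/data"]):
            print(t.to_sec(), msg.angular_velocity)
            break  # peek at the first message only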

    Below we provide some exemplary images and their corresponding point clouds.

[Figure: exemplary images and corresponding point clouds]

    Related publications:

    • R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.

    • R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.

    • R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.

    • R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.

    • R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.

    • R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.

    • R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.

    • R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.

  5. Data from: Developing a SLAM-based backpack mobile mapping system for indoor...

    • phys-techsciences.datastations.nl
    bin, exe, zip
    Updated Feb 22, 2022
    Cite
    S. Karam; S. Karam (2022). Developing a SLAM-based backpack mobile mapping system for indoor mapping [Dataset]. http://doi.org/10.17026/DANS-XME-KEPM
    Explore at:
bin(11456605), zip(21733), exe(17469035), exe(18190303), exe(447), bin(20142672), bin(62579), exe(17513963), bin(45862), exe(17284627), bin(6856377), bin(9279586), exe(17548337), exe(199), exe(17969103), bin(235037), exe(18250973), bin(192189), bin(14741220), bin(3471971), bin(127397), bin(338998), exe(23702808). Available download formats
    Dataset updated
    Feb 22, 2022
    Dataset provided by
    DANS Data Station Physical and Technical Sciences
    Authors
    S. Karam; S. Karam
    License

https://doi.org/10.17026/fp39-0x58

    Description

These files support the published journal article and thesis on IMU and LiDAR SLAM for indoor mapping. They include the datasets and functions used for point cloud generation. Date submitted: 2022-02-21

  6. LUCOOP: Leibniz University Cooperative Perception and Urban Navigation...

    • data.uni-hannover.de
    mp4, pdf, png, zip
    Updated Dec 12, 2024
    + more versions
    Cite
    i.c.sens (2024). LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset [Dataset]. https://data.uni-hannover.de/dataset/lucoop-leibniz-university-cooperative-perception-and-urban-navigation-dataset
    Explore at:
png, mp4, pdf, zip. Available download formats
    Dataset updated
    Dec 12, 2024
    Dataset authored and provided by
    i.c.sens
    License

Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    A real-world multi-vehicle multi-modal V2V and V2X dataset

    Recently published datasets have been increasingly comprehensive with respect to their variety of simultaneously used sensors, traffic scenarios, environmental conditions, and provided annotations. However, these datasets typically only consider data collected by one independent vehicle. Hence, there is currently a lack of comprehensive, real-world, multi-vehicle datasets fostering research on cooperative applications such as object detection, urban navigation, or multi-agent SLAM. In this paper, we aim to fill this gap by introducing the novel LUCOOP dataset, which provides time-synchronized multi-modal data collected by three interacting measurement vehicles. The driving scenario corresponds to a follow-up setup of multiple rounds in an inner city triangular trajectory. Each vehicle was equipped with a broad sensor suite including at least one LiDAR sensor, one GNSS antenna, and up to three IMUs. Additionally, Ultra-Wide-Band (UWB) sensors were mounted on each vehicle, as well as statically placed along the trajectory enabling both V2V and V2X range measurements. Furthermore, a part of the trajectory was monitored by a total station resulting in a highly accurate reference trajectory. The LUCOOP dataset also includes a precise, dense 3D map point cloud, acquired simultaneously by a mobile mapping system, as well as an LOD2 city model of the measurement area. We provide sensor measurements in a multi-vehicle setup for a trajectory of more than 4 km and a time interval of more than 26 minutes, respectively. Overall, our dataset includes more than 54,000 LiDAR frames, approximately 700,000 IMU measurements, and more than 2.5 hours of 10 Hz GNSS raw measurements along with 1 Hz data from a reference station. Furthermore, we provide more than 6,000 total station measurements over a trajectory of more than 1 km and 1,874 V2V and 267 V2X UWB measurements. Additionally, we offer 3D bounding box annotations for evaluating object detection approaches, as well as highly accurate ground truth poses for each vehicle throughout the measurement campaign.

    Data access

    Important: Before downloading and using the data, please check the Updates.zip in the "Data and Resources" section at the bottom of this web site. There, you find updated files and annotations as well as update notes.

    • The dataset is available here.
• Additional information is provided and constantly updated in our README.
    • The corresponding paper is available here.
    • Cite this as: J. Axmann et al., "LUCOOP: Leibniz University Cooperative Perception and Urban Navigation Dataset," 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 2023, pp. 1-8, doi: 10.1109/IV55152.2023.10186693.

    Preview

Video preview. Source of the LOD2 city model: excerpt from the geodata of the Landesamt für Geoinformation und Landesvermessung Niedersachsen, ©2023, www.lgln.de. [LGLN logo]

    Sensor Setup of the three measurement vehicles

[Figure: sensor setup of the three measurement vehicles]

    Sensor setup of all the three vehicles: Each vehicle is equipped with a LiDAR sensor (green), a UWB unit (orange), a GNSS antenna (purple), and a Microstrain IMU (red). Additionally, each vehicle has its unique feature: Vehicle 1 has an additional LiDAR at the trailer hitch (green) and a prism for the tracking of the total station (dark red hexagon). Vehicle 2 provides an iMAR iPRENA (yellow) and iMAR FSAS (blue) IMU, where the platform containing the IMUs is mounted inside the car (dashed box). Vehicle 3 carries the RIEGL MMS (pink). Along with the sensors and platforms, the right-handed body frame of each vehicle is also indicated.

    3D map point cloud

[Figure: 3D map point cloud]

    High resolution 3D map point cloud: Different locations and details along the trajectory. Colors according to reflectance values.

    Measurement scenario

[Figure: measurement scenario]

Driven trajectory and locations of the static sensors: The blue hexagons indicate the positions of the static UWB sensors, the orange star represents the location of the total station, and the orange shaded area illustrates the coverage of the total station. The route of the three measurement vehicles is shown in purple. Background map: © OpenStreetMap contributors.


    Number of annotations per class (final)

[Figure: number of annotations per class]


    Data structure

[Figure: data structure]

    Data format

[Figure: data format]

    Gallery

[Figure: the three measurement vehicles]

    From left to right: Van 1, van 2, van 3.

[Figure: total station tracking, UWB sensors, and RIEGL VMX-250 MMS]

    From left to right: Tracking of the prism on van 1 by means of the MS60 total station, the detected prism from the view point of the MS60 total station, PulsON 440 Ultra Wide Band (UWB) sensors, RIEGL VMX-250 Mobile Mapping System.

    Acknowledgement

This measurement campaign could not have been carried out without the help of many contributors. At this point, we thank Yuehan Jiang (Institute for Autonomous Cyber-Physical Systems, Hamburg), Franziska Altemeier, Ingo Neumann, Sören Vogel, Frederic Hake (all Geodetic Institute, Hannover), Colin Fischer (Institute of Cartography and Geoinformatics, Hannover), Thomas Maschke, Tobias Kersten, Nina Fletling (all Institut für Erdmessung, Hannover), Jörg Blankenbach (Geodetic Institute, Aachen), Florian Alpen (Hydromapper GmbH), Allison Kealy (Victorian Department of Environment, Land, Water and Planning, Melbourne), Günther Retscher, Jelena Gabela (both Department of Geodesy and Geoinformation, Wien), Wenchao Li (Solinnov Pty Ltd), Adrian Bingham (Applied Artificial Intelligence Institute,

  7. Extracted and classified road markings from a mobile lidar dataset collected...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Jan 26, 2024
    Cite
    Michael Olsen; Jaehoon Jung (2024). Extracted and classified road markings from a mobile lidar dataset collected in Philomath, OR. [Dataset]. http://doi.org/10.7910/DVN/0STTJR
    Explore at:
Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jan 26, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Michael Olsen; Jaehoon Jung
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Philomath
    Description

The dataset is an annotated point cloud in ASPRS LAS v1.2 format, annotated with classification numbers representing six different road markings: lane markings (1), pedestrian crosswalk and text (2), bike (3), left arrow (4), right arrow (5), straight arrow (6), and others (0). The point cloud dataset was obtained using the Oregon Department of Transportation's current mobile lidar system (Leica Pegasus:Two). The data were georeferenced by Oregon DOT in the supporting software for the Leica Pegasus:Two. The authors processed the data to extract the road markings using the road marking extraction tool (Rome2) developed in this Pactrans research.
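For example, one way to pull out a single marking class from the annotated LAS file (a sketch assuming the laspy reader; the file name is a placeholder):

    import laspy
    import numpy as np

    las = laspy.read("philomath_road_markings.las")      # placeholder file name
    xyz = np.vstack([las.x, las.y, las.z]).T
    cls = np.asarray(las.classification)
    crosswalk = xyz[cls == 2]                            # 2 = pedestrian crosswalk and text
    print(f"{len(crosswalk)} crosswalk/text points out of {len(xyz)}")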

  8. Mobile LiDAR Data

    • figshare.com
    bin
    Updated Jan 22, 2021
    Cite
    Bin Wu (2021). Mobile LiDAR Data [Dataset]. http://doi.org/10.6084/m9.figshare.13625054.v1
    Explore at:
bin. Available download formats
    Dataset updated
    Jan 22, 2021
    Dataset provided by
    figshare
Figshare: http://figshare.com/
    Authors
    Bin Wu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This is a point cloud sample dataset which was collected by a mobile LiDAR system (MLS).

  9. Hungarian MLS point clouds of railroad environment and annotated ground...

    • data.mendeley.com
    Updated Apr 4, 2022
    Cite
    Mate Cserep (2022). Hungarian MLS point clouds of railroad environment and annotated ground truth data [Dataset]. http://doi.org/10.17632/ccxpzhx9dj.1
    Explore at:
    Dataset updated
    Apr 4, 2022
    Authors
    Mate Cserep
    License

Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

These sample LiDAR datasets were collected by the Hungarian State Railways with a Riegl VMX-450 high-density mobile mapping system (MMS) mounted on a railroad vehicle. The sensor was capable of recording 1.1 million points per second with an average 3-dimensional range precision of 3 mm and a maximum threshold of 7 mm. Average positional accuracy was 3 cm, with a maximum threshold of 5 cm. The acquired point clouds contain the georeferenced spatial information (3D coordinates) with intensity and RGB data attached to the points. The applied reference system is the Hungarian national spatial reference system, EPSG:23700.

3 datasets with different topographical regions of Hungary were selected:

1) mav_szabadszallas_csengod_665500_162600_665900_163200.laz is a curved rail track segment on flat terrain between the city of Szabadszállás and the town of Csengőd. The selected segment is ca. 600 m long, containing 51.8 million points.
2) mav_sztg_szh_439040_183444_440377_183863.laz is a curved rail track segment with varied terrain and slopes between the cities Szentgotthárd and Szombathely. The selected segment is ca. 1500 m long, containing 58.6 million points.
3) mav_szabadszallas_csengod_666285_159100_666436_159200.laz is a curved rail track segment on flat terrain between Szabadszállás and Csengőd, 100 m long, containing 7.3 million points.

Manually annotated ground truth data for cable and rail track recognition is also attached.
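A short sketch for loading one of the LAZ tiles with its intensity and RGB attributes (assuming laspy with a LAZ backend such as lazrs installed; coordinates remain in EPSG:23700):

    import laspy
    import numpy as np

    las = laspy.read("mav_szabadszallas_csengod_666285_159100_666436_159200.laz")
    xyz = np.vstack([las.x, las.y, las.z]).T             # EPSG:23700 coordinates
    intensity = np.asarray(las.intensity)
    rgb = np.vstack([las.red, las.green, las.blue]).T
    print(xyz.shape, intensity.min(), intensity.max(), rgb.shape)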

  10. Intelligent systems for mapping amphibian mortality on Portuguese roads. -...

    • iepnb.es
    Cite
    Intelligent systems for mapping amphibian mortality on Portuguese roads. - Dataset - CKAN [Dataset]. https://iepnb.es/catalogo/dataset/intelligent-systems-for-mapping-amphibian-mortality-on-portuguese-roads1
    Explore at:
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

Roads have multiple effects on wildlife, from animal mortality, habitat and population fragmentation, to modification of animal reproductive behavior. Amphibians in particular, due to their activity patterns, population structure, and preferred habitats, are strongly affected by traffic intensity and road density. On the other hand, road-kill studies and conservation measures have been extensively applied on highways, although amphibians die massively on country roads, where conservation measures are not applied. Many countries (e.g. Portugal) do not have any national program for monitoring road-kills, a common practice in other European countries (e.g. UK, The Netherlands). Such monitoring is necessary to identify hotspots of road-kills in order to implement conservation measures correctly. However, monitoring road-kills is expensive and time consuming, and depends mainly on volunteers. Therefore, cheap, easy to implement, and automatic methods for detecting road-kills over larger areas (broad monitoring) and along time (continuous monitoring) are necessary. We present here the preliminary results from a research project which aims to build a cheap and efficient system for detecting amphibian roadkills using computer-vision techniques from robotics. We propose two different solutions: 1) a Mobile Mapping System to detect amphibians' road-kills on roads automatically, and 2) a Fixed Detection System to monitor road-kills at a particular road location automatically over a long time. The first methodology will detect and locate road-kills through the automatic classification of road surface images taken from a car with a digital camera linked to a GPS. Road-kill casualties will be detected automatically in the image through a classification algorithm developed specifically for this purpose. The second methodology will detect amphibians crossing a particular road point and determine whether they survive or not. Both the Fixed and Mobile systems will use similar programs. The algorithm is trained with existing data. For now, we can only present some results for the Mobile Mapping System. We are performing different tests with different cameras, namely a linear camera, used in different industrial quality-control solutions, and an outdoor GoPro camera, very popular in sports such as biking. Our results prove that we can detect different road-killed and live animals at an acceptable car speed and at a high spatial resolution. Both mapping systems will provide the capacity to detect road-kill casualties automatically. With these data, it will be possible to analyze the distribution of road-kills and hotspots, to identify the main migration routes, to count the total number of amphibians crossing a road, to determine how many of those individuals are effectively road-killed, and to define where conservation measures should be implemented. All these objectives will be achieved more easily and at a lower cost in funds, time, and personnel resources.

  11. BLE RSS dataset for fingerprinting radio map calibration

    • data.niaid.nih.gov
    Updated Sep 20, 2021
    Cite
    Marcin Kolakowski (2021). BLE RSS dataset for fingerprinting radio map calibration [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5457590
    Explore at:
    Dataset updated
    Sep 20, 2021
    Dataset provided by
    Warsaw University of Technology
    Authors
    Marcin Kolakowski
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The dataset contains Bluetooth Low Energy signal strengths measured in a fully furnished flat. The dataset was originally used in a study concerning RSS-fingerprinting based indoor positioning systems. The data were gathered using a hybrid BLE-UWB localization system installed in the apartment and a mobile robotic platform equipped with a LiDAR. The dataset comprises power measurement results and LiDAR scans performed at 4104 points. The scans used for initial environment mapping and the power levels registered in two test scenarios are also attached.

    The set contains both raw and preprocessed measurement data. The Python code for raw data loading is supplied.

    The detailed dataset description can be found in the dataset_description.pdf file.
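To illustrate the kind of RSS-fingerprinting use this radio map supports, a nearest-fingerprint position estimate can be sketched as follows (all arrays are hypothetical stand-ins, not the dataset's actual layout; see dataset_description.pdf for the real structure):

    import numpy as np

    def knn_position(map_rss, map_xy, query_rss, k=3):
        """map_rss: (N, A) RSS at N calibration points for A anchors,
        map_xy: (N, 2) coordinates of those points, query_rss: (A,) measurement."""
        d = np.linalg.norm(map_rss - query_rss, axis=1)
        nearest = np.argsort(d)[:k]
        return map_xy[nearest].mean(axis=0)

    rng = np.random.default_rng(0)
    rss_map = rng.uniform(-90, -40, size=(100, 5))     # 100 points, 5 BLE anchors
    xy_map = rng.uniform(0, 10, size=(100, 2))         # positions in the flat, meters
    print(knn_position(rss_map, xy_map, rss_map[17]))  # estimate for a map point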

    When using the dataset, please consider citing the original paper, in which the data were used:

    M. Kolakowski, “Automated Calibration of RSS Fingerprinting Based Systems Using a Mobile Robot and Machine Learning”, Sensors , vol. 21, 6270, Sep. 2021 https://doi.org/10.3390/s21186270

  12. SemanticRail3D Pointcloud Images

    • kaggle.com
    zip
    Updated Apr 9, 2025
    Cite
    Arshia Ghasemlou (2025). SemanticRail3D Pointcloud Images [Dataset]. https://www.kaggle.com/datasets/arshiagha/semanticrail3d-pointcloud-images
    Explore at:
zip (1073261799 bytes). Available download formats
    Dataset updated
    Apr 9, 2025
    Authors
    Arshia Ghasemlou
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

✨ Introduction The SemanticRail3D dataset is a 3D point cloud collection tailored for railway infrastructure semantic and instance segmentation. In its original form, the dataset comprises 438 point clouds, each covering approximately 200 meters of track, with a total of around 2.8 billion points annotated into 11 semantic classes. Collected using high-resolution LiDAR via a LYNX Mobile Mapper (≈980 points/m² with 5 mm precision), this dataset serves as an excellent benchmark for state-of-the-art AI models.

    🚀 Key Enhancements & Processing To further enrich its utility for machine learning applications, the dataset has undergone several advanced preprocessing steps and quality assurance measures:

🔍 Data Standardization via PCA
• Targeted features: linear elements, including rails and all associated wires.
• PCA application: extracts the principal orientation of these elements by identifying the axis of maximum variance.
• Reorientation: aligns the extracted principal axis with the x-axis, ensuring consistency and simplifying downstream analysis (a minimal sketch of this step follows below).
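A minimal sketch of that reorientation step (an assumed numpy implementation, not the dataset's own code): compute the principal axis of the linear elements and rotate the cloud so that this axis coincides with the x-axis.

    import numpy as np

    def align_to_x(points):
        """points: (N, 3) array of the linear elements (rails, wires)."""
        centered = points - points.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        axis = eigvecs[:, np.argmax(eigvals)]          # direction of maximum variance
        angle = -np.arctan2(axis[1], axis[0])          # horizontal rotation onto +x
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return centered @ rot.T

    # Hypothetical track running diagonally in the xy-plane
    track = np.column_stack([np.linspace(0, 100, 500)] * 2 + [np.zeros(500)])
    print(align_to_x(track)[:3])                       # y is ~0 after alignment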

    📸 Multi-Perspective Visualizations Each point cloud in the dataset is accompanied by four rendered images, generated from distinct camera viewpoints to enhance interpretability and usability. These views are designed to showcase the spatial structure of the railway environment from meaningful angles, aiding both visual inspection and AI model training.

    The saved camera views are based on spherical coordinates and include:

    🔹 Front View • A head-on perspective with a slight downward angle (azimuth = 50°, elevation = 35°) to give a balanced overview of the scene structure.

    🔹 Side View • A lateral perspective (azimuth = 130°, elevation = 55°) that highlights the side profile of rail and overhead wire structures.

    🔹 Diagonal View • An oblique angle (azimuth = -40°, elevation = 55°) providing depth perception and a richer understanding of the 3D layout.

    🔹 Overhead View • A top-down (bird’s-eye) perspective (azimuth = -140°, elevation = 35°) showing the full track arrangement and spatial alignment.

    🎨 Visual Color Coding

    Color Code Mapping: The points in the images are colorized based on a standardized mapping to clearly differentiate between semantic classes:

    🎨 Semantic Class Color Legend

Class: Color
• Unclassified: 🔘 Gray
• Rail: 🟫 Brown
• Catenary: 🔵 Blue
• Contact: 🔴 Red
• Droppers: 🟣 Purple
• Other Wires: 🟦 Cyan
• Masts: 🟢 Green
• Signs: 🟧 Orange
• Traffic Lights: 🟡 Yellow
• Marks: 🩷 Pink
• Signs in Masts: 🟪 Magenta
• Lights: ⚫ Black

    ✅ Quality Assurance through Human Evaluation

    Detailed Review: • Each point cloud undergoes a rigorous expert review to ensure accurate and consistent labeling.

    Rating System: • Files are rated on a scale from 1 (needs improvement) to 5 (excellent quality). • The ratings are compiled in a separate CSV file for ease of reference.

    Label Error Codes: Within the CSV file, objects with labeling mistakes are flagged using the following codes: • R: Rails • W: Any kind of wires and cables • M: Masts • TS: Traffic signs • Noise: Miscellaneous errors or irrelevant data

    🎯 Dataset Highlights Comprehensive Coverage: • 438 point clouds covering ~200 meters each • Approximately 2.8 billion points annotated into 11 semantic classes

    High-Quality LiDAR Acquisition: • Dual LiDAR sensors on a Mobile Mapping System • Point density of ~980 points/m² and a precision of 5 mm

    Consistent Data Alignment: • PCA is applied to linear elements (rails and wires) for reorientation along the x-axis

    Enhanced Visualizations: • Four images per point cloud provide multiple viewpoints • Points are colorized based on the standardized color code for immediate visual clarity

    Robust Quality Control: • Expert human evaluation rates each point cloud (1 to 5) • A separate CSV file holds the quality ratings along with detailed error codes for any mislabeling

    🔗 Summary The enhanced SemanticRail3D dataset builds on a robust collection of 3D railway point clouds with advanced preprocessing techniques and comprehensive quality assurance. Through PCA-driven alignment, multi-perspective image generation, and an intuitive color coding system, the dataset standardizes data for efficient model training. Furthermore, the additional CSV file detailing human evaluation ratings and specific label error codes provides users with clear insights into the reliability and accuracy of the annotations. This complete solution sets a new benchmark for railway infrastructure analysis, empowering researchers and practitioners to develop more precise and reliable AI solutions.

  13. Detroit Street View Panoramic Imagery

    • detroitdata.org
    • data.detroitmi.gov
    • +1more
    Updated Mar 24, 2025
    Cite
    City of Detroit (2025). Detroit Street View Panoramic Imagery [Dataset]. https://detroitdata.org/dataset/detroit-street-view-panoramic-imagery
    Explore at:
html, arcgis geoservices rest api. Available download formats
    Dataset updated
    Mar 24, 2025
    Dataset provided by
    City of Detroit
    Area covered
    Detroit
    Description
    Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ 360° panoramic imagery (as well as LiDAR) is collected using a vehicle-mounted mobile mapping system.

The City of Detroit distributes 360° panoramic street view imagery from the Detroit Street View program via Mapillary.com. Within Mapillary, users can search by address, pan/zoom around the map, and load images by clicking on image points. Mapillary also provides several tools for accessing and analyzing this information.
    Please see Mapillary API documentation for more information about programmatic access and specific data components within Mapillary.
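A rough sketch of such programmatic access via the Mapillary Graph API is shown below; the endpoint, field names, and token format here are assumptions and should be confirmed against the Mapillary API documentation before use.

    import requests

    ACCESS_TOKEN = "MLY|app_id|token"                 # placeholder Mapillary token
    params = {
        "access_token": ACCESS_TOKEN,
        "bbox": "-83.07,42.32,-83.03,42.35",          # rough lon/lat box over Detroit
        "fields": "id,captured_at,thumb_1024_url",
        "limit": 5,
    }
    resp = requests.get("https://graph.mapillary.com/images", params=params, timeout=30)
    for image in resp.json().get("data", []):
        print(image["id"], image.get("thumb_1024_url"))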
  14. Robot@Home2, a robotic dataset of home environments

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    • +1more
    Updated Apr 4, 2024
    + more versions
    Cite
    Ambrosio-Cestero, Gregorio; Ruiz-Sarmiento, José Raul; González-Jiménez, Javier (2024). Robot@Home2, a robotic dataset of home environments [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3901563
    Explore at:
    Dataset updated
    Apr 4, 2024
    Dataset provided by
University of Málaga
    Authors
    Ambrosio-Cestero, Gregorio; Ruiz-Sarmiento, José Raul; González-Jiménez, Javier
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Robot-at-Home dataset (Robot@Home, paper here) is a collection of raw and processed data from five domestic settings compiled by a mobile robot equipped with 4 RGB-D cameras and a 2D laser scanner. Its main purpose is to serve as a testbed for semantic mapping algorithms through the categorization of objects and/or rooms.

    This dataset is unique in three aspects:

    The provided data were captured with a rig of 4 RGB-D sensors with an overall field of view of 180°H. and 58°V., and with a 2D laser scanner.

    It comprises diverse and numerous data: sequences of RGB-D images and laser scans from the rooms of five apartments (87,000+ observations were collected), topological information about the connectivity of these rooms, and 3D reconstructions and 2D geometric maps of the visited rooms.

    The provided ground truth is dense, including per-point annotations of the categories of the objects and rooms appearing in the reconstructed scenarios, and per-pixel annotations of each RGB-D image within the recorded sequences

    During the data collection, a total of 36 rooms were completely inspected, so the dataset is rich in contextual information of objects and rooms. This is a valuable feature, missing in most of the state-of-the-art datasets, which can be exploited by, for instance, semantic mapping systems that leverage relationships like pillows are usually on beds or ovens are not in bathrooms.

    Robot@Home2

Robot@Home2 is an enhanced version aimed at improving usability and functionality for developing and testing mobile robotics and computer vision algorithms. It consists of three main components. Firstly, a relational database that states the contextual information and data links, compatible with Standard Query Language. Secondly, a Python package for managing the database, including downloading, querying, and interfacing functions. Finally, learning resources in the form of Jupyter notebooks, runnable locally or on the Google Colab platform, enabling users to explore the dataset without local installations. These freely available tools are expected to enhance the ease of exploiting the Robot@Home dataset and accelerate research in computer vision and robotics.
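Because the v2 release ships as a SQLite database, it can also be inspected with Python's standard sqlite3 module, independently of the companion package. The sketch below only lists the tables the file contains, since the schema is not asserted here (the database file name is a placeholder).

    import sqlite3

    con = sqlite3.connect("robotathome.db")           # placeholder file name
    tables = con.execute(
        "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
    print(sorted(t[0] for t in tables))
    con.close()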

    If you use Robot@Home2, please cite the following paper:

    Gregorio Ambrosio-Cestero, Jose-Raul Ruiz-Sarmiento, Javier Gonzalez-Jimenez, The Robot@Home2 dataset: A new release with improved usability tools, in SoftwareX, Volume 23, 2023, 101490, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101490.

@article{ambrosio2023robotathome2,
  title = {The Robot@Home2 dataset: A new release with improved usability tools},
  author = {Gregorio Ambrosio-Cestero and Jose-Raul Ruiz-Sarmiento and Javier Gonzalez-Jimenez},
  journal = {SoftwareX},
  volume = {23},
  pages = {101490},
  year = {2023},
  issn = {2352-7110},
  doi = {https://doi.org/10.1016/j.softx.2023.101490},
  url = {https://www.sciencedirect.com/science/article/pii/S2352711023001863},
  keywords = {Dataset, Mobile robotics, Relational database, Python, Jupyter, Google Colab}
}

Version history
• v1.0.1: Fixed minor bugs.
• v1.0.2: Fixed some inconsistencies in some directory names. Fixes were necessary to automate the generation of the next version.
• v2.0.0: SQL-based dataset. Robot@Home v1.0.2 has been packed into a SQLite database along with RGB-D and scene files, which have been assembled into a hierarchically structured directory free of redundancies. Path tables are also provided to reference files in both the v1.0.2 and v2.0.0 directory hierarchies. This version has been generated automatically from version 1.0.2 through the toolbox.
• v2.0.1: A forgotten foreign key pair has been added.
• v2.0.2: The views have been consolidated as tables, which allows a considerable improvement in access time.
• v2.0.3: The previous version did not include the database. In this version the database has been uploaded.
• v2.1.0: Depth images have been updated to 16-bit. Additionally, both the RGB images and the depth images are oriented in the original camera format, i.e. landscape.

  15. Detroit Street View Terrestrial LiDAR (2020-2022)

    • data.ferndalemi.gov
    • detroitdata.org
    • +2more
    Updated Mar 30, 2023
    + more versions
    Cite
    City of Detroit (2023). Detroit Street View Terrestrial LiDAR (2020-2022) [Dataset]. https://data.ferndalemi.gov/datasets/4eb4b739af0b4ba38def4a6726ba7d5c
    Explore at:
    Dataset updated
    Mar 30, 2023
    Dataset authored and provided by
    City of Detroit
    Area covered
    Detroit,
    Description

Detroit Street View (DSV) is an urban remote sensing program run by the Enterprise Geographic Information Systems (EGIS) Team within the Department of Innovation and Technology at the City of Detroit. The mission of Detroit Street View is ‘To continuously observe and document Detroit’s changing physical environment through remote sensing, resulting in freely available foundational data that empowers effective city operations, informed decision making, awareness, and innovation.’ LiDAR (as well as panoramic imagery) is collected using a vehicle-mounted mobile mapping system.

Due to variations in processing, index lines are not currently available for all existing LiDAR datasets, including all data collected before September 2020. Index lines represent the approximate path of the vehicle within the time extent of the given LiDAR file. The actual geographic extent of the LiDAR point cloud varies dependent on line-of-sight.

Compressed (LAZ format) point cloud files may be requested by emailing gis@detroitmi.gov with a description of the desired geographic area, any specific dates/file names, and an explanation of interest and/or intended use. Requests will be filled at the discretion and availability of the Enterprise GIS Team. Deliverable file size limitations may apply and requestors may be asked to provide their own online location or physical media for transfer.

LiDAR was collected using an uncalibrated Trimble MX2 mobile mapping system. The data is not quality controlled, and no accuracy assessment is provided or implied. Results are known to vary significantly. Users should exercise caution and conduct their own comprehensive suitability assessments before requesting and applying this data.

Sample Dataset: https://detroitmi.maps.arcgis.com/home/item.html?id=69853441d944442f9e79199b57f26fe3

  16. UNORGANIZED-LIDAR POINT CLOUD-DATASET

    • data.mendeley.com
    Updated May 8, 2025
    Cite
    hong lang (2025). UNORGANIZED-LIDAR POINT CLOUD-DATASET [Dataset]. http://doi.org/10.17632/9ry6mj4dw8.1
    Explore at:
    Dataset updated
    May 8, 2025
    Authors
    hong lang
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Pavement planar coefficients are critical for a wide range of civil engineering applications, including 3D city modeling, extraction of pavement design parameters, and assessment of pavement conditions. Existing plane fitting methods, however, often struggle to maintain accuracy and stability in complex road environments, particularly when the point cloud is affected by non-pavement objects such as trees, curbstones, pedestrians, and vehicles.

    REoPC is proposed as a robust two-stage estimation method based on road point clouds acquired using a hybrid solid-state LiDAR. The method consists of two main parts: coarse estimation and refined estimation. The first stage employs a dual-plane sliding window to remove major outliers and extract an initial surface. The second stage introduces a new cost function based on the Geman-McClure estimator to further suppress residual noise and reduce fitting instability caused by outlier influence and algorithmic randomness.

    Evaluation is conducted on both synthetic and real-world datasets collected using a custom mobile LiDAR scanning system across three urban road scenarios—flat, crowned, and traffic-interfered segments. Each scenario includes 100 frames of road point clouds, with approximately 35,000 points per frame, offering a diverse and challenging benchmark. REoPC consistently outperforms existing methods in terms of accuracy and robustness and exhibits low sensitivity to parameter tuning, demonstrating strong applicability in varied real-world conditions.
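As a generic illustration of the robust refinement idea (an iteratively reweighted least-squares plane fit with a Geman-McClure-style weight; this is not the authors' REoPC implementation):

    import numpy as np

    def fit_plane_gm(points, sigma=0.05, iters=20):
        """points: (N, 3). Returns (normal, d) with normal . p + d = 0."""
        w = np.ones(len(points))
        for _ in range(iters):
            centroid = np.average(points, axis=0, weights=w)
            cov = np.cov((points - centroid).T, aweights=w)
            eigvals, eigvecs = np.linalg.eigh(cov)
            normal = eigvecs[:, 0]                     # smallest-eigenvalue direction
            d = -normal @ centroid
            r = points @ normal + d                    # signed residuals
            w = sigma**2 / (sigma**2 + r**2) ** 2      # Geman-McClure influence weight
        return normal, d

    # Hypothetical road patch: near-planar points plus some elevated "vehicle" outliers
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(0, 10, 2000), rng.uniform(0, 3, 2000),
                           rng.normal(0, 0.01, 2000)])
    pts[:50, 2] += 1.5
    print(fit_plane_gm(pts))                           # normal close to (0, 0, ±1)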

  17. Technical specifications of the Mobile LiDAR System.

    • figshare.com
    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Mónica Herrero-Huerta; Roderik Lindenbergh; Pablo Rodríguez-Gonzálvez (2023). Technical specifications of the Mobile LiDAR System. [Dataset]. http://doi.org/10.1371/journal.pone.0196004.t001
    Explore at:
xls. Available download formats
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Mónica Herrero-Huerta; Roderik Lindenbergh; Pablo Rodríguez-Gonzálvez
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Technical specifications of the Mobile LiDAR System.

  18. Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018

    • catalog.data.gov
    • data.usgs.gov
    • +2more
    Updated Nov 19, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Lidar derived shoreline for Beaver Lake near Rogers, Arkansas, 2018 [Dataset]. https://catalog.data.gov/dataset/lidar-derived-shoreline-for-beaver-lake-near-rogers-arkansas-2018
    Explore at:
    Dataset updated
    Nov 19, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    Beaver Lake, Rogers, Arkansas
    Description

    Beaver Lake was constructed in 1966 on the White River in the northwest corner of Arkansas for flood control, hydroelectric power, public water supply, and recreation. The surface area of Beaver Lake is about 27,900 acres and approximately 449 miles of shoreline are at the conservation pool level (1,120 feet above the North American Vertical Datum of 1988). Sedimentation in reservoirs can result in reduced water storage capacity and a reduction in usable aquatic habitat. Therefore, accurate and up-to-date estimates of reservoir water capacity are important for managing pool levels, power generation, water supply, recreation, and downstream aquatic habitat. Many of the lakes operated by the U.S. Army Corps of Engineers are periodically surveyed to monitor bathymetric changes that affect water capacity. In October 2018, the U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, completed one such survey of Beaver Lake using a multibeam echosounder. The echosounder data was combined with light detection and ranging (lidar) data to prepare a bathymetric map and a surface area and capacity table. Collection of bathymetric data in October 2018 at Beaver Lake near Rogers, Arkansas, used a marine-based mobile mapping unit that operates with several components: a multibeam echosounder (MBES) unit, an inertial navigation system (INS), and a data acquisition computer. Bathymetric data were collected using the MBES unit in longitudinal transects to provide complete coverage of the lake. The MBES was tilted in some areas to improve data collection along the shoreline, in coves, and in areas that are shallower than 2.5 meters deep (the practical limit of reasonable and safe data collection with the MBES). Two bathymetric datasets collected during the October 2018 survey include the gridded bathymetric point data (BeaverLake2018_bathy.zip) computed on a 3.28-foot (1-meter) grid using the Combined Uncertainty and Bathymetry Estimator (CUBE) method, and the bathymetric quality-assurance dataset (BeaverLake2018_QA.zip). The gridded point data used to create the bathymetric surface (BeaverLake2018_bathy.zip) was quality-assured with data from 9 selected resurvey areas (BeaverLake2018_QA.zip) to test the accuracy of the gridded bathymetric point data. The data are provided as comma delimited text files that have been compressed into zip archives. The shoreline was created from bare-earth lidar resampled to a 3.28-foot (1-meter) grid spacing. A contour line representing the flood pool elevation of 1,135 feet was generated from the gridded data. The data are provided in the Environmental Systems Research Institute shapefile format and have the common root name of BeaverLake2018_1135-ft. All files in the shapefile group must be retrieved to be useable.
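A short sketch for reading the released files (pandas and geopandas assumed; the column names inside BeaverLake2018_bathy.zip are not asserted here, so they are simply printed):

    import pandas as pd
    import geopandas as gpd

    bathy = pd.read_csv("BeaverLake2018_bathy.zip")   # pandas reads a single-file zip directly
    print(bathy.columns.tolist(), len(bathy))

    shoreline = gpd.read_file("BeaverLake2018_1135-ft.shp")
    print(shoreline.crs, shoreline.geometry.type.unique())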

  19. Better intelligent systems for mapping amphibian and small bird roadkill. -...

    • iepnb.es
    Cite
    Better intelligent systems for mapping amphibian and small bird roadkill. - Dataset - CKAN [Dataset]. https://iepnb.es/catalogo/dataset/better-intelligent-systems-for-mapping-amphibian-and-small-bird-roadkill1
    Explore at:
    License

MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

Roads have multiple effects on wildlife, from animal mortality, habitat and population fragmentation, to modification of animal reproductive behaviour. Monitoring roadkill is expensive and time-consuming, and depends mainly on volunteers. Thus, cheap, easy to implement, and automatic methods for detecting roadkill over larger areas and over time are necessary. We present results from the research project Life LINES, where we developed a cheap and efficient system for detecting amphibian and small bird roadkill using computer vision techniques. We present here the Mobile Mapping System 2, an improved version of the Mobile Mapping System 1 developed during the Roadkill project and presented at previous IENE congresses. We have successfully reduced the size and energy consumption of the MMS, so the device can now be attached directly to the back of any car. The MMS2 is composed of several cameras (multi-spectral, visual with 3D laser technology, and high definition). The algorithms were trained with previously collected pictures of road-killed amphibians and small birds. We have tested all images using the Haar Cascade algorithm from the OpenCV library, which provided high classification rates. We tested the MMS2 in three conditions: a control test with plastic models of amphibians and birds on a small road; a control test with collection specimens of amphibians and birds; and a real test on a 30 km road survey in southern Portugal. The MMS2 has been developed using low-cost components with the idea of saving funds, time, and personnel resources for wildlife preservation.

  20. m

    Shanghai Huace Navigation Technology Ltd -...

    • macro-rankings.com
    csv, excel
    Updated Aug 30, 2025
    + more versions
    Cite
    macro-rankings (2025). Shanghai Huace Navigation Technology Ltd - Net-Income-From-Continuing-Operations [Dataset]. https://www.macro-rankings.com/markets/stocks/300627-she/income-statement/net-income-from-continuing-operations
    Explore at:
    excel, csv Available download formats
    Dataset updated
    Aug 30, 2025
    Dataset authored and provided by
    macro-rankings
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    China
    Description

    Net-Income-From-Continuing-Operations time series for Shanghai Huace Navigation Technology Ltd. Shanghai Huace Navigation Technology Ltd. engages in the research and development, manufacturing, and integration of high-precision satellite navigation and positioning technologies in China and internationally. The company offers global navigation satellite system (GNSS) smart antennas and antennas, controllers and tablets, surveying and mapping software, GNSS sensors, total stations, and data links; handheld laser scanners, airborne LiDAR and mobile mapping systems, and UAV platforms and cameras; USV platforms and hydrographic sensors; SAR systems; and GNSS corrections for use in survey and engineering, 3D mobile mapping, marine surveying, monitoring and infrastructure, and positioning services. It also provides machine control systems for excavators, graders, and dozers; GNSS+INS and IMU sensors; and auto-steering, manual guidance, land-leveling, and GNSS systems. The company serves the geospatial, machine control, navigation, and agriculture industries, and additionally engages in property management, investing, and research and development activities. Shanghai Huace Navigation Technology Ltd. was founded in 2003 and is headquartered in Shanghai, China.

Parking lot locations and utilization samples in the Hannover Linden-Nord area from LiDAR mobile mapping surveys


Parking Occupancy

For modelling the parking occupancy, single slots are sampled as center points every 5 m along the parking areas. In this way, they can be integrated into a street/routing graph, for example, as prepared in Wage et al. (2023); custom representations can likewise be derived from the parking areas and vehicle detections. These parking points were then intersected with the vehicle bounding boxes to identify occupancy at the respective epochs.
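
A minimal sketch of this sampling-and-intersection step is given below, assuming the parking areas are available as Shapely geometries and the vehicle detections as bounding-box polygons; the simplified lane axis and box coordinates are illustrative and do not mirror the published files.

    # Sketch: sample slot center points every 5 m along a parking lane and mark
    # them occupied if they fall inside any detected vehicle box of one epoch.
    from shapely.geometry import LineString, box

    parking_lane = LineString([(0, 0), (60, 0)])              # simplified lane axis, metres
    vehicle_boxes = [box(3, -1, 8, 1), box(21, -1, 26, 1)]    # detections at one epoch

    slots = [parking_lane.interpolate(d) for d in range(0, int(parking_lane.length) + 1, 5)]
    occupancy = [any(b.contains(slot) for b in vehicle_boxes) for slot in slots]
    print([((p.x, p.y), occ) for p, occ in zip(slots, occupancy)])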

Figure 3: Overview map of parking slots' average load (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/ca0b97c8-2542-479e-83d7-74adb2fc47c0/download/datenpub-bays.png)

Unoccupied spaces, however, cannot be determined quite as trivially the other way around, since the absence of a detected vehicle may simply reflect the absence of a measurement or observation. A parking space is therefore only recorded as unoccupied if a vehicle was detected at the same time in its neighbourhood on the same parking lane, so that it can be assumed that a measurement exists for that lane.
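
This observation rule can be stated compactly, as in the sketch below (the function and argument names are illustrative only):

    # Sketch: only trust "free" where the lane was demonstrably observed,
    # i.e. some vehicle was detected on the same parking lane in the same epoch.
    def classify_slot(occupied: bool, lane_had_any_detection: bool) -> str:
        if occupied:
            return "occupied"
        if lane_had_any_detection:
            return "free"       # lane was scanned, slot genuinely empty
        return "unknown"        # no evidence the lane was observed at all

    print(classify_slot(False, True))    # -> free
    print(classify_slot(False, False))   # -> unknown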

To close temporal gaps, hourly interpolations were made for each parking slot, assuming that a space occupied at two consecutive observations was also occupied in between, and that a space free at both observations was free in between. If the state changed between two observations, the intermediate hours are assigned a proportional value. To close spatial gaps, unobserved spaces in the area are assigned occupation patterns drawn randomly from the ten closest observed patterns.
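
The hourly gap filling between two observations of one slot could look roughly like the following sketch; the data layout and function name are assumptions for illustration:

    # Sketch: fill hourly occupancy between two observations of one parking slot.
    # Matching states are carried through; a change yields proportional values.
    import numpy as np

    def fill_hours(hour_a, occ_a, hour_b, occ_b):
        hours = np.arange(hour_a, hour_b + 1)
        if occ_a == occ_b:
            return dict(zip(hours, np.full(len(hours), float(occ_a))))
        return dict(zip(hours, np.linspace(float(occ_a), float(occ_b), len(hours))))

    print(fill_hours(9, 1, 14, 0))   # occupied at 9:00, free at 14:00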

This results in an exemplary occupancy pattern of a synthetic day. Depending on the application, the value could be interpreted as occupancy probability or occupancy share.

Figure 4: Example parking area occupation pattern (https://data.uni-hannover.de/dataset/0945cd36-6797-44ac-a6bd-b7311f0f96bc/resource/184a1f75-79ab-4d0e-bb1b-8ed170678280/download/occupation_example.png)

References

  • F. Bock, D. Eggert and M. Sester (2015): On-street Parking Statistics Using LiDAR Mobile Mapping, 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 2015, pp. 2812-2818. https://doi.org/10.1109/ITSC.2015.452
  • A. Leichter, U. Feuerhake, and M. Sester (2021): Determination of Parking Space and its Concurrent Usage Over Time Using Semantically Segmented Mobile Mapping Data, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B2-2021, 185–192. https://doi.org/10.5194/isprs-archives-XLIII-B2-2021-185-2021
  • O. Wage, M. Heumann, and L. Bienzeisler (2023): Modeling and Calibration of Last-Mile Logistics to Study Smart-City Dynamic Space Management Scenarios. In 1st ACM SIGSPATIAL International Workshop on Sustainable Mobility (SuMob ’23), November 13, 2023, Hamburg, Germany. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3615899.3627930