This is a seamless bare earth digital elevation model (DEM) created from lidar terrain elevation data for the Commonwealth of Massachusetts. It represents the elevation of the surface with vegetation and structures removed. The spatial resolution of the map is 1 meter; the elevation of each 1-meter square cell was linearly interpolated from classified lidar-derived point data.

This version of the DEM stores the elevation values as integers. The native VALUE field represents the elevation above/below sea level in meters. MassGIS added a FEET field to the VAT (value attribute table) to store the elevation in feet, calculated by multiplying VALUE by 3.28084.

Dates of lidar data used in this DEM range from 2010 to 2015. The overlapping lidar projects were adjusted to the same projection and datum and then mosaicked, with the most recent data replacing any older data. Several very small gaps between the project areas were patched with older lidar data where necessary, or with models from recent aerial photo acquisitions. See https://www.mass.gov/doc/lidar-project-areas-original/download for an index map.

This DEM is referenced to the WGS_1984_Web_Mercator_Auxiliary_Sphere spatial reference. See the MassGIS datalayer page to download the data as a file geodatabase raster dataset. View this service in the Massachusetts Elevation Finder.
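A minimal sketch of the FEET computation, using a hypothetical array of VALUE entries (the actual table is a raster VAT edited in GIS software):

import numpy as np

value_m = np.array([-3, 0, 125, 1039])  # hypothetical VALUE entries (meters)
feet = value_m * 3.28084                # FEET = VALUE x 3.28084
print(feet)                             # [-9.84252  0.  410.105  3408.79276]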
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
The i.c.sens Visual-Inertial-LiDAR Dataset is a data set for the evaluation of dead reckoning or SLAM approaches in the context of mobile robotics. It consists of street-level monocular RGB camera images, a front-facing 180° point cloud, angular velocities, accelerations, and an accurate ground truth trajectory. In total, we provide around 77 GB of data resulting from a 15-minute drive, split into 8 rosbags of 2 minutes (10 GB) each. In addition, the intrinsic camera parameters and the extrinsic transformations between all sensor coordinate systems are given. Details on the data and its usage can be found in the provided documentation file.
Image of the sensor platform: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/0ff90ef9-fa61-4ee3-b69e-eb6461abc57b/download/sensor_platform_small.jpg
Image credit: Sören Vogel
The data set was acquired in the context of the measurement campaign described in Schoen2018. In this campaign, a vehicle (shown below) was equipped with a self-developed sensor platform and a commercially available Riegl VMX-250 Mobile Mapping System, which consists of two laser scanners, a camera system, and a localization unit containing a highly accurate GNSS/IMU system.
Image of the measurement vehicle: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/2a1226b8-8821-4c46-b411-7d63491963ed/download/vehicle_small.jpg
Image credit: Sören Vogel
The data acquisition took place on a sunny day in May 2019 in the Nordstadt of Hannover (coordinates: 52.388598, 9.716389). The route, shown below, was driven three times, amounting to a total driving time of 15 minutes.
Overview of the route (Google Earth): https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/8a570408-c392-4bd7-9c1e-26964f552d6c/download/google_earth_overview_small.png
The self-developed sensor platform consists of several sensors. This dataset provides data from the following sensors:
To inspect the data, first start a rosmaster and launch rviz using the provided configuration file:
roscore & rosrun rviz rviz -d icsens_data.rviz
Afterwards, start playing a rosbag with
rosbag play icsens-visual-inertial-lidar-dataset-{number}.bag --clock
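For programmatic access (beyond rviz), the bags can also be read with the ROS1 Python API; a minimal sketch follows. The topic name used here is a placeholder; see the provided documentation file for the actual topic names.

import rosbag

with rosbag.Bag("icsens-visual-inertial-lidar-dataset-1.bag") as bag:
    # List the topics and message types contained in the bag.
    for topic, info in bag.get_type_and_topic_info().topics.items():
        print(topic, info.msg_type, info.message_count)
    # Iterate over the messages of one (hypothetical) topic.
    for topic, msg, t in bag.read_messages(topics=["/camera/image_raw"]):
        print(t.to_sec(), type(msg).__name__)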
Below, we provide some example images and their corresponding point clouds.
Example images and corresponding point clouds: https://data.uni-hannover.de/dataset/0bcea595-0786-44f6-a9e2-c26a779a004b/resource/dc1563c0-9b5f-4c84-b432-711916cb204c/download/combined_examples_small.jpg
R. Voges, C. S. Wieghardt, and B. Wagner, “Finding Timestamp Offsets for a Multi-Sensor System Using Sensor Observations,” Photogrammetric Engineering & Remote Sensing, vol. 84, no. 6, pp. 357–366, 2018.
R. Voges and B. Wagner, “RGB-Laser Odometry Under Interval Uncertainty for Guaranteed Localization,” in Book of Abstracts of the 11th Summer Workshop on Interval Methods (SWIM 2018), Rostock, Germany, Jul. 2018.
R. Voges and B. Wagner, “Timestamp Offset Calibration for an IMU-Camera System Under Interval Uncertainty,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, Oct. 2018.
R. Voges and B. Wagner, “Extrinsic Calibration Between a 3D Laser Scanner and a Camera Under Interval Uncertainty,” in Book of Abstracts of the 12th Summer Workshop on Interval Methods (SWIM 2019), Palaiseau, France, Jul. 2019.
R. Voges, B. Wagner, and V. Kreinovich, “Efficient Algorithms for Synchronizing Localization Sensors Under Interval Uncertainty,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 1–11, 2020.
R. Voges, B. Wagner, and V. Kreinovich, “Odometry under Interval Uncertainty: Towards Optimal Algorithms, with Potential Application to Self-Driving Cars and Mobile Robots,” Reliable Computing (Interval Computations), vol. 27, no. 1, pp. 12–20, 2020.
R. Voges and B. Wagner, “Set-Membership Extrinsic Calibration of a 3D LiDAR and a Camera,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, Oct. 2020, accepted.
R. Voges, “Bounded-Error Visual-LiDAR Odometry on Mobile Robots Under Consideration of Spatiotemporal Uncertainties,” PhD thesis, Gottfried Wilhelm Leibniz Universität, 2020.
The Dauphin County, PA 2016 QL2 LiDAR project called for the planning, acquisition, processing, and derivative products of LiDAR data collected at a nominal pulse spacing (NPS) of 0.7 meters. Project specifications are based on the U.S. Geological Survey National Geospatial Program Base LiDAR Specification, Version 1.2. The data was developed based on a horizontal projection/datum of NAD83 (2011) State Plane Pennsylvania South Zone, US survey feet; NAVD88 (Geoid 12B), US survey feet. LiDAR data was delivered in raw flight line swath format and processed to create Classified LAS 1.4 files formatted to 711 individual 5,000-foot x 5,000-foot tiles. Tile names use the following naming schema: "YYYYXXXXPAd", where YYYY is the first 3 characters of the tile's upper left corner Y-coordinate, XXXX is the first 4 characters of the tile's upper left corner X-coordinate, PA = Pennsylvania, and d = 'N' for North or 'S' for South. Corresponding 2.5-foot gridded hydro-flattened bare earth raster tiled DEM files and intensity image files were created using the same 5,000-foot x 5,000-foot schema. Hydro-flattened breaklines and continuous 2-foot contours were produced in Esri file geodatabase format. Ground Conditions: LiDAR collection began in Spring 2016, while no snow was on the ground and rivers were at or below normal levels. To post-process the LiDAR data to meet task order specifications, Quantum Spatial established a total of 84 control points (24 calibration control points and 60 QC checkpoints), which were used to calibrate the LiDAR to known ground locations established throughout the project area.
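To make the schema concrete, here is a hypothetical helper (not part of the delivered products) that builds a tile name from an upper-left corner coordinate, taking the stated character counts at face value (3 from the Y-coordinate, 4 from the X-coordinate):

# Hypothetical illustration of the tile naming schema described above.
# Assumes upper-left corner coordinates in NAD83 (2011) State Plane
# Pennsylvania South, US survey feet.
def tile_name(upper_left_x_ft: float, upper_left_y_ft: float, d: str = "S") -> str:
    yyy = str(int(upper_left_y_ft))[:3]    # first 3 characters of Y
    xxxx = str(int(upper_left_x_ft))[:4]   # first 4 characters of X
    return f"{yyy}{xxxx}PA{d}"

print(tile_name(2395000, 245000))  # -> 2452395PAS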
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset provides high-resolution, image-grade LiDAR SLAM data in .bag format.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
In this study, we present an extensive evaluation of state-of-the-art YOLO object detection architectures for identifying snow poles in LiDAR-derived imagery captured under challenging Nordic conditions. Building upon our previous work on the SnowPole Detection dataset [1] and our LiDAR-GNSS-based localization framework [2], we expand the benchmark to include six YOLO models (YOLOv5s, YOLOv7-tiny, YOLOv8n, YOLOv9t, YOLOv10n, and YOLOv11n) evaluated across multiple input modalities. Specifically, we assess single-channel modalities (Reflectance, Signal, Near-Infrared) and six pseudo-color combinations derived by mapping these channels to RGB representations. Each model's performance is quantified using Precision, Recall, mAP@50, mAP@50-95, and GPU inference latency. To facilitate systematic comparison, we define a composite Rank Score that integrates detection accuracy and real-time performance in a weighted formulation (an illustrative sketch follows the references below). Experimental results show that YOLOv9t consistently achieves the highest detection accuracy, while YOLOv11n provides the best trade-off between accuracy and inference speed, making it a promising candidate for real-time applications on embedded platforms. Among input modalities, pseudo-color combinations, particularly those fusing Near-Infrared, Signal, and Reflectance channels, outperformed single modalities across most configurations, achieving the highest Rank Scores and mAP metrics. We therefore recommend multimodal LiDAR representations such as Combination 4 and Combination 5 to maximize detection robustness in practical deployments. All datasets, benchmarking code, and trained models are publicly available to support reproducibility and further research through our GitHub repository (a).
References:
[1] Durga Prasad Bavirisetti, Gabriel Hanssen Kiss, Petter Arnesen, Hanne Seter, Shaira Tabassum, and Frank Lindseth. Snowpole detection: A comprehensive dataset for detection and localization using lidar imaging in nordic winter conditions. Data in Brief, 59:111403, 2025.
[2] Durga Prasad Bavirisetti, Gabriel Hanssen Kiss, and Frank Lindseth. A pole detection and geospatial localization framework using lidar-gnss data fusion. In 2024 27th International Conference on Information Fusion (FUSION), pages 1-8. IEEE, 2024.
(a) https://github.com/MuhammadIbneRafiq/Extended-evaluation-snowpole-lidar-dataset
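To illustrate what a weighted accuracy/latency ranking might look like, here is a minimal sketch. The actual Rank Score weights and normalization used in the study are not reproduced here, so the values below are assumptions:

# Illustrative only: the study defines a composite Rank Score weighting
# detection accuracy against inference latency; the weights and the
# latency normalization below are made-up placeholders.
def rank_score(map50_95: float, latency_ms: float,
               w_acc: float = 0.7, w_speed: float = 0.3,
               max_latency_ms: float = 50.0) -> float:
    # Higher accuracy raises the score; higher latency lowers it.
    speed_term = max(0.0, 1.0 - latency_ms / max_latency_ms)
    return w_acc * map50_95 + w_speed * speed_term

# e.g. a model with mAP@50-95 of 0.62 at 8 ms per frame:
print(rank_score(0.62, 8.0))  # -> 0.686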
This shaded relief image was generated from the lidar-based bare-earth digital elevation model (DEM). A shaded relief image provides an illustration of variations in elevation using artificial shadows. Based on a specified position of the sun, areas that would be in sunlight are highlighted and areas that would be in shadow are shaded. In this instance, the position of the sun was assumed to be 45 degrees above the northwest horizon.

The shaded relief image shows areas that are not in direct sunlight as shadowed. It does not show shadows that would be cast by topographic features onto the surrounding surface.

Using ERDAS IMAGINE, a 3x3 neighborhood around each pixel in the DEM was analyzed, and a comparison was made between the sun's position and the angle that each pixel faces. The pixel was then assigned a value between -1 and +1 to represent the amount of light reflected. Negative numbers and zero values represent shadowed areas, and positive numbers represent sunny areas. In ArcGIS Desktop 10.7.1, the image was converted to a JPEG 2000 format with values from 0 (black) to 255 (white).

See the MassGIS datalayer page to download the data as a JPEG 2000 image file. View this service in the Massachusetts Elevation Finder. MassGIS has also published a Lidar Shaded Relief tile service (cache) hosted in ArcGIS Online.
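The hillshading described above can be approximated in a few lines. The following is a minimal sketch, not the ERDAS IMAGINE implementation: it derives slope and aspect from a 3x3 neighborhood, compares each cell's orientation to a sun 45 degrees above the northwest horizon (azimuth 315 degrees), and rescales the resulting reflectance from [-1, +1] to 0-255:

import numpy as np

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Reflectance in [-1, +1] rescaled to 0-255, for a north-up DEM."""
    zenith = np.radians(90.0 - altitude_deg)
    az_math = np.radians((360.0 - azimuth_deg + 90.0) % 360.0)
    dz_drow, dz_dcol = np.gradient(dem.astype(float), cellsize)
    dz_dx = dz_dcol      # x increases with column index (eastward)
    dz_dy = -dz_drow     # row index increases southward, so flip the sign
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    # Cosine of the angle between each cell's normal and the sun direction:
    # values <= 0 are shadowed, values > 0 are sunlit.
    r = (np.cos(zenith) * np.cos(slope)
         + np.sin(zenith) * np.sin(slope) * np.cos(az_math - aspect))
    # Map [-1, +1] to an 8-bit image from 0 (black) to 255 (white).
    return np.clip((r + 1.0) * 127.5, 0, 255).astype(np.uint8)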
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This data was collected by the Geological Survey Ireland, the Department of Culture, Heritage and the Gaeltacht, the Discovery Programme, the Heritage Council, Transport Infrastructure Ireland, New York University, the Office of Public Works and Westmeath County Council. All data formats are provided as GeoTIFF rasters but are at different resolutions, which vary depending on survey requirements. Resolutions for each organisation are as follows:

GSI – 1m
DCHG/DP/HC – 0.13m, 0.14m, 1m
NY – 1m
TII – 2m
OPW – 2m
WMCC – 0.25m

Both the DTM and DSM are raster data. Raster data is another name for gridded data: it stores information in pixels (grid cells), with each raster grid making up a matrix of cells (or pixels) organised into rows and columns. The grid cell size varies depending on the organisation that collected the data. GSI data has a grid cell size of 1 meter by 1 meter, meaning each cell (pixel) represents an area of 1 meter squared.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
The EarthScope Northern California Lidar project acquired high resolution airborne laser swath mapping imagery along major active faults as part of the EarthScope Facility project funded by the National Science Foundation (NSF). Between this project and the previously conducted B4 project, also funded by NSF, the entire San Andreas fault system has now been imaged with high resolution airborne lidar, along with many other important geologic features. EarthScope is funded by NSF and conducted in partnership with the USGS and NASA. GeoEarthScope is a component of EarthScope that includes the acquisition of aerial and satellite imagery and geochronology. EarthScope is managed at UNAVCO. Please use the following language to acknowledge EarthScope Lidar: This material is based on services provided to the Plate Boundary Observatory by NCALM (http://www.ncalm.org). PBO is operated by UNAVCO for EarthScope (http://www.earthscope.org) and supported by the National Science Foundation (No. EAR-0350028 and EAR-0732947).
The Tas Imagery and LiDAR Program Index shows the planning and progress of current and future capture of aerial imagery and LiDAR data procured through the Tasmanian Imagery Program.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Overview
The Korea University Camera-LIDAR (KUCL) dataset contains images and point clouds acquired in indoor and outdoor environments for various applications (e.g., calibration of rigid-body transformation between camera and LIDAR) in robotics and computer vision communities.
Setup
The images were taken using a Point Grey Ladybug5 camera, and the point clouds were acquired with a Velodyne VLP-16 LiDAR. Both sensors were rigidly mounted on the sensor frame throughout data acquisition. Each pair of images and point clouds was acquired discretely, with the sensor system standing still, to reduce time-synchronization problems.
Description
Each dataset (zip file) is organized as follows:
We also provide MATLAB functions that project the point cloud onto spherical panorama and pinhole images. Before running the following functions, please unzip the dataset file ('indoor.zip' or 'outdoor.zip') under the main directory.
The rigid-body transformation between the Ladybug5 and the VLP-16 in each function is acquired using our edge-based Camera-LIDAR calibration method with Gaussian Mixture Model (GMM). For the details, please refer to our paper (https://doi.org/10.1002/rob.21893).
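For reference, the spherical projection performed by the provided MATLAB functions can be sketched as follows. This Python version is illustrative only; the 4x4 transform T_cam_lidar and the image size are assumptions, as the actual values come from the calibration files:

import numpy as np

def project_to_panorama(points_lidar, T_cam_lidar, width=2048, height=1024):
    """Project Nx3 LiDAR points into equirectangular pixel coordinates."""
    pts = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts.T).T[:, :3]   # into the camera frame
    x, y, z = pts_cam[:, 0], pts_cam[:, 1], pts_cam[:, 2]
    rng = np.linalg.norm(pts_cam, axis=1)
    azimuth = np.arctan2(y, x)                 # [-pi, pi]
    elevation = np.arcsin(z / rng)             # [-pi/2, pi/2]
    u = (0.5 - azimuth / (2 * np.pi)) * width  # column
    v = (0.5 - elevation / np.pi) * height     # row
    return u, v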
Citation
Please cite the following paper when using this dataset in your work.
License information
The KUCL dataset is released under a Creative Commons Attribution 4.0 International License (CC BY 4.0).
Contact Information
If you have any issues with the KUCL dataset, please contact us at kangjae07@gmail.com.
ARRA Supplemental Deliverable, Task 8: Digital Data Series Thermal Infrared Information Layer for Oregon (contains all or portions of 42120H6-Ana River; 43120C5-Christmas Lake; 43120C6-Crack In The Ground; 42120H5-Diablo Peak; 43118B1-Dowell Butte; 43118B2-Duck Creek Butte; 43120A7-Egli Rim; 43120B5-Fandango Canyon; 43118A3-Folly Farm; 43121C1-Fort Rock; 43120C4-Fossil Lake; 43121A1-Hager Mountain; 43118A3-Lambing Canyon; 43121C2-McCarty Butte; 43121B2-Oatman Flat; 43120A5-Saint Patrick Mountain; 43120C3-Sand Rock; 43120A6-Sheeplick Draw; 43121B2-Silver Lake; 42120H7-Summer Lake; 43120B8-Tuff Butte; 43118A1-Turnbull Peak), Release-1 (TIRILO-1), contains thermal infrared intensity images, rectified image frames, native image frames, and thermal infrared mosaics. Lidar ASCII point data are available in LAS format; DEMs are provided as bare-earth and highest-hit, with accompanying metadata. Other files include shapefiles of the 7.5-minute USGS quadrangles of Oregon and the 1/100th USGS quadrangles of Oregon. All data are in ESRI-specific formats and must be viewed using software capable of reading .shp, GeoTIFF, and ESRI grid formats. Customers are responsible for sending DOGAMI a blank 400 GB portable external hard drive (USB 2.0 interface). Fee: $150.
This raster dataset contains 1-meter lidar-derived imagery of 7.5-minute quadrangles in karst areas of Puerto Rico, created using geographic information systems (GIS) software. Lidar-derived elevation data, acquired in 2018, were used to create a 1-meter resolution working digital elevation model (DEM). To create this imagery, a hillshade was applied and a topographic position index (TPI) raster was calculated. These two rasters were loaded into Global Mapper, where the TPI raster was made partially transparent and overlaid on the hillshade DEM. The resulting image was exported to create these 1-meter resolution lidar-derived images. The data is projected in North American Datum (NAD) 1983 (2011) UTM Zone 19N.
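A TPI raster of this kind can be sketched as each cell's elevation minus the mean elevation of its neighborhood; the window size below is an assumption, as the value used for this dataset is not stated:

import numpy as np
from scipy.ndimage import uniform_filter

def tpi(dem, window=15):
    # Positive TPI: cell sits above its surroundings (ridges, hilltops);
    # negative: below (valleys, sinkholes -- relevant in karst terrain).
    return dem - uniform_filter(dem.astype(float), size=window)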
The LiDAR Index was created to illustrate the extents of LiDAR imagery and data that are existing, in progress, or planned for the Department of Water and Environmental Regulation (DWER). Each area is delineated by a polygon with attributes denoting its general area coverage, status, file location, contractor, and availability of metadata. Various datasets exist with varying degrees of accuracy, coverage, and access. DWER custodial datasets can be purchased by external entities by contacting the Department of Water and Environmental Regulation.
DurLAR is a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near infrared) and reflectivity imagery for multi-modal autonomous driving applications. Compared to existing autonomous driving task datasets, DurLAR has the following novel features:
High vertical resolution LiDAR with 128 channels, twice that of any existing dataset; full 360 degree depth; range accuracy of ±2 cm at 20-50 m.
Ambient illumination (near infrared) and reflectivity panoramic imagery are made available in the Mono16 format (2048 × 128 resolution); this is the only dataset to make this provision.
No rolling shutter effect, as our flash LiDAR captures all 128 channels simultaneously.
Ambient illumination data is recorded via an on-board lux meter, which is again not available in previous datasets.
High-fidelity GNSS/INS available via an onboard OxTS navigation unit operating at 100 Hz, receiving position and timing data from multiple GNSS constellations in addition to GPS.
The KITTI data format is adopted as the de facto dataset format, so the data can be parsed using both the DurLAR development kit and existing KITTI-compatible tools.
Diversity over repeated locations: the dataset has been collected under diverse environmental and weather conditions over the same driving route, with additional variations in the time of day relative to environmental conditions.
Sensor placement
LiDAR: Ouster OS1-128 LiDAR sensor with 128-channel vertical resolution
Stereo Camera: Carnegie Robotics MultiSense S21 stereo camera with grayscale, colour, and IR enhanced imagers, 2048x1088 @ 2MP resolution
GNSS/INS: OxTS RT3000v3 global navigation satellite and inertial navigation system, supporting localization from GPS, GLONASS, BeiDou, Galileo, PPP and SBAS constellations
Lux Meter: Yocto Light V3, a USB ambient light sensor (lux meter), measuring ambient light up to 100,000 lux
This dataset contains PNG images from the NCAR REAL Lidar Imagery dataset, covering the TREX period from 20060314 to 20060501 and including both RHI and PPI images. When ordering or browsing data, be aware of the following data gap: there are no data for 20060424-28.
Original Product: These lidar data are processed Classified LAS 1.4 files, formatted to 654 individual 1000 m x 1000 m tiles; used to create intensity images, 3D breaklines, and hydro-flattened DEMs as necessary.
Original Dataset Geographic Extent: 4 counties (Alameda, Marin, San Francisco, San Mateo) in California, covering approximately 53 total square miles.
Original Dataset Descriptio...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Supplementary Table. Metadata for the 23 LiDAR surveys used to create a temporally and spatially averaged digital elevation model of nesting beaches in the Florida Panhandle. Used in Ware et al. (2021), Exposure of loggerhead sea turtle nests to waves in the Florida Panhandle.
The intensity values of the Green and NIR LiDAR laser returns from the New York City Topobathymetric LiDAR dataset. Collected between 05/03/17 and 07/26/17.
These lidar data were collected between January 21st and January 27th, 2010, in response to the January 12th magnitude 7.0 Haiti earthquake. The data collection was performed by the Center for Imaging Science at Rochester Institute of Technology (RIT) and Kucera International under sub-contract to ImageCat, Inc., and funded by the Global Facility for Disaster Reduction and Recovery (GFDRR), hosted at the World Bank. All data are available in the public domain. More information about these data can be found at the RIT Information Products Laboratory for Emergency Response (IPLER) 2010 Haiti Earthquake page.