LiDAR is one of the main sensors used in autonomous driving, providing accurate depth estimation regardless of lighting conditions. However, it is severely affected by adverse weather conditions such as rain, snow, and fog.
This dataset provides semantic labels for a subset of the Road Spray dataset, which contains scenes of vehicles traveling at different speeds on wet surfaces, creating a trailing spray effect. We provide semantic labels for over 200 dynamic scenes, labeling each point in the LiDAR point clouds as background (road, vegetation, buildings, ...), foreground (moving vehicles), or noise (spray, LiDAR artifacts).
The SemanticSpray dataset contains scenes in wet surface conditions captured by Camera, LiDAR, and Radar.
The following label types are provided:
Camera: 2D Boxes
LiDAR: 3D Boxes, Semantic Labels
Radar: Semantic Labels
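As a rough illustration of how such per-point labels are typically consumed, here is a minimal Python sketch. It assumes SemanticKITTI-style binary files (float32 x/y/z/intensity points plus one uint32 label per point); the file names and the label IDs 0/1/2 are hypothetical placeholders, so check the dataset's documentation for the actual layout and label mapping.

```python
import numpy as np

# Hypothetical label IDs; verify against the dataset's label mapping.
BACKGROUND, FOREGROUND, NOISE = 0, 1, 2

def load_scan(bin_path: str, label_path: str):
    """Load a SemanticKITTI-style scan: float32 (x, y, z, intensity) points
    and one uint32 semantic label per point."""
    points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    labels = np.fromfile(label_path, dtype=np.uint32)
    assert len(points) == len(labels), "point/label count mismatch"
    return points, labels

points, labels = load_scan("scans/000000.bin", "labels/000000.label")
print(f"{(labels == NOISE).sum()} of {len(labels)} points labeled as noise (spray)")
```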
🤖 Robo3D - The SemanticKITTI-C Benchmark
SemanticKITTI-C is an evaluation benchmark aimed at robust and reliable 3D semantic segmentation in autonomous driving. With it, we probe the robustness of 3D segmentors under out-of-distribution (OoD) scenarios, i.e., against corruptions that occur in real-world environments. Specifically, we consider natural corruptions that happen in the following cases:
Adverse weather conditions, such as fog, wet ground, and snow;
External disturbances, such as motion blur and missing LiDAR beams;
Internal sensor failures, including crosstalk, incomplete echo, and cross-sensor scenarios.
SemanticKITTI-C is part of the Robo3D benchmark. Visit our homepage to explore more details.
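To make the corruption types concrete, the sketch below approximates one of them, LiDAR beam missing, by binning points into elevation rings and dropping a random subset. This is an illustrative NumPy approximation under an assumed (N, 4) point-cloud format, not the official Robo3D corruption protocol.

```python
import numpy as np

def drop_beams(points: np.ndarray, num_beams: int = 64,
               drop_ratio: float = 0.5, seed: int = 0) -> np.ndarray:
    """Crudely simulate the 'beam missing' corruption: quantize elevation
    angles into rings and discard a random subset of rings.
    points: (N, 4) array of x, y, z, intensity."""
    rng = np.random.default_rng(seed)
    elevation = np.arctan2(points[:, 2], np.linalg.norm(points[:, :2], axis=1))
    # Quantize elevation into `num_beams` bins as a stand-in for ring indices.
    edges = np.linspace(elevation.min(), elevation.max(), num_beams + 1)[1:-1]
    ring = np.digitize(elevation, edges)  # values in 0..num_beams-1
    kept = rng.choice(num_beams, size=int(num_beams * (1.0 - drop_ratio)),
                      replace=False)
    return points[np.isin(ring, kept)]
```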
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Ablation experiment results of the model on the “Cityscapes to Foggy Cityscapes” dataset.
SemanticSTF is an adverse-weather point cloud dataset that provides dense point-level annotations and enables the study of 3D semantic segmentation (3DSS) under various adverse weather conditions. It contains 2,076 scans captured by a Velodyne HDL64 S3D LiDAR sensor from STF, covering 694 snowy, 637 dense-foggy, 631 light-foggy, and 114 rainy scenes (all rainy LiDAR scans in STF).
MUSES offers 2,500 multi-modal scenes, evenly distributed across various combinations of weather conditions (clear, fog, rain, and snow) and types of illumination (daytime, nighttime). Each image includes high-quality 2D pixel-level panoptic annotations and class-level and novel instance-level uncertainty annotations. Further, each adverse-condition image has a corresponding image of the same scene taken under clear-weather, daytime conditions. The annotation process for MUSES utilizes all available sensor data, allowing the annotators to also reliably label degraded image regions that are still discernible in other modalities. This results in better pixel coverage in the annotations and creates a more challenging evaluation setup.
The dataset provides public benchmarks for:
Panoptic segmentation
Uncertainty-aware panoptic segmentation
Semantic segmentation
Object detection
Sensor modalities included:
Frame camera (RGB)
MEMS LiDAR
FMCW radar
HD event camera
IMU/GNSS sensor
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The Remote Sensing Object Segmentation Dataset is a key asset for the remote sensing field, combining DOTA open-data imagery with additional internet sources. With resolutions ranging from 451 × 839 to 6573 × 3727 pixels for standard images and up to 25574 × 15342 pixels for large uncropped images, the dataset covers diverse categories such as playing fields, vehicles, and sports courts, all annotated with instance and semantic segmentation.
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Internet of Things (IoT) devices are constantly growing in number and are forecast to reach 27 billion by 2025. With such a large number of connected devices, energy consumption is a major concern for the coming years. Cloud/edge/fog computing is closely associated with IoT devices as an enabler of data communication and coordination among them. In this paper, we look at distributing semantic reasoning across different IoT devices and define a new class of reasoning, multi-step reasoning, which can be placed at the edge or fog node of an IoT cloud/edge/fog computing topology. We conduct an experiment based on synthetic datasets to evaluate the performance of multi-step reasoning in terms of power consumption and other metrics. Overall, we found that multi-step reasoning can help reduce computation time and energy consumption on IoT devices in the presence of larger datasets.
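The paper describes multi-step reasoning at a conceptual level; as a loose, self-contained illustration (not the authors' implementation), the toy Python sketch below chains two forward-inference steps over triple facts, with each step standing in for work that could be placed on an edge or fog node.

```python
# Toy forward-chaining reasoner split into explicit steps. The facts and rules
# are illustrative only and are not taken from the paper.

def forward_step(facts: set, rules) -> set:
    """Apply every rule once and return only the newly derived facts."""
    derived = set()
    for rule in rules:
        derived |= rule(facts)
    return derived - facts

# Step 1 (e.g., on an edge node): classify devices from raw observations.
def rule_is_sensor(facts):
    return {(s, "type", "Sensor") for (s, p, o) in facts if p == "reports"}

# Step 2 (e.g., on a fog node): aggregate over the first step's output.
def rule_room_monitored(facts):
    return {(o, "monitoredBy", s) for (s, p, o) in facts
            if p == "locatedIn" and (s, "type", "Sensor") in facts}

facts = {("t1", "reports", "22C"), ("t1", "locatedIn", "room7")}
facts |= forward_step(facts, [rule_is_sensor])       # edge-level step
facts |= forward_step(facts, [rule_room_monitored])  # fog-level step
print(sorted(facts))
```

Splitting the rule set this way is what lets each node run only the inference relevant to its tier, which is the property the paper evaluates for power consumption.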
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The Remote Sensing Segmentation Dataset is a dedicated collection for the remote sensing field, consisting of high-resolution satellite images sourced from the internet, with sizes ranging from 10752 x 10240 to 12470 x 13650 pixels. The dataset is intended for semantic segmentation, with annotations covering a variety of natural and man-made features such as buildings, forests, water bodies, roads, and farmland.
The application of visual tasks such as object detection and semantic segmentation in rail transit is becoming increasingly widespread. However, most existing visual systems are designed for images captured in clear conditions, while degraded images are unavoidable in real rail transit scenes. For example, over a year of operation, trains encounter harsh weather (such as fog and rain) and low-light environments such as tunnels and nighttime, which significantly reduce image clarity and recognizability and directly degrade the performance of downstream visual tasks such as object detection and semantic segmentation. This study focuses on the low-visibility problems caused by adverse weather (such as fog and rainfall) and weak lighting in rail transit scenarios, aiming to improve the quality of degraded images through image enhancement and thereby strengthen the robustness and reliability of visual systems in complex environments. Through targeted image enhancement methods, the research aims to restore the detail of degraded images, improve their recognizability, provide higher-quality input for visual tasks in rail transit, and optimize overall system performance.
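As one concrete example of the kind of enhancement this entry alludes to, here is a compact sketch of dark-channel-prior defogging (He et al., 2009) in Python with OpenCV. The input path is a placeholder, and the transmission map is left unrefined (no guided filter) to keep the sketch short, so treat it as an illustration rather than the study's actual method.

```python
import cv2
import numpy as np

def dehaze_dark_channel(img_bgr: np.ndarray, patch: int = 15,
                        omega: float = 0.95, t_min: float = 0.1) -> np.ndarray:
    """Simplified dark-channel-prior defogging, without the usual
    guided-filter refinement of the transmission map."""
    img = img_bgr.astype(np.float32) / 255.0
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    # Dark channel: per-pixel channel minimum, then a local minimum filter.
    dark = cv2.erode(img.min(axis=2), kernel)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, dark.size // 1000)
    brightest = np.argsort(dark.ravel())[-n:]
    A = img.reshape(-1, 3)[brightest].mean(axis=0)
    # Transmission estimate t(x) = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * cv2.erode((img / A).min(axis=2), kernel)
    t = np.clip(t, t_min, 1.0)
    # Recover scene radiance: J = (I - A) / t + A.
    J = (img - A) / t[..., None] + A
    return (np.clip(J, 0.0, 1.0) * 255).astype(np.uint8)

# foggy = cv2.imread("foggy_scene.png")  # hypothetical input file
# cv2.imwrite("dehazed.png", dehaze_dark_channel(foggy))
```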
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
SynthRSF Dataset - Part 1 of 2
Contents
SynthRSF (Parts 1, 2):
26,893 photorealistic image pairs (noisy and ground truth).
14 3D scenes set in varied environments (rural/urban), contexts (indoor/outdoor), and lighting conditions (day/night).
Created with Unreal Engine 5.2.
SynthRSF-MM expansion:
13,800 additional image pairs, each accompanied by:
16-bit depth maps.
Pixel-accurate object annotations for 41 object classes.
Overview
The SynthRSF (Synthetic with Rain, Snow, uniform and non-uniform Fog) dataset is introduced for training and evaluating adverse-weather image denoising models, as well as for use in object detection, semantic segmentation, and depth estimation models.
SynthRSF addresses a gap in synthetic datasets for adverse weather conditions, contributing significantly more photorealistic data than common 2D layered-noise datasets, along with additional modalities.
Applications include autonomous driving, surveillance, robotics, and computer-assisted search-and-rescue.
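A training pipeline would typically consume these assets as aligned arrays; the sketch below shows one way to load a noisy/ground-truth pair plus the SynthRSF-MM extras with NumPy and Pillow. The noisy/gt/depth/labels directory layout and file names are hypothetical; the actual structure is defined by the dataset's documentation.

```python
import numpy as np
from PIL import Image

def load_sample(stem: str):
    """Load one SynthRSF sample under a hypothetical directory layout."""
    noisy = np.asarray(Image.open(f"noisy/{stem}.png"))    # weather-degraded RGB
    clean = np.asarray(Image.open(f"gt/{stem}.png"))       # ground-truth RGB
    # SynthRSF-MM depth maps are 16-bit; keep the full range (no uint8 cast).
    depth = np.asarray(Image.open(f"depth/{stem}.png")).astype(np.uint16)
    labels = np.asarray(Image.open(f"labels/{stem}.png"))  # per-pixel class IDs
    return noisy, clean, depth, labels

noisy, clean, depth, labels = load_sample("000001")
print(noisy.shape, clean.shape, depth.dtype, np.unique(labels)[:5])
```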
We introduce ACDC, the Adverse Conditions Dataset with Correspondences, for training and testing semantic segmentation methods on adverse visual conditions. It comprises a large set of 4,006 images evenly distributed among fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality, fine pixel-level semantic annotation, a corresponding image of the same scene taken under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content.
ACDC supports two tasks:
1. Standard semantic segmentation
2. Uncertainty-aware semantic segmentation
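One way the binary clear/uncertain mask can enter evaluation is by scoring predictions only on pixels whose semantic content is clear. The NumPy sketch below illustrates that idea as a generic masked mean IoU; it is not ACDC's official evaluation code.

```python
import numpy as np

def masked_miou(pred: np.ndarray, gt: np.ndarray, valid: np.ndarray,
                num_classes: int) -> float:
    """Mean IoU over the pixels flagged as having clear semantic content.
    pred, gt: (H, W) integer class maps; valid: (H, W) boolean mask."""
    p, g = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(p == c, g == c).sum()
        union = np.logical_or(p == c, g == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))
```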
RaidaR is a richly annotated image dataset of rainy street scenes. It consists of 58,542 real rainy images containing several rain-induced artifacts: fog, droplets, road reflections, etc. Of these, 5,000 images were carefully annotated with semantic segmentation and 3,658 with instance segmentation.