12 datasets found
  1. Data from: SemanticSpray Dataset

    • paperswithcode.com
    Updated Jul 12, 2023
    Cite
    (2023). SemanticSpray Dataset [Dataset]. https://paperswithcode.com/dataset/semanticspray-dataset
    Explore at:
    Dataset updated
    Jul 12, 2023
    Description

    Homepage | GitHub

    LiDARs are one of the main sensors used for autonomous driving applications, providing accurate depth estimation regardless of lighting conditions. However, they are severely affected by adverse weather conditions such as rain, snow, and fog.

    This dataset provides semantic labels for a subset of the Road Spray dataset, which contains scenes of vehicles traveling at different speeds on wet surfaces, creating a trailing spray effect. We provide semantic labels for over 200 dynamic scenes, labeling each point in the LiDAR point clouds as background (road, vegetation, buildings, ...), foreground (moving vehicles), and noise (spray, LiDAR artifacts).

    The SemanticSpray dataset contains scenes in wet surface conditions captured by Camera, LiDAR, and Radar.

    The following label types are provided (see the loading sketch after the list):

    Camera: 2D Boxes

    LiDAR: 3D Boxes, Semantic Labels

    Radar: Semantic Labels
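    To make the per-point LiDAR label format above concrete, here is a minimal Python sketch for reading one scan and its labels. It assumes SemanticKITTI-style files (float32 x, y, z, intensity quadruples in .bin scans; one uint32 ID per point in .label files) and a hypothetical label-ID mapping; check the dataset's GitHub page for the actual layout.

      # Minimal sketch, assuming a SemanticKITTI-style file layout (not confirmed here).
      import numpy as np

      def load_scan(bin_path: str, label_path: str):
          points = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)  # x, y, z, intensity
          labels = np.fromfile(label_path, dtype=np.uint32)                # one label ID per point
          assert points.shape[0] == labels.shape[0]
          return points, labels

      points, labels = load_scan("velodyne/000000.bin", "labels/000000.label")
      # Hypothetical ID mapping: 0 = background, 1 = foreground (vehicle), 2 = noise (spray).
      spray = points[labels == 2]
      print(f"{len(spray)} of {len(points)} points labeled as noise/spray")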

  2. SemanticKITTI-C Dataset

    • paperswithcode.com
    Updated Apr 3, 2023
    Cite
    Lingdong Kong; Youquan Liu; Xin Li; Runnan Chen; Wenwei Zhang; Jiawei Ren; Liang Pan; Kai Chen; Ziwei Liu (2023). SemanticKITTI-C Dataset [Dataset]. https://paperswithcode.com/dataset/semantickitti-c
    Explore at:
    Dataset updated
    Apr 3, 2023
    Authors
    Lingdong Kong; Youquan Liu; Xin Li; Runnan Chen; Wenwei Zhang; Jiawei Ren; Liang Pan; Kai Chen; Ziwei Liu
    Description

    🤖 Robo3D - The SemanticKITTI-C Benchmark

    SemanticKITTI-C is an evaluation benchmark for robust and reliable 3D semantic segmentation in autonomous driving. With it, we probe the robustness of 3D segmentors under out-of-distribution (OoD) scenarios, against corruptions that occur in real-world environments. Specifically, we consider natural corruptions arising in the following cases:

    Adverse weather conditions, such as fog, wet ground, and snow;

    External disturbances caused by motion blur or resulting in missing LiDAR beams;

    Internal sensor failures, including crosstalk, possible incomplete echo, and cross-sensor scenarios.

    SemanticKITTI-C is part of the Robo3D benchmark. Visit our homepage to explore more details.
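    As a toy illustration of one corruption family listed above (LiDAR beam missing), the sketch below randomly drops returns from a point cloud. It is a simplified stand-in, not the benchmark's official corruption pipeline, which should be taken from the Robo3D code.

      # Simplified stand-in for the "beam missing" corruption: drop random returns.
      import numpy as np

      def drop_returns(points: np.ndarray, drop_ratio: float = 0.5, seed: int = 0) -> np.ndarray:
          """Randomly discard a fraction of points to mimic lost LiDAR returns."""
          rng = np.random.default_rng(seed)
          keep = rng.random(points.shape[0]) > drop_ratio
          return points[keep]

      scan = np.random.rand(120_000, 4).astype(np.float32)  # stand-in for a real scan
      corrupted = drop_returns(scan, drop_ratio=0.5)
      print(scan.shape, "->", corrupted.shape)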

  3. Ablation experimental results of the model on the “Cityscape to Foggy”...

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han (2023). Ablation experimental results of the model on the “Cityscape to Foggy” dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0270356.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Zhengyun Fang; Hongbin Wang; Shilin Li; Yi Hu; Xingbo Han
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Ablation experimental results of the model on the “Cityscape to Foggy” dataset.

  4. SemanticSTF Dataset

    • paperswithcode.com
    Updated May 1, 2024
    Cite
    Aoran Xiao; Jiaxing Huang; Weihao Xuan; Ruijie Ren; Kangcheng Liu; Dayan Guan; Abdulmotaleb El Saddik; Shijian Lu; Eric Xing (2024). SemanticSTF Dataset [Dataset]. https://paperswithcode.com/dataset/semanticstf
    Explore at:
    Dataset updated
    May 1, 2024
    Authors
    Aoran Xiao; Jiaxing Huang; Weihao Xuan; Ruijie Ren; Kangcheng Liu; Dayan Guan; Abdulmotaleb El Saddik; Shijian Lu; Eric Xing
    Description

    SemanticSTF is an adverse-weather point cloud dataset that provides dense point-level annotations and allows studying 3D semantic segmentation (3DSS) under various adverse weather conditions. It contains 2,076 scans captured by a Velodyne HDL64 S3D LiDAR sensor from STF, covering 694 snowy, 637 dense-foggy, 631 light-foggy, and 114 rainy scans (all rainy LiDAR scans in STF).
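    The per-condition counts sum to the stated total, which makes for a quick sanity check when verifying a download (a trivial Python sketch; the condition names are paraphrased):

      # Quick check that the per-condition scan counts match the 2,076 total.
      counts = {"snow": 694, "dense_fog": 637, "light_fog": 631, "rain": 114}
      assert sum(counts.values()) == 2076
      print(counts)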

  5. MUSES: MUlti-SEnsor Semantic perception dataset

    • paperswithcode.com
    Updated Jul 22, 2024
    Cite
    Tim Brödermann; David Bruggemann; Christos Sakaridis; Kevin Ta; Odysseas Liagouris; Jason Corkill; Luc van Gool (2024). MUSES: MUlti-SEnsor Semantic perception dataset [Dataset]. https://paperswithcode.com/dataset/muses-multi-sensor-semantic-perception
    Explore at:
    Dataset updated
    Jul 22, 2024
    Authors
    Tim Brödermann; David Bruggemann; Christos Sakaridis; Kevin Ta; Odysseas Liagouris; Jason Corkill; Luc van Gool
    Description

    MUSES offers 2500 multi-modal scenes, evenly distributed across various combinations of weather conditions (clear, fog, rain, and snow) and types of illumination (daytime, nighttime). Each image includes high-quality 2D pixel-level panoptic annotations and class-level and novel instance-level uncertainty annotations. Further, each adverse-condition image has a corresponding image of the same scene taken under clear-weather, daytime conditions. The annotation process for MUSES utilizes all available sensor data, allowing the annotators to also reliably label degraded image regions that are still discernible in other modalities. This results in better pixel coverage in the annotations and creates a more challenging evaluation setup.

    The dataset provides public benchmarks for:

    Panoptic segmentation

    Uncertainty-aware panoptic segmentation

    Semantic segmentation

    Object detection

    Sensor modalities included:

    Frame camera (RGB)

    MEMS lidar

    FMCW radar

    HD event camera

    IMU/GNSS sensor
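    Because every adverse-condition image has a clear-weather counterpart of the same scene, a common first step is assembling those pairs. The sketch below assumes a hypothetical directory layout with matching filenames in both folders; consult the MUSES download for the real structure.

      # Hedged sketch: pair adverse images with clear-weather references by filename.
      from pathlib import Path

      def build_pairs(adverse_dir: str, reference_dir: str) -> list[tuple[Path, Path]]:
          pairs = []
          for adverse in sorted(Path(adverse_dir).glob("*.png")):
              ref = Path(reference_dir) / adverse.name  # assumed: same name in both dirs
              if ref.exists():
                  pairs.append((adverse, ref))
          return pairs

      pairs = build_pairs("muses/rain_night", "muses/clear_day")  # hypothetical paths
      print(f"{len(pairs)} adverse/clear image pairs")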

  6. Remote Sensing Object Segmentation Dataset (Xogta Qaybinta Shayga Dareenka Fog)

    • so.shaip.com
    json
    Updated Dec 6, 2024
    Cite
    Shaip (2024). Xogta Qaybinta Shayga Dareenka Fog [Dataset]. https://so.shaip.com/offerings/remote-sensing-aerial-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 6, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Remote Sensing Object Segmentation Dataset is a key asset for the remote sensing field, combining imagery from the open DOTA dataset with additional internet sources. With resolutions ranging from 451 × 839 to 6573 × 3727 pixels for standard images, and up to 25574 × 15342 pixels for large uncropped images, the dataset covers diverse categories such as sports fields, vehicles, and playing courts, all annotated with instance and semantic segmentation.

  7. Multi-Step Reasoning for IoT Devices

    • figshare.com
    txt
    Updated Feb 8, 2023
    Cite
    Jose Miguel Blanco; Bruno Rossi (2023). Multi-Step Reasoning for IoT Devices [Dataset]. http://doi.org/10.6084/m9.figshare.19493996.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Feb 8, 2023
    Dataset provided by
    figshare
    Authors
    Jose Miguel Blanco; Bruno Rossi
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The number of Internet of Things (IoT) devices is growing constantly, forecast to reach 27 billion by 2025. With such a large number of connected devices, energy consumption is a major concern for the coming years. Cloud, edge, and fog computing are critically associated with IoT devices as enablers of data communication and coordination among devices. In this paper, we look at the distribution of semantic reasoning between different IoT devices and define a new class of reasoning, multi-step reasoning, which can be placed at the level of the edge or fog node in an IoT cloud/edge/fog computing topology. We conduct an experiment based on synthetic datasets to evaluate the performance of multi-step reasoning in terms of power consumption and other metrics. Overall, we found that multi-step reasoning can help reduce computation time and energy consumption on IoT devices in the presence of larger datasets.
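    A toy Python sketch of the multi-step idea described above: rather than one device applying every inference rule, each stage (the IoT device, then the fog node) applies a subset and forwards the enriched facts. The rules and the stage split here are purely illustrative, not the paper's implementation.

      # Illustrative forward-chaining reasoner split across two stages.
      def apply_rules(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
          """Add a rule's consequent whenever its antecedent already holds."""
          derived = set(facts)
          changed = True
          while changed:
              changed = False
              for antecedent, consequent in rules:
                  if antecedent in derived and consequent not in derived:
                      derived.add(consequent)
                      changed = True
          return derived

      device_rules = [("temp>30C", "hot")]                                   # cheap, on-device
      fog_rules = [("hot", "cooling_needed"), ("cooling_needed", "notify")]  # heavier, on fog node

      facts = apply_rules({"temp>30C"}, device_rules)  # step 1: on the IoT device
      facts = apply_rules(facts, fog_rules)            # step 2: on the fog node
      print(facts)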

  8. Remote Sensing Landscape Segmentation Dataset (Dareenka Fog ee Muuqaalka Qaybinta Xogta)

    • so.shaip.com
    json
    Updated Dec 6, 2024
    Cite
    Shaip (2024). Dareenka Fog ee Muuqaalka Qaybinta Xogta [Dataset]. https://so.shaip.com/offerings/remote-sensing-aerial-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 6, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Remote Sensing Landscape Segmentation Dataset is a specialized collection for the remote sensing field, consisting of high-resolution satellite images sourced from the internet, with sizes ranging from 10752 x 10240 to 12470 x 13650 pixels. The dataset is intended for semantic segmentation, with annotations covering a variety of natural and man-made features such as buildings, forests, bodies of water, roads, and farmland.

  9. R2Net: Enhancement of railway severe weather images based on reinforcement...

    • scidb.cn
    Updated Apr 18, 2025
    Cite
    Lou Zongzhi; Xiao Liyao; Ma Jiahui; Huang Zhixiang; Guo Tian; Li Hong (2025). R2Net: Enhancement of railway severe weather images based on reinforcement learning [Dataset]. http://doi.org/10.57760/sciencedb.23728
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 18, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Lou Zongzhi; Xiao Liyao; Ma Jiahui; Huang Zhixiang; Guo Tian; Li Hong
    Description

    Visual tasks such as object detection and semantic segmentation are increasingly widespread in the rail transit field. However, most existing visual systems are designed around images captured in clear conditions, while degraded images are inevitable in real rail transit scenes. For example, during year-round operation trains encounter harsh weather (such as fog and rain) and low-light environments such as tunnels and nighttime, which significantly reduce the clarity and recognizability of images and directly affect the performance of advanced visual tasks such as object detection and semantic segmentation. This study focuses on the low-visibility problems caused by adverse weather (such as fog and rainfall) and weak lighting in rail transit scenarios, aiming to improve the quality of degraded images through image enhancement and thereby strengthen the robustness and reliability of visual systems in complex environments. Through targeted image enhancement methods, the research aims to restore the detail of degraded images, improve their recognizability, provide higher-quality input data for visual tasks in rail transit, and optimize overall system performance.

  10. SynthRSF (Part 1) - A Novel Photorealistic Synthetic Dataset for Adverse...

    • data.niaid.nih.gov
    Updated Jan 8, 2025
    Cite
    Gkika, Ioanna (2025). SynthRSF (Part 1) - A Novel Photorealistic Synthetic Dataset for Adverse Weather Condition Denoising [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14512390
    Explore at:
    Dataset updated
    Jan 8, 2025
    Dataset provided by
    Centre for Research and Technology Hellas
    Zarpalas, Dimitrios
    Vanian, Vazgken
    Konstantoudakis, Konstantinos
    Information Technologies Institute
    Kanlis, Angelos
    Karavarsamis, Sotiris
    Gkika, Ioanna
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SynthRSF Dataset - Part 1 of 2

    Contents

    SynthRSF (Parts 1, 2):

    26,893 photorealistic image pairs (noisy and ground truth).

    14 3D scenes set in various environmental (rural/urban), contextual (indoor/outdoor) and lighting conditions (day/night).

    Created using the Unreal Engine 5.2.

    SynthRSF-MM expansion:

    13,800 additional pairs are accompanied by:

    16-bit depth maps.

    Pixel-accurate object annotations for 41 object classes.

    Overview

    The SynthRSF (Synthetic with Rain, Snow, uniform and non-uniform Fog) dataset is introduced for training and evaluating adverse-weather image denoising models, as well as for use in object detection, semantic segmentation, and depth estimation models.

    SynthRSF addresses a gap in synthetic datasets for adverse weather conditions, contributing significantly more photorealistic data compared to common 2D layered noise datasets, as well as additional modalities.

    Applications include autonomous driving, surveillance, robotics, and computer-assisted search-and-rescue.
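    A minimal loading sketch for one noisy/ground-truth pair plus the 16-bit depth map from the SynthRSF-MM expansion. The paths and filenames are assumptions for illustration; check the archive layout after download.

      # Hedged sketch: read one training triple (noisy frame, clean frame, depth map).
      import numpy as np
      from PIL import Image

      noisy = np.asarray(Image.open("synthrsf/noisy/0001.png"))         # weather-degraded frame
      clean = np.asarray(Image.open("synthrsf/ground_truth/0001.png"))  # matching clean render
      depth = np.asarray(Image.open("synthrsf/depth/0001.png"))         # described as 16-bit

      print(noisy.shape, clean.shape, depth.shape, depth.dtype)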

  11. ACDC (Adverse Conditions Dataset with Correspondences) Dataset

    • paperswithcode.com
    Updated Dec 6, 2023
    Cite
    (2023). ACDC (Adverse Conditions Dataset with Correspondences) Dataset [Dataset]. https://paperswithcode.com/dataset/acdc-adverse-conditions-dataset-with
    Explore at:
    Dataset updated
    Dec 6, 2023
    Description

    We introduce ACDC, the Adverse Conditions Dataset with Correspondences, for training and testing semantic segmentation methods on adverse visual conditions. It comprises a large set of 4006 images evenly distributed between fog, nighttime, rain, and snow. Each adverse-condition image comes with a high-quality fine pixel-level semantic annotation, a corresponding image of the same scene taken under normal conditions, and a binary mask that distinguishes between intra-image regions of clear and uncertain semantic content.

    ACDC supports two tasks: (1) standard semantic segmentation and (2) uncertainty-aware semantic segmentation.
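    One way the binary clear/uncertain mask could enter an evaluation: score predictions only where the semantic content is clear. The mask semantics below (1 = clear, 0 = uncertain) are an assumption for illustration, not ACDC's documented convention.

      # Hedged sketch: pixel accuracy restricted to the clear-content mask.
      import numpy as np

      def masked_pixel_accuracy(pred: np.ndarray, gt: np.ndarray, clear_mask: np.ndarray) -> float:
          valid = clear_mask.astype(bool)
          return float((pred[valid] == gt[valid]).mean())

      pred = np.random.randint(0, 19, (1080, 1920))  # stand-in prediction (19 classes)
      gt = np.random.randint(0, 19, (1080, 1920))    # stand-in annotation
      mask = np.ones((1080, 1920), dtype=np.uint8)   # stand-in: every pixel clear
      print(masked_pixel_accuracy(pred, gt, mask))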

  12. RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes

    • opendatalab.com
    zip
    Updated Nov 29, 2022
    + more versions
    Cite
    Jilin University (2022). RaidaR: A Rich Annotated Image Dataset of Rainy Street Scenes [Dataset]. https://opendatalab.com/OpenDataLab/RaidaR
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 29, 2022
    Dataset provided by
    Jilin University
    Simon Fraser University
    Beihang University
    Description

    RaidaR is a rich annotated image dataset of rainy street scenes. It consists of 58,542 real rainy images containing several rain-induced artifacts: fog, droplets, road reflections, etc. Of these, 5,000 images carry careful semantic segmentation annotations and 3,658 carry instance segmentation annotations.
