7 datasets found
  1. Data from: BRIGHT: A globally distributed multimodal building damage...

    • zenodo.org
    zip
    Updated Jan 13, 2025
    Cite
    Hongruixuan Chen; Jian Song; Olivier Dietrich; Clifford Broni-Bediako; Weihao Xuan; Junjue Wang; Xinlei Shao; Wei Yimin; Junshi Xia; Cuiling Lan; Konrad Schindler; Naoto Yokoya (2025). BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response [Dataset]. http://doi.org/10.5281/zenodo.14619798
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 13, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Hongruixuan Chen; Jian Song; Olivier Dietrich; Clifford Broni-Bediako; Weihao Xuan; Junjue Wang; Xinlei Shao; Wei Yimin; Junshi Xia; Cuiling Lan; Konrad Schindler; Naoto Yokoya
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 9, 2025
    Description

    Overview

    BRIGHT is the first open-access, globally distributed, event-diverse multimodal dataset specifically curated to support AI-based disaster response. It covers five types of natural disasters and two types of man-made disasters across 12 regions worldwide, with a particular focus on developing countries. BRIGHT contains about 4,500 paired optical and SAR images with over 350,000 building instances, at spatial resolutions between 0.3 and 1 meter, providing detailed representations of individual buildings that make it well suited to precise damage assessment.

    IEEE GRSS Data Fusion Contest 2025

    BRIGHT also serves as the official dataset of IEEE GRSS DFC 2025 Track II.

    Please download dfc25_track2_trainval.zip and unzip it. It contains training images & labels and validation images.
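
    As a rough illustration only, the sketch below shows one way to fetch and extract the archive in Python; the Zenodo file-URL pattern and the output directory name are assumptions, not part of the official benchmark tooling.

import urllib.request
import zipfile
from pathlib import Path

# Hedged sketch: the record-file URL below follows Zenodo's usual
# "records/<id>/files/<name>?download=1" layout and is an assumption --
# use the link shown on the record page if it differs.
ARCHIVE = "dfc25_track2_trainval.zip"
URL = f"https://zenodo.org/records/14619798/files/{ARCHIVE}?download=1"
OUT_DIR = Path("dfc25_track2")  # hypothetical output directory

def fetch_and_unzip(url: str = URL, archive: str = ARCHIVE, out_dir: Path = OUT_DIR) -> None:
    """Download the archive if it is not already present, then extract it."""
    if not Path(archive).exists():
        urllib.request.urlretrieve(url, archive)
    out_dir.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(out_dir)

if __name__ == "__main__":
    fetch_and_unzip()

    After extraction, the training images and labels and the validation images sit under the output directory, ready to be used with the benchmark code mentioned below.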

    Benchmark code for the DFC 2025 can be found in this GitHub repo.

    The official leaderboard is located on the Codalab-DFC2025-Track II page.

    Paper & Reference

    Details of BRIGHT can be found in our paper.

    If BRIGHT is useful to your research, please consider citing our paper:

    @article{chen2025bright,
       title={BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response}, 
       author={Hongruixuan Chen and Jian Song and Olivier Dietrich and Clifford Broni-Bediako and Weihao Xuan and Junjue Wang and Xinlei Shao and Yimin Wei and Junshi Xia and Cuiling Lan and Konrad Schindler and Naoto Yokoya},
       journal={arXiv preprint arXiv:2501.06019},
       year={2025},
       url={https://arxiv.org/abs/2501.06019}, 
    }

    License

    The label data of BRIGHT are provided under the same license as the corresponding optical images, which varies between events.

    With the exception of two events, Hawaii-wildfire-2023 and La Palma-volcano eruption-2021, all optical images are from the Maxar Open Data Program, released under the CC-BY-NC-4.0 license. The optical images for Hawaii-wildfire-2023 are from the High-Resolution Orthoimagery project of the NOAA Office for Coastal Management. The optical images for La Palma-volcano eruption-2021 are from IGN (Spain), released under the CC-BY 4.0 license.

    The SAR images of BRIGHT are provided by the Capella Open Data Gallery and the Umbra Space Open Data Program, under the CC-BY-4.0 license.

  2. Damage Assessment _ Od (method 4) Dataset

    • universe.roboflow.com
    zip
    Updated Nov 9, 2024
    Cite
    Mohammad Amin Zandi (2024). Damage Assessment _ Od (method 4) Dataset [Dataset]. https://universe.roboflow.com/mohammad-amin-zandi-csto4/damage-assessment-_-od-method-4-rfzze
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 9, 2024
    Dataset authored and provided by
    Mohammad Amin Zandi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Building NkZI OtwB Qct9 91zE Q0iP VXWQ Building NkZI OtwB U1It EayD WPSw Bounding Boxes
    Description

    Damage Assessment _ OD (Method 4)

    ## Overview
    
    Damage Assessment _ OD (Method 4) is a dataset for object detection tasks - it contains Building NkZI OtwB Qct9 91zE Q0iP VXWQ Building NkZI OtwB U1It EayD WPSw annotations for 412 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
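
    For programmatic access, a minimal sketch using the Roboflow Python SDK (pip install roboflow) might look like the following; the API key placeholder, dataset version number, and export format are assumptions to replace with your own values, while the workspace and project slugs are taken from the dataset URL above.

from roboflow import Roboflow

# Hedged sketch: the version number and export format below are assumptions;
# check the project page on Roboflow Universe for the versions and formats it offers.
rf = Roboflow(api_key="YOUR_API_KEY")  # replace with your Roboflow API key
project = rf.workspace("mohammad-amin-zandi-csto4").project("damage-assessment-_-od-method-4-rfzze")
dataset = project.version(1).download("coco")
print(dataset.location)  # local folder containing the downloaded images and annotations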
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. Data from: 2024 Noto Peninsula Earthquake Building Damage Visual Assesment

    • zenodo.org
    bin
    Updated Feb 26, 2025
    Cite
    Ruben Vescovo; Bruno Adriano; Erick Mas; Sesa Wiguna; Ayumu Mizutani; Chia Yee Ho; Jorge Morales; Xuanyan Dong; Shin Ishii; Yudai Ezaki; Kazuki Wako; Satoshi Tanaka; Shunichi Koshimura (2025). 2024 Noto Peninsula Earthquake Building Damage Visual Assesment [Dataset]. http://doi.org/10.5281/zenodo.14650106
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 26, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Ruben Vescovo; Bruno Adriano; Erick Mas; Sesa Wiguna; Ayumu Mizutani; Chia Yee Ho; Jorge Morales; Xuanyan Dong; Shin Ishii; Yudai Ezaki; Kazuki Wako; Satoshi Tanaka; Shunichi Koshimura
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Noto Peninsula
    Description
    The database is stored as a GeoPackage Noto_Peninsula_Damage_X_Y.gpkg (where X and Y are version values).
    A single layer (vX.Y) with table entries for contents (features) and geometries (MultiPolygon) is used to store the building footprints; record details are given in the table below.
    A total of 140,208 entries with associated geometries are recorded in the dataset.
    The original vector layer containing the geometry and s_fid attributes was sourced from GSI's basic map information download portal (https://fgd.gsi.go.jp/download/mapGis.php); the following GSI mesh tiles were considered:
    553645, 553646, 553647, 553655, 553656, 553657, 553665, 553666, 553667, 553675, 553676, 553677, 553740, 553750, 553760, 553770, 553771, 553772, 563605, 563606, 563607, 563617, 563700, 563701, 563702, 563710, 563711, 563712, 563721, 563722.
    Each tile corresponds to an aggregated archive FG-GML-nnnnnn-ALL-YYYYMMDD.zip of XML vector features, where nnnnnn is the mesh tile number and YYYYMMDD is the date of the last update.
    In our assessment we only consider the FG-GML-nnnnnn-BldA-YYYYMMDD-0001.xml files, which contain the building polygons.
    The dataset uses coordinate reference system (CRS) EPSG:4326 (WGS 84).
    Attribute    | Type (length) | Description
    fid          | Int64         | Unique identifier for the building.
    s_fid        | String (80)   | Serial feature identifier from the original XML file; "manual" when manually added.
    damage       | Int8          | Damage class.
    source       | String (30)   | Oblique source number from the KKC inventory (KKC, 2024), where available.
    damage_val   | Int8          | Damage class after technical validation.
    municipality | String (20)   | Municipality name from e-Stat (Ministry of Internal Affairs and Communications, 2024).
    conf         | String (10)   | Confidence level of the assessment (per Figure 6), based on oblique coverage.
    geometry     | MultiPolygon  | Vector geometry of the building footprint.
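
    As an illustrative sketch (not part of the dataset release), the GeoPackage can be read with geopandas; the file and layer names below follow the versioned Noto_Peninsula_Damage_X_Y.gpkg / vX.Y pattern described above and must be adjusted to the actual release you download.

import geopandas as gpd

# Hedged sketch: "1_0" / "v1.0" are placeholder version values, not the real ones.
gdf = gpd.read_file("Noto_Peninsula_Damage_1_0.gpkg", layer="v1.0")

print(len(gdf), "building footprints")   # expected to be 140,208 for the full dataset
print(gdf.crs)                           # documented as EPSG:4326 (WGS 84)
print(gdf["damage"].value_counts())      # distribution over the recorded damage classes
validated = gdf[gdf["damage_val"].notna()]  # example filter on the validated damage attribute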
  4. RescueNet

    • kaggle.com
    Updated Mar 13, 2023
    Cite
    Yaroslav Chyrko (2023). RescueNet [Dataset]. https://www.kaggle.com/datasets/yaroslavchyrko/rescuenet
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 13, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Yaroslav Chyrko
    License

    https://cdla.io/permissive-1-0/

    Description

    The RescueNet dataset was created by Tashnim Chowdhury, Robin Murphy, and Maryam Rahnemoonfar and presented in the paper "RescueNet: A High Resolution UAV Semantic Segmentation Benchmark Dataset for Natural Disaster Damage Assessment" in August 2021. It is released under the Community Data License Agreement (Permissive). The dataset features 4,494 post-disaster high-resolution images (3000x4000) of buildings and landscapes after Hurricane Michael, captured by a UAV (DJI Mavic Pro), together with their respective ground-truth maps. The ground-truth maps contain 12 classes: background, debris, water, building-no-damage, building-medium-damage, building-major-damage, building-total-destruction, vehicle, road, tree, pool, and sand. For a better understanding of each class and its purpose, refer to the original work; for models trained on this dataset, refer to the GitHub page.

    Note: this dataset can also be obtained from the Google Drive link on the GitHub page. When downloading from that source as a single zip archive, the large file size may cause a download error (a network error, followed by a Forbidden error on resume). If the error persists, try downloading the individual folders, which are split into parts that download reliably.
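
    For reference, a hypothetical label mapping is sketched below; it assumes class indices follow the order listed above, which must be verified against the official ground-truth maps before use.

# Hypothetical mapping of the 12 RescueNet classes to indices, assuming the
# indices follow the order listed in the description above -- verify against
# the official release before training or evaluating with it.
RESCUENET_CLASSES = [
    "background", "debris", "water",
    "building-no-damage", "building-medium-damage",
    "building-major-damage", "building-total-destruction",
    "vehicle", "road", "tree", "pool", "sand",
]
CLASS_TO_INDEX = {name: idx for idx, name in enumerate(RESCUENET_CLASSES)}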

  5. CAL FIRE Damage Inspection (DINS) Data

    • data.ca.gov
    • data.cnra.ca.gov
    • +4more
    Updated Jun 2, 2025
    Cite
    CAL FIRE (2025). CAL FIRE Damage Inspection (DINS) Data [Dataset]. https://data.ca.gov/dataset/cal-fire-damage-inspection-dins-data
    Explore at:
    Available download formats: ArcGIS GeoServices REST API, gpkg, zip, geojson, gdb, csv, txt, kml, xlsx, html
    Dataset updated
    Jun 2, 2025
    Dataset provided by
    California Department of Forestry and Fire Protection (http://calfire.ca.gov/)
    Authors
    CAL FIRE
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This database represents structures impacted by wildland fire that are inside or within 100 meters of the fire perimeter. Information such as structure type, construction features, and some defensible space attributes are determined as best as possible even when the structure is completely destroyed. Some attributes may have a null value when they could not be determined.

    Fire damage and poor access are major limiting factors for damage inspectors. All inspections are conducted using a systematic inspection process; however, not all structures impacted by the fire may be identified due to these factors, so a small margin of error is expected. Two address fields are included in the database. The street number, street name, and street type fields are “field determined”: the inspector inputs this information based on what they see in the field. The Address (parcel) and APN (parcel) fields are added through a spatial join after data collection is complete.

    Additional fields such as Category and Structure Type are based on fields required in the Incident Status Summary (ICS 209).

    Please review the DINS database dictionary for additional information.


    Damage percentage | Description
    1-10%             | Affected Damage
    10-25%            | Minor Damage
    25-50%            | Major Damage
    50-100%           | Destroyed
    No Damage         | No Damage
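
    As an illustration of how the damage-percentage bins above might be applied in code (a sketch, not CAL FIRE tooling; the treatment of bin edges is an assumption, so consult the DINS data dictionary for the authoritative definitions):

def dins_damage_category(percent_damaged: float) -> str:
    """Map a damage percentage to the DINS category tabulated above (bin edges assumed upper-inclusive)."""
    if percent_damaged <= 0:
        return "No Damage"
    if percent_damaged <= 10:
        return "Affected Damage (1-10%)"
    if percent_damaged <= 25:
        return "Minor Damage (10-25%)"
    if percent_damaged <= 50:
        return "Major Damage (25-50%)"
    return "Destroyed (50-100%)"

print(dins_damage_category(35))  # -> "Major Damage (25-50%)"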
  6. Pretraining data of SkySense++

    • zenodo.org
    bin
    Updated Mar 18, 2025
    Cite
    Kang Wu (2025). Pretraining data of SkySense++ [Dataset]. http://doi.org/10.5281/zenodo.15010418
    Explore at:
    Available download formats: bin
    Dataset updated
    Mar 18, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kang Wu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Mar 9, 2024
    Description

    This repository contains the data description and processing for the paper titled "SkySense++: A Semantic-Enhanced Multi-Modal Remote Sensing Foundation Model for Earth Observation." The code is available here.

    📢 Latest Updates

    🔥🔥🔥 Last Updated on 2025.03.14 🔥🔥🔥

    Pretrain Data

    RS-Semantic Dataset

    We conduct semantic-enhanced pretraining on the RS-Semantic dataset, which consists of 13 datasets with pixel-level annotations. Below are the specifics of these datasets.

    Dataset             | Modalities       | GSD (m) | Size                  | Categories | Download Link
    Five Billion Pixels | Gaofen-2         | 4       | 6800x7200             | 24         | Download
    Potsdam             | Airborne         | 0.05    | 6000x6000             | 5          | Download
    Vaihingen           | Airborne         | 0.05    | 2494x2064             | 5          | Download
    Deepglobe           | WorldView        | 0.5     | 2448x2448             | 6          | Download
    iSAID               | Multiple sensors | -       | 800x800 to 4000x13000 | 15         | Download
    LoveDA              | Spaceborne       | 0.3     | 1024x1024             | 7          | Download
    DynamicEarthNet     | WorldView        | 0.3     | 1024x1024             | 7          | Download
                        | Sentinel-2*      | 10      | 32x32                 |            |
                        | Sentinel-1*      | 10      | 32x33                 |            |
    Pastis-MM           | WorldView        | 0.3     | 1024x1024             | 18         | Download
                        | Sentinel-2*      | 10      | 32x32                 |            |
                        | Sentinel-1*      | 10      | 32x33                 |            |
    C2Seg-AB            | Sentinel-2*      | 10      | 128x128               | 13         | Download
                        | Sentinel-1*      | 10      | 128x128               |            |
    FLAIR               | Spot-5           | 0.2     | 512x512               | 12         | Download
                        | Sentinel-2*      | 10      | 40x40                 |            |
    DFC20               | Sentinel-2       | 10      | 256x256               | 9          | Download
                        | Sentinel-1       | 10      | 256x256               |            |
    S2-naip             | NAIP             | 1       | 512x512               | 32         | Download
                        | Sentinel-2*      | 10      | 64x64                 |            |
                        | Sentinel-1*      | 10      | 64x64                 |            |
    JL-16               | Jilin-1          | 0.72    | 512x512               | 16         | Download
                        | Sentinel-1*      | 10      | 40x40                 |            |

    * for time-series data.

    EO Benchmark

    We evaluate our SkySense++ on 12 typical Earth Observation (EO) tasks across 7 domains: agriculture, forestry, oceanography, atmosphere, biology, land surveying, and disaster management. The detailed information about the datasets used for evaluation is as follows.

    Domain              | Task type                   | Dataset               | Modalities             | GSD  | Image size               | Download Link | Notes
    Agriculture         | Crop classification         | Germany               | Sentinel-2*            | 10   | 24x24                    | Download      |
    Forestry            | Tree species classification | TreeSatAI-Time-Series | Airborne               | 0.2  | 304x304                  | Download      |
                        |                             |                       | Sentinel-2*            | 10   | 6x6                      |               |
                        |                             |                       | Sentinel-1*            | 10   | 6x6                      |               |
                        | Deforestation segmentation  | Atlantic              | Sentinel-2             | 10   | 512x512                  | Download      |
    Oceanography        | Oil spill segmentation      | SOS                   | Sentinel-1             | 10   | 256x256                  | Download      |
    Atmosphere          | Air pollution regression    | 3pollution            | Sentinel-2             | 10   | 200x200                  | Download      |
                        |                             |                       | Sentinel-5P            | 2600 | 120x120                  |               |
    Biology             | Wildlife detection          | Kenya                 | Airborne               | -    | 3068x4603                | Download      |
    Land surveying      | LULC mapping                | C2Seg-BW              | Gaofen-6               | 10   | 256x256                  | Download      |
                        |                             |                       | Gaofen-3               | 10   | 256x256                  |               |
                        | Change detection            | dsifn-cd              | GoogleEarth            | 0.3  | 512x512                  | Download      |
    Disaster management | Flood monitoring            | Flood-3i              | Airborne               | 0.05 | 256x256                  | Download      |
                        |                             | C2SMSFloods           | Sentinel-2, Sentinel-1 | 10   | 512x512                  | Download      |
                        | Wildfire monitoring         | CABUAR                | Sentinel-2             | 10   | 5490x5490                | Download      |
                        | Landslide mapping           | GVLM                  | GoogleEarth            | 0.3  | 1748x1748 to 10808x7424  | Download      |
                        | Building damage assessment  | xBD                   | WorldView              | 0.3  | 1024x1024                | Download      |

    * for time-series data.

  7. Thatch Building Survey - Roscommon

    • data-roscoco.opendata.arcgis.com
    • data.gov.ie
    • +2more
    Updated Mar 10, 2017
    Cite
    Roscommon County Council (2017). Thatch Building Survey - Roscommon [Dataset]. https://data-roscoco.opendata.arcgis.com/maps/RosCoCo::thatch-building-survey-roscommon
    Explore at:
    Dataset updated
    Mar 10, 2017
    Dataset authored and provided by
    Roscommon County Council
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset complements the 'An Assessment of Thatching Needs in County Roscommon' survey carried out by Fidelma Mullane for Roscommon County Council and published in October 2009. For this study, thirty-one structures with thatch or with previously thatched roofs were investigated, and the details and condition of the existing thatch roofs are noted. Ten further thatched structures were identified and listed for future assessment. This assessment of thatching needs in County Roscommon was funded by Roscommon County Council and the Heritage Council as an action of the County Roscommon Heritage Plan.

    Contact: Roscommon County Council Heritage Officer. Dataset publisher: Roscommon County Council. Dataset language: English. Spatial projection: Web Mercator. Date of creation: 2009. Update frequency: N/A.
    Roscommon County Council provides this information with the understanding that it is not guaranteed to be accurate, correct or complete. Roscommon County Council accepts no liability for any loss or damage suffered by those using this data for any purpose.

