11 datasets found
  1. An open flame and smoke detection dataset for deep learning in remote sensing based fire detection

    • scidb.cn
    Updated Aug 2, 2022
    Cite
    Ming Wang; Peng Yue; Liangcun Jiang; Dayu Yu; Tianyu Tuo (2022). An open flame and smoke detection dataset for deep learning in remote sensing based fire detection [Dataset]. http://doi.org/10.57760/sciencedb.j00104.00103
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 2, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Ming Wang; Peng Yue; Liangcun Jiang; Dayu Yu; Tianyu Tuo
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    FASDD is the largest and most generalized Flame And Smoke Detection Dataset for object detection tasks, characterized by highly complex fire scenes, heterogeneous feature distributions, and large variations in image size and shape. FASDD serves as a benchmark for developing advanced fire detection models, which can be deployed on watchtowers, drones, or satellites in a space-air-ground integrated observation network for collaborative fire warning. This endeavor provides valuable insights for government decision-making and fire rescue operations. FASDD contains fire, smoke, and confusing non-fire/non-smoke images acquired at different distances (near and far), in different scenes (indoor and outdoor), under different light intensities (day and night), and from various visual sensors (surveillance cameras, UAVs, and satellites).

    FASDD consists of three sub-datasets: a Computer Vision (CV) dataset (FASDD_CV), an Unmanned Aerial Vehicle (UAV) dataset (FASDD_UAV), and a Remote Sensing (RS) dataset (FASDD_RS). FASDD comprises 122,634 samples, with 70,581 annotated as positive samples and 52,073 labeled as negative samples. There are 113,154 instances of flame objects and 73,072 instances of smoke objects in the entire dataset. FASDD_CV contains 95,314 samples for general computer vision, FASDD_UAV consists of 25,097 samples captured by UAVs, and FASDD_RS comprises 2,223 samples from satellite imagery. FASDD_CV contains 73,297 fire instances and 53,080 smoke instances. The CV dataset exhibits considerable variation in image size, ranging from 78 to 10,600 pixels in width and 68 to 8,858 pixels in height; image aspect ratios also vary significantly, from 1:6.6 to 1:0.18. FASDD_UAV contains 36,308 fire instances and 17,222 smoke instances, with image aspect ratios primarily distributed between 4:3 and 16:9. FASDD_RS contains 2,770 smoke instances and 3,549 flame instances, and its remote sensing images are predominantly around 1,000×1,000 pixels.

    FASDD is provided in three compressed files, FASDD_CV.zip, FASDD_UAV.zip, and FASDD_RS.zip, corresponding to the CV, UAV, and RS datasets, respectively. Additionally, a FASDD_RS_SWIR.zip archive stores pseudo-color images for detecting flame objects in remote sensing imagery. Each zip file contains two folders: "images" for the source data and "annotations" for the labels. The "annotations" folder provides label files in four formats: YOLO, VOC, COCO, and TDML. Within each label format, the dataset is randomly divided into training, validation, and test sets with ratios of 1/2, 1/3, and 1/6, respectively. In FASDD_CV, FASDD_UAV, and FASDD_RS, images and their corresponding annotation files are individually numbered starting from 0. For the object detection task, flame and smoke objects are labeled "fire" and "smoke", respectively. The names of all images and annotation files are prefixed with "Fire", "Smoke", "FireAndSmoke", or "NeitherFireNorSmoke", representing the categories for scene classification tasks.

    When using this dataset, please cite the following paper. Thank you very much for your support and cooperation: Wang, M., Yue, P., Jiang, L., Yu, D., Tuo, T., & Li, J. (2025). An open flame and smoke detection dataset for deep learning in remote sensing based fire detection. Geo-spatial Information Science, 28(2), 511-526.
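    As a quick orientation to the layout described above, the following minimal Python sketch walks an extracted FASDD sub-dataset, derives the scene-classification category from the filename prefix, and parses the YOLO-format labels. The directory names, image extension, and class-id mapping are assumptions based on the description and may need adjusting to the actual archive contents.

```python
from pathlib import Path

# Assumed layout after extracting FASDD_CV.zip (per the description above):
#   FASDD_CV/images/<name>.jpg
#   FASDD_CV/annotations/YOLO/<name>.txt   (one "class cx cy w h" line per object, normalized)
ROOT = Path("FASDD_CV")
CLASS_NAMES = {0: "fire", 1: "smoke"}  # assumed class-id order
# "FireAndSmoke" must be tested before "Fire" because it shares the prefix.
SCENE_PREFIXES = ("FireAndSmoke", "NeitherFireNorSmoke", "Fire", "Smoke")

def scene_label(stem: str) -> str:
    """Scene-classification category encoded in the filename prefix."""
    for prefix in SCENE_PREFIXES:
        if stem.startswith(prefix):
            return prefix
    return "Unknown"

def read_yolo_boxes(label_file: Path):
    """Parse YOLO-format labels: class_id cx cy w h, all normalized to [0, 1]."""
    boxes = []
    if not label_file.exists():  # negative samples may have no objects
        return boxes
    for line in label_file.read_text().splitlines():
        cls, cx, cy, w, h = line.split()
        boxes.append((CLASS_NAMES.get(int(cls), cls), float(cx), float(cy), float(w), float(h)))
    return boxes

for image_path in sorted((ROOT / "images").glob("*.jpg")):
    boxes = read_yolo_boxes(ROOT / "annotations" / "YOLO" / f"{image_path.stem}.txt")
    print(image_path.name, scene_label(image_path.stem), boxes)
```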

  2. Annotated Fire -Smoke Image Dataset for fire detection Using YOLO.

    • acquire.cqu.edu.au
    • researchdata.edu.au
    zip
    Updated Apr 14, 2025
    Cite
    Shouthiri Partheepan (2025). Annotated Fire -Smoke Image Dataset for fire detection Using YOLO. [Dataset]. http://doi.org/10.25946/28747046.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    Apr 14, 2025
    Dataset provided by
    CQUniversity
    Authors
    Shouthiri Partheepan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains 11,027 labeled images for the detection of fire and smoke instances in diverse real-world scenarios. The annotations are provided in YOLO format with bounding boxes and class labels for two classes: fire and smoke. The dataset is divided into an 80% training set with 10,090 fire instances and 9,724 smoke instances, a 10% validation set with 1,255 fire and 1,241 smoke instances, and a 10% test set with 1,255 fire and 1,241 smoke instances. This dataset is suitable for training and evaluating fire and smoke detection models, such as YOLOv8, YOLOv9, and similar deep learning frameworks, in the context of emergency response, wildfire monitoring, and smart surveillance.
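    For context on how a YOLO-format dataset like this one is typically consumed, here is a minimal training sketch. It assumes the archive extracts into train/valid/test image folders and uses the ultralytics package as one common choice for YOLOv8-style models; the dataset paths, the file name fire_smoke.yaml, and the hyperparameters are assumptions, not part of the dataset itself.

```python
# Minimal sketch, assuming the dataset extracts into train/valid/test image folders
# and that the ultralytics package is installed (pip install ultralytics).
from pathlib import Path
from ultralytics import YOLO

# Hypothetical dataset config; adjust the paths to match the actual archive layout.
Path("fire_smoke.yaml").write_text(
    "path: ./fire-smoke-dataset\n"
    "train: train/images\n"
    "val: valid/images\n"
    "test: test/images\n"
    "names:\n"
    "  0: fire\n"
    "  1: smoke\n"
)

model = YOLO("yolov8n.pt")                         # pretrained checkpoint as a starting point
model.train(data="fire_smoke.yaml", epochs=50, imgsz=640)
metrics = model.val()                              # evaluate on the validation split
print(metrics.box.map50)                           # mAP@0.5 over the fire/smoke classes
```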

  3. Forest Fire And Smoke Detection Using Uav Imaging_2 Dataset

    • universe.roboflow.com
    zip
    Updated Jul 18, 2023
    Cite
    AUEBMSC BA MACHINE LEARNING (2023). Forest Fire And Smoke Detection Using Uav Imaging_2 Dataset [Dataset]. https://universe.roboflow.com/auebmsc-ba-machine-learning/forest-fire-and-smoke-detection-using-uav-imaging_2
    Explore at:
    zip (available download formats)
    Dataset updated
    Jul 18, 2023
    Dataset authored and provided by
    AUEBMSC BA MACHINE LEARNING
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fire Bounding Boxes
    Description

    Forest Fire And Smoke Detection Using UAV Imaging_2

    ## Overview
    
    Forest Fire And Smoke Detection Using UAV Imaging_2 is a dataset for object detection tasks - it contains Fire annotations for 1,151 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
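    A minimal sketch of the download path mentioned above, using the roboflow Python package; the API key placeholder, the dataset version number, and the export format are assumptions to adapt to your own account.

```python
# Minimal sketch, assuming the roboflow package is installed (pip install roboflow)
# and that you have an API key for your own Roboflow account.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")            # placeholder key
project = rf.workspace("auebmsc-ba-machine-learning").project(
    "forest-fire-and-smoke-detection-using-uav-imaging_2"
)
dataset = project.version(1).download("yolov8")  # version and export format are assumptions
print(dataset.location)                          # local folder with images and labels
```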
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Forest Fire And Smoke Detection Using Uav Imaging_3 Dataset

    • universe.roboflow.com
    zip
    Updated Jul 30, 2023
    Cite
    AUEBMSC BA MACHINE LEARNING (2023). Forest Fire And Smoke Detection Using Uav Imaging_3 Dataset [Dataset]. https://universe.roboflow.com/auebmsc-ba-machine-learning/forest-fire-and-smoke-detection-using-uav-imaging_3/dataset/9
    Explore at:
    zip (available download formats)
    Dataset updated
    Jul 30, 2023
    Dataset authored and provided by
    AUEBMSC BA MACHINE LEARNING
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fire Bounding Boxes
    Description

    Forest Fire And Smoke Detection Using UAV Imaging_3

    ## Overview
    
    Forest Fire And Smoke Detection Using UAV Imaging_3 is a dataset for object detection tasks - it contains Fire annotations for 4,468 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  5. Drone-Based Fire Detection Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 4, 2025
    Cite
    Growth Market Reports (2025). Drone-Based Fire Detection Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/drone-based-fire-detection-market
    Explore at:
    csv, pptx, pdf (available download formats)
    Dataset updated
    Aug 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Drone-Based Fire Detection Market Outlook



    As per our latest research, the global Drone-Based Fire Detection market size reached USD 1.32 billion in 2024, reflecting the rapid integration of drone technologies in fire detection and management. The market is experiencing robust momentum, driven by increasing wildfire incidents, advancements in drone hardware and analytics, and heightened governmental focus on disaster management. The sector is forecasted to expand at a compound annual growth rate (CAGR) of 18.9% from 2025 to 2033, with the market expected to achieve a value of USD 6.12 billion by 2033. This impressive growth trajectory is primarily attributed to the widespread adoption of drones equipped with advanced sensors and AI-powered analytics for real-time fire detection and monitoring across diverse end-user segments.




    The surge in the Drone-Based Fire Detection market is fueled by a convergence of technological advancements and escalating environmental challenges. Increasing frequency and severity of wildfires, particularly in regions like North America and Australia, have underscored the need for rapid, accurate, and scalable fire detection solutions. Drones, with their ability to cover vast terrains and deliver real-time data, are proving indispensable for early fire identification and containment. Furthermore, the evolution of lightweight, high-resolution thermal and infrared cameras, coupled with AI-driven analytics, has significantly enhanced the precision and reliability of drone-based fire detection systems. These innovations enable authorities to detect hotspots, monitor fire progression, and deploy resources efficiently, ultimately reducing response times and minimizing damage.




    Another critical growth factor for the Drone-Based Fire Detection market is the increasing adoption of drones by government agencies and fire departments worldwide. Regulatory bodies are recognizing the value of unmanned aerial vehicles (UAVs) in augmenting traditional fire monitoring methods, especially in inaccessible or hazardous environments. Initiatives to modernize public safety infrastructures, combined with substantial investments in smart city projects, are catalyzing the deployment of drone-based fire detection solutions in both urban and rural settings. Moreover, the integration of drones into industrial fire safety protocols—particularly in sectors such as oil & gas, manufacturing, and utilities—further amplifies market growth, as these industries seek to safeguard critical assets and ensure regulatory compliance.




    The proliferation of AI and machine learning technologies is transforming the operational landscape of the Drone-Based Fire Detection market. AI-powered analytics platforms can process vast volumes of visual and thermal data captured by drones, enabling automated detection of fire outbreaks, smoke plumes, and abnormal thermal signatures. These capabilities facilitate predictive analytics and risk assessment, empowering stakeholders to implement proactive fire prevention strategies. The convergence of cloud computing, IoT connectivity, and real-time data visualization tools also supports seamless integration of drone-based fire detection systems with broader emergency management frameworks, enhancing situational awareness and collaborative decision-making among first responders, government agencies, and industrial operators.




    From a regional perspective, North America currently dominates the Drone-Based Fire Detection market, accounting for the largest revenue share in 2024, followed closely by Europe and Asia Pacific. The United States, in particular, has witnessed significant investments in drone-based firefighting technologies, driven by recurring wildfire crises in California and other western states. Europe is also witnessing strong growth, propelled by stringent environmental regulations and increasing adoption of smart surveillance technologies. Meanwhile, the Asia Pacific region is emerging as a lucrative market, fueled by rapid urbanization, rising industrialization, and growing awareness of the benefits of drone-based fire detection in countries such as China, Japan, and Australia. Latin America and the Middle East & Africa are gradually embracing these technologies, albeit at a slower pace, as governments and industries recognize the need for advanced fire safety solutions in the face of changing climatic conditions.




  6. AIDER (Aerial Image Dataset for Emergency Response Applications)

    • zenodo.org
    • data.europa.eu
    zip
    Updated Aug 3, 2020
    Cite
    Christos Kyrkou; Christos Kyrkou (2020). AIDER (Aerial Image Dataset for Emergency Response Applications) [Dataset]. http://doi.org/10.5281/zenodo.3888300
    Explore at:
    zip (available download formats)
    Dataset updated
    Aug 3, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Christos Kyrkou; Christos Kyrkou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    AIDER (Aerial Image Dataset for Emergency Response applications): The dataset construction involved manually collecting all images for four disaster events, namely Fire/Smoke, Flood, Collapsed Building/Rubble, and Traffic Accidents, as well as one class for the Normal case.

    The aerial images for the disaster events were collected from various online sources (e.g., Google Images, Bing Images, YouTube, news agency websites) using the keywords "Aerial View", "UAV", or "Drone" combined with an event such as "Fire", "Earthquake", or "Highway accident". Images are initially of different sizes but are standardized prior to training. All images were manually inspected, first to confirm that they contained the event of interest and then to check that the event was centered in the image, so that geometric transformations during augmentation would not remove it from the image view. During the data collection process, the various disaster events were captured at different resolutions and under various conditions with regard to illumination and viewpoint. Finally, to replicate real-world scenarios, the dataset is imbalanced in the sense that it contains more images from the Normal class.

    This subset includes around 500 images for each disaster class and over 4000 images for the normal class. This makes it an imbalanced classification problem.

    To further enhance the dataset, it is advised that random augmentations be probabilistically applied to each image before it is added to the training batch (see the sketch below). Possible transformations include geometric ones (rotations, translations, horizontal-axis mirroring, cropping, and zooming) as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
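    A minimal sketch of such a probabilistic augmentation pipeline, using torchvision as one possible implementation and assuming the extracted AIDER images are arranged in one folder per class; the transform parameters below are illustrative, not the authors' settings.

```python
# Minimal sketch, assuming torchvision is installed and the AIDER images are arranged
# as <root>/<class_name>/<image>.jpg (e.g. fire, flood, collapsed_building, normal).
import torch
from torchvision import datasets, transforms

# Each augmentation is applied with some probability, mixing geometric transforms
# and image manipulations as suggested in the description above.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),                     # cropping and zooming
    transforms.RandomHorizontalFlip(p=0.5),                                  # horizontal-axis mirroring
    transforms.RandomApply([transforms.RandomRotation(15)], p=0.5),          # rotations
    transforms.RandomApply([transforms.ColorJitter(0.3, 0.3, 0.3)], p=0.5),  # illumination / color shifts
    transforms.RandomApply([transforms.GaussianBlur(3)], p=0.3),             # blurring
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("AIDER", transform=augment)   # root path is an assumption
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels[:8])   # e.g. torch.Size([32, 3, 224, 224])
```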

  7. Drone Project Dataset

    • universe.roboflow.com
    zip
    Updated May 19, 2025
    + more versions
    Cite
    smoke and fire (2025). Drone Project Dataset [Dataset]. https://universe.roboflow.com/smoke-and-fire-5x1by/drone-project-dbxnf/dataset/2
    Explore at:
    zip (available download formats)
    Dataset updated
    May 19, 2025
    Dataset authored and provided by
    smoke and fire
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Smoke And Fire Bounding Boxes
    Description

    Drone Project

    ## Overview
    
    Drone Project is a dataset for object detection tasks - it contains Smoke And Fire annotations for 210 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License

    This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  8. Spatiotemporal evolution of a controlled forest fire near Torre do Pinhão (Portugal)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 7, 2024
    Cite
    Macias, Henrique (2024). Spatiotemporal evolution of a controlled forest fire near Torre do Pinhão (Portugal) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11453965
    Explore at:
    Dataset updated
    Jun 7, 2024
    Dataset provided by
    Macias, Henrique
    Costa, Rogério
    Moreira, José Manuel Matos
    Ribeiro, Tiago
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Portugal
    Description

    This dataset represents part of the propagation of a controlled forest fire on March 1, 2019, near Torre do Pinhão, Portugal. The data was generated from a 15-minute video captured using a UAV. The video's description, frame selection, and segmentation process are available at https://doi.org/10.5281/zenodo.7944963. The data represents the evolution of the burned region divided into 170 slices. Each slice is represented by a source polygon (S), a target polygon (T), and a one-to-one mapping of the vertices of S onto T. Each polygon represents the burned region at a given time instant, and each slice represents the evolution of the burned region during a time interval. These data are the inputs of interpolation methods that create a continuous representation of the fire spread, even when the original video frames are unusable, for instance due to occlusion of the area of interest by smoke. The figure goodCorrespondences.png shows an example of the correspondences between two polygons, and the accompanying video presents the evolution of the burned region obtained using a simple linear interpolation method. The dataset was created using a supervised method; the source code and method description are available on GitHub.

    source: the url of the original video (raw data) in zenodo.

    eventData: the date and time of the prescribed forest fire.

    location: the name of the place of the prescribed fire.

    coordinates and coordinatesDMS: the coordinates of the prescribed fire in WGS84. The latter represents the coordinates in degrees, minutes, and seconds.

    numberOfFrames: the number of frames extracted from the video.

    correspondences: this is a data structure to represent the correspondences between the polygons delimiting the extent of the burned area in frames (1, 2), (2, 3), … , (169, 170). The key is the number of the slice in [1, 170] and the value is a dictionary with the following keys:

    frameNbInVideo_source: the number of the frame corresponding to the source polygon for the slice.

    elapsedTimeInVideo_source: the elapsed time in milliseconds since the beginning of the video.

    frameNbInVideo_target: the number of the frame corresponding to the target polygon for the slice.

    numberOfVertices: the number of vertices of the source and target polygons.

    vertexMappings versus sourceCoords and targetCoords: the correspondences between the source and target vertices in each slice are represented in two distinct but equivalent formats. In data_fmtA, the list sourceCoords holds the coordinates (x, y) of the source vertices and the list targetCoords holds the coordinates of the target vertices; the two lists have the same length and the correspondence is given by position in the list. In data_fmtB, the correspondences are represented in the list vertexMappings, where each entry holds the coordinates of a source vertex and its corresponding target vertex.

    Note that the target polygon in slice i and the source polygon in slice i+1 are geometrically identical but topologically distinct, because the number of vertices differs. This ensures a one-to-one correspondence between the vertices of the source and target polygons in each slice. It is up to the vertex correspondence algorithm to add vertices to the source and target polygons to obtain that one-to-one correspondence, as described on GitHub (see goodCorrespondences.png for an example of correspondences between the vertices of a source and a target polygon representing the extent of the burned area at two time instants).
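    As a minimal illustration of how these correspondences can drive a continuous representation, the sketch below linearly interpolates between the source and target vertices of one slice. It assumes the data_fmtA layout (sourceCoords/targetCoords) described above, loaded from a JSON file whose name is hypothetical.

```python
import json

# Minimal sketch, assuming the slice data is available as JSON in the data_fmtA layout
# described above; the file name "slices_fmtA.json" is hypothetical.
with open("slices_fmtA.json") as f:
    correspondences = json.load(f)["correspondences"]

def interpolate_slice(slice_data, t):
    """Linearly interpolate the burned-area polygon at fraction t of the slice's
    time interval (t=0 gives the source polygon, t=1 the target polygon)."""
    source = slice_data["sourceCoords"]
    target = slice_data["targetCoords"]
    assert len(source) == len(target)   # one-to-one vertex correspondence
    return [
        ((1 - t) * sx + t * tx, (1 - t) * sy + t * ty)
        for (sx, sy), (tx, ty) in zip(source, target)
    ]

# Example: the polygon halfway through the time interval of slice 1.
midway = interpolate_slice(correspondences["1"], 0.5)
print(len(midway), midway[:3])
```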

  9. Fire Dataset

    • universe.roboflow.com
    zip
    Updated Sep 14, 2021
    Cite
    new-workspace-nqyj3 (2021). Fire Dataset [Dataset]. https://universe.roboflow.com/new-workspace-nqyj3/fire-dataset-nsmlo/model/3
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 14, 2021
    Dataset authored and provided by
    new-workspace-nqyj3
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Fire Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Fire Alert System: The model can be integrated into a real-time monitoring system to recognize early signs of fire incidents, like smoke or flames, thereby automating the fire alert process and allowing for quicker response times.

    2. Drone Surveying: Drones can use this model to detect fire and smoke during forest monitoring tasks. It can help in early detection of forest fires and aid in rapid intervention and prevention of fire spread.

    3. Video Surveillance: The model can be integrated into CCTV systems in sensitive areas such as factories, warehouses or server rooms where early fire detection is critical to prevent extensive damages.

    4. Traffic Management: The system can be used in traffic management systems for recognizing car fires. This could help emergency services to respond rapidly and efficiently to such incidents, improving road safety.

    5. Insurance Claim Verifications: It can be utilized by insurance companies to analyze and validate circumstances and the extent of damage during fire incidents, assisting in the vetting process of insurance claims.

  10. C2a Dataset

    • universe.roboflow.com
    zip
    Updated Aug 23, 2024
    Cite
    saint tour (2024). C2a Dataset [Dataset]. https://universe.roboflow.com/saint-tour/c2a-dataset
    Explore at:
    zip (available download formats)
    Dataset updated
    Aug 23, 2024
    Dataset authored and provided by
    saint tour
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Human Bounding Boxes
    Description

    For more details, please refer to our paper: Nihal, R. A., et al. "UAV-Enhanced Combination to Application: Comprehensive Analysis and Benchmarking of a Human Detection Dataset for Disaster Scenarios." ICPR 2024 (accepted); arXiv preprint, 2024.

    and the github repo https://github.com/Ragib-Amin-Nihal/C2A

    We encourage users to cite this paper when using the dataset for their research or applications.

    The C2A (Combination to Application) Dataset is a resource designed to advance human detection in disaster scenarios using UAV imagery. This dataset addresses a critical gap in the field of computer vision and disaster response by providing a large-scale, diverse collection of synthetic images that combine real disaster scenes with human poses.

    Context: In the wake of natural disasters and emergencies, rapid and accurate human detection is crucial for effective search and rescue operations. UAVs (Unmanned Aerial Vehicles) have emerged as powerful tools in these scenarios, but their effectiveness is limited by the lack of specialized datasets for training AI models. The C2A dataset aims to bridge this gap, enabling the development of more robust and accurate human detection systems for disaster response.

    Sources: The C2A dataset is a synthetic combination of two primary sources: 1. Disaster Backgrounds: Sourced from the AIDER (Aerial Image Dataset for Emergency Response Applications) dataset, providing authentic disaster scene imagery. 2. Human Poses: Derived from the LSP/MPII-MPHB (Multiple Poses Human Body) dataset, offering a wide range of human body positions.

    Key Features: - 10,215 high-resolution images - Over 360,000 annotated human instances - 5 human pose categories: Bent, Kneeling, Lying, Sitting, and Upright - 4 disaster scenario types: Fire/Smoke, Flood, Collapsed Building/Rubble, and Traffic Accidents - Image resolutions ranging from 123x152 to 5184x3456 pixels - Bounding box annotations for each human instance

    Inspiration: This dataset was inspired by the pressing need to improve the capabilities of AI-assisted search and rescue operations. By providing a diverse and challenging set of images that closely mimic real-world disaster scenarios, we aim to: 1. Enhance the accuracy of human detection algorithms in complex environments 2. Improve the generalization of models across various disaster types and human poses 3. Accelerate the development of AI systems that can assist first responders and save lives

    Applications: The C2A dataset is designed for researchers and practitioners in: - Computer Vision and Machine Learning - Disaster Response and Emergency Management - UAV/Drone Technology - Search and Rescue Operations - Humanitarian Aid and Crisis Response

    We hope this dataset will inspire innovative approaches to human detection in challenging environments and contribute to the development of technologies that can make a real difference in disaster response efforts.

  11. Flugunfallerkennung Planes Dataset

    • universe.roboflow.com
    zip
    Updated Jan 31, 2023
    Cite
    flugunfallerkennung (2023). Flugunfallerkennung Planes Dataset [Dataset]. https://universe.roboflow.com/flugunfallerkennung-zum1j/flugunfallerkennung-planes/model/2
    Explore at:
    zip (available download formats)
    Dataset updated
    Jan 31, 2023
    Dataset authored and provided by
    flugunfallerkennung
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Planes Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Aviation Safety: The model could be used by aviation authorities and air traffic controllers to monitor runways in real time, identifying different types of aircraft and detecting any signs of smoke or fire instantly for early accident prevention.

    2. Disaster Response: Emergency services could use the dataset to quickly identify plane crashes and forward this information to the relevant authorities, expediting the response time to such incidents.

    3. Insurance Claims: Insurance companies may use this model to verify claims related to aviation accidents by analyzing images of the incident, aiding in determining the type of plane involved and the presence of fire or smoke.

    4. Autonomous Rescue Drones: Companies designing autonomous drones for rescue operations could use this model to help the drones identify crashed planes, especially in difficult terrains or adverse weather conditions.

    5. Training and Simulation: This model could be used for training purposes, helping develop simulation programs that mimic real-world scenarios for emergency service providers, teaching them how to recognize different aircraft types and accident indicators.

