8 datasets found
  1. MOBDrone: a large-scale drone-view dataset for man overboard detection

    • zenodo.org
    • explore.openaire.eu
    Formats: json, pdf, zip
    Updated Jul 17, 2024
    Cite
    Donato Cafarelli; Luca Ciampi; Lucia Vadicamo; Claudio Gennaro; Andrea Berton; Marco Paterni; Chiara Benvenuti; Mirko Passera; Fabrizio Falchi (2024). MOBDrone: a large-scale drone-view dataset for man overboard detection [Dataset]. http://doi.org/10.5281/zenodo.5996890
    Explore at:
    Available download formats: json, zip, pdf
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Donato Cafarelli; Luca Ciampi; Lucia Vadicamo; Claudio Gennaro; Andrea Berton; Marco Paterni; Chiara Benvenuti; Mirko Passera; Fabrizio Falchi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset

    The Man OverBoard Drone (MOBDrone) dataset is a large-scale collection of aerial footage images. It contains 126,170 frames extracted from 66 video clips gathered from one UAV flying at an altitude of 10 to 60 meters above mean sea level. Images are manually annotated with more than 180K bounding boxes localizing objects belonging to five categories: person, boat, lifebuoy, surfboard, and wood. More than 113K of these bounding boxes belong to the person category and localize people in the water simulating the need to be rescued.

    In this repository, we provide:

    • 66 Full HD video clips (total size: 5.5 GB)

    • 126,170 images extracted from the videos at a rate of 30 FPS (total size: 243 GB)

    • 3 annotation files for the extracted images that follow the MS COCO data format (for more info, see https://cocodataset.org/#format-data); a minimal loading sketch is given after this list:

      • annotations_5_custom_classes.json: annotations for all five categories. Note that the class IDs do not correspond to the MS COCO standard, since we account for two classes not present in the MS COCO dataset: lifebuoy and wood.

      • annotations_3_coco_classes.json: annotations for the three classes also present in the MS COCO dataset: person, boat, and surfboard. Class IDs correspond to the MS COCO standard.

      • annotations_person_coco_classes.json: annotations for the 'person' class only. The class ID corresponds to the MS COCO standard.
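
    Since these files follow the MS COCO format, one common way to inspect them is the pycocotools package; a minimal sketch, assuming the JSON file above has been downloaded to the working directory (the package choice is ours, not prescribed by the dataset):

    # Minimal sketch: inspect MOBDrone annotations via the MS COCO API.
    from pycocotools.coco import COCO

    coco = COCO("annotations_5_custom_classes.json")

    # List the five categories (person, boat, lifebuoy, surfboard, wood).
    for cat in coco.loadCats(coco.getCatIds()):
        print(cat["id"], cat["name"])

    # Count the 'person' bounding boxes.
    person_id = coco.getCatIds(catNms=["person"])[0]
    print(len(coco.getAnnIds(catIds=[person_id])), "person boxes")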

    The MOBDrone dataset is intended as a test-only benchmark. However, for researchers interested in also using our data for training, we provide training and test splits (a filename-based split sketch follows the list):

    • Test set: All the images whose filename starts with "DJI_0804" (total: 37,604 images)
    • Training set: All the images whose filename starts with "DJI_0915" (total: 88,568 images)
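
    The split can be materialized with a simple filename check; a minimal sketch, assuming the extracted frames sit in a local images/ directory with .jpg extensions (both assumptions, not prescribed above):

    # Sketch: partition extracted frames into the official splits by prefix.
    from pathlib import Path

    train, test = [], []
    for img in sorted(Path("images").glob("*.jpg")):  # extension assumed
        if img.name.startswith("DJI_0915"):
            train.append(img)   # training split
        elif img.name.startswith("DJI_0804"):
            test.append(img)    # test split

    print(f"{len(train)} training images, {len(test)} test images")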

    More details about data generation and the evaluation protocol can be found in our MOBDrone paper: https://arxiv.org/abs/2203.07973
    The code to reproduce our results is available in this GitHub repository: https://github.com/ciampluca/MOBDrone_eval
    See also http://aimh.isti.cnr.it/dataset/MOBDrone

    Citing the MOBDrone

    The MOBDrone dataset is released under a Creative Commons Attribution license, so please cite the MOBDrone if it is used in your work in any form.
    Published academic papers should use the academic citation for our MOBDrone paper, in which we evaluated several pre-trained state-of-the-art object detectors, focusing on the detection of overboard people:

    @inproceedings{MOBDrone2021,
      title     = {MOBDrone: a Drone Video Dataset for Man OverBoard Rescue},
      author    = {Donato Cafarelli and Luca Ciampi and Lucia Vadicamo and Claudio Gennaro and Andrea Berton and Marco Paterni and Chiara Benvenuti and Mirko Passera and Fabrizio Falchi},
      booktitle = {ICIAP2021: 21st International Conference on Image Analysis and Processing},
      year      = {2021}
    }
    

    and this Zenodo dataset:

    @dataset{donato_cafarelli_2022_5996890,
      author    = {Donato Cafarelli and Luca Ciampi and Lucia Vadicamo and Claudio Gennaro and Andrea Berton and Marco Paterni and Chiara Benvenuti and Mirko Passera and Fabrizio Falchi},
      title     = {{MOBDrone: a large-scale drone-view dataset for man overboard detection}},
      month     = feb,
      year      = 2022,
      publisher = {Zenodo},
      version   = {1.0.0},
      doi       = {10.5281/zenodo.5996890},
      url       = {https://doi.org/10.5281/zenodo.5996890}
    }

    Personal works, such as machine learning projects/blog posts, should provide a URL to the MOBDrone Zenodo page (https://doi.org/10.5281/zenodo.5996890), though a reference to our MOBDrone paper would also be appreciated.

    Contact Information

    If you would like further information about the MOBDrone or if you experience any issues downloading files, please contact us at mobdrone[at]isti.cnr.it

    Acknowledgements

    This work was partially supported by NAUSICAA - "NAUtical Safety by means of Integrated Computer-Assistance Appliances 4.0" project funded by the Tuscany region (CUP D44E20003410009). The data collection was carried out with the collaboration of the Fly&Sense Service of the CNR of Pisa - for the flight operations of remotely piloted aerial systems - and of the Institute of Clinical Physiology (IFC) of the CNR - for the water immersion operations.

  2. Hyperspectral Imaging Dataset for Laser Thermal Ablation Monitoring in Vital Organs

    • data.niaid.nih.gov
    Updated Dec 14, 2024
    Cite
    Danilov, Viacheslav (2024). Hyperspectral Imaging Dataset for Laser Thermal Ablation Monitoring in Vital Organs [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10444212
    Explore at:
    Dataset updated
    Dec 14, 2024
    Dataset provided by
    De Landro, Martina
    Danilov, Viacheslav
    Saccomandi, Paola
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objectives: The objective of the research was to use hyperspectral imaging (HSI) to detect thermal damage induced in vital organs (such as the liver, pancreas, and stomach) during laser thermal therapy. The experimental study was conducted during thermal ablation procedures on live pigs.

    Ethical Approval: The experiments were performed at the Institute for Image Guided Surgery in Strasbourg, France. This experimental study was approved by the local Ethical Committee on Animal Experimentation (ICOMETH No. 38.2015.01.069) and by the French Ministry of Higher Education and Research (protocol №APAFiS-19543-2019030112087889, approved on March 14, 2019). All animals were treated in accordance with the ARRIVE guidelines, the French legislation on the use and care of animals, and the guidelines of the Council of the European Union (2010/63/EU).

    Description: During our experimental study, we used a TIVITA hyperspectral camera to acquire hypercubes of size 640x480x100 voxels (640x480 pixels across 100 spectral bands), along with regular RGB images, at each acquisition step. The bands were acquired directly from the hyperspectral camera without additional pre-processing. Each hypercube was acquired in approximately 6 seconds and synchronized with the absence of breathing motion using a protocol implemented for animal anesthesia. Polyurethane markers were placed around the target area to serve as references for superimposing the hyperspectral images; target areas were selected according to the hyperspectral camera manufacturer's guidelines.

    As part of our investigation, we included hyperspectral cubes from 20 experiments conducted under identical conditions in our study. The hyperspectral cubes were collected in three distinct stages. In the first stage, the cubes were gathered before laparotomy at a temperature of 37°C. In the second stage, we obtained the cubes as the temperature gradually increased from 60°C to 110°C at 10°C intervals. Finally, in the last stage, the cubes were collected after turning off the laser during the post-ablation phase. Thus, we obtained a total of 233 hyperspectral cubes, each consisting of 100 wavelengths, resulting in a dataset of 23,300 two-dimensional images. The temperature changes were recorded, and the “Temperature profile during laser ablation” image illustrates the corresponding profile, highlighting the specific time intervals during which the hyperspectral camera and laser were activated and deactivated. To provide a visual representation of the collected data, we have included several examples of images captured from different organs in the “Examples of ablation areas” figure.

    The raw dataset, comprising 233 hyperspectral cubes of 100 wavelengths each, was transformed into 699 single-channel images using PCA and t-SNE decompositions. These images were then divided into training and test subsets and prepared in the COCO object detection format. This COCO dataset can be used for training and testing different neural networks.
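
    As a rough illustration of the decomposition step (the authors' exact pipeline, including t-SNE, lives in the repositories linked under "Access to the Study" below), each pixel's 100-band spectrum can be treated as a feature vector and reduced to one channel; a sketch with scikit-learn, with the array shape taken from the description and the input itself a placeholder:

    # Sketch: collapse a hypercube (H, W, bands) to a single-channel image
    # with PCA. Placeholder data; real cubes are 640x480 pixels x 100 bands.
    import numpy as np
    from sklearn.decomposition import PCA

    cube = np.random.rand(480, 640, 100)        # placeholder hypercube
    pixels = cube.reshape(-1, cube.shape[-1])   # one spectrum per pixel

    component = PCA(n_components=1).fit_transform(pixels)  # (H*W, 1)
    image = component.reshape(cube.shape[:2])   # (480, 640) single channel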

    Access to the Study: Further information about this study, including curated source code, dataset details, and trained models, can be accessed through the following repositories:

    Source code: https://github.com/ViacheslavDanilov/hsi_analysis

    Dataset: https://doi.org/10.5281/zenodo.10444212

    Models: https://doi.org/10.5281/zenodo.10444269

  3. Exploration no. 74072

    • datadiscoverystudio.org
    Formats: html
    Cite
    Exploration no. 74072 [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/9e877d1e997b4673a12288df443f9551/html
    Explore at:
    Available download formats: html
    Description

    Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information.

  4. COCO Panoptic scores on validation and test set for U-Net variants.

    • figshare.com
    Formats: xls
    Updated Feb 15, 2024
    Cite
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret (2024). COCO Panoptic scores on validation and test set for U-Net variants. [Dataset]. http://doi.org/10.1371/journal.pone.0298217.t009
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 15, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The following parameters are static, and their respective columns are hidden: we use our proposed training configuration, the loss function is the binary cross entropy, no augmentation is performed, DEF selection is performed with Joint Optimization (JO), and we use the Meyer Watershed (MWS) for CSE.
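
    For reference, the COCO Panoptic Quality (PQ) metric reported in this and the following tables combines segmentation and recognition quality; a minimal sketch of the standard formula (illustrative only, not the authors' evaluation code):

    # Sketch of COCO Panoptic Quality:
    # PQ = sum(IoU over true positives) / (TP + FP/2 + FN/2),
    # where predicted and ground-truth segments match when their IoU > 0.5.
    def panoptic_quality(tp_ious, num_fp, num_fn):
        denom = len(tp_ious) + 0.5 * num_fp + 0.5 * num_fn
        return sum(tp_ious) / denom if denom else 0.0

    # Example: 3 matches, 1 false positive, 1 false negative.
    print(panoptic_quality([0.9, 0.8, 0.7], 1, 1))  # 0.6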

  5. COCO Panoptic scores on validation and test set for study on topological loss functions.

    • plos.figshare.com
    Formats: xls
    Updated Feb 15, 2024
    Cite
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret (2024). COCO Panoptic scores on validation and test set for study on topological loss functions. [Dataset]. http://doi.org/10.1371/journal.pone.0298217.t007
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 15, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The following parameters are static, and their respective columns are hidden: we use the Meyer Watershed (MWS) for CSE and Joint Optimization (JO) for DEF selection, we use our proposed training configuration, no augmentation is performed. For the architectures, * indicates pre-trained variants: the network is trained first using binary cross-entropy, then using a custom loss.

  6. COCO Panoptic scores on validation and test set for transformer architectures.

    • plos.figshare.com
    Formats: xls
    Updated Feb 15, 2024
    Cite
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret (2024). COCO Panoptic scores on validation and test set for transformer architectures. [Dataset]. http://doi.org/10.1371/journal.pone.0298217.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 15, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The following parameters are static, and their respective columns are hidden: we use the Meyer Watershed (MWS) for CSE and Joint Optimization (JO) for DEF selection, we use our proposed training configuration, the loss function is the binary cross entropy, no augmentation is performed. For the architectures, * indicates pre-trained variants.

  7. COCO Panoptic scores on validation and test set for the training configuration study, using a naive connected component labelling for CSE.

    • plos.figshare.com
    Formats: xls
    Updated Feb 15, 2024
    Cite
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret (2024). COCO Panoptic scores on validation and test set for the training configuration study, using a naive connected component labelling for CSE. [Dataset]. http://doi.org/10.1371/journal.pone.0298217.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 15, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The training configuration from [5] is indicated as “Original”, while our proposed method is indicated as “Proposed”. The following parameters are static, and their respective columns are hidden: the CSE used is a naive connected component labelling ([5] used a grid search to find the best threshold θ for EPM binarization while we use a fixed value of 0.5), the loss function is the binary cross entropy, the best DEF is selected using the protocol of [5], no augmentation is performed. For the architectures, * indicates pre-trained variants.
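
    The EPM binarization mentioned above reduces, in either case, to thresholding a probability map; a minimal sketch of the fixed-threshold variant and a grid-search counterpart (score_fn is a hypothetical validation metric, not from the paper):

    # Sketch: binarize an edge probability map (EPM) with a fixed threshold
    # (theta = 0.5, as above) versus grid-searching theta as in [5].
    import numpy as np

    def binarize_epm(epm: np.ndarray, theta: float = 0.5) -> np.ndarray:
        return (epm >= theta).astype(np.uint8)

    def best_threshold(epm, score_fn, candidates=np.linspace(0.1, 0.9, 9)):
        # score_fn: hypothetical callable scoring a binarized map.
        return max(candidates, key=lambda t: score_fn(binarize_epm(epm, t)))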

  8. COCO Panoptic scores on validation and test set for the augmentation study.

    • figshare.com
    Formats: xls
    Updated Feb 15, 2024
    Cite
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret (2024). COCO Panoptic scores on validation and test set for the augmentation study. [Dataset]. http://doi.org/10.1371/journal.pone.0298217.t008
    Explore at:
    Available download formats: xls
    Dataset updated
    Feb 15, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Yizi Chen; Joseph Chazalon; Edwin Carlinet; Minh Ôn Vũ Ngoc; Clément Mallet; Julien Perret
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The following parameters are static, and their respective columns are hidden: model architecture is U-Net (trained from scratch), we use the improved training variant, the loss function is the binary cross entropy, the best DEF is selected using joint optimization, and Meyer Watershed (MWS) is used for CSE.
