100+ datasets found
  1. Drone images and their annotations of grazing cows

    • zenodo.org
    zip
    Updated Apr 24, 2024
    Cite
    Louise Helary; Adrien Lebreton (2024). Drone images and their annotations of grazing cows [Dataset]. http://doi.org/10.5281/zenodo.11048412
    Explore at:
    zip (available download formats)
    Dataset updated
    Apr 24, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Louise Helary; Adrien Lebreton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the European H2020 project ICAERUS regarding the livestock monitoring use case. More information here: https://icaerus.eu/

    By making available 1385 images and 4941 bounding boxes of cows, this dataset is a major contribution for public and research stakeholders developing cow detection and counting models.

    This dataset encompasses the following data:

    - Farm_name: name of the farm where the images were collected (Mauron, Derval, Jalogny or Other_farms)

    -----JPGImages: a directory where the images are stored (1385 images; RGB images taken from 60 or 100 m of altitude; image sizes of 4000*3000 or 5280*3956; taken with a DJI MAVIC3 E or DJI MAVIC3T).

    -------------DJI_YYYYMMDDHHMM_XXX: one directory per flight containing the images of a unique flight, with the date (YYYYMMDD), the hour in UTC+2 (HHMM), and XXX representing a mission number

    -----PASCAL_VOC1.1: a directory with the annotations of cows in the PASCAL VOC format (4941 bounding boxes of cows)

    -------------DJI_YYYYMMDDHHMM_XXX: one directory per flight containing the annotations; each annotation file refers to a unique image and shares its name, except for the extension

    -----YOLO_V1: a directory with the annotations of cows in the YOLO format (4941 bounding boxes of cows)

    -------------DJI_YYYYMMDDHHMM_XXX: one directory per flight containing the annotations; each annotation file refers to a unique image and shares its name, except for the extension

    Nadir images were collected between June 2023 and March 2024 at a constant altitude of 100 m or 60 m, either during flights planned with DJI Pilot 2 (altitude relative to the take-off position) or flown manually at a precise relative altitude. When used, planned flights were designed to cover the entire pasture, which is why some flights show a strong imbalance between images with cattle and images without. The file conditions_summary.xlsx details the following information for each flight: relative altitude, number of images, number of images containing cows, number of cow bounding boxes, weather, date, drone used, drone camera used, and image size. The aerial images also carry extensive EXIF metadata (drone, camera, altitude, etc.).

    These data can be used to train deep learning models such as YOLO, or to validate pretrained models for detecting cows on pastures.
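
    For illustration, a minimal Python sketch for reading one of the PASCAL VOC annotation files described above (the example path is hypothetical, following the directory layout):

    import xml.etree.ElementTree as ET

    def load_voc_boxes(xml_path):
        # Parse a PASCAL VOC annotation file into (class, xmin, ymin, xmax, ymax) tuples.
        root = ET.parse(xml_path).getroot()
        boxes = []
        for obj in root.findall("object"):
            bb = obj.find("bndbox")
            boxes.append((
                obj.findtext("name"),
                int(float(bb.findtext("xmin"))),
                int(float(bb.findtext("ymin"))),
                int(float(bb.findtext("xmax"))),
                int(float(bb.findtext("ymax"))),
            ))
        return boxes

    # Hypothetical flight directory and file name:
    print(load_voc_boxes("Mauron/PASCAL_VOC1.1/DJI_202306151030_001/DJI_0001.xml"))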

    Other versions of this dataset will be made available later in 2024. The authors of the dataset are open to any collaboration regarding animal counting models.

    If you are interested in sheep counting based on drone images and AI, other available data can be found here: https://doi.org/10.5281/zenodo.10400302

    For more information, please contact: adrien.lebreton@idele.fr

  2. Solar photovoltaic annotations for computer vision related to the...

    • figshare.com
    zip
    Updated May 30, 2023
    Cite
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury (2023). Solar photovoltaic annotations for computer vision related to the "Classification Training Dataset for Crop Types in Rwanda" drone imagery dataset [Dataset]. http://doi.org/10.6084/m9.figshare.18094043.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Rwanda
    Description

    This dataset contains annotations (i.e., polygons) for solar photovoltaic (PV) objects in the previously published dataset "Classification Training Dataset for Crop Types in Rwanda" by RTI International (DOI: 10.34911/rdnt.r4p1fr [1]). These polygons are intended to enable the use of that dataset as machine learning training data for solar PV identification in drone imagery. Note that this dataset contains ONLY the solar panel polygon labels and must be used together with the original RGB UAV imagery "Drone Imagery Classification Training Dataset for Crop Types in Rwanda" (https://mlhub.earth/data/rti_rwanda_crop_type). The original dataset contains UAV imagery (RGB) in .tiff format from six provinces in Rwanda, each imaged in three phases; our solar PV annotation dataset follows the same data structure, with province and phase labels in each subfolder.

    Data processing: Please refer to this GitHub repository for further details: https://github.com/BensonRen/Drone_based_solar_PV_detection. The original dataset is divided into 8000x8000 pixel image tiles and manually labeled with polygons (mainly rectangles) to indicate the presence of solar PV. These polygons are converted into pixel-wise, binary class annotations.

    Other information:

    1. The six provinces the UAV imagery came from are: (1) Cyampirita, (2) Kabarama, (3) Kaberege, (4) Kinyaga, (5) Ngarama, and (6) Rwakigarati. The original data collection was staged across 18 phases, each collecting a set of imagery from a given province (each province had 3 phases of collection). We have annotated 15 of the 18 phases; the missing ones are Kabarama-Phase2, Kaberege-Phase3, and Kinyaga-Phase3, which were excluded due to data compatibility issues.

    2. The annotated polygons are transformed into binary maps the size of the image tiles, where each pixel is either 0 or 1: 0 represents background and 1 represents solar PV pixels. These binary maps are in .png format, and each province/phase set has between 9 and 49 annotation patches. Using the code provided in the above repository, the same image patches can be cropped from the original RGB imagery.

    3. Solar PV densities vary across the image patches. In total, 214 solar PV instances were labeled across the 15 phases.

    Associated publication: "Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning" [https://arxiv.org/abs/2201.05548]

    This dataset is published under the CC-BY-NC-SA-4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/).
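
    For example, one of the binary annotation maps described in point 2 can be loaded and summarized with a few lines of Python (the file path is hypothetical):

    import numpy as np
    from PIL import Image

    # Hypothetical path following the province/phase folder structure described above.
    mask = np.array(Image.open("Kinyaga/Phase1/patch_0001.png"))

    # Per the description, pixels are 0 (background) or 1 (solar PV).
    print(f"solar PV covers {(mask == 1).mean():.4%} of this patch")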

  3. Drone Image Dataset

    • universe.roboflow.com
    zip
    Updated Mar 18, 2024
    Cite
    DRONE IMAGE (2024). Drone Image Dataset [Dataset]. https://universe.roboflow.com/drone-image-7qhur/drone-image-o9g3e
    Explore at:
    zip (available download formats)
    Dataset updated
    Mar 18, 2024
    Dataset authored and provided by
    DRONE IMAGE
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Car Motorcycle Lorry Bus Van Bounding Boxes
    Description

    Drone Image

    ## Overview
    
    Drone Image is a dataset for object detection tasks - it contains Car Motorcycle Lorry Bus Van annotations for 1,000 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  4. Manually Annotated Drone Imagery (RGB) Dataset for automatic coastline...

    • zenodo.org
    xml, zip
    Updated Feb 14, 2025
    + more versions
    Cite
    Kamran Anwar Tanwari; Paweł Terefenko; Jakub Śledziowski; Andrzej Giza (2025). Manually Annotated Drone Imagery (RGB) Dataset for automatic coastline delineation of Southern Baltic Sea, Poland with polyline annotations (0.1.1) [Dataset]. http://doi.org/10.5281/zenodo.14870761
    Explore at:
    zip, xml (available download formats)
    Dataset updated
    Feb 14, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kamran Anwar Tanwari; Paweł Terefenko; Jakub Śledziowski; Andrzej Giza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Baltic Sea, Poland
    Description

    Overview:

    The Manually Annotated Drone Imagery Dataset (MADRID) consists of hand-annotated, high-resolution RGB images taken in 2022-2023 on two different types of Polish coast: a cliff coast in Miedzyzdroje and a dune coast in Mrzezyno. All images were converted into a uniform format of 1440x2560 pixels, annotated with polylines, and organized into a file structure suited for semantic segmentation tasks (see the "Usage" notes below for more details).

    The raw images were captured with a Zenmuse L1 sensor (RGB) mounted on a DJI Matrice 300 RTK drone. A total of 4895 images were captured; the dataset contains the 3876 of them annotated with a coastline. The dataset only includes images with coastlines that are visually identifiable to the human eye. The images were annotated with the open-source software CVAT v2.13.

    Usage:

    The compressed RAR file contains two folders, train and test. Each file name encodes the date at which the image was captured (year, month, day), the number of the image, and the name of the drone used to capture it; for example, DJI_20220111140051_0051_Zenmuse-L1-mission and DJI_20220111140105_0053_Zenmuse-L1-mission. Additionally, the test folder contains annotations (one per image) extracted from the original XML annotation file provided in the CVAT 1.1 image format.

    Archives were compressed using RAR compression; they can be decompressed by opening and extracting Madrid_v0.1_Data.zip.

    A subset named Madrid_subset_data.zip has also been added; it contains a small portion of the train and test images, for inspecting the dataset without downloading it in full.

    Both the training and testing data are structured as follows.

    Train/
    └── images/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.JPG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.JPG
      └── ...
    └── masks/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.PNG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.PNG
      └── ...

    Test/
    └── images/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.JPG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.JPG
      └── ...
    └── masks/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.PNG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.PNG
      └── ...
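
    For illustration, a minimal PyTorch sketch pairing each image with its same-named mask under the layout above (the class name is our own; only the folder layout comes from the dataset):

    from pathlib import Path
    from PIL import Image
    from torch.utils.data import Dataset

    class CoastlineSegmentationDataset(Dataset):
        # Pairs each images/*.JPG with the same-named masks/*.PNG file.
        def __init__(self, root="Train"):
            self.images = sorted(Path(root, "images").glob("*.JPG"))
            self.mask_dir = Path(root, "masks")

        def __len__(self):
            return len(self.images)

        def __getitem__(self, idx):
            img_path = self.images[idx]
            mask_path = self.mask_dir / (img_path.stem + ".PNG")
            return Image.open(img_path).convert("RGB"), Image.open(mask_path)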

  5. drone-detection-dataset

    • huggingface.co
    Updated Dec 4, 2024
    Cite
    Pathik Prashant Ghugare (2024). drone-detection-dataset [Dataset]. https://huggingface.co/datasets/pathikg/drone-detection-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 4, 2024
    Authors
    Pathik Prashant Ghugare
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Dataset Card for Drone Detection Dataset

    This dataset card describes the processed version of the Drone Detection Dataset, originally curated by Maciej Pawełczyk and Marek Wojtyra, adapted to a COCO-style format for efficient usage in modern deep learning pipelines.

      Dataset Details

      Dataset Description

    The Drone Detection Dataset is a real-world object detection dataset for UAV detection tasks. It includes RGB images annotated with bounding boxes in the COCO… See the full description on the dataset page: https://huggingface.co/datasets/pathikg/drone-detection-dataset.

  6. Drone Image Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Jan 18, 2025
    Cite
    Pegasus (2025). Drone Image Segmentation Dataset [Dataset]. https://universe.roboflow.com/pegasus-iwt6h/drone-image-segmentation-ljxog
    Explore at:
    zip (available download formats)
    Dataset updated
    Jan 18, 2025
    Dataset authored and provided by
    Pegasus
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Plants Null Bounding Boxes
    Description

    Drone Image Segmentation

    ## Overview
    
    Drone Image Segmentation is a dataset for object detection tasks - it contains Plants Null annotations for 292 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  7. DroneVehicle Dataset

    • paperswithcode.com
    Updated Mar 14, 2025
    Cite
    Yiming Sun; Bing Cao; Pengfei Zhu; QinGhua Hu (2025). DroneVehicle Dataset [Dataset]. https://paperswithcode.com/dataset/dronevehicle
    Explore at:
    Dataset updated
    Mar 14, 2025
    Authors
    Yiming Sun; Bing Cao; Pengfei Zhu; QinGhua Hu
    Description

    The DroneVehicle dataset consists of a total of 56,878 images collected by drone, half of which are RGB images and the rest infrared images. We have made rich annotations with oriented bounding boxes for five categories. This dataset is available on the download page.

    | Category | RGB annotations | Infrared annotations |
    | --- | --- | --- |
    | car | 389,779 | 428,086 |
    | truck | 22,123 | 25,960 |
    | bus | 15,333 | 16,590 |
    | van | 11,935 | 12,708 |
    | freight car | 13,400 | 17,173 |

    In DroneVehicle, to annotate objects at the image boundaries, we set a white border with a width of 100 pixels on the top, bottom, left and right of each image, so that the downloaded image scale is 840 x 712. When training a detection network, this surrounding white border can be removed in pre-processing, restoring the image scale to 640 x 512.
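
    That pre-processing step amounts to a simple crop, for example in Python with NumPy (the file name is hypothetical):

    import numpy as np
    from PIL import Image

    img = np.array(Image.open("example_rgb.jpg"))  # hypothetical file; 712 x 840 pixels as downloaded
    img = img[100:-100, 100:-100]                  # drop the 100-pixel white border on all sides
    assert img.shape[:2] == (512, 640)             # back to the 640 x 512 training scale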

  8. Aerial Maritime Drone Object Detection Dataset - tiled

    • public.roboflow.com
    zip
    Updated Sep 28, 2022
    + more versions
    Cite
    Jacob Solawetz (2022). Aerial Maritime Drone Object Detection Dataset - tiled [Dataset]. https://public.roboflow.com/object-detection/aerial-maritime/9
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 28, 2022
    Dataset authored and provided by
    Jacob Solawetz
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Variables measured
    Bounding Boxes of movable-objects
    Description

    Overview

    Drone Example

    This dataset contains 74 aerial maritime photographs taken with a Mavic Air 2 drone, annotated with 1,151 bounding boxes covering docks, boats, lifts, jetskis, and cars. It is a multi-class aerial and maritime object detection dataset.

    The drone was flown at 400 ft. No drones were harmed in the making of this dataset.

    This dataset was collected and annotated by the Roboflow team, released with MIT license.

    Image example: https://i.imgur.com/9ZYLQSO.jpg

    Use Cases

    • Identify number of boats on the water over a lake via quadcopter.
    • Boat object detection dataset
    • Aerial Object Detection proof of concept
    • Identify if boat lifts have been taken out via a drone
    • Identify cars with a UAV drone
    • Find which lakes are inhabited and to which degree.
    • Identify if visitors are visiting the lake house via quad copter.
    • Proof of concept for UAV imagery project
    • Proof of concept for maritime project
    • Etc.

    This dataset is a great starter dataset for building an aerial object detection model with your drone.

    Getting Started

    Fork or download this dataset and follow our tutorial on how to train a state-of-the-art YOLOv4 object detector for more. Stay tuned for tutorials on how to teach your UAV drone to see, plus comparable airplane imagery and footage.

    Annotation Guide

    See here for how to use the CVAT annotation tool that was used to create this dataset.

    About Roboflow

    Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.


  9. Unmanned Aerial Vehicles Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Apr 5, 2023
    Cite
    Nicolas Souli (2023). Unmanned Aerial Vehicles Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7477568
    Explore at:
    Dataset updated
    Apr 5, 2023
    Dataset provided by
    Rafael Makrigiorgis
    Panayiotis Kolios
    Nicolas Souli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned Aerial Vehicles Dataset:

    The Unmanned Aerial Vehicle (UAV) Image Dataset consists of a collection of images containing UAVs, along with object annotations for the UAVs found in each image. The annotations have been converted into the COCO, YOLO, and VOC formats for ease of use with various object detection frameworks. The images in the dataset were captured from a variety of angles and under different lighting conditions, making it a useful resource for training and evaluating object detection algorithms for UAVs. The dataset is intended for use in research and development of UAV-related applications, such as autonomous flight, collision avoidance and rogue drone tracking and following. The dataset consists of the following images and detection objects (Drone):

    | Subset | Images | Drone |
    | --- | --- | --- |
    | Training | 768 | 818 |
    | Validation | 384 | 402 |
    | Testing | 383 | 400 |

    It is advised to further enhance the dataset so that random augmentations are probabilistically applied to each image prior to adding it to the batch for training. Specifically, there are a number of possible transformations such as geometric (rotations, translations, horizontal axis mirroring, cropping, and zooming), as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
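
    One way to build such a probabilistic pipeline is sketched below with the albumentations library; the specific transforms and probabilities are illustrative choices, not part of the dataset:

    import albumentations as A
    import numpy as np

    # Geometric transforms plus image manipulations, each applied with its own probability.
    augment = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.Rotate(limit=15, p=0.5),
            A.RandomBrightnessContrast(p=0.3),
            A.GaussianBlur(p=0.2),
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    image = np.zeros((640, 640, 3), dtype=np.uint8)  # placeholder for a real frame
    boxes = [(0.5, 0.5, 0.2, 0.1)]                   # one YOLO-format drone box
    out = augment(image=image, bboxes=boxes, class_labels=["drone"])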

    NOTE If you use this dataset in your research/publication please cite us using the following

    Rafael Makrigiorgis, Nicolas Souli, & Panayiotis Kolios. (2022). Unmanned Aerial Vehicles Dataset (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7477569

  10. VisDrone Dataset

    • paperswithcode.com
    Updated Apr 6, 2022
    Cite
    Pengfei Zhu; Longyin Wen; Xiao Bian; Haibin Ling; QinGhua Hu (2022). VisDrone Dataset [Dataset]. https://paperswithcode.com/dataset/visdrone
    Explore at:
    Dataset updated
    Apr 6, 2022
    Authors
    Pengfei Zhu; Longyin Wen; Xiao Bian; Haibin Ling; QinGhua Hu
    Description

    VisDrone is a large-scale benchmark with carefully annotated ground-truth for various important computer vision tasks, to make vision meet drones. The VisDrone2019 dataset is collected by the AISKYEYE team at the Lab of Machine Learning and Data Mining, Tianjin University, China. The benchmark dataset consists of 288 video clips formed by 261,908 frames and 10,209 static images, captured by various drone-mounted cameras, covering a wide range of aspects including location (taken from 14 different cities separated by thousands of kilometers in China), environment (urban and country), objects (pedestrians, vehicles, bicycles, etc.), and density (sparse and crowded scenes). Note that the dataset was collected using various drone platforms (i.e., drones of different models), in different scenarios, and under various weather and lighting conditions. These frames are manually annotated with more than 2.6 million bounding boxes of targets of frequent interest, such as pedestrians, cars, bicycles, and tricycles. Some important attributes, including scene visibility, object class, and occlusion, are also provided for better data utilization.

  11. Cars Drone Detection Dataset

    • gts.ai
    json
    Updated Jun 28, 2024
    Cite
    GTS (2024). Cars Drone Detection Dataset [Dataset]. https://gts.ai/dataset-download/cars-drone-detection-dataset/
    Explore at:
    json (available download formats)
    Dataset updated
    Jun 28, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Explore the Cars Drone Detection Dataset featuring high-resolution images (512x512 pixels) with precise car annotations using the Pascal VOC format.

  12. Drone Night Dataset

    • universe.roboflow.com
    zip
    Updated Nov 10, 2021
    Cite
    Sushil chandra (2021). Drone Night Dataset [Dataset]. https://universe.roboflow.com/sushil-chandra/drone-night
    Explore at:
    zip (available download formats)
    Dataset updated
    Nov 10, 2021
    Dataset authored and provided by
    Sushil chandra
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Drones Bounding Boxes
    Description

    Drone Night

    ## Overview
    
    Drone Night is a dataset for object detection tasks - it contains Drones annotations for 395 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  13. MOBDrone: a large-scale drone-view dataset for man overboard detection

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Jul 17, 2024
    Cite
    Andrea Berton (2024). MOBDrone: a large-scale drone-view dataset for man overboard detection [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5996889
    Explore at:
    Dataset updated
    Jul 17, 2024
    Dataset provided by
    Chiara Benvenuti
    Lucia Vadicamo
    Marco Paterni
    Andrea Berton
    Fabrizio Falchi
    Mirko Passera
    Luca Ciampi
    Donato Cafarelli
    Claudio Gennaro
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset

    The Man OverBoard Drone (MOBDrone) dataset is a large-scale collection of aerial footage images. It contains 126,170 frames extracted from 66 video clips gathered from one UAV flying at an altitude of 10 to 60 meters above the mean sea level. Images are manually annotated with more than 180K bounding boxes localizing objects belonging to 5 categories --- person, boat, lifebuoy, surfboard, wood. More than 113K of these bounding boxes belong to the person category and localize people in the water simulating the need to be rescued.

    In this repository, we provide:

    66 Full HD video clips (total size: 5.5 GB)

    126,170 images extracted from the videos at a rate of 30 FPS (total size: 243 GB)

    3 annotation files for the extracted images that follow the MS COCO data format (for more info see https://cocodataset.org/#format-data):

    annotations_5_custom_classes.json: this file contains annotations concerning all five categories; please note that class ids do not correspond with the ones provided by the MS COCO standard since we account for two new classes not previously considered in the MS COCO dataset --- lifebuoy and wood

    annotations_3_coco_classes.json: this file contains annotations concerning the three classes also accounted by the MS COCO dataset --- person, boat, surfboard. Class ids correspond with the ones provided by the MS COCO standard.

    annotations_person_coco_classes.json: this file contains annotations concerning only the 'person' class. Class id corresponds to the one provided by the MS COCO standard.

    The MOBDrone dataset is intended as a test data benchmark. However, for researchers interested in using our data also for training purposes, we provide training and test splits:

    Test set: All the images whose filename starts with "DJI_0804" (total: 37,604 images)

    Training set: All the images whose filename starts with "DJI_0915" (total: 88,568 images)
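
    For example, the annotation files can be read with pycocotools, and the suggested split recovered by filtering on these filename prefixes (a sketch; the prefix filtering is our own reading of the split description):

    from pycocotools.coco import COCO

    coco = COCO("annotations_person_coco_classes.json")

    test_ids = [i for i in coco.getImgIds()
                if coco.loadImgs(i)[0]["file_name"].startswith("DJI_0804")]
    train_ids = [i for i in coco.getImgIds()
                 if coco.loadImgs(i)[0]["file_name"].startswith("DJI_0915")]

    # Person bounding boxes for the first test image.
    anns = coco.loadAnns(coco.getAnnIds(imgIds=test_ids[:1]))
    print(len(train_ids), len(test_ids), len(anns))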

    More details about data generation and the evaluation protocol can be found in our MOBDrone paper: https://arxiv.org/abs/2203.07973. The code to reproduce our results is available at this GitHub repository: https://github.com/ciampluca/MOBDrone_eval. See also http://aimh.isti.cnr.it/dataset/MOBDrone

    Citing the MOBDrone

    The MOBDrone is released under a Creative Commons Attribution license, so please cite the MOBDrone if it is used in your work in any form. Published academic papers should use the academic paper citation for our MOBDrone paper, where we evaluated several pre-trained state-of-the-art object detectors, focusing on the detection of people overboard:

    @inproceedings{MOBDrone2021,
     title={MOBDrone: a Drone Video Dataset for Man OverBoard Rescue},
     author={Donato Cafarelli and Luca Ciampi and Lucia Vadicamo and Claudio Gennaro and Andrea Berton and Marco Paterni and Chiara Benvenuti and Mirko Passera and Fabrizio Falchi},
     booktitle={ICIAP2021: 21th International Conference on Image Analysis and Processing},
     year={2021}
    }

    and this Zenodo Dataset

    @dataset{donato_cafarelli_2022_5996890,
     author={Donato Cafarelli and Luca Ciampi and Lucia Vadicamo and Claudio Gennaro and Andrea Berton and Marco Paterni and Chiara Benvenuti and Mirko Passera and Fabrizio Falchi},
     title={{MOBDrone: a large-scale drone-view dataset for man overboard detection}},
     month=feb,
     year=2022,
     publisher={Zenodo},
     version={1.0.0},
     doi={10.5281/zenodo.5996890},
     url={https://doi.org/10.5281/zenodo.5996890}
    }

    Personal works, such as machine learning projects/blog posts, should provide a URL to the MOBDrone Zenodo page (https://doi.org/10.5281/zenodo.5996890), though a reference to our MOBDrone paper would also be appreciated.

    Contact Information

    If you would like further information about the MOBDrone or if you experience any issues downloading files, please contact us at mobdrone[at]isti.cnr.it

    Acknowledgements

    This work was partially supported by NAUSICAA - "NAUtical Safety by means of Integrated Computer-Assistance Appliances 4.0" project funded by the Tuscany region (CUP D44E20003410009). The data collection was carried out with the collaboration of the Fly&Sense Service of the CNR of Pisa - for the flight operations of remotely piloted aerial systems - and of the Institute of Clinical Physiology (IFC) of the CNR - for the water immersion operations.

  14. Thermal Bridges on Building Rooftops - Hyperspectral (RGB + Thermal +...

    • zenodo.org
    bin, json
    Updated Dec 12, 2022
    Cite
    Zoe Mayer; James Kahn; Yu Hou; Tobias Beiersdörfer; Nicolas Blumenröhr; Markus Götz; Rebekka Volk (2022). Thermal Bridges on Building Rooftops - Hyperspectral (RGB + Thermal + Height) drone images of Karlsruhe, Germany, with thermal bridge annotations [Dataset]. http://doi.org/10.5281/zenodo.7022736
    Explore at:
    bin, json (available download formats)
    Dataset updated
    Dec 12, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Zoe Mayer; James Kahn; Yu Hou; Tobias Beiersdörfer; Nicolas Blumenröhr; Markus Götz; Rebekka Volk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Germany, Karlsruhe
    Description

    Overview:

    The dataset of Thermal Bridges on Building Rooftops (TBBR dataset) consists of annotated combined RGB and thermal drone images with a height map. All images were converted to a uniform format of 3000x4000 pixels, aligned, and cropped to 2680x3370 to remove empty borders. See the "Usage" section below for details about the stored formats made available here.

    The raw images for our dataset were recorded with a normal (RGB) camera and a FLIR-XT2 (thermal) camera on a DJI M600 drone. They show six large building blocks of around 20 buildings each, recorded in the city centre of Karlsruhe, Germany, east of the market square. Because of the high overlap rate of the images, each building is on average recorded about 20 times, from different angles in different images.

    All images were recorded during a drone flight on March 19, 2019 from 7 a.m. to 8 a.m. At this time, temperatures were between 3.78 °C and 4.97 °C, and humidity between 80% and 98%. There was no rain on the day of the flight, but there was 2.3 mm/m² of rain in the 48 hours beforehand. For recording the thermographic images, an emissivity of 1.0 was set. The global radiation during this period was between 38.59 W/m² and 120.86 W/m². No direct sunlight is visible on any of the recordings.

    The dataset contains 926 images with a total of 6,927 annotations of thermal bridges on rooftops, split into train and test subsets with 723 (5,614) and 203 (1,313) images (annotations), respectively. The annotations only include thermal bridges that are visually identifiable with the human eye. Because of the aforementioned image overlap, each thermal bridge is annotated multiple times from different angles.

    For the annotation of the thermal images, the image processing program VGG Image Annotator (version 2.0.10) from the Visual Geometry Group was used. The thermal bridge annotations are outlined with polygon shapes. These polygon lines were placed as close as possible to, but outside, the area of significant temperature increase. If a detected thermal bridge was partially covered by another building component located in the foreground, the thermal bridge was also marked across the covering in the case of minor coverings. Adjacent thermal bridges affecting different rooftop components were annotated separately. For example, a window with poor insulation of the window reveal, located in the area of a poorly insulated roof, is annotated individually. There is no overlap between annotated areas. While each image contains annotations, images may also contain thermal bridges that are not annotated.

    Usage:

    Each compressed archive file represents one of the six flight paths. For the related publication the final path (Flug1_105Media) was used as a hold-out test sample. The archives contain Numpy files (one per image) of shape (2680, 3370, 5), where the final dimension is the colour channel in the format [B, G, R, Thermal, Height].

    Archives were compressed using ZStandard compression. They can be decompressed in a terminal by running e.g.

    tar -I zstd -xvf Flug1_105Media.tar.zst

    these will be decompressed into the file structure:

    images/
    └── Flug1_105Media/
      └── DJI_0004_R.npy
      └── DJI_0006_R.npy
      └── ...
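
    A minimal sketch for reading one decompressed file and splitting its channels, following the [B, G, R, Thermal, Height] layout described above:

    import numpy as np

    arr = np.load("images/Flug1_105Media/DJI_0004_R.npy")  # shape (2680, 3370, 5)
    bgr, thermal, height = arr[..., :3], arr[..., 3], arr[..., 4]
    rgb = bgr[..., ::-1]  # reorder B, G, R to R, G, B for typical image libraries
    print(rgb.shape, thermal.shape, height.shape)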

    Corresponding annotations are provided in the COCO JSON format. There is one file for training (Flug1_100Media - Flug1_104Media blocks) and one for test (Flug1_105Media block). They contain a single class (thermal bridge) and expect the folder structure shown below.

    Note: The annotation files contain relative paths to numpy files, in case of problems please convert to absolute paths (i.e. insert the containing directory before each file path in the JSON annotation files).

    We provide the TBBRDet software which includes a dataloader and dataset inspection tools which make use of the Detectron2 and MMDetection libraries.

    We recommend the following folder structure for use:

    ├── train/
    │  ├── Flug1_100-104Media_coco.json
    │  └── images/
    │    ├── Flug1_100Media/
    │    │  ├── DJI_XXXX_R.npy
    │    │  └── ...
    │    ├── ...
    │    └── Flug1_104Media/
    │      ├── DJI_XXXX_R.npy
    │      └── ...
    └── test/
      ├── Flug1_105Media_coco.json
      └── images/
        └── Flug1_105Media/
          ├── DJI_XXXX_R.npy
          └── ...

    Metadata:

    The experimental metadata was structured with the SpatioTemporal Asset Catalog (STAC) specification family. This specification provides a standardized way of describing geospatial assets. It defines the related JSON object types Item, Catalog, and Collection, with Collection extending Catalog as its basis.

    One STAC Collection JSON object provides information about the recorded images and the environmental conditions during recordings. It also contains information about the overall bounding box of the entire area in which images were recorded.

    This object links to related STAC Item JSON objects containing information about the recorded city blocks and the cameras. The objects for the city blocks contain the GeoJSON geometry of the respective block and the
    corresponding bounding box. The objects containing the camera information are based on an existing STAC extension for camera related metadata.

    Metadata of the archived NumPy files for each image was structured using the Data Package schema from the Frictionless Standards. This standard describes a collection of data files. Therefore, metadata about all containerized NumPy files of the six flight paths (Flug1_100Media - Flug1_104Media blocks and Flug1_105Media block) is provided within a JSON-based file.

    Note that camera1 corresponds to the RGB camera and camera2 the thermal.

    FAIR Digital Objects:

    All files are represented in a standardized way as FAIR Digital Objects (FAIR DOs) to enable machine-actionable decisions on the data, in the spirit of the FAIR principles.

    Persistent Identifier (PID):

    Persistent Identifiers (PIDs) are resolvable with the Handle.Net Registry (HNR).

    | File | Persistent Identifier (PID) |
    | --- | --- |
    | Flug1_100-104Media_coco.json | 21.11152/6ea60288-d895-414e-80c0-26c9fdd662b2 |
    | Flug1_105Media_coco.json | 21.11152/58d43ddc-5e29-4980-8675-ae579b50a1e2 |
    | Flug1_100.tar.zst | 21.11152/6858a0b5-cc60-40e9-afef-8c2dd8b35e8e |
    | Flug1_101.tar.zst | 21.11152/e670f510-7e00-4d3a-9b90-3bac7a7c069e |
    | Flug1_102.tar.zst | 21.11152/3ab9f444-05f6-445e-a691-62fae4021bea |
    | Flug1_103.tar.zst | 21.11152/365fd8cf-8e86-41b8-9d0e-b816fdd01d29 |
    | Flug1_104.tar.zst | 21.11152/041a6111-644a-4617-afb3-3c421a88e8e3 |
    | Flug1_105.tar.zst | 21.11152/f48bf4e7-3879-4216-8f64-45a060b8f658 |
    | Flug1_100-105_frictionless_standards.json | 21.11152/7b58b3b5-75eb-4417-ac4d-abe025e159f6 |
    | Flug1_collection_stac_spec.json | 21.11152/ba370aa3-6422-428c-9ff7-c2ef429df603 |
    | Flug1_100_stac_spec.json | 21.11152/09cb76fc-b8cb-4116-a22a-68c5bdfa77b0 |
    | Flug1_101_stac_spec.json | 21.11152/24a55398-b96b-43dd-b0fb-cd8ce302c7ce |
    | Flug1_102_stac_spec.json | 21.11152/721234ac-4b5a-4d02-9944-82a08ef2db35 |
    | Flug1_103_stac_spec.json | 21.11152/ebaeb5bc-0514-47c9-bcd2-98f0253843d8 |
    | Flug1_104_stac_spec.json | 21.11152/9854677c-77c5-4a0b-916b-57dd9ec20198 |
    | Flug1_105_stac_spec.json | 21.11152/cfd0fc0e-f5ea-464e-a57f-28e882924860 |
    | Flug1_camera1_stac-spec.json | 21.11152/976fcf28-f924-4a21-b53d-5d054ad8198d |
    | Flug1_camera2_stac-spec.json | 21.11152/37833c54-1d36-42e4-858d-831447122863 |
  15. TRANSSET: A Drone-Captured Dataset for Enhancing Vehicle Detection and...

    • figshare.com
    zip
    Updated Jun 23, 2024
    Cite
    Tewodros Gebre; Quincy Blackston; Leila Hashemi-Beni (2024). TRANSSET: A Drone-Captured Dataset for Enhancing Vehicle Detection and Traffic Monitoring Algorithms [Dataset]. http://doi.org/10.6084/m9.figshare.26082217.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 23, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Tewodros Gebre; Quincy Blackston; Leila Hashemi-Beni
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This research introduces TRANSSET, a novel annotated dataset composed of drone-captured video footage aimed at enhancing the development of vehicle detection and traffic monitoring algorithms. The dataset consists of over 4,700 high-resolution images from two different locations, annotated in multiple formats, offering a unique perspective on traffic conditions. The TRANSSET dataset provides a significant resource for training machine learning models for aerial traffic surveillance, a field presently constrained by a lack of high-quality data. Consequently, this dataset could greatly assist in advancing more accurate and robust traffic monitoring algorithms.

  16. Dataset of Annotated Rice Panicle Image from Bangladesh

    • data.mendeley.com
    Updated Sep 26, 2023
    + more versions
    Cite
    Mohammad Rifat Ahmmad rashid (2023). Dataset of Annotated Rice Panicle Image from Bangladesh [Dataset]. http://doi.org/10.17632/ndb6t28xbk.4
    Explore at:
    Dataset updated
    Sep 26, 2023
    Authors
    Mohammad Rifat Ahmmad rashid
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Bangladesh
    Description

    This dataset focuses on drone-based rice panicle detection in Gazipur, Bangladesh, offering valuable visual data to researchers in agricultural studies. Captured using an advanced drone with a 4K-resolution camera, the dataset comprises 2193 high-resolution images of rice fields, and 5701 images after augmentation. All images are annotated with precision to aid automated rice panicle identification. Its main purpose is to support the development of algorithms and systems for critical agricultural tasks like crop monitoring and yield estimation, as well as disease identification and plant health evaluation. The dataset was created by extracting frames from drone-recorded video footage and meticulously annotating them with a semi-automatic approach combining manual work and deep learning algorithms.

  17. Drone videos and images of sheep in various conditions (for computer vision...

    • zenodo.org
    Updated Mar 5, 2025
    + more versions
    Cite
    Adrien Lebreton; Coline Morin; Estelle NICOLAS; Louise Helary (2025). Drone videos and images of sheep in various conditions (for computer vision purpose) [Dataset]. http://doi.org/10.5281/zenodo.14967219
    Explore at:
    Dataset updated
    Mar 5, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Adrien Lebreton; Coline Morin; Estelle NICOLAS; Louise Helary
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the European H2020 project ICAERUS, specifically focused on the livestock monitoring use case. For more information, visit the project website: https://icaerus.eu.

    Objective

    Counting sheep and goats is a significant challenge for farmers managing flocks with hundreds of animals. Our objective is to develop a computer vision-based methodology to count sheep and goats as they pass through a corridor or gate. This approach utilizes low-altitude aerial videos (<15 m) recorded by drones.

    Progress and Enhancements

    Our ongoing efforts include:

    To improve detection models like YOLO, we are enriching the dataset with:

    • Images of Non-White Small Ruminants: Current models struggle with detecting sheep that are not white due to their low frequency in flocks and thus datasets. By including images of brown and dark-colored goats, we aim to enhance model performance.
    • Environmental Diversity: Additional images and videos are being collected under varying conditions:
      • Backgrounds: Concrete, asphalt, grass (of different colors), dirt, etc.
      • Lighting Conditions: Cloudy, sunny, and shaded (e.g., barn shadows).

    This dataset encompasses the following data:

    - Carmejane: a directory encompassing images and videos from a sheep farm in Alpes de Haute Provence, France.

    - Videos: a directory of short drone videos (22 videos; drone height: ~15 m; drone gimbal angle: NADIR and oblique; resolution: 3840x2160; FPS: 30 & 60; mostly white sheep; a large variety of backgrounds). The videos are original or were cut.

    - Images_from_videos: images extracted from the videos at 1 image per second (1014 images; see the frame-extraction sketch after this list)

    - Other_images: other images (165 images; drone height: ~15 m; drone gimbal angle: NADIR & oblique; various resolutions)

    - Mourier: a directory encompassing images and videos from a sheep farm in Limousin, France.

    - Videos: a directory of short drone videos (6 videos; drone height: ~15-30 m; drone gimbal angle: oblique; resolution: 3840x2160; FPS: 30; white sheep; various backgrounds). The videos were cut.

    - Images_from_videos: images extracted from the videos at 1 image per second (110 images)

    - Other_images: other images (26 images; drone height: ~0-30 m; drone gimbal angle: NADIR & oblique; various resolutions)

    - Summaries of images and videos
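
    As referenced in the list above, frames can be extracted at 1 image per second with, for example, OpenCV (a sketch; the file names are hypothetical):

    import cv2

    def extract_frames(video_path, out_prefix):
        # Save one frame per second of video, mirroring the Images_from_videos extraction.
        cap = cv2.VideoCapture(video_path)
        fps = int(round(cap.get(cv2.CAP_PROP_FPS))) or 30
        saved = grabbed = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if grabbed % fps == 0:
                cv2.imwrite(f"{out_prefix}_{saved:04d}.jpg", frame)
                saved += 1
            grabbed += 1
        cap.release()
        return saved

    extract_frames("Carmejane/Videos/flight_01.mp4", "Carmejane/Images_from_videos/flight_01")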

    Future Work

    We are actively annotating the collected images and plan to share some of them upon completion. These enhancements aim to improve detection accuracy for small ruminants in diverse scenarios. New detection models will also be shared on our GitHub page in the coming months.

    Collaboration and Contact

    We welcome collaborations on this topic. For inquiries or further information, please contact:
    Adrien Lebreton
    Email: adrien.lebreton@idele.fr

  18. Embrapa ADD 256 Dataset

    • paperswithcode.com
    Updated Oct 15, 2021
    Cite
    (2021). Embrapa ADD 256 Dataset [Dataset]. https://paperswithcode.com/dataset/embrapa-add-256
    Explore at:
    Dataset updated
    Oct 15, 2021
    Description

    This is a detailed description of the dataset, a data sheet for the dataset as proposed by Gebru et al.

    Motivation for Dataset Creation Why was the dataset created? Embrapa ADD 256 (Apples by Drones Detection Dataset — 256 × 256) was created to provide images and annotations for research on apple detection in orchards for UAV-based monitoring in apple production.

    What (other) tasks could the dataset be used for? Apple detection in low-resolution scenarios, similar to the aerial images employed here.

    Who funded the creation of the dataset? The building of the ADD256 dataset was supported by the Embrapa SEG Project 01.14.09.001.05.04, Image-based metrology for Precision Agriculture and Phenotyping, and FAPESP under grant (2017/19282-7).

    Dataset Composition What are the instances? Each instance consists of an RGB image and an annotation describing apple locations as circular markers (i.e., presenting center and radius).

    How many instances of each type are there? The dataset consists of 1,139 images containing 2,471 apples.

    What data does each instance consist of? Each instance contains an 8-bit RGB image. Its corresponding annotation is found in the JSON files: each apple marker is composed of its center (cx, cy) and its radius (in pixels), as seen below:

    "gebler-003-06.jpg": [ { "cx": 116, "cy": 117, "r": 10 }, { "cx": 134, "cy": 113, "r": 10 }, { "cx": 221, "cy": 95, "r": 11 }, { "cx": 206, "cy": 61, "r": 11 }, { "cx": 92, "cy": 1, "r": 10 } ],

    Dataset.ipynb is a Jupyter Notebook presenting a code example for reading the data as a PyTorch's Dataset (it should be straightforward to adapt the code for other frameworks as Keras/TensorFlow, fastai/PyTorch, Scikit-learn, etc.)

    Is everything included or does the data rely on external resources? Everything is included in the dataset.

    Are there recommended data splits or evaluation measures? The dataset comes with specified train/test splits. The splits are found in lists stored as JSON files.

    | | Number of images | Number of annotated apples |
    | --- | --- | --- |
    | Training | 1,025 | 2,204 |
    | Test | 114 | 267 |
    | Total | 1,139 | 2,471 |

    Dataset recommended split.

    Standard measures from the information retrieval and computer vision literature should be employed: precision and recall, F1-score and average precision as seen in COCO and Pascal VOC.

    What experiments were initially run on this dataset? The first experiments run on this dataset are described in A methodology for detection and location of fruits in apples orchards from aerial images by Santos & Gebler (2021).

    Data Collection Process How was the data collected? The data employed in the development of the methodology came from two plots located at Embrapa's Temperate Climate Fruit Growing Experimental Station at Vacaria-RS (28°30’58.2”S, 50°52’52.2”W). Plants of the varieties Fuji and Gala are present in the dataset, in equal proportions. The images were taken on December 13, 2018, by a UAV (DJI Phantom 4 Pro) that flew over the rows of the field at a height of 12 m. The images mix nadir and non-nadir views, allowing a more extensive view of the canopies. A subset of the images was randomly selected, and 256 × 256 pixel patches were extracted.

    Who was involved in the data collection process? T. T. Santos and L. Gebler captured the images in field. T. T. Santos performed the annotation.

    How was the data associated with each instance acquired? The circular markers were annotated using the VGG Image Annotator (VIA).

    WARNING: Finding non-ripe apples in low-resolution images of orchards is a challenging task even for humans. ADD256 was annotated by a single annotator, so users of this dataset should consider it a noisy dataset.

    Data Preprocessing What preprocessing/cleaning was done? No preprocessing was applied.

    Dataset Distribution How is the dataset distributed? The dataset is available at GitHub.

    When will the dataset be released/first distributed? The dataset was released in October 2021.

    What license (if any) is it distributed under? The data is released under Creative Commons BY-NC 4.0 (Attribution-NonCommercial 4.0 International license). There is a request to cite the corresponding paper if the dataset is used. For commercial use, contact Embrapa Agricultural Informatics business office.

    Are there any fees or access/export restrictions? There are no fees or restrictions. For commercial use, contact Embrapa Agricultural Informatics business office.

    Dataset Maintenance Who is supporting/hosting/maintaining the dataset? The dataset is hosted at Embrapa Agricultural Informatics and all comments or requests can be sent to Thiago T. Santos (maintainer).

    Will the dataset be updated? There are no scheduled updates.

    If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? Contributors should contact the maintainer by e-mail.

    No warranty The maintainers and their institutions are exempt from any liability, judicial or extrajudicial, for any losses or damages arising from the use of the data contained in the image database.

  19. Drone Image Learning 2 Dataset

    • universe.roboflow.com
    zip
    Updated Oct 4, 2022
    Cite
    EGH450 (2022). Drone Image Learning 2 Dataset [Dataset]. https://universe.roboflow.com/egh450/drone-image-learning-2/dataset/2
    Explore at:
    zip (available download formats)
    Dataset updated
    Oct 4, 2022
    Dataset authored and provided by
    EGH450
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Drone Image Learning 2

    ## Overview
    
    Drone Image Learning 2 is a dataset for object detection tasks - it contains Objects annotations for 1,298 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  20. UAV-based solar photovoltaic detection dataset

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury (2023). UAV-based solar photovoltaic detection dataset [Dataset]. http://doi.org/10.6084/m9.figshare.18093890.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    figshare
    Authors
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains unmanned aerial vehicle (UAV) imagery (a.k.a. drone imagery) and annotations of solar panel locations captured from controlled flights at various altitudes and speeds across two sites at Duke Forest (Couch field and Blackwood field). In total there are 423 stationary images with corresponding annotations of the solar panels in sight, along with 60 videos taken while flying the UAV at roughly either 8 m/s or 14 m/s. In total, 2,019 solar panel instances are annotated.

    Associated publication: "Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning" [https://arxiv.org/abs/2201.05548]

    Data processing: Please refer to this GitHub repository for further details on data management and preprocessing: https://github.com/BensonRen/Drone_based_solar_PV_detection. The two scripts included enable the user to reproduce the experiments in the paper above.

    Contents: After unzipping the package, there will be 3 directories:

    1. Train_val_set: stationary UAV images (.JPG) taken at various altitudes in the Couch field of Duke Forest for training and validation purposes, along with their solar PV annotations (.png)

    2. Test_set: stationary UAV images (.JPG) taken at various altitudes in the Blackwood field of Duke Forest for test purposes, along with their solar PV annotations (.png)

    3. Moving_labeled: images (img/*.png) captured from videos recorded at two speed modes (Sport: 14 m/s, Normal: 8 m/s) at various altitudes, along with their solar PV annotations (labels/*.png)

    For additional details on this dataset, please refer to the enclosed README.docx.

    Acknowledgments: This dataset was created at the Duke University Energy Initiative in collaboration with the Energy Access Project at Duke and RTI International. We thank the Duke University Energy Data Analytics Ph.D. Student Fellowship Program for their support. We also thank Duke Forest for use of the flight zones for data collection.
