100+ datasets found
  1. Annotated UAV Image Dataset for Object Detection Using LabelImg and Roboflow

    • data.mendeley.com
    Updated Aug 21, 2025
    Cite
    Anindita Das (2025). Annotated UAV Image Dataset for Object Detection Using LabelImg and Roboflow [Dataset]. http://doi.org/10.17632/fwg6pt6ckd.1
    Dataset updated
    Aug 21, 2025
    Authors
    Anindita Das
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset consists of drone images collected for agricultural field monitoring, to detect weeds and crops through computer vision and machine learning approaches. The images were captured by high-resolution UAVs and annotated using the LabelImg and Roboflow tools. Each image has a corresponding YOLO annotation file that contains bounding box information and class IDs for the labeled objects. The dataset includes:

    Original images in .jpg format with a resolution of 585 × 438 pixels.

    Annotation files (.txt) corresponding to each image, following the YOLO format: class_id x_center y_center width height.

    A classes.txt file listing the object categories used in labeling (e.g., Weed, Crop).

    The dataset is intended for use in machine learning model development, particularly for precision agriculture, weed detection, and plant health monitoring. It can be directly used for training YOLOv7 and other object detection models.
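
    Because the labels follow the YOLO convention above, converting a normalized box back to pixel coordinates for these 585 × 438 images takes only a few lines; a minimal sketch in Python (the file name is illustrative):

    from pathlib import Path

    IMG_W, IMG_H = 585, 438  # image resolution stated above

    def yolo_to_pixels(line: str):
        """Convert one YOLO line (class_id x_center y_center width height) to pixel corners."""
        class_id, xc, yc, w, h = line.split()
        xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
        return (int(class_id),
                (xc - w / 2) * IMG_W, (yc - h / 2) * IMG_H,
                (xc + w / 2) * IMG_W, (yc + h / 2) * IMG_H)

    # Illustrative file name; real label files pair one-to-one with the .jpg images.
    for line in Path("field_0001.txt").read_text().splitlines():
        print(yolo_to_pixels(line))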

  2. Drone images and their annotations of grazing cows

    • zenodo.org
    • data.europa.eu
    zip
    Updated Apr 24, 2024
    Cite
    Louise Helary; Adrien Lebreton (2024). Drone images and their annotations of grazing cows [Dataset]. http://doi.org/10.5281/zenodo.11048412
    Available download formats: zip
    Dataset updated
    Apr 24, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Louise Helary; Adrien Lebreton
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the European H2020 project ICAERUS regarding the livestock monitoring use case. More information here: https://icaerus.eu/

    By making available 1385 images and 4941 bounding boxes of cows, this dataset is a major contribution for public and research stakeholders to develop cow detection and counting models.

    This dataset encompasses the following data:

    - Farm_name: name of the farm where the images were collected (Mauron, Derval, Jalogny or Other_farms)

    -----JPGImages: a directory where the images are stored (1385 RGB images taken from 60 to 100 m of altitude; image sizes of 4000*3000 or 5280*3956; taken with a DJI MAVIC3 E or DJI MAVIC3T).

    -------------DJI_YYYYMMDDHHMM_XXX: a directory by flight containing the images of a unique flight, with the date (YYYYMMDD), the hour in UTC+2 (HHMM), and XXX representing a mission number

    -----PASCAL_VOC1.1: a directory with the annotations of cows in PASCAL VOC format (4941 bounding boxes of cows)

    -------------DJI_YYYYMMDDHHMM_XXX: a directory per flight containing the annotations; each annotation file refers to a unique image and has the same name except for the extension

    -----YOLO_V1: a directory with the annotations of cows in YOLO format (4941 bounding boxes of cows)

    -------------DJI_YYYYMMDDHHMM_XXX: a directory per flight containing the annotations; each annotation file refers to a unique image and has the same name except for the extension

    Nadir images were collected between June 2023 and March 2024 at a constant altitude of 100 m or 60 m, during flights either planned with DJI Pilot 2 relative to the take-off position or flown manually at a precise relative altitude. When used, planned flights were designed to cover the entire pasture; this is why some flights show a strong imbalance between images with cattle and images without. The file conditions_summary.xlsx details the following information for each flight: relative altitude, number of images, number of images containing cows, number of cow bounding boxes, weather, date, drone used, drone camera used, and image size. The aerial images contain extensive .exif metadata (drone, camera, altitude, etc.).

    These data can be used to train different deep learning models, such as YOLO models, or to validate pretrained models for detecting cows on pastures.
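
    Since each annotation file mirrors its image's name (only the extension differs) and the flight directories are mirrored between JPGImages and YOLO_V1, a quick pairing check is easy to script; a minimal sketch in Python (directory name and extension casing are assumptions):

    from pathlib import Path

    farm = Path("Mauron")  # one of the Farm_name directories; name is illustrative

    # Flight directories (DJI_YYYYMMDDHHMM_XXX) are mirrored between JPGImages and
    # YOLO_V1; a label shares its image's name up to the extension.
    for image in sorted((farm / "JPGImages").rglob("*.JPG")):
        label = farm / "YOLO_V1" / image.parent.name / (image.stem + ".txt")
        print(f"{image.parent.name}/{image.name}: {'ok' if label.exists() else 'missing label'}")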

    Other versions of this dataset will be available later in 2024. The authors of the dataset are open to any collaboration regarding animal counting models.

    If you are interested in sheep counting based on drones’ images and AI, find other available data here: https://doi.org/10.5281/zenodo.10400302

    For more information, please contact: adrien.lebreton@idele.fr

  3. Manually Annotated Drone Imagery (RGB) Dataset for automatic coastline delineation of Southern Baltic Sea, Poland with polyline annotations

    • zenodo.org
    xml, zip
    Updated May 22, 2024
    Cite
    Kamran Anwar Tanwari; Paweł Terefenko; Jakub Śledziowski; Andrzej Giza (2024). Manually Annotated Drone Imagery (RGB) Dataset for automatic coastline delineation of Southern Baltic Sea, Poland with polyline annotations (0.1.0) [Dataset]. http://doi.org/10.5281/zenodo.11237140
    Available download formats: zip, xml
    Dataset updated
    May 22, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kamran Anwar Tanwari; Paweł Terefenko; Jakub Śledziowski; Andrzej Giza
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Poland, Baltic Sea
    Description

    Overview:

    The Manually Annotated Drone Imagery Dataset (MADRID) consists of hand-annotated, high-resolution RGB images taken in 2022-2023 on two different types of Polish coast: a cliff coast in Miedzyzdroje and a dune coast in Mrzezyno. All images were converted to a uniform format of 1440x2560 pixels, annotated with polylines, and arranged in a file structure suited for semantic segmentation tasks (see "Usage" notes below for more details).

    The raw images were captured with a Zenmuse L1 sensor (RGB) mounted on a DJI Matrice 300 RTK drone. A total of 4895 images were captured; the dataset contains the 3876 of them in which a coastline was annotated. The dataset only includes images with coastlines that are visually identifiable to the human eye. The images were annotated with the open-source software CVAT v2.13.

    Usage:

    The compressed RAR file contains two folders, train and test. Each file name encodes the capture date (year, month, day), the image number, and the name of the drone used to capture the image; for example, DJI_20220111140051_0051_Zenmuse-L1-mission and DJI_20220111140105_0053_Zenmuse-L1-mission. Additionally, the test folder contains annotations (one per image) extracted from the original XML annotation file provided in the CVAT 1.1 image format.

    Archives were compressed using RAR compression. They can be decompressed in a terminal by extracting Madrid_v0.1_Data.zip.

    Both the training and testing data are structured as follows.

    Train/
    └── images/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.JPG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.JPG
      └── ...
    └── masks/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.PNG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.PNG
      └── ...

    Test/
    └── images/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.JPG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.JPG
      └── ...
    └── masks/
      └── DJI_20220111140051_0051_Zenmuse-L1-mission.PNG
      └── DJI_20220111140105_0053_Zenmuse-L1-mission.PNG
      └── ...
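
    Images and masks share the same base name, so a segmentation loader can pair them directly; a minimal sketch assuming the layout above, using Pillow and NumPy:

    from pathlib import Path
    import numpy as np
    from PIL import Image

    def load_pairs(split_dir: Path):
        """Yield (image, mask) arrays for every image/mask pair in a split."""
        for img_path in sorted((split_dir / "images").glob("*.JPG")):
            mask_path = split_dir / "masks" / (img_path.stem + ".PNG")
            image = np.asarray(Image.open(img_path).convert("RGB"))
            mask = np.asarray(Image.open(mask_path))
            yield image, mask

    for image, mask in load_pairs(Path("Train")):
        assert image.shape[:2] == mask.shape[:2]  # masks align with the 1440x2560 images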

  4. drone-detection-dataset

    • huggingface.co
    Updated Aug 29, 2025
    Cite
    Pathik Prashant Ghugare (2025). drone-detection-dataset [Dataset]. https://huggingface.co/datasets/pathikg/drone-detection-dataset
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 29, 2025
    Authors
    Pathik Prashant Ghugare
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset Card for Drone Detection Dataset

    This dataset card describes the processed version of the Drone Detection Dataset, originally curated by Maciej Pawełczyk and Marek Wojtyra, adapted to a COCO-style format for efficient usage in modern deep learning pipelines.

      Dataset Details

      Dataset Description
    

    The Drone Detection Dataset is a real-world object detection dataset for UAV detection tasks. It includes RGB images annotated with bounding boxes in the COCO… See the full description on the dataset page: https://huggingface.co/datasets/pathikg/drone-detection-dataset.

  5. Svamitva Drone Aerial Images

    • kaggle.com
    zip
    Updated Jan 30, 2025
    Cite
    DeepNets (2025). Svamitva Drone Aerial Images [Dataset]. https://www.kaggle.com/datasets/utkarshsaxenadn/svamitva-drone-aerial-images
    Available download formats: zip (4785147343 bytes)
    Dataset updated
    Jan 30, 2025
    Authors
    DeepNets
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This special dataset was curated for the Smart India Hackathon 2024 and the Indus Hackathon 2025, which included problem statements on extracting features from drone aerial images. The data, provided by the Smart India Hackathon, initially came as ECW files and were converted to TIF files using QGIS and GDAL. These images were then split into patches of size 1024 by 1024 pixels and manually annotated using polygons in Label Studio.
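
    The 1024 by 1024 patching step described above is easy to reproduce; a minimal sketch with tifffile and NumPy (file names are illustrative, and the array is assumed to be H x W x C):

    import tifffile

    TILE = 1024
    image = tifffile.imread("converted_scene.tif")  # a TIF produced by the ECW conversion

    patch_id = 0
    for row in range(0, image.shape[0] - TILE + 1, TILE):
        for col in range(0, image.shape[1] - TILE + 1, TILE):
            tifffile.imwrite(f"patch_{patch_id:04d}.tif", image[row:row + TILE, col:col + TILE])
            patch_id += 1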

    Originally, 482 images were labeled, and data augmentation increased the dataset to around 1300 files. The original images are from patch 1 to patch 482, while the rest are augmented versions.

    The dataset includes various versions:

    • Filtered data: Contains only images with buildings.
    • Binary version: For building footprint extraction using binary segmentation tasks.
    • Original SIH dataset: Includes the TIF images supplied by the Smart India Hackathon 2024.

    Additionally, script files used to create custom datasets are available. These scripts help with tasks such as data generation, data augmentation, and converting annotations.

    Two models are provided: one that takes 1024 by 1024 pixel inputs and another that takes 512 by 512 pixel inputs. These models are available at different training epochs (e.g., 50, 100, 150, 200 epochs). While the models show satisfactory results, improvements can still be made. The annotations are not perfect but are sufficient for training a good model. We are continuously working on improving the annotations.

    Thank you for using the dataset. It is recommended to review the data structure for better understanding and usage.

    While this dataset marked a major improvement, it is not without flaws. The annotations, completed within a single day, are human-generated and may contain inaccuracies. Despite this, they are sufficient for training an effective model. Future plans include refining the annotations, exploring different model architectures beyond simply adjusting epochs, and continuing to enhance model performance. For now, this dataset serves as the foundation for ongoing research and development.

  6. Aerial Maritime Drone Dataset

    • universe.roboflow.com
    zip
    Updated Sep 28, 2022
    Cite
    Jacob Solawetz (2022). Aerial Maritime Drone Dataset [Dataset]. https://universe.roboflow.com/jacob-solawetz/aerial-maritime/model/4
    Available download formats: zip
    Dataset updated
    Sep 28, 2022
    Dataset authored and provided by
    Jacob Solawetz
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Movable Objects Bounding Boxes
    Description

    Overview


    This dataset contains 74 images of aerial maritime photographs taken via a Mavic Air 2 drone, with 1,151 bounding boxes covering docks, boats, lifts, jetskis, and cars. It is a multi-class aerial and maritime object detection dataset.

    The drone was flown at 400 ft. No drones were harmed in the making of this dataset.

    This dataset was collected and annotated by the Roboflow team, released with MIT license.

    [Image example: https://i.imgur.com/9ZYLQSO.jpg]

    Use Cases

    • Identify number of boats on the water over a lake via quadcopter.
    • Boat object detection dataset
    • Aerial Object Detection proof of concept
    • Identify if boat lifts have been taken out via a drone
    • Identify cars with a UAV drone
    • Find which lakes are inhabited and to which degree.
    • Identify if visitors are visiting the lake house via quadcopter.
    • Proof of concept for UAV imagery project
    • Proof of concept for maritime project
    • Etc.

    This dataset is a great starter dataset for building an aerial object detection model with your drone.

    Getting Started

    Fork or download this dataset and follow our guide, How to train a state-of-the-art object detector YOLOv4, for more. Stay tuned for tutorials on how to teach your UAV drone to see, plus comparable airplane imagery and footage.

    Annotation Guide

    See here for how to use the CVAT annotation tool that was used to create this dataset.

    About Roboflow

    Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.


  7. KIIT-MiTA

    • kaggle.com
    zip
    Updated Jan 7, 2025
    Cite
    Sudip Chakrabarty (2025). KIIT-MiTA [Dataset]. https://www.kaggle.com/datasets/sudipchakrabarty/kiit-mita
    Available download formats: zip (130492648 bytes)
    Dataset updated
    Jan 7, 2025
    Authors
    Sudip Chakrabarty
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Drone Images for Military Object Detection

    This dataset contains 1,700 high-resolution images captured by drones, annotated in the YOLO format for object detection tasks. The images span 7 distinct classes:

    • Artilary
    • Missile
    • Radar
    • M. Rocket Launcher
    • Soldier
    • Tank
    • Vehicle

    Key Features: Suitable for military and security-focused research. Includes detailed bounding box annotations for each object class. Ideal for training YOLO-based models or other object detection architectures.

    Usage: This dataset is intended for educational and research purposes only. Commercial use is strictly prohibited. If used, proper attribution to the creator is required.

    Inspiration: This dataset was created to facilitate research in defense and security applications, particularly for identifying and tracking military objects in drone-captured imagery.


  8. Songdo Vision: Vehicle Annotations from High-Altitude BeV Drone Imagery in a Smart City

    • zenodo.org
    • data-staging.niaid.nih.gov
    bin, txt, zip
    Updated Sep 10, 2025
    Cite
    Robert Fonod; Haechan Cho; Hwasoo Yeo; Nikolas Geroliminis (2025). Songdo Vision: Vehicle Annotations from High-Altitude BeV Drone Imagery in a Smart City [Dataset]. http://doi.org/10.5281/zenodo.13828408
    Available download formats: bin, txt, zip
    Dataset updated
    Sep 10, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Robert Fonod; Haechan Cho; Hwasoo Yeo; Nikolas Geroliminis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Oct 4, 2022 - Oct 7, 2022
    Area covered
    Songdo-dong
    Description

    Overview

    The Songdo Vision dataset provides high-resolution (4K, 3840×2160 pixels) RGB images annotated with categorized axis-aligned bounding boxes (BBs) for vehicle detection from a high-altitude bird’s-eye view (BeV) perspective. Captured over Songdo International Business District, South Korea, this dataset consists of 5,419 annotated video frames, featuring approximately 300,000 vehicle instances categorized into four classes:

    • Car (including vans and light-duty vehicles)
    • Bus
    • Truck
    • Motorcycle

    This dataset can serve as a benchmark for aerial vehicle detection, supporting research and real-world applications in intelligent transportation systems, traffic monitoring, and aerial vision-based mobility analytics. It was developed in the context of a multi-drone experiment aimed at enhancing geo-referenced vehicle trajectory extraction.

    📌 Citation: If you use this dataset in your work, kindly acknowledge it by citing the following article:

    Robert Fonod, Haechan Cho, Hwasoo Yeo, Nikolas Geroliminis (2025). Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery, Transportation Research Part C: Emerging Technologies, vol. 178, 105205. DOI: 10.1016/j.trc.2025.105205.

    🔗 Related dataset: For precisely georeferenced vehicle trajectories extracted from the same large-scale multi-drone experiment, see Songdo Traffic: 10.5281/zenodo.13828384.

    Motivation

    Publicly available datasets for aerial vehicle detection often exhibit limitations such as:

    • Non-BeV perspectives with varying angles and distortions
    • Inconsistent annotation quality, with loose or missing bounding boxes
    • Lower-resolution imagery, reducing detection accuracy, particularly for smaller vehicles
    • Lack of annotation detail, especially for motorcycles in dense urban scenes with complex backgrounds

    To address these challenges, Songdo Vision provides high-quality human-annotated bounding boxes, with machine learning assistance used to enhance efficiency and consistency. This ensures accurate and reliable ground truth for training and evaluating detection models.

    Dataset Composition

    The dataset is randomly split into training (80%) and test (20%) subsets:

    Subset | Images | Car | Bus | Truck | Motorcycle | Total Vehicles
    Train | 4,335 | 195,539 | 7,030 | 11,779 | 2,963 | 217,311
    Test | 1,084 | 49,508 | 1,759 | 3,052 | 805 | 55,124

    A subset of 5,274 frames was randomly sampled from drone video sequences, while an additional 145 frames were carefully selected to represent challenging cases, such as motorcycles at pedestrian crossings, in bicycle lanes, near traffic light poles, and around other distinctive road markers where they may blend into the urban environment.

    Data Collection

    The dataset was collected as part of a collaborative multi-drone experiment conducted by KAIST and EPFL in Songdo, South Korea, from October 4–7, 2022.

    • A fleet of 10 drones monitored 20 busy intersections, executing advanced flight plans to optimize coverage.
    • 4K (3840×2160) RGB video footage was recorded at 29.97 FPS from altitudes of 140–150 meters.
    • Each drone flew 10 sessions per day, covering peak morning and afternoon periods.
    • The experiment resulted in 12TB of 4K raw video data.

    More details on the experimental setup and data processing pipeline are available in [1].

    Bounding Box Annotations & Formats

    Annotations were generated using a semi-automated object detection annotation process in Azure ML Studio, leveraging machine learning-assisted bounding box detection with human verification to ensure precision.

    Each annotated frame includes categorized, axis-aligned bounding boxes, stored in three widely-used formats:

    1. COCO JSON format

    • Single annotation file per dataset subset (i.e., one for training, one for testing).
    • Contains metadata such as image dimensions, bounding box coordinates, and class labels.
    • Example snippet:
    {
     "images": [{"id": 1, "file_name": "0001.jpg", "width": 3840, "height": 2160}],
     "annotations": [{"id": 1, "image_id": 1, "category_id": 2, "bbox": [500, 600, 200, 50], "area": 10000, "iscrowd": 0}],
     "categories": [
      {"id": 1, "name": "car"}, {"id": 2, "name": "bus"},
      {"id": 3, "name": "truck"}, {"id": 4, "name": "motorcycle"}
     ]
    }

    2. YOLO TXT format

    • One annotation file per image, following the format class_id x_center y_center width height.
    • Bounding box values are normalized to [0,1], with the origin at the top-left corner.
    • Example snippet:
    0 0.52 0.63 0.10 0.05 # Car bounding box
    2 0.25 0.40 0.15 0.08 # Truck bounding box

    3. Pascal VOC XML format

    • One annotation file per image, structured in XML.
    • Contains image properties and absolute pixel coordinates for each bounding box.
    • Example snippet:
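
    A minimal illustrative Pascal VOC annotation (the values mirror the COCO snippet above, not the actual files):

    <annotation>
     <filename>0001.jpg</filename>
     <size><width>3840</width><height>2160</height><depth>3</depth></size>
     <object>
      <name>bus</name>
      <bndbox>
       <xmin>500</xmin><ymin>600</ymin><xmax>700</xmax><ymax>650</ymax>
      </bndbox>
     </object>
    </annotation>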

    File Structure

    The dataset is provided as two compressed archives:

    1. Training Data (train.zip, 12.91 GB)

    train/
    │── coco_annotations.json # COCO format
    │── images/
    │  ├── 0001.jpg
    │  ├── ...
    │── labels/
    │  ├── 0001.txt # YOLO format
    │  ├── 0001.xml # Pascal VOC format
    │  ├── ...

    2. Testing Data (test.zip, 3.22 GB)

    test/
    │── coco_annotations.json
    │── images/
    │  ├── 00027.jpg
    │  ├── ...
    │── labels/
    │  ├── 00027.txt
    │  ├── 00027.xml
    │  ├── ...

    Additional Files

    • README.md – Dataset documentation (this description)
    • LICENSE.txt – Creative Commons Attribution 4.0 License
    • names.txt – Class names (one per line)
    • data.yaml – Example YOLO configuration file for training/testing
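
    The data.yaml mentioned above typically maps class IDs to names and points YOLO at the image directories; an illustrative version for this layout (paths assumed, not taken from the archive):

    # Illustrative YOLO data configuration; adjust paths to your local copy.
    path: .               # dataset root
    train: train/images
    val: test/images

    names:
      0: car
      1: bus
      2: truck
      3: motorcycle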

    Acknowledgments

    In addition to the funding sources listed in the metadata, the creators express their gratitude to Artem Vasilev for his dedicated efforts in data annotation. We also thank the research teams of Prof. Simon Oh (Korea University) and Prof. Minju Park (Hannam University) for their assistance during the data collection campaign, including the provision of drone equipment and student support.

    Citation & Attribution

    Preferred Citation: If you use Songdo Vision for any purpose, whether academic research, commercial applications, open-source projects, or benchmarking efforts, please cite our accompanying article [1]:

    Robert Fonod, Haechan Cho, Hwasoo Yeo, Nikolas Geroliminis (2025). Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery, Transportation Research Part C: Emerging Technologies, vol. 178, 105205. DOI: 10.1016/j.trc.2025.105205

    BibTeX entry:

    @article{fonod2025advanced,
     title = {Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery},
     author = {Fonod, Robert and Cho, Haechan and Yeo, Hwasoo and Geroliminis, Nikolas},
     journal = {Transportation Research Part C: Emerging Technologies},
     volume = {178},
     pages = {105205},
     year = {2025},
     doi = {10.1016/j.trc.2025.105205}
    }

  9. Drone Images Project Dataset

    • universe.roboflow.com
    zip
    Updated Feb 16, 2025
    Cite
    DL (2025). Drone Images Project Dataset [Dataset]. https://universe.roboflow.com/dl-09e0u/drone-images-project
    Available download formats: zip
    Dataset updated
    Feb 16, 2025
    Dataset authored and provided by
    DL
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Image Bounding Boxes
    Description

    Drone Images Project

    ## Overview
    
    Drone Images Project is a dataset for object detection tasks - it contains Image annotations for 8,626 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  10. Drone Detection

    • kaggle.com
    zip
    Updated Nov 26, 2024
    Cite
    Simarpreet Singh (2024). Drone Detection [Dataset]. https://www.kaggle.com/datasets/cybersimar08/drone-detection/code
    Available download formats: zip (523683916 bytes)
    Dataset updated
    Nov 26, 2024
    Authors
    Simarpreet Singh
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset consists of images featuring drones with annotated bounding boxes around each drone. The bounding boxes provide precise localization information, enabling detection and tracking of drones within various backgrounds and environments. Each image includes metadata such as bounding box coordinates, image resolution, and labels identifying the drone objects. This dataset is suitable for training and evaluating computer vision models for object detection tasks, particularly in applications like surveillance, drone detection, and autonomous tracking.

  11. Data from: SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection

    • data-staging.niaid.nih.gov
    Updated Nov 13, 2024
    Cite
    Lenhard, Tamara R.; Weinmann, Andreas; Franke, Kai; Koch, Tobias (2024). SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_13360115
    Dataset updated
    Nov 13, 2024
    Dataset provided by
    Darmstadt University of Applied Sciences
    Deutsches Zentrum für Luft- und Raumfahrt e. V. (DLR)
    Authors
    Lenhard, Tamara R.; Weinmann, Andreas; Franke, Kai; Koch, Tobias
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract

    Developing robust drone detection systems is often constrained by the limited availability of large-scale annotated training data and the high costs associated with real-world data collection. However, synthetic data presents a promising and cost-effective solution to overcome this issue. Therefore, we present SynDroneVision, a synthetic dataset specifically designed for RGB-based drone detection in surveillance applications. Featuring diverse backgrounds, lighting conditions, and drone models, SynDroneVision offers a comprehensive training foundation for deep learning algorithms. To evaluate the dataset's effectiveness, we perform a comparative analysis across a selection of recent YOLO detection models. Our findings demonstrated that SynDroneVision is a valuable resource for real-world data enrichment, achieving notable enhancements in model performance and robustness, while significantly reducing the time and costs of real-world data acquisition.

    Paper

    Accepted for publication at the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV2025)!

    SynDroneVision is presented in the upcoming paper SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection by Tamara R. Lenhard, Andreas Weinmann, Kai Franke, and Tobias Koch. This work is accepted and will be published in the Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV2025).

    For early access, the preprint is currently available on ArXiv: https://arxiv.org/abs/2411.05633v1

    Dataset Details

    SynDroneVision comprises a total of 140,038 annotated RGB images (131,238 for training, 8,800 for validation, and 4,000 for testing), featuring a resolution of 2560x1489 pixels. All images are recorded in a sequential manner using Unreal Engine 5.0 in combination with Colosseum. Apart from drone images, SynDroneVision also includes ~7% background images (i.e., image frames without drone instances).

    Annotation Format: Annotations (bounding boxes) are provided via text files according to the YOLO standard format:

    class_id x_center y_center width height

    Here, x_center and y_center represent the normalized coordinates of the bounding box center, while width and height denote the normalized bounding box width and height. In SynDroneVision, class_id is always 0, indicating the drone class.

    Download

    The SynDroneVision dataset offers around 900 GB of data dedicated to image-based drone detection. To facilitate the download process, we have partitioned the dataset into smaller sections. Specifically, we have divided the training data into 10 segments, organized by sequences.

    Annotations are available below, with image data accessible via the following links:

    Dataset Split | Sequences | File Name | Link | Size (GB)
    Training Set | Seq. 001 - 009 | images_train_seq001-009.zip | Training images PART 1 | 57
    Training Set | Seq. 010 - 018 | images_train_seq010-018.zip | Training images PART 2 | 95.4
    Training Set | Seq. 019 - 027 | images_train_seq019-027.zip | Training images PART 3 | 96.2
    Training Set | Seq. 028 - 035 | images_train_seq028-035.zip | Training images PART 4 | 83.9
    Training Set | Seq. 036 - 043 | images_train_seq036-043.zip | Training images PART 5 | 77.1
    Training Set | Seq. 044 - 050 | images_train_seq044-050.zip | Training images PART 6 | 84.7
    Training Set | Seq. 051 - 056 | images_train_seq051-056.zip | Training images PART 7 | 86.8
    Training Set | Seq. 057 - 065 | images_train_seq057-065.zip | Training images PART 8 | 86.2
    Training Set | Seq. 066 - 070 | images_train_seq066-070.zip | Training images PART 9 | 75.7
    Training Set | Seq. 071 - 073 | images_train_seq071-073.zip | Training images PART 10 | 38.5
    Validation Set | Seq. 001 - 073 | images_val.zip | Validation images | 55.2
    Test Set | Seq. 001 - 073 | images_test.zip | Test images | 26.5

    Citation

    If you find SynDroneVision helpful in your research, we kindly ask that you cite the associated preprint. Below is the citation in BibTeX format for your convenience:

    BibTeX:

    @inproceedings{Lenhard:2024,
     title = {{SynDroneVision: A Synthetic Dataset for Image-Based Drone Detection}},
     author = {Lenhard, Tamara R. and Weinmann, Andreas and Franke, Kai and Koch, Tobias},
     year = {2024},
     url = {https://arxiv.org/abs/2411.05633}
    }

    SynDroneVision uses Unreal® Engine. Unreal® is a trademark or registered trademark of Epic Games, Inc. in the United States of America and elsewhere.

  12. Cars Drone Detection Dataset

    • gts.ai
    json
    Updated Jun 28, 2024
    Cite
    GTS (2024). Cars Drone Detection Dataset [Dataset]. https://gts.ai/dataset-download/cars-drone-detection-dataset/
    Available download formats: json
    Dataset updated
    Jun 28, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Explore the Cars Drone Detection Dataset featuring high-resolution images (512x512 pixels) with precise car annotations using the Pascal VOC format.

  13. Drone detection

    • kaggle.com
    zip
    Updated Jan 26, 2025
    Cite
    BanderaStepan (2025). Drone detection [Dataset]. https://www.kaggle.com/banderastepan/drone-detection
    Available download formats: zip (4394613214 bytes)
    Dataset updated
    Jan 26, 2025
    Authors
    BanderaStepan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This is a fully synthetic dataset that I created with Blender and ancient black magic. All annotations in this dataset are made automatically, which means they are absolutely accurate.
    All images are 640x640 pixels in size. Each image contains only one drone.
    The annotation format is as follows:
    <class_id center_x center_y width height>
    There is only one class in the dataset, which is a drone. Therefore, the class_id is always 0. The name of a label file corresponds to the name of its image.
    The dataset contains such drones as:

    • shahed-131
    • shahed-136
    • orlan-10
    • Techyon
    • mavic 3
    • zala-42116E
    • Lancet
    • mojahed
    • superCam
    • zala-42104M
    • granat-4
    • granat-2
    • granat-1
    • forpost

  14. Drone videos and images of sheep in various conditions (for computer vision purpose)

    • zenodo.org
    • data.europa.eu
    Updated Mar 5, 2025
    Cite
    Adrien Lebreton; Coline Morin; Estelle NICOLAS; Louise Helary (2025). Drone videos and images of sheep in various conditions (for computer vision purpose) [Dataset]. http://doi.org/10.5281/zenodo.14967219
    Dataset updated
    Mar 5, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Adrien Lebreton; Coline Morin; Estelle NICOLAS; Louise Helary
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the European H2020 project ICAERUS, specifically focused on the livestock monitoring use case. For more information, visit the project website: https://icaerus.eu.

    Objective

    Counting sheep and goats is a significant challenge for farmers managing flocks with hundreds of animals. Our objective is to develop a computer vision-based methodology to count sheep and goats as they pass through a corridor or gate. This approach utilizes low-altitude aerial videos (<15 m) recorded by drones.

    Progress and Enhancements

    To improve detection models like YOLO, our ongoing efforts enrich the dataset with:

    • Images of Non-White Small Ruminants: Current models struggle to detect sheep that are not white, due to their low frequency in flocks and thus in datasets. By including images of brown and dark-colored goats, we aim to enhance model performance.
    • Environmental Diversity: Additional images and videos are being collected under varying conditions:
      • Backgrounds: Concrete, asphalt, grass (of different colors), dirt, etc.
      • Lighting Conditions: Cloudy, sunny, and shaded (e.g., barn shadows).

    This dataset encompasses the following data:

    - Carmejane: a directory encompassing images and videos from a sheep farm in France, Alpes de Haute Provence.

    - Videos: a directory encompassing short drone videos (22 videos; Drone height: ~15 m; Drone gimbal angle: NADIR and Oblique; Resolution: 3840x2160; FPS: 30 & 60; mostly white sheep; large variety of backgrounds). The videos are original or were cut.

    - Images_from_videos: images extracted from the videos at 1 image per second (1014 images)

    - Other_images: other images (165 images; Drone height: ~15 m; Drone gimbal angle: NADIR & Oblique; various resolutions)

    - Mourier: a directory encompassing images and videos from a sheep farm in France, Limousin.

    - Videos: a directory encompassing short drone videos (6 videos; Drone height: ~15-30 m; Drone gimbal angle: Oblique; Resolution: 3840x2160; FPS: 30; white sheep; various backgrounds). The videos were cut.

    - Images_from_videos: images extracted from the videos at 1 image per second (110 images)

    - Other_images: other images (26 images; Drone height: ~0-30 m; Drone gimbal angle: NADIR & Oblique; various resolutions)

    - Summaries of images and videos

    Future Work

    We are actively annotating the collected images and plan to share some of them upon completion. These enhancements aim to improve detection accuracy for small ruminants in diverse scenarios. New detection models will also be shared on our Github page in the coming months.

    Collaboration and Contact

    We welcome collaborations on this topic. For inquiries or further information, please contact:
    Adrien Lebreton
    Email: adrien.lebreton@idele.fr

  15. Unmanned Aerial Vehicles Dataset

    • data.niaid.nih.gov
    • zenodo.org
    • +1 more
    Updated Apr 5, 2023
    Cite
    Rafael Makrigiorgis; Nicolas Souli; Panayiotis Kolios (2023). Unmanned Aerial Vehicles Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7477568
    Dataset updated
    Apr 5, 2023
    Dataset provided by
    KIOS Research and Innovation Center of Excellence, University of Cyprus
    Authors
    Rafael Makrigiorgis; Nicolas Souli; Panayiotis Kolios
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Unmanned Aerial Vehicles Dataset:

    The Unmanned Aerial Vehicle (UAV) Image Dataset consists of a collection of images containing UAVs, along with object annotations for the UAVs found in each image. The annotations have been converted into the COCO, YOLO, and VOC formats for ease of use with various object detection frameworks. The images in the dataset were captured from a variety of angles and under different lighting conditions, making it a useful resource for training and evaluating object detection algorithms for UAVs. The dataset is intended for use in research and development of UAV-related applications, such as autonomous flight, collision avoidance and rogue drone tracking and following. The dataset consists of the following images and detection objects (Drone):

        Subset | Images | Drone
        Training | 768 | 818
        Validation | 384 | 402
        Testing | 383 | 400

    It is advised to further enhance the dataset so that random augmentations are probabilistically applied to each image prior to adding it to the batch for training. Specifically, there are a number of possible transformations such as geometric (rotations, translations, horizontal axis mirroring, cropping, and zooming), as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
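
    One way to apply such probabilistic augmentations while keeping the YOLO boxes aligned with the transformed image is the albumentations library; a minimal sketch (transform choices and probabilities are illustrative):

    import albumentations as A

    # Geometric and photometric augmentations, each applied with some probability;
    # bbox_params keeps the YOLO-format boxes in sync with the image transforms.
    augment = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.Affine(rotate=(-15, 15), translate_percent=0.1, p=0.5),
            A.RandomBrightnessContrast(p=0.3),
            A.Blur(blur_limit=3, p=0.2),
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    # Usage: augmented = augment(image=image, bboxes=bboxes, class_labels=labels)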

    NOTE If you use this dataset in your research/publication please cite us using the following

    Rafael Makrigiorgis, Nicolas Souli, & Panayiotis Kolios. (2022). Unmanned Aerial Vehicles Dataset (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7477569

  16. Solar photovoltaic annotations for computer vision related to the "Classification Training Dataset for Crop Types in Rwanda" drone imagery dataset

    • figshare.com
    zip
    Updated May 30, 2023
    Cite
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury (2023). Solar photovoltaic annotations for computer vision related to the "Classification Training Dataset for Crop Types in Rwanda" drone imagery dataset [Dataset]. http://doi.org/10.6084/m9.figshare.18094043.v1
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Rwanda
    Description

    This dataset contains annotations (i.e. polygons) for solar photovoltaic (PV) objects in the previously published dataset "Classification Training Dataset for Crop Types in Rwanda" published by RTI International (DOI: 10.34911/rdnt.r4p1fr [1]). These polygons are intended to enable the use of this dataset as a machine learning training dataset for solar PV identification in drone imagery. Note that this dataset contains ONLY the solar panel polygon labels and needs to be used with the original RGB UAV imagery "Drone Imagery Classification Training Dataset for Crop Types in Rwanda" (https://mlhub.earth/data/rti_rwanda_crop_type). The original dataset contains UAV imagery (RGB) in .tiff format from six provinces in Rwanda, each with three phases imaged, and our solar PV annotation dataset follows the same data structure, with province and phase labels in each subfolder.

    Data processing:

    Please refer to this Github repository for further details: https://github.com/BensonRen/Drone_based_solar_PV_detection. The original dataset is divided into 8000x8000 pixel image tiles and manually labeled with polygons (mainly rectangles) to indicate the presence of solar PV. These polygons are converted into pixel-wise, binary class annotations.

    Other information:

    1. The six provinces that the UAV imagery came from are: (1) Cyampirita (2) Kabarama (3) Kaberege (4) Kinyaga (5) Ngarama (6) Rwakigarati. The original data collections were staged across 18 phases, each collecting a set of imagery from a given province (each province had 3 phases of collection). We have annotated 15 out of 18 phases; the missing ones are Kabarama-Phase2, Kaberege-Phase3, and Kinyaga-Phase3, due to data compatibility issues of the unused phases.

    2. The annotated polygons are transformed into binary maps the size of the image tiles, where each pixel is either 0 or 1. Here, 0 represents background and 1 represents solar PV pixels. These binary maps are in .png format and each province/phase set has between 9 and 49 annotation patches. Using the code provided in the above repository, the same image patches can be cropped from the original RGB imagery.

    3. Solar PV densities vary across the image patches. In total, there were 214 solar PV instances labeled in the 15 phases.

    Associated publication:

    "Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning" [https://arxiv.org/abs/2201.05548]

    This dataset is published under the CC-BY-NC-SA-4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/).
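
    The polygon-to-binary-map conversion described above can be reproduced in a few lines with Pillow and NumPy; a minimal sketch (the polygon coordinates are illustrative):

    import numpy as np
    from PIL import Image, ImageDraw

    TILE_SIZE = 8000  # tile dimension noted above

    # Hypothetical polygon: list of (x, y) vertices for one solar PV instance.
    polygon = [(1200, 3400), (1260, 3400), (1260, 3460), (1200, 3460)]

    mask = Image.new("L", (TILE_SIZE, TILE_SIZE), 0)   # 0 = background
    ImageDraw.Draw(mask).polygon(polygon, fill=1)      # 1 = solar PV pixels
    binary_map = np.asarray(mask, dtype=np.uint8)

    Image.fromarray(binary_map).save("annotation_patch.png")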

  17. Dataset: Breaking the barrier of human-annotated training data for machine-learning-aided plant research using aerial imagery

    • databank.illinois.edu
    Updated Dec 12, 2024
    Cite
    Sebastian Varela; Andrew Leakey (2024). Dataset: Breaking the barrier of human-annotated training data for machine-learning-aided plant research using aerial imagery [Dataset]. http://doi.org/10.13012/B2IDB-8462244_V2
    Dataset updated
    Dec 12, 2024
    Authors
    Sebastian Varela; Andrew Leakey
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    U.S. Department of Energy (DOE)
    Description

    This dataset supports the implementation described in the manuscript "Breaking the Barrier of Human-Annotated Training Data for Machine-Learning-Aided Biological Research Using Aerial Imagery." It comprises UAV aerial imagery used to execute the code available at https://github.com/pixelvar79/GAN-Flowering-Detection-paper. For detailed information on dataset usage and instructions for implementing the code to reproduce the study, please refer to the GitHub repository.

  18. Aerial Semantic Drone Dataset

    • kaggle.com
    zip
    Updated May 25, 2021
    Cite
    Lalu Erfandi Maula Yusnu (2021). Aerial Semantic Drone Dataset [Dataset]. https://www.kaggle.com/nunenuh/semantic-drone
    Available download formats: zip (4362163368 bytes)
    Dataset updated
    May 25, 2021
    Authors
    Lalu Erfandi Maula Yusnu
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Aerial Semantic Drone Dataset

    The Semantic Drone Dataset focuses on semantic understanding of urban scenes for increasing the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from nadir (bird's eye) view acquired at an altitude of 5 to 30 meters above the ground. A high-resolution camera was used to acquire images at a size of 6000x4000px (24Mpx). The training set contains 400 publicly available images and the test set is made up of 200 private images.

    This dataset is taken from https://www.kaggle.com/awsaf49/semantic-drone-dataset. We removed and added files and information as needed for our research purposes. We created tiff files with a resolution of 1200x800 pixels and 24 channels, with each channel representing one class, preprocessed from the png label files. We reduced the resolution and compressed the tif files with the tifffile Python library.
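
    The 24-channel encoding described above amounts to one-hot-expanding the png class-index labels; a minimal sketch with NumPy and tifffile (file names are illustrative, and the png is assumed to store one class index per pixel):

    import numpy as np
    import tifffile
    from PIL import Image

    NUM_CLASSES = 24

    label = np.asarray(Image.open("label.png"))  # H x W, values in [0, 23]
    one_hot = (label[None, :, :] == np.arange(NUM_CLASSES)[:, None, None])

    # 24 x H x W stack, one channel per class, saved as a compressed tiff.
    tifffile.imwrite("label.tif", one_hot.astype(np.uint8), compression="zlib")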

    If you have any problem with the modified tif dataset, you can contact nunenuh@gmail.com or gaungalif@gmail.com.

    This dataset is a copy of the original dataset (link below), with some improvements to the semantic data and classes. Semantic data are available in png and tiff format at smaller sizes as needed.

    Semantic Annotation

    The images are labelled densely using polygons and contain the following 24 classes:

    unlabeled, paved-area, dirt, grass, gravel, water, rocks, pool, vegetation, roof, wall, window, door, fence, fence-pole, person, dog, car, bicycle, tree, bald-tree, ar-marker, obstacle, conflicting

    Directory Structure and Files

    > images
    > labels/png
    > labels/tiff
     - class_to_idx.json
     - classes.csv
     - classes.json
     - idx_to_class.json
    

    Included Data

    • 400 training images in jpg format can be found in "aerial_semantic_drone/images"
    • Dense semantic annotations in png format can be found in "aerial_semantic_drone/labels/png"
    • Dense semantic annotations in tiff format can be found in "aerial_semantic_drone/labels/tiff"
    • Semantic class definition in csv format can be found in "aerial_semantic_drone/classes.csv"
    • Semantic class definition in json can be found in "aerial_semantic_drone/classes.json"
    • Index to class name file can be found in "aerial_semantic_drone/idx_to_class.json"
    • Class name to index file can be found in "aerial_semantic_drone/class_to_idx.json"

    Contact

    aerial@icg.tugraz.at

    Citation

    If you use this dataset in your research, please cite the following URL: www.dronedataset.icg.tugraz.at

    License

    The Drone Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

    That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (Graz University of Technology) do not accept any responsibility for errors or omissions.

    That you include a reference to the Semantic Drone Dataset in any work that makes use of the dataset. For research papers or other media, link to the Semantic Drone Dataset webpage.

    That you do not distribute this dataset or modified versions. It is permissible to distribute derivative works in as far as they are abstract representations of this dataset (such as models trained on it or additional annotations that do not directly include any of our data) and do not allow to recover the dataset or something similar in character.

    That you may not use the dataset or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.

    That all rights not expressly granted to you are reserved by us (Graz University of Technology).

  19. VisDrone Dataset

    • kaggle.com
    zip
    Updated Jun 28, 2025
    Cite
    Banuprasad B (2025). VisDrone Dataset [Dataset]. https://www.kaggle.com/datasets/banuprasadb/visdrone-dataset/data
    Available download formats: zip (2251268022 bytes)
    Dataset updated
    Jun 28, 2025
    Authors
    Banuprasad B
    License

    http://www.gnu.org/licenses/agpl-3.0.html

    Description

    The VisDrone dataset is a large-scale visual object detection and tracking benchmark captured by drones. Developed by the AISKYEYE team at Tianjin University, it aims to facilitate research in computer vision tasks such as object detection, object tracking, and crowd analysis in aerial imagery.

    The dataset consists of high-resolution images and videos collected using drones flying over urban and suburban environments across various cities in China. These scenes include pedestrians, vehicles, bicycles, and other common objects, captured under different lighting conditions, angles, and motion patterns.

    The dataset has been modified to include only the image data and labels in YOLO format. The original annotation files have been removed, and object labels were converted using provided scripts (from Ultralytics) to be compatible with YOLO-based object detection models.

  20. UAV-based solar photovoltaic detection dataset

    • figshare.com
    zip
    Updated May 31, 2023
    Cite
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury (2023). UAV-based solar photovoltaic detection dataset [Dataset]. http://doi.org/10.6084/m9.figshare.18093890.v1
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Simiao Ren; Jordan Malof; T. Robert Fetter; Robert Beach; Jay Rineer; Kyle Bradbury
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains unmanned aerial vehicle (UAV) imagery (a.k.a. drone imagery) and annotations of solar panel locations captured from controlled flights at various altitudes and speeds across two sites at Duke Forest (Couch field and Blackwood field). In total there are 423 stationary images and corresponding annotations of solar panels within sight, along with 60 videos taken while flying the UAV at roughly either 8 m/s or 14 m/s. In total there are 2,019 solar panel instances annotated.

    Associated publication:

    "Utilizing geospatial data for assessing energy security: Mapping small solar home systems using unmanned aerial vehicles and deep learning" [https://arxiv.org/abs/2201.05548]

    Data processing:

    Please refer to this Github repository for further details on data management and preprocessing: https://github.com/BensonRen/Drone_based_solar_PV_detection. The two scripts included enable the user to reproduce the experiments in the paper above.

    Contents:

    After unzipping the package, there will be 3 directories:

    1. Train_val_set: Stationary UAV images (.JPG) taken at various altitudes in the Couch field of Duke Forest for training and validation purposes, along with their solar PV annotations (.png)

    2. Test_set: Stationary UAV images (.JPG) taken at various altitudes in the Blackwood field of Duke Forest for test purposes, along with their solar PV annotations (.png)

    3. Moving_labeled: Images (img/*.png) captured from videos moving at two speed modes (Sport: 14 m/s, Normal: 8 m/s) at various altitudes, along with their solar PV annotations (labels/*.png)

    For additional details of this dataset, please refer to the README.docx enclosed.

    Acknowledgments: This dataset was created at the Duke University Energy Initiative in collaboration with the Energy Access Project at Duke and RTI International. We thank the Duke University Energy Data Analytics Ph.D. Student Fellowship Program for their support. We also thank Duke Forest for use of the flight zones for data collection.
