Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Unmanned aerial vehicles (UAVs) have become increasingly popular in recent years for both commercial and recreational purposes. Regrettably, this growing demand also poses a clear threat to the security of people and infrastructure. Much research and several innovations have addressed this security challenge, but many shortcomings remain, including failures to detect a drone's type or range and the misidentification of other airborne objects (for example, birds). Experiments in this field require a standard dataset containing photos of drones and birds on which models can be trained for greater accuracy. The supplied dataset is therefore crucial: it enables a model to learn more accurately and make better decisions. The dataset comprises a diverse range of images of birds and drones in motion, constructed from images and videos on the Pexels website. Images were extracted from the frames of the acquired recordings, then segmented and augmented to simulate a range of conditions, which improves detection accuracy while enlarging the training set. The dataset is formatted according to the YOLOv7 PyTorch specification and is split into test, train, and valid folders. Each folder contains two subfolders, images and labels; each image has a corresponding plaintext label file that describes relevant metadata about the annotated objects. The collection consists of 20,925 images of birds and drones, stored in JPEG format at a resolution of 640 x 640 pixels.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset was created by aggregating images and classes from multiple sources to form a diverse and robust collection for research and development purposes. The dataset includes the following contributions:
- Bird and Drone Classes. Source: Project DDrone - DatasetDrone. Contribution: images and classes for both bird and drone categories were extracted and included.
- Main Birds Dataset. Source: Main Birds Dataset. Contribution: an additional 1,600 bird images were incorporated to enhance the dataset's variety and richness.
- Bird Images. Source: Bird Dataset by Yesmin. Contribution: a total of 996 bird images were integrated into the dataset.
These images were integrated to ensure balanced classes, providing a comprehensive collection of images focusing on birds and drones. This makes the dataset suitable for various applications such as object detection, classification, and machine learning model training.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
YOLO-based Segmented Dataset for Drone vs. Bird Detection for Deep and Machine Learning Algorithms
Unmanned aerial vehicles (UAVs), or drones, have witnessed a sharp rise in both commercial and recreational use, but this surge has brought about significant security concerns. Drones, when misidentified or undetected, can pose risks to people, infrastructure, and air traffic, especially when confused with other airborne objects, such as birds. To overcome this challenge, accurate detection systems are essential. However, a reliable dataset for distinguishing between drones and birds has been lacking, hindering the progress of effective models in this field.
This dataset is designed to fill this gap, enabling the development and fine-tuning of models to better identify drones and birds in various environments. The dataset comprises a diverse collection of images, sourced from Pexel’s website, representing birds and drones in motion. These images were captured from video frames and are segmented, augmented, and pre-processed to simulate different environmental conditions, enhancing the model's training process.
Formatted in accordance with the YOLOv7 PyTorch specification, the dataset is organized into three folders: Test, Train, and Valid. Each folder contains two sub-folders, Images and Labels, with the Labels folder holding the associated metadata in plaintext format. This metadata describes the annotated objects within each image, allowing the model to accurately learn to detect drones and birds in varying circumstances. The dataset contains a total of 20,925 images, all with a resolution of 640 x 640 pixels in JPEG format, providing comprehensive training and validation opportunities for machine learning models.
Test Folder: Contains 889 images (both drone and bird images). The folder has sub-categories marked as BT (Bird Test Images) and DT (Drone Test Images).
Train Folder: With a total of 18,323 images, this folder includes both drone and bird images, also categorized as BT and DT.
Valid Folder: Consisting of 1,740 images, the images in this folder are similarly categorized into BT and DT.
This dataset is essential for training more accurate models that can differentiate between drones and birds in real-time applications, thereby improving the reliability of drone detection systems for enhanced security and efficiency.
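In the YOLO plaintext label format, each line of a label file encodes one object as a class index followed by a normalized bounding box (class, x_center, y_center, width, height, all in the 0-1 range). A minimal parsing sketch (the label string below is a hypothetical example, not taken from the dataset):

```python
def parse_yolo_label(text):
    """Parse YOLO-format label lines: class x_center y_center w h (normalized 0-1)."""
    boxes = []
    for line in text.strip().splitlines():
        parts = line.split()
        cls = int(parts[0])
        x, y, w, h = map(float, parts[1:5])
        boxes.append({"class": cls, "x": x, "y": y, "w": w, "h": h})
    return boxes

def to_pixels(box, img_w=640, img_h=640):
    """Convert a normalized box to pixel corner coordinates for a 640 x 640 image."""
    x1 = (box["x"] - box["w"] / 2) * img_w
    y1 = (box["y"] - box["h"] / 2) * img_h
    x2 = (box["x"] + box["w"] / 2) * img_w
    y2 = (box["y"] + box["h"] / 2) * img_h
    return x1, y1, x2, y2

# Hypothetical label: one object of class 1 centered in the frame
label = "1 0.5 0.5 0.25 0.125"
boxes = parse_yolo_label(label)
print(to_pixels(boxes[0]))  # (240.0, 280.0, 400.0, 360.0)
```

The class-to-name mapping (e.g. which index is Bird and which is Drone) is defined by the dataset's own configuration file, so it should be read from there rather than assumed.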
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Drone Vs Bird Detection is a dataset for object detection tasks - it contains Drones and Birds annotations for 9,852 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Indian Intelligence wants to detect whether a bird or a drone is present in the sky.
This dataset is divided into train, test, and validation splits. The bird images were scraped from the web, and the drone images were taken from another dataset.
This dataset can help someone classify between drones and birds.
If you like the dataset, please upvote it.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Drone Vs Bird is a dataset for object detection tasks - it contains Drone 8KqJ annotations for 787 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
## Overview
Drone Vs Bird Object Detection2 is a dataset for object detection tasks - it contains Drone and Bird annotations for 3,954 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set contains radar measurements on birds, humans, and six different drones, with a total of 75,868 samples.
The sensor was a frequency modulated continuous wave (FMCW) radar operating at 77 GHz with a mechanically scanning antenna.
The 'ReadMe.txt' file contains a detailed description of the data.
The data set is used in [1], where only the FM sweeps corresponding to azimuth indices 54 to 203 (out of the provided 256) are used, i.e. 150 sweeps.
When using this data set please refer to:
[1] A. Karlsson, M. Jansson and M. Hämäläinen, "Model-Aided Drone Classification Using Convolutional Neural Networks," 2022 IEEE Radar Conference (RadarConf22), 2022, pp. 1-6, doi: 10.1109/RadarConf2248738.2022.9764194.
The data in versions 1.0 and 2.0 is identical apart from the file format: ".mat" in 1.0 and ".npy" in 2.0.
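The azimuth subsetting described above (indices 54 to 203 inclusive, i.e. 150 of the 256 sweeps) can be sketched as a simple slice; the array layout below is purely illustrative, since the real structure of the samples is documented in the dataset's 'ReadMe.txt':

```python
# One illustrative measurement: 256 azimuth sweeps, each with 128 range samples
# (hypothetical layout -- consult ReadMe.txt for the actual structure)
sample = [[0.0] * 128 for _ in range(256)]

# Keep azimuth indices 54..203 inclusive: 203 - 54 + 1 = 150 sweeps
subset = sample[54:204]
print(len(subset))  # 150
```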
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Detection Bird And Drone is a dataset for object detection tasks - it contains Drones annotations for 993 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
arnobIsWorst/vip-cup-drone-and-bird dataset hosted on Hugging Face and contributed by the HF Datasets community
Main dataset: This dataset contains the true number of birds for each colony, as well as the ground counts, RPA-derived manual counts, and RPA-derived semi-automated counts for each colony. This dataset was used for all analyses. (MASTER_AllCountData.csv)
RPA-derived imagery of bird colonies: Compressed folder containing photographs of each colony (n=10) at each sample height (n=4). (Colony_imagery.zip)
Semi-automated aerial image counting approach - archived source code and dataset: This is a DOI for the source code and dataset used for the semi-automated aerial image counting approach, which has been archived on figshare and released under a Creative Commons Attribution 4.0 International License. The current version of the code is available via Bitbucket (https://bitbucket.org/trungtpham/bird-detection).
R script for analyses: Script used to analyse ground counts, RPA-derived manual counts, and RPA-derived semi-automated counts. (MASTER_analysisScript_v1.4.R)
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The results are based on measurements conducted on small drones and a bionic bird using a 60 GHz millimeter-wave radar.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Bird + Drone is a dataset for object detection tasks - it contains Drone annotations for 262 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/CC BY 4.0).
https://cdla.dev/permissive-1-0/
Monitoring of protected areas to curb illegal activities like poaching is a monumental task. Real-time data acquisition has become easier with advances in unmanned aerial vehicles (UAVs) and sensors like TIR cameras, which allow surveillance at night when poaching typically occurs. However, it is still a challenge to accurately and quickly process large amounts of the resulting TIR data. The Benchmarking IR Dataset for Surveillance with Aerial Intelligence (BIRDSAI, pronounced “bird’s-eye”) is a long-wave thermal infrared (TIR) dataset containing nighttime images of animals and humans in Southern Africa. The dataset allows for testing of automatic detection and tracking of humans and animals with both real and synthetic videos, in order to protect animals in the real world. There are 48 real aerial TIR videos and 124 synthetic aerial TIR videos (generated with AirSim), for a total of 62k and 100k images, respectively. Tracking information is provided for each of the animals and humans in these videos. We break these into labels of animals or humans, and also provide species information when possible, including for elephants, lions, and giraffes. We also provide information about noise and occlusion for each bounding box.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
1. The use of a drone to count the flock sizes of 33 species of waterbirds during the breeding and non-breeding periods was investigated.
2. In 96% of 343 cases, drone counting was successful. 18.8% of non-breeding birds and 3.6% of breeding birds exhibited adverse reactions: the former birds were flushed, whereas the latter attempted to attack the drone.
3. The automatic counting of birds was best done with the ImageJ/Fiji image-analysis software; the average counting rate was 100 birds in 64 seconds.
4. Machine learning using neural network algorithms proved to be an effective and quick way of counting birds (100 birds in 7 seconds). However, preparing the images and training the model are time-consuming, so this method is recommended only for large data sets and large bird assemblages.
5. The responsible study of wildlife using a drone should only be carried out by persons experienced in the biology and behaviour of the target animals.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
We used drones to capture images of mixed-species blackbird (Icteridae) flocks damaging sunflower (Helianthus annuus) in North Dakota. Images consisted of an airborne flock with a sky background, creating a strong color contrast for automated bird detection and counting. We analyzed imagery from 60 flights using ImageJ and obtained automated counts. We also asked 20 biologists, ranging from 0-25 years of self-reported experience, to provide estimates for all 60 images in a single sitting. They were asked to make quick estimates (5-10 seconds per photo), to count birds within the sky background, and to zoom in on photos when needed. Photos were randomly ordered and all biologists were given photos in the same order. This study was implemented between September 2021 and October 2022 in multiple counties in North Dakota, USA, where blackbird damage to sunflowers is prevalent.
This data publication contains the data and R code used to analyze these data, as well as the 60 drone images. We designed the study to evaluate the role of flock size, biologist experience, and photo order on the ability of a biologist to make estimates close to the automated count. For more information about this study and these data, see Duttenhefner and Klug (2025).
These data were published on 01/22/2025. On 03/12/2025, the metadata was updated to include reference to newly published article.
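The automated-count idea above (dark birds against a bright sky, giving strong contrast) boils down to thresholding plus connected-component counting. A minimal pure-Python stand-in for the ImageJ workflow, not the authors' actual macro:

```python
from collections import deque

def count_blobs(gray, threshold=100):
    """Count connected dark regions (candidate birds) in a grayscale image
    given as a list of rows; pixels below `threshold` are foreground."""
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if gray[y][x] < threshold and not seen[y][x]:
                count += 1  # new blob: flood-fill its 4-connected pixels
                queue = deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] and gray[ny][nx] < threshold:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

# Toy 5x7 "sky" (value 255) with two dark "birds" (value 0)
sky = [[255] * 7 for _ in range(5)]
sky[1][1] = sky[1][2] = 0   # bird 1 (two touching pixels)
sky[3][5] = 0               # bird 2
print(count_blobs(sky))     # 2
```

Real imagery would also need a minimum-area filter to reject noise pixels, which is one reason the paper compares automated counts against human estimates.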
Low-quality vehicle object detection, as seen by a drone flying around the area. Vehicles include:
- Private cars
- Pick-up trucks
- Tractors
- Tanks
Images are very low quality, screenshots generated with ffmpeg from a collection of short videos, taken from https://data.kitware.com/#collection/611e77a42fa25629b9daceba/folder/611e78892fa25629b9dacf8d
Images include both positive and negative examples (i.e. images where the geographical area is the same but no cars are present).
Images manually tagged by me in Label-Studio. Not all screen captures were used, some were eliminated due to extremely low quality (even compared to this dataset), and some were close in time with no noticeable change to the objects in the image.
Update Frequency: Unless more low-quality videos become available, likely never; similar high-quality images will be posted in a different dataset.
Dataset Citing: This is the aerial video subset of the VIRAT Video Data collection. Per DARPA DISTAR Case 15919, 28 July 2010, these videos are Distribution A: Public Release, Distribution Unlimited.
@inproceedings{oh2011large,
  title={A large-scale benchmark dataset for event recognition in surveillance video},
  author={Oh, Sangmin and Hoogs, Anthony and Perera, Amitha and Cuntoor, Naresh and Chen, Chia-Chih and Lee, Jong Taek and Mukherjee, Saurajit and Aggarwal, JK and Lee, Hyungtae and Davis, Larry and others},
  booktitle={CVPR 2011},
  pages={3153--3160},
  year={2011},
  organization={IEEE}
}
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Birds Vs Drones is a dataset for object detection tasks - it contains Bird and Drone annotations for 335 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
On February 8, 2021, Deception Island Chinstrap penguin colonies were photographed during the PiMetAn Project XXXIV Spanish Antarctic campaign using unmanned aerial vehicles (UAV) at a height of 30m. From the obtained imagery, a training dataset for penguin detection from aerial perspective was generated.
The penguin species is the Chinstrap penguin (Pygoscelis antarcticus).
The dataset consists of three folders: "train", containing 531 images, intended for model training; "valid", containing 50 images, intended for model validation; and "test", containing 25 images, intended for model testing. In each of the three folders, an additional .csv file is located, containing labels (x,y positions and class names for every penguin in the images), annotated in Tensorflow Object Detection format.
There is only one annotation class: Penguin.
All 606 images are 224x224 px in size, and 96 dpi.
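A TensorFlow Object Detection CSV export typically stores one annotation per row with columns such as filename, width, height, class, xmin, ymin, xmax, ymax; the exact column names below are an assumption about this particular export, so check the header of the .csv files in each folder. A minimal reading sketch with hypothetical rows:

```python
import csv
import io

# Two hypothetical annotation rows in the assumed column layout
csv_text = """filename,width,height,class,xmin,ymin,xmax,ymax
img_001.jpg,224,224,Penguin,10,12,34,40
img_001.jpg,224,224,Penguin,100,90,130,125
"""

# Group bounding boxes by image so each image maps to its list of penguins
annotations = {}
for row in csv.DictReader(io.StringIO(csv_text)):
    box = (int(row["xmin"]), int(row["ymin"]), int(row["xmax"]), int(row["ymax"]))
    annotations.setdefault(row["filename"], []).append((row["class"], box))

print(annotations["img_001.jpg"])
# [('Penguin', (10, 12, 34, 40)), ('Penguin', (100, 90, 130, 125))]
```

Since there is only one class, the per-image box count directly gives the number of annotated penguins in that image.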
The following augmentation was applied to create 3 versions of each source image:
* Random shear of between -18° and +18° horizontally and -11° and +11° vertically
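At the coordinate level, a horizontal shear by angle θ maps (x, y) to (x + y·tan θ, y). The sketch below illustrates the stated shear ranges in pure Python; it is independent of whatever implementation Roboflow uses internally:

```python
import math
import random

def shear_point(x, y, deg_h=0.0, deg_v=0.0):
    """Apply a horizontal then a vertical shear (angles in degrees) to one point."""
    x2 = x + y * math.tan(math.radians(deg_h))
    y2 = y + x2 * math.tan(math.radians(deg_v))
    return x2, y2

def random_shear_params(max_h=18.0, max_v=11.0):
    """Draw shear angles matching the stated augmentation ranges."""
    return random.uniform(-max_h, max_h), random.uniform(-max_v, max_v)

# A zero-degree shear leaves a point unchanged
print(shear_point(50.0, 100.0, 0.0, 0.0))  # (50.0, 100.0)
```

When images are sheared, the label coordinates must be transformed with the same mapping, which augmentation tools handle automatically.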
This dataset was annotated and exported via www.roboflow.com
The Faster R-CNN model with a ResNet-101 backbone was used to perform object detection. Training and evaluation were performed using Google's TensorFlow 2.0 machine learning platform.
These data detail the results of an investigation into the impact of drone and on-foot approaches on the behaviour of the endangered Grey Crowned Crane (Balearica regulorum). In total, 313 drone flights and 56 on-foot approaches were conducted over three different Grey Crowned Crane group types: pairs (110 flights, 26 on-foot), families (66 flights, 7 on-foot), and flocks (110 flights, 23 on-foot). Response data describe the number of birds exhibiting a particular behaviour (1 - no behaviour change, 2 - heads raised to observe surroundings, 3 - wings raised, 4 - moving away, and 5 - flying away) based on a photograph taken during the approach. Predictor data include the distance between the drone or on-foot observer and the bird grouping, and a description of the group type. The number of individuals in the group can be inferred from the response data.
Experiment 1: Monitoring method comparison experiment
Trial observations included recording the behavioural cues of GCC groupings (pairs, families, and flocks) in response to either of the two monitoring methods (on-foot, drone) across various distances and flight heights. Behavioural cues were categorised as follows: no behaviour change (1), heads raised to observe surroundings (2), wings raised (3), moving away (4), and flying away (5) (Figure 1). All trial observations were undertaken by the same observer, and care was taken to wear similarly coloured clothing for each of the trials.
On-foot monitoring: Upon locating a GCC grouping, the observer approached the group at a constant walking speed of approximately 1 m.s-1, making a reasonable effort not to disturb the grouping (e.g., avoiding noises and sudden movements). Observations were noted at the start of each trial, every 10th - 15th step thereafter, and again if any change in GCC behaviour was observed.
Each observation included meas...

# Data from: Drones as a tool to study and monitor endangered Grey Crowned Cranes (Balearica regulorum): behavioural responses and recommended guidelines
This README file was generated on 2024-02-27 by Stuart Demmer.
GENERAL INFORMATION
Title of Dataset: Drones as a tool to study and monitor endangered Grey Crowned Cranes (Balearica regulorum): Behavioural responses and recommended guidelines
Author Information

A. Principal Investigator Contact Information
Name: Carmen Rosa Demmer
Institution: University of South Africa
Address: Howick, KZN South Africa
Email: carmenrdemmer@gmail.com

B. Associate Contact Information
Name: Stuart Demmer
Institution: NA
Address: Howick, KZN South Africa
Email: stuart.demmer@gmail.com

C. Supervisor Contact Information
Name: Trevor McIntyre
Institution: University of South Africa
Address: Pretoria, GAU South Africa ...