Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Unmanned Aerial Vehicles Dataset:
The Unmanned Aerial Vehicle (UAV) Image Dataset consists of a collection of images containing UAVs, along with object annotations for the UAVs found in each image. The annotations have been converted into the COCO, YOLO, and VOC formats for ease of use with various object detection frameworks. The images were captured from a variety of angles and under different lighting conditions, making the dataset a useful resource for training and evaluating object detection algorithms for UAVs. The dataset is intended for research and development of UAV-related applications such as autonomous flight, collision avoidance, and rogue-drone tracking and following. It consists of the following images and detection objects (Drone):
| Subset | Images | Drone objects |
| --- | --- | --- |
| Training | 768 | 818 |
| Validation | 384 | 402 |
| Testing | 383 | 400 |
It is advised to further augment the dataset by probabilistically applying random transformations to each image before adding it to a training batch. Possible transformations include geometric operations (rotations, translations, horizontal-axis mirroring, cropping, and zooming) as well as image manipulations (illumination changes, color shifting, blurring, sharpening, and shadowing).
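For instance, such a pipeline can be sketched with Albumentations, which also keeps bounding boxes consistent under geometric transforms. The specific transforms, probabilities, and parameters below are illustrative assumptions, not the dataset authors' recipe:

```python
import albumentations as A
import numpy as np

# Probabilistic augmentations; bbox_params keeps the YOLO-format drone
# boxes aligned with any geometric transform that fires.
augment = A.Compose(
    [
        A.HorizontalFlip(p=0.5),                   # horizontal-axis mirroring
        A.Rotate(limit=15, p=0.3),                 # small rotations
        A.RandomBrightnessContrast(p=0.4),         # illumination changes
        A.HueSaturationValue(p=0.3),               # color shifting
        A.GaussianBlur(blur_limit=(3, 7), p=0.2),  # blurring
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a dataset image
boxes = [(0.5, 0.5, 0.2, 0.1)]                   # one normalized drone box
out = augment(image=image, bboxes=boxes, class_labels=["drone"])
aug_image, aug_boxes = out["image"], out["bboxes"]
```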
NOTE: If you use this dataset in your research or publications, please cite it as follows:
Rafael Makrigiorgis, Nicolas Souli, & Panayiotis Kolios. (2022). Unmanned Aerial Vehicles Dataset (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.7477569
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MSDI (Manchester Surface Drone Imagery) is a geo-imagery registration dataset.
The dataset consists of:
a. 446 downward-facing drone images.
b. 89 forward-facing (45-degree) drone images.
c. 64 forward-facing (0-degree) drone images.
d. the parameter matrix and transformation matrix of the drone camera.
e. checkerboard images for camera calibration.
This dataset was collected by Mochuan Zhan for his MSc project, Registration of UAV Imagery to Aerial and Satellite Imagery, at the University of Manchester (2021/9 - 2022/9), supervised by Dr. Terence Patrick Morley. The project aims to develop a system that can perform efficient UAV visual localization through image registration based on local feature detectors and the technique of high-throughput computing.
Notice:
The corresponding satellite images from Google Maps and Bing Maps can be obtained using my program, link:
https://doi.org/10.5281/zenodo.6977652
A Ground Control Point (GCP) selector is provided so that users can select GCPs and create the corresponding files with little effort.
By registering corresponding images, users can evaluate the performance of their registration techniques.
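As an illustration of this kind of feature-based registration (not the project's actual pipeline), a UAV image can be aligned to a reference image with OpenCV; the file names and parameter values below are placeholders:

```python
import cv2
import numpy as np

# Detect ORB features in both images and match them.
uav = cv2.imread("uav_image.png", cv2.IMREAD_GRAYSCALE)        # placeholder path
ref = cv2.imread("satellite_image.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(uav, None)
kp2, des2 = orb.detectAndCompute(ref, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Estimate a homography with RANSAC and warp the UAV image onto the reference.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
registered = cv2.warpPerspective(uav, H, (ref.shape[1], ref.shape[0]))
```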
The imagery covers 8 areas in Manchester (number of images per area):
- Manchester Aquatics Center: 80
- Manchester ASDA: 76
- Manchester Business School: 37
- Manchester Energy Center: 71
- Manchester Holy Name Church: 58
- Manchester Hulme Park: 47
- Manchester Hulme Park (0-degree): 64
- Manchester Hulme Park (45-degree): 89
- Manchester Metropolitan University: 29
- Manchester Museum: 48
Device Information:
- Drone brand: Parrot
- Drone model: Parrot Anafi
Software Information:
- Pix4DCapture
- FreeFlight6
Flight parameters:
- Height: 100 m
- Speed: 5 m/s
- Overlap: low
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This study introduces a dataset of crop imagery captured during the 2022 growing season in the Eastern Kazakhstan region. The images were acquired with a multispectral camera mounted on an unmanned aerial vehicle (DJI Phantom 4). The agricultural land, encompassing 27 hectares cultivated with wheat, barley, and soybean, was covered by five aerial multispectral photography sessions over the growing season, allowing thorough monitoring of the most important phenological stages of crop development across the experimental design of 27 plots, each covering one hectare. The collected imagery was enhanced and expanded by integrating a sixth band containing normalized difference vegetation index (NDVI) values alongside the original five multispectral bands (Red, Green, Blue, Infrared, and Near Infrared). This enrichment enables a more effective evaluation of vegetation health and growth, making the dataset a valuable resource for developing and validating crop monitoring and yield prediction models, as well as for exploring precision agriculture methodologies.
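The appended band follows the standard definition NDVI = (NIR - Red) / (NIR + Red). A minimal sketch of the computation, assuming a (bands, height, width) array; the band indices are assumptions and should be checked against the actual band order of the files:

```python
import numpy as np

def add_ndvi_band(stack: np.ndarray, red_idx: int = 0, nir_idx: int = 4) -> np.ndarray:
    """Append an NDVI band to a (bands, height, width) multispectral stack.

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
    """
    red = stack[red_idx].astype(np.float32)
    nir = stack[nir_idx].astype(np.float32)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero
    return np.concatenate([stack, ndvi[np.newaxis]], axis=0)
```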
https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains 1,962 .jpg images of drones for use in image classification (or however you see fit). PROS: all images have been cleaned and cropped, with duplicates and poor-quality images removed. CONS: varying image sizes.
I have used this dataset together with several others to train a VGG-16 model for flying-object detection and discrimination.
The data are UAV (Unmanned Aerial Vehicle) images and individual-tree ground measurements collected from 2 citrus rootstock trials at the U.S. Horticultural Research Laboratory Picos Road farm site, Ft. Pierce, Florida, USA, located at 27.437115254946757, -80.42786069428246. The trees in both trials were Valencia sweet orange scion grafted onto various rootstock selections and varieties. The trials are designated Valencia 5-16 and Valencia 17-28, indicating the row numbers used for each trial; Valencia 5-16 includes 648 trees and Valencia 17-28 includes 643 trees. The ground data were taken for the 5-16 and 17-28 trials in 2020 and 2021, respectively.

The UAV images were taken twice on the same day, 5/12/2021, once under partially sunny conditions (images 27-176) and once under overcast conditions (images 177-327). A single flight over rows 5-28 for each condition captured both trials. Some of the images from the partially sunny flight show tree shadows when the sun was not obscured by a cloud, whereas the images from the overcast flight have uniform lighting and no sun shadows. Each image is labeled to designate the flight condition: for example, DJI_0033_R5-R28_Valencia_sunny.JPG was taken during the partially sunny flight and DJI_0183_R5-R28_Valencia_overcast.JPG during the overcast flight.

The UAV images were taken with a DJI Phantom 4 Pro drone using 80% side-overlap and 80% forward-overlap between flight lines, so the images are suitable for orthorectification. The images are red-green-blue (sRGB) in a 3:2 format with 5472 x 3648 pixels.

The dataset is contained in one folder of 305 files: 301 image files, 2 Excel spreadsheets (one per trial) containing the planting plan and ground measures, and 2 images with the rows and tree spaces labeled. The 2 labeled images are composites constructed from the 150 images of the overcast set, created to label row and tree-space numbers; they are useful for general orientation and for matching individual trees to the ground data and other post-processing image analyses.
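A small sketch of how the flight condition and row range can be parsed from the file names described above (the regular expression simply mirrors that naming convention):

```python
import re

# The naming convention encodes frame number, row range, and flight condition.
name = "DJI_0033_R5-R28_Valencia_sunny.JPG"
m = re.match(r"DJI_(\d+)_R(\d+)-R(\d+)_Valencia_(sunny|overcast)\.JPG", name)
frame, row_start, row_end, condition = m.groups()
# frame = '0033', rows 5-28, condition = 'sunny' (partially sunny flight)
```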
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
UAV Vehicle Images is a dataset for object detection tasks - it contains 5 annotations for 2,160 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Dataset associated with the paper titled "Maize Tassel Detection from UAV Imagery Using Deep Learning", published in the journal Frontiers in Robotics and AI (DOI: 10.3389/frobt.2021.600410).
The timing of flowering plays a critical role in determining the productivity of agricultural crops. If crops flower too early, they mature before the end of the growing season and lose the opportunity to capture and use large amounts of light energy; if they flower too late, they may be killed by the change of seasons before they are ready to harvest. Maize flowering is one of the most important periods, where even small amounts of stress can significantly alter yield. In this work, we developed and compared two deep learning methods for automatic tassel detection in imagery collected from an unmanned aerial vehicle. The first approach was a customized framework for tassel detection based on a convolutional neural network (TD-CNN). The other was a state-of-the-art object detector, the faster region-based CNN (Faster R-CNN), which served as the detection-accuracy baseline. The evaluation criteria for tassel detection were customized to correctly reflect the needs of tassel detection in an agricultural setting. Although detecting thin tassels in aerial imagery is challenging, our results showed promising accuracy: the TD-CNN achieved an F1 score of 95.9% and the Faster R-CNN achieved 97.9%. More CNN-based model structures can be investigated in the future for improved accuracy, speed, and generalizability in aerial tassel detection.
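The reported F1 scores combine precision and recall into a single number, F1 = 2PR / (P + R). A small sketch of the arithmetic, with hypothetical detection counts (not the paper's actual tallies):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    precision = tp / (tp + fp)  # fraction of detections that are real tassels
    recall = tp / (tp + fn)     # fraction of real tassels that were detected
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts chosen to land near the reported TD-CNN score:
print(f1_score(tp=959, fp=30, fn=52))  # ~0.959
```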
The dataset, including the raw images and the labelled images, is provided here. The code is available on Dr. Aziza Alzadjali's GitHub account "azizanajeeb" (github.com).
Human life is precious, and in the event of any unfortunate occurrence the highest efforts are made to safeguard it. To provide timely aid or to extract humans in distress, it is critical to locate them accurately. Drones are increasingly used to detect and track humans in such situations and to capture high-resolution images after natural and man-made disasters. It is possible to find survivors in a drone feed, but manual analysis is time-consuming and prone to human error. This deep learning model detects humans in drone imagery and draws bounding boxes around their exact locations, automating the detection task and significantly reducing the time and effort required.
Licensing requirements: ArcGIS Desktop - ArcGIS Image Analyst extension for ArcGIS Pro; ArcGIS Enterprise - ArcGIS Image Server with raster analytics configured; ArcGIS Online - ArcGIS Image for ArcGIS Online.
Using the model: Follow the guide to use the model. Before using this model, ensure that the supported deep learning libraries are installed; for more details, check Deep Learning Libraries Installer for ArcGIS. Note: deep learning is computationally intensive, and a powerful GPU is recommended for processing large datasets.
Input: high-resolution (1-5 cm) individual drone images or an orthomosaic.
Output: feature class containing detected humans.
Applicable geographies: the model is expected to work well in coastal areas of Africa but can also be tried in other areas.
Model architecture: this model uses the FasterRCNN model architecture implemented in the ArcGIS API for Python.
Accuracy metrics: this model has an average precision score of 72.8 percent for the human class and 67.1 percent for the possibly-a-human class.
Limitations: this model tends to maximize detection of humans and errs toward producing false positives; a few features may be missed when a cluster of features is reported.
Sample results
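For orientation on the architecture family only, a generic Faster R-CNN inference sketch with torchvision is shown below; it is not the Esri model, its weights, or the ArcGIS API for Python workflow, and the image path and score threshold are assumptions:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained Faster R-CNN (COCO weights), for illustration only.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("drone_tile.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    pred = model([image])[0]  # dict with 'boxes', 'labels', 'scores'

keep = pred["scores"] > 0.5  # assumed confidence threshold
boxes = pred["boxes"][keep]  # bounding boxes for detected objects
```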
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MH-SoyaHealthVision is a comprehensive dataset developed for integrated crop health assessment in soybean farming. It combines ground-level leaf images and UAV-captured images from soybean fields in the Maharashtra region, enabling a holistic approach to disease and pest-attack detection. The leaf image dataset includes high-resolution visuals of soybean leaves affected by diseases such as rust, mosaic virus, septoria brown spot, and frog-eye leaf spot, along with pest damage caused by caterpillars and semiloopers. Complementing this, the UAV dataset provides large-scale aerial perspectives of soybean fields, capturing patterns of rust, mosaic virus, and pest infestations. The inclusion of UAV imagery is crucial for precision agriculture, as drones facilitate highly accurate, targeted spraying of pesticides. Together, the ground-level and aerial imagery make MH-SoyaHealthVision a valuable resource for developing machine learning and deep learning models for disease detection and classification, contributing to improved crop health monitoring, earlier intervention, and enhanced productivity in soybean farming. The dataset comprises a total of 5,680 images in two parts. The first part, the Soybean Leaf Image Dataset, is organized into six folders: one "Healthy" folder, four folders for the different diseases, and one for pest attack. The second part, the Soybean UAV Image Dataset, is organized into four folders: one "Healthy" folder, two folders for diseases, and one for pest attack.
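Because both parts use class-named folders, they load directly with standard tooling; a minimal sketch with torchvision, where the root paths are assumptions based on the description above (the actual archive layout may differ):

```python
from torchvision import datasets, transforms

# Hypothetical root paths; folder names become class labels.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
leaf_ds = datasets.ImageFolder("MH-SoyaHealthVision/leaf", transform=tfm)  # 6 classes
uav_ds = datasets.ImageFolder("MH-SoyaHealthVision/uav", transform=tfm)    # 4 classes
print(leaf_ds.classes)
```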
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The folder 'rice_data' contains the training and validation datasets.
All UAV imagery is in the folder 'rice_data\image': DJI_0087_0.png, DJI_0087_1.png, ...
All Ground Truth images are in the folder 'rice_data\gt_image': DJI_0087_0.png, DJI_0087_1.png, ... Each Ground Truth image corresponds to the UAV image with the same name.
The file names of the training dataset are recorded in 'rice_data\train.txt' in the form: image/DJI_0114_2.png gt_image/DJI_0114_2.png ... Each line in 'rice_data\train.txt' represents one sample, with the names of the UAV image and the Ground Truth image separated by a space.
The file names of the validation dataset are recorded in 'rice_data\val.txt' in the same form: image/DJI_0101_2.png gt_image/DJI_0101_2.png ... Each line in 'rice_data\val.txt' likewise represents one sample.
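A minimal sketch for reading these split files into (UAV image, Ground Truth) path pairs, assuming the directory layout described above:

```python
from pathlib import Path

def read_split(list_file: str, root: str = "rice_data"):
    """Yield (UAV image path, Ground Truth path) pairs from train.txt or val.txt."""
    base = Path(root)
    for line in (base / list_file).read_text().splitlines():
        if line.strip():                    # skip blank lines
            img_rel, gt_rel = line.split()  # two space-separated relative paths
            yield base / img_rel, base / gt_rel

train_pairs = list(read_split("train.txt"))
val_pairs = list(read_split("val.txt"))
```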
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains 74 images of aerial maritime photographs taken with a Mavic Air 2 drone, with 1,151 bounding boxes covering docks, boats, lifts, jet skis, and cars. It is a multi-class aerial maritime object detection dataset.
The drone was flown at 400 ft. No drones were harmed in the making of this dataset.
This dataset was collected and annotated by the Roboflow team and released under the MIT license.
https://i.imgur.com/9ZYLQSO.jpg" alt="Image example">
This dataset is a great starter dataset for building an aerial object detection model with your drone.
Fork or download this dataset and follow our tutorial, How to Train a State-of-the-Art Object Detector YOLOv4, for more. Stay tuned for upcoming tutorials on how to teach your UAV drone to see, and for comparable airplane imagery and airplane footage.
See here for how to use the CVAT annotation tool that was used to create this dataset.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.
Unmanned Aerial Vehicles (UAVs) provide increased access to unique types of urban imagery traditionally not available. Advanced machine learning and computer vision techniques applied to UAV RGB image data can automate extraction of building asset information, and applied to UAV thermal imagery they can detect potential thermal anomalies. However, such UAV datasets are not easily available to researchers, creating a barrier to accelerating research in this area. To assist researchers with data for developing machine learning algorithms, we present UAVID3D (Unmanned Aerial Vehicle (UAV) Image Dataset of the Built Environment for 3D Reconstruction).

The raw images for our dataset were recorded with a Zenmuse XT2 visual (RGB) camera and a FLIR Tau 2 thermal camera (https://flir.netx.net/file/asset/15598/original/) on a DJI Mavic 2 pro drone (https://www.dji.com/matrice-200-series). The thermal camera is factory calibrated. All data are organized and structured to comply with the FAIR principles, i.e. being findable, accessible, interoperable, and reusable, and the dataset is publicly available for download from the Zenodo data repository.

RGB images were recorded during UAV fly-overs of two different commercial buildings in Northern California, and thermographic images were recorded during 2 subsequent fly-overs of the same two buildings. Flights were conducted at heights of 60-80 m above ground at a flight speed of 1 m/s, and the images contain GPS information. All images were recorded during drone flights on May 10, 2021 between 8:45 am and 10:30 am, and on May 19, 2021 between 2:15 pm and 4:30 pm. Outdoor air temperatures during the two flights were between 78 and 83 degrees Fahrenheit and between 58 and 65 degrees Fahrenheit, respectively. For the RGB flights, the UAV path was planned and captured using an orbital flight plan in PIX4Dcapture at normal flight speed with an overlap angle of 10 degrees. Thermal images were captured in manual flights approximately 5 m from each building facade.

Due to the high overlap of images, feature points identified in each image can be matched across images to conduct photogrammetry. Photogrammetry estimates the three-dimensional coordinates of points on an object, in a generated 3D space, from measurements made on images taken with a high overlap rate, and can be used to create a 3D point-cloud model of the recorded region. The UAVID3D dataset is a series of compressed archive files totaling 21 GB. Useful pipelines for processing these images can be found in these two repositories: https://github.com/LBNL-ETA/a3dbr and https://github.com/LBNL-ETA/AutoBFE

This work was supported by the Assistant Secretary for Energy Efficiency and Renewable Energy, Building Technologies Program, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.
Reference: Singh, R., Fernandes, S., Prakash, A. K., Mathew, P., Granderson, J., Snaith, C. & Bergmann, H. (2022). Scaling Building Energy Audits through Machine Learning Methods on Novel Drone Image Data.
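The core photogrammetric operation, triangulating a 3D point from a feature matched in two overlapping images, can be illustrated with OpenCV; the camera matrices and pixel coordinates below are fabricated placeholders, not values from this dataset:

```python
import cv2
import numpy as np

# Fabricated intrinsics and a second camera translated 1 m along x.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1 at origin
P2 = K @ np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])]) # camera 2

# The same feature observed in both images (2xN pixel coordinates).
pts1 = np.array([[600.0], [350.0]])
pts2 = np.array([[620.0], [350.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
X = (X_h[:3] / X_h[3]).T                         # one 3D point, here at depth 50
```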
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Cabo Frio
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains UAV photogrammetric products, UAV LiDAR point clouds, hand-made tree crown segmentation polygons, and georeferenced tree height and stem diameter measurements in carbon sequestration plantations in Quebec, Canada. The products included in this dataset are RGB orthomosaics, LiDAR and photogrammetry point clouds, digital surface models and processing reports. The tree measurements include species, height, stem diameter and location.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
FLAME 3 is the third dataset in the FLAME series of aerial, UAV-collected, side-by-side multispectral wildland fire imagery (see FLAME 1 and FLAME 2). This set contains a single-burn subset of the larger FLAME 3 dataset, focusing specifically on computer vision tasks such as fire detection and segmentation.
These data were compiled for assessing how geomorphic changes, measured as topographic differences from repeat surveys, represent measured and modeled estimates of aeolian sediment transport and dune mobility. The objective of our study was to investigate whether topographic changes can serve as a proxy for aeolian transport and sediment mobility in dunefield environments. This was accomplished by relating topographic changes to modeled and observed estimates of sediment transport and dune mobility over months to decades within a partially vegetated dunefield starved of upwind sediment supplies. We specifically tested whether topographic changes, measured as net and total volume changes and as topographic surface roughness differences, provide evidence for intra-annual differences and decadal changes in sediment mobility for dune sand that is currently bare, vegetated, or biocrust-covered. Lastly, these data were used as a framework for interpreting how aeolian transport and sediment mobility have changed for current land cover types over the preceding four decades.

These data represent monthly topographic surveys and in-field sediment transport data collected between February 13, 2020 and December 16, 2020; piloted aerial imagery collected in 1984, 2002, 2009, 2013, and 2021; unoccupied aerial vehicle (UAV) imagery collected in March 2021; classification of land cover; and tabular summaries of topographic changes derived from these datasets. The data were collected between 1984 and 2021 by the U.S. Geological Survey within a small aeolian dunefield near the confluence of the Paria and Colorado Rivers, upstream of Grand Canyon National Park, Arizona. These data can be used to 1) evaluate how dune surfaces with bare sand, vegetation cover, or biological soil crust (biocrust) cover change on a monthly time scale with differences in wind strength, and 2) assess how the dunefield surface changed with vegetation loss and expansion over almost 4 decades. Additionally, these data could be used to assess detailed changes in landscape cover over monthly and decadal time scales.
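A minimal sketch of the net and total volume-change computation described above, assuming two co-registered DEM arrays (NaN outside the survey area) and a known cell footprint:

```python
import numpy as np

def volume_changes(dem_before: np.ndarray, dem_after: np.ndarray, cell_area: float):
    """Net and total volume change between two co-registered DEMs.

    Net change sums signed elevation differences (deposition minus erosion);
    total change sums their absolute values (all surface activity).
    cell_area is the ground footprint of one cell, e.g. in square meters.
    """
    dz = dem_after - dem_before
    net_volume = np.nansum(dz) * cell_area
    total_volume = np.nansum(np.abs(dz)) * cell_area
    return net_volume, total_volume
```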
https://www.archivemarketresearch.com/privacy-policy
The global UAV Aerial Survey Services market is experiencing robust growth, driven by increasing demand across diverse sectors. Technological advancements in drone technology, offering higher resolution imagery and improved data processing capabilities, are significantly contributing to this expansion. The market's versatility, providing cost-effective and efficient solutions for various applications, further fuels its growth. Specific sectors like construction, agriculture, and energy are key drivers, utilizing UAV surveys for site mapping, precision agriculture, pipeline inspections, and environmental monitoring. While regulatory hurdles and data security concerns present challenges, the market is overcoming these limitations through the development of standardized operating procedures and robust data encryption techniques. Assuming a conservative CAGR of 15% (a reasonable estimate given the rapid technological advancements and increasing adoption rates in this sector), and a 2025 market size of $2 billion, the market is projected to reach approximately $4.2 billion by 2033. This substantial growth is further fueled by the increasing affordability and accessibility of UAV technology, enabling more businesses to leverage aerial survey services. The segmentation of the UAV Aerial Survey Services market reveals that orthophoto and oblique image services are widely utilized, catering to diverse application needs. Forestry and agriculture are dominant sectors, with construction, power and energy, and oil & gas industries rapidly adopting this technology. Regional analysis highlights strong growth in North America and Asia-Pacific, driven by significant investments in infrastructure development and agricultural modernization. Europe follows closely, spurred by government initiatives promoting sustainable development and environmental monitoring. The competitive landscape includes both established players like Kokusai Kogyo and Zenrin, and emerging specialized companies, indicating a dynamic and competitive market with potential for further consolidation and innovation. The continued development of advanced data analytics capabilities, integrated with UAV imagery, will create new opportunities and drive market expansion.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset includes drone (uncrewed aerial vehicle, UAV) orthomosaics (RGB, n = 2) of Pinus radiata acquired between 2016 and 2017 in Chile. The resolution (ground sampling distance) of the orthomosaics is approximately 3-4 cm. The orthomosaics are partially labelled (polygon shapefiles) in terms of Pinus cover. Each orthomosaic comes with an AOI (area of interest, polygon shapefile) indicating the areas where labelling was performed; within the extent of this AOI, Pinus canopies are assumed to be completely delineated (by visual interpretation).
For visual inspection of the imagery, we recommend generating image pyramids, since the image data have a very high spatial resolution.
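One possible way to build such pyramids with rasterio (the file name and overview levels are assumptions; the gdaladdo command-line tool achieves the same result):

```python
import rasterio
from rasterio.enums import Resampling

# Build internal overviews (pyramids) so GIS viewers can render the
# orthomosaic quickly at coarse zoom levels.
with rasterio.open("pinus_orthomosaic.tif", "r+") as src:  # hypothetical file
    src.build_overviews([2, 4, 8, 16, 32], Resampling.average)
    src.update_tags(ns="rio_overview", resampling="average")
```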
Details on the dataset are mentioned in the corresponding publication:
Kattenborn, T., Lopatin, J., Förster, M., Braun, A. C., & Fassnacht, F. E. (2019). UAV data as alternative to field sampling to map woody invasive species based on combined Sentinel-1 and Sentinel-2 data. Remote sensing of environment, 227, 61-73.
https://doi.org/10.1016/j.rse.2019.03.025
https://www.sciencedirect.com/science/article/abs/pii/S0034425719301166
https://www.datainsightsmarket.com/privacy-policy
The UAV Data Analysis Platform market is experiencing robust growth, driven by the increasing adoption of unmanned aerial vehicles (UAVs) across diverse sectors like agriculture, construction, and infrastructure monitoring. The market's expansion is fueled by the need for efficient data processing and insightful analytics derived from UAV imagery and sensor data. This demand is further accelerated by advancements in artificial intelligence (AI) and machine learning (ML) technologies, enabling automated data analysis and the generation of actionable insights for improved decision-making. We estimate the 2025 market size to be around $500 million, considering the substantial investments in drone technology and the rising demand for efficient data management solutions. A Compound Annual Growth Rate (CAGR) of 15% is projected from 2025 to 2033, indicating a significant market expansion over the forecast period. This growth trajectory is underpinned by the continuous development of more sophisticated UAVs with enhanced sensor capabilities, leading to a larger volume of data needing analysis. Furthermore, the rising affordability of data analysis platforms and the increasing availability of skilled professionals are contributing factors to this market expansion. However, market growth is not without its challenges. High initial investment costs for both UAVs and sophisticated analysis platforms can act as a barrier to entry for smaller companies. Data security and privacy concerns surrounding the collection and analysis of aerial imagery also present potential restraints. Furthermore, regulatory hurdles and varying standards across different geographies can impede the seamless deployment and operation of UAVs, thus indirectly affecting the market for analysis platforms. The market is segmented by application (agriculture, infrastructure, surveying etc.), deployment (cloud, on-premise), and component (software, hardware). Key players like Topcon Positioning Systems, DroneDeploy, and Percepto are actively shaping the market landscape through continuous innovation and strategic partnerships. The market's future hinges on addressing these challenges while capitalizing on the continuous technological advancements in UAV technology and AI-powered data analytics.
Spatially and temporally high-resolution data were acquired with multispectral sensors mounted on UAV and gyrocopter platforms for the purpose of classification. The work was part of the research and development project 'Modern sensors and airborne remote sensing for the mapping of vegetation and hydromorphology along Federal waterways in Germany' (mDRONES4rivers), a cooperation of the German Federal Institute of Hydrology (BfG), Geocoptix GmbH, Hochschule Koblenz, and JB Hyperspectral Devices. Within the project period (2019-2022), data were collected at different sites in Germany along the Rivers Rhine and Oder. All published data produced within the project can be found by searching for the keyword 'mDRONES4rivers'.

In this dataset, the following UAS data and metadata of the project site 'Niederwerth' (center coordinates [WGS84]: 50.386326°N, 7.613847°E; area: 27 ha) at the Rhine River in Germany are available for download:
- Multispectral orthophotos (GeoTiff; 6 bands: B, G, R, Red-Edge, NIR, Flag; camera: Micasense; resolution: 25 cm; abbreviation: MS_RAW)
- RGB orthophotos (GeoTiff; 3 bands: R, G, B; camera: Phantom; resolution: 25 cm; abbreviation: PH_ORTHO)
- Digital Surface Models (GeoTiff; 1 band; camera: Phantom; resolution: approx. 5 cm; abbreviation: PH_DEM)
- Associated Technical Reports (PDF; technical metadata concerning data acquisition and processing using Agisoft Metashape; one for the multispectral orthophotos, one for the RGB orthophotos plus digital surface model)

The above-mentioned files are provided for download as one dataset directory per season, depending on the date of data acquisition (e.g. mDRONES4rivers_NW_2019_01_Winter.zip = projectname_projectsite_year_no.season_name.season). To provide an overview of all files and general background information plus data previews, the following files are additionally provided:
- Overview table and metadata of the above-mentioned data (xlsx)
- Summary (PDF; detailed description of sensors and data acquisition procedure; one for the multispectral orthophotos, one for the RGB orthophotos plus digital surface models)

Note: The data were processed with a focus on spectral information, not for geodetic purposes. Georeferencing accuracy has not been checked in detail.