Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Applications of convolutional neural network (CNN)-based object detectors in agriculture have been a popular research topic in recent years. However, complicated agricultural environments introduce many difficulties for ground truth annotation as well as potential uncertainties in image data quality. Using YOLOv4 as a representative state-of-the-art object detector, this study quantified YOLOv4's sensitivity to artificial image distortions, including white noise, motion blur, hue shift, saturation change, and intensity change, and examined the importance of various training dataset attributes based on model classification accuracies, including dataset size, label quality, negative sample presence, image sequence, and image distortion levels. The YOLOv4 model trained and validated on the original datasets fell to 30% mean average precision (mAP) for four apple flower bud growth stages at 31.91% white noise, 22.05-pixel motion blur, 77.38° clockwise hue shift, 64.81° counterclockwise hue shift, 89.98% saturation decrease, 895.35% saturation increase, 79.80% intensity decrease, and 162.71% intensity increase. YOLOv4 performance decreased with both declining training dataset size and declining training image label quality. Negative samples and training image sequence did not make a substantial difference in model performance. Incorporating distorted images during training improved the classification accuracies of YOLOv4 models on noisy test datasets by 13 to 390%. In the context of apple flower bud growth-stage classification, YOLOv4 is sufficiently robust to the image distortions likely to occur in practice from white noise, hue shift, saturation change, and intensity change, but not from motion blur. Training image label quality and training instance number are more important factors than training dataset size. Exposing models to training images that resemble the test images is crucial for optimal model classification accuracy. The study enhances understanding of implementing object detectors in agricultural research.
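For context, distortions of the kinds listed above can be simulated with standard image-processing operations. The snippet below is a minimal sketch using OpenCV and NumPy; the image path and distortion levels are arbitrary placeholders for illustration, not the thresholds reported in the study.

```python
# Minimal sketch of the artificial distortions described above, using OpenCV
# and NumPy. Levels here are arbitrary illustrations, not the study's values.
import cv2
import numpy as np

img = cv2.imread("flower_bud.jpg")  # placeholder image path

# White (Gaussian) noise
noise = np.random.normal(0, 25, img.shape).astype(np.float32)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

# Horizontal motion blur with a 15-pixel kernel
k = 15
kernel = np.zeros((k, k), np.float32)
kernel[k // 2, :] = 1.0 / k
blurred = cv2.filter2D(img, -1, kernel)

# Hue shift, saturation change, and intensity change in HSV space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[..., 0] = (hsv[..., 0] + 20) % 180            # hue shift (OpenCV hue range is 0-179)
hsv[..., 1] = np.clip(hsv[..., 1] * 1.5, 0, 255)  # saturation increase
hsv[..., 2] = np.clip(hsv[..., 2] * 0.7, 0, 255)  # intensity decrease
recolored = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```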
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains 8,992 images of Uno cards and 26,976 labeled examples on various textured backgrounds.
This dataset was collected, processed, and released by Roboflow user Adam Crawshaw under a modified MIT license: https://firstdonoharm.dev/
Image example: https://i.imgur.com/P8jIKjb.jpg
Adam used this dataset to create an auto-scoring Uno application.
Fork or download this dataset and follow our tutorial on how to train a state-of-the-art YOLOv4 object detector for more.
See here for how to use the CVAT annotation tool.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers using Roboflow's workflow cut their boilerplate code by 50%, save training time, and increase model reproducibility.
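As a rough sketch of what that workflow looks like in code, a Roboflow-hosted dataset can generally be pulled down with the roboflow Python package; the API key, workspace, project name, version number, and export format below are placeholders for illustration, not values taken from this listing.

```python
# Minimal sketch: downloading a Roboflow-hosted dataset with the `roboflow`
# Python package (pip install roboflow). All identifiers below are
# placeholders -- substitute your own API key, workspace, project, and
# preferred export format.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")                              # placeholder key
project = rf.workspace("your-workspace").project("your-project")  # placeholder names
dataset = project.version(1).download("darknet")                  # Darknet/YOLOv4-style export
print(dataset.location)  # local directory containing images and label files
```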
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains 74 aerial maritime photographs taken with a Mavic Air 2 drone and 1,151 bounding boxes spanning five classes: docks, boats, lifts, jet skis, and cars. It is a multi-class aerial and maritime object detection dataset.
The drone was flown at 400 ft. No drones were harmed in the making of this dataset.
This dataset was collected and annotated by the Roboflow team, released with MIT license.
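If the labels for a dataset like this are exported in Darknet/YOLO text format (one .txt file per image with normalized "class x_center y_center width height" rows), a per-class box count can be tallied with a short script such as the sketch below; the directory path and class-name ordering are assumptions for illustration, not part of this listing.

```python
# Sketch: tally bounding boxes per class from Darknet/YOLO-format label files.
# Assumes one .txt per image with rows of "class_id x y w h" (normalized).
# The path and class names below are placeholders for illustration.
from collections import Counter
from pathlib import Path

CLASS_NAMES = ["dock", "boat", "lift", "jetski", "car"]  # assumed ordering
label_dir = Path("aerial-maritime/train/labels")          # placeholder path

counts = Counter()
for label_file in label_dir.glob("*.txt"):
    for line in label_file.read_text().splitlines():
        if line.strip():
            class_id = int(line.split()[0])  # first field is the class index
            counts[CLASS_NAMES[class_id]] += 1

for name, n in counts.most_common():
    print(f"{name}: {n} boxes")
```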
Image example: https://i.imgur.com/9ZYLQSO.jpg
This dataset is a great starter dataset for building an aerial object detection model with your drone.
Fork or download this dataset and follow our tutorial on how to train a state-of-the-art YOLOv4 object detector for more. Stay tuned for tutorials on how to teach your UAV drone to see, and for comparable airplane imagery and airplane footage.
See here for how to use the CVAT annotation tool that was used to create this dataset.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers using Roboflow's workflow cut their boilerplate code by 50%, save training time, and increase model reproducibility.