License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Dataset Training Yolov4 is a dataset for object detection tasks - it contains Person annotations for 937 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Darknet YOLOv4 trained weights and configuration file for the VinBigData Chest X-ray dataset (https://www.kaggle.com/c/vinbigdata-chest-xray-abnormalities-detection), following the small-object detection steps at https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
The entire training ran for 50,000 batches on my GTX 1660 Ti and took one week to complete.
https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
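The custom-training guide linked above boils down to editing a handful of lines in `yolov4-custom.cfg`. The excerpt below is an illustrative sketch, not the author's actual file; the class count (the 14 abnormality classes of the VinBigData challenge) and hence the `filters` value are assumptions used to show the `(classes + 5) * 3` rule from the README:

```ini
# yolov4-custom.cfg (excerpt) -- illustrative values only
[net]
batch=64
subdivisions=16
width=608          # higher input resolution helps small objects
height=608
max_batches=50000  # matches the 50,000 batches trained above
steps=40000,45000  # 80% / 90% of max_batches

# In each of the three [yolo] layers and the [convolutional] just before it:
[convolutional]
filters=57         # (classes + 5) * 3 = (14 + 5) * 3, assuming 14 classes
[yolo]
classes=14
```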
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
## Overview
Uav Yolov4 Train is a dataset for object detection tasks - it contains Uav annotations for 3,589 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: MIT License - https://opensource.org/licenses/MIT
This dataset contains 8,992 images of Uno cards and 26,976 labeled examples on various textured backgrounds.
This dataset was collected, processed, and released by Roboflow user Adam Crawshaw, released with a modified MIT license: https://firstdonoharm.dev/
![Image example](https://i.imgur.com/P8jIKjb.jpg)
Adam used this dataset to create an auto-scoring Uno application:
Fork or download this dataset and follow our guide on how to train a state-of-the-art YOLOv4 object detector for more.
See here for how to use the CVAT annotation tool.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.

License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Implementing and deploying advanced technologies is central to improving manufacturing processes, marking a transformative stride for the industrial sector. Computer vision plays a crucial role in this technological advancement, demonstrating broad applicability and profound impact across various industrial operations. It is not merely an additive enhancement but a revolutionary approach that redefines quality control, automation, and operational efficiency in manufacturing. By integrating computer vision, industries can significantly optimize their current processes and spearhead innovations that may set new standards for future industrial endeavors. However, integrating computer vision in these contexts requires comprehensive training programs for operators, given the complexity and abstract nature of these systems. Historically, training modalities have struggled with concepts as advanced as computer vision. Despite these challenges, computer vision has recently surged to the forefront across disciplines, owing to its versatility and performance that often matches or exceeds that of established technologies. Nonetheless, there is a noticeable knowledge gap among students, particularly in understanding how Artificial Intelligence (AI) is applied within computer vision. This disconnect underscores the need for an educational paradigm that transcends traditional theoretical instruction and cultivates a practical understanding of the symbiotic relationship between AI and computer vision. To address this, the current work proposes a project-based instructional approach to bridge this educational divide, enabling students to engage directly with the practical aspects of computer vision applications within AI.
Guided through a hands-on project, students will learn how to effectively utilize a dataset, train an object detection model, and deploy it on a microcomputer infrastructure. This immersive experience is intended to bolster theoretical knowledge and provide a practical understanding of deploying AI techniques within computer vision. The main goal is to equip students with a robust skill set that translates into practical acumen, preparing a competent workforce to navigate and innovate in the complex landscape of Industry 4.0. This approach emphasizes the criticality of adapting educational strategies to meet the evolving demands of advanced technological infrastructures, ensuring that emerging professionals are adept at harnessing the potential of transformative tools like computer vision in industrial settings.
License: CC0 1.0 Universal Public Domain Dedication - https://creativecommons.org/publicdomain/zero/1.0/
Two object detection models using Darknet/YOLOv4 were trained on images of the coral Desmophyllum pertusum from the Kosterhavet National Park. For one of the models, the training image data was augmented with synthetic images produced by StyleGAN2 generative modeling. The dataset contains 2,266 synthetic images with labels and 409 original images of corals used for training the ML models. Also included are the YOLOv4 models and the StyleGAN2 network.
The still images were extracted from raw video data collected using a remotely operated underwater vehicle. 409 JPEG images from the raw video data are provided in 720x576 resolution. In certain images, coordinates visible in the OSD have been cropped. The synthetic images are PNG files in 512x512 resolution. The StyleGAN2 network is included as a serialized pickle file (*.pkl). The object detection models are provided in the .weights format used by the Darknet/YOLOv4 package. Two files are included (trained on original images only, trained on original + synthetic images).
The machine learning software packages used are currently (2022) available on GitHub: StyleGAN2 (https://github.com/NVlabs/stylegan2) and YOLOv4 (https://github.com/AlexeyAB/darknet).
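As a sketch of how the original and synthetic images could be combined for the augmented training run (Darknet reads one image path per line from a train list; the directory names below are hypothetical, not the dataset's actual layout):

```python
import random

def build_train_list(original, synthetic, out_path, seed=0):
    """Combine original and synthetic image paths into one shuffled
    Darknet-style train list (one image path per line)."""
    paths = list(original) + list(synthetic)
    random.Random(seed).shuffle(paths)
    with open(out_path, "w") as f:
        f.write("\n".join(paths) + "\n")
    return len(paths)

# Hypothetical file names; the real dataset ships 409 JPEGs and 2,266 PNGs.
original = [f"data/original/{i:04d}.jpg" for i in range(409)]
synthetic = [f"data/synthetic/{i:04d}.png" for i in range(2266)]
n = build_train_list(original, synthetic, "train.txt")
# n == 2675
```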
License: MIT License - https://opensource.org/licenses/MIT
This dataset contains 74 aerial maritime photographs taken with a Mavic Air 2 drone, with 1,151 bounding boxes covering docks, boats, lifts, jet skis, and cars. It is a multi-class aerial maritime object detection problem.
The drone was flown at 400 ft. No drones were harmed in the making of this dataset.
This dataset was collected and annotated by the Roboflow team, released with MIT license.
![Image example](https://i.imgur.com/9ZYLQSO.jpg)
This dataset is a great starter dataset for building an aerial object detection model with your drone.
Fork or download this dataset and follow our guide on how to train a state-of-the-art YOLOv4 object detector for more. Stay tuned for tutorials on teaching your UAV drone to see, plus comparable airplane imagery and airplane footage.
See here for how to use the CVAT annotation tool that was used to create this dataset.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless. Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.

License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Hyperparameters of the YOLOv4 network for training on the CBCD dataset.
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
This dataset is related to the paper "Automatic Detection and Recognition Method of Chinese Clay Tiles Based on YOLOv4: A Case Study in Macau". For detailed usage, please refer to our research paper.
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Automobile intelligence is the trend in modern automobiles, and environment perception is the key technology of intelligent-vehicle research. For autonomous vehicles, detecting object information such as vehicles and pedestrians in traffic scenes is crucial to improving driving safety. However, actual traffic scenes involve many special conditions, such as object occlusion, small objects, and bad weather, which affect the accuracy of object detection. This research proposes the SwinT-YOLOv4 algorithm, based on YOLOv4, for detecting objects in traffic scenes. Compared with a convolutional neural network (CNN), the vision transformer is more powerful at extracting visual features of objects in an image. In the proposed algorithm, the CNN-based backbone of YOLOv4 is replaced by the Swin Transformer, while the feature-fusion neck and prediction head of YOLOv4 are retained. The proposed model was trained and evaluated on the COCO dataset. Experiments show that our method significantly improves object detection accuracy under special conditions: equipped with our method, detection precision for cars and persons improves by 1.75%, reaching 89.04% and 94.16%, respectively.
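Not the authors' code, but the window partitioning at the heart of the Swin Transformer can be sketched with simple arithmetic: self-attention is computed inside fixed-size local windows, so the feature map is padded up to a multiple of the window size and attention cost grows linearly with image area rather than quadratically.

```python
def num_windows(h, w, window=7):
    """Number of non-overlapping local-attention windows a Swin-style
    layer uses on an h x w feature map (the map is padded up to a
    multiple of the window size)."""
    ph = -(-h // window) * window  # height padded up to a multiple of window
    pw = -(-w // window) * window  # width padded up to a multiple of window
    return (ph // window) * (pw // window)

# A 56x56 feature map with 7x7 windows -> 64 windows of 49 tokens each.
```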
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
This upload contains the following:
Further details, documentation and information on the project can be found in the corresponding Github Repo and publication. If you have any questions concerning these datasets, feel free to contact the corresponding author, Jérôme Rutinowski.
This work is part of the project "Silicon Economy Logistics Ecosystem" which is funded by the German Federal Ministry of Transport and Digital Infrastructure.
License: MIT License - https://opensource.org/licenses/MIT
Install the TAO Toolkit using pip and make sure to pull its Docker container with the GPU runtime (if using Colab or a similar service); otherwise, the operation will fail. It is a relatively large image (21 GB), so it will take a while to download. Training YOLOv4 Tiny on the WIDER FACE dataset using the NVIDIA TAO Toolkit. TAO YOLOv4 Tiny requires the input image shape to be a multiple of 32; therefore, the images were resized to 768 x 768 and converted to PNG format. Could not find the pretrained… See the full description on the dataset page: https://huggingface.co/datasets/tahirishaq10/widerface_kitti.
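The multiple-of-32 constraint can be checked with a small helper (the 768 x 768 target above is 24 * 32; the function below is an illustrative sketch, not part of the TAO Toolkit):

```python
def round_up_to_multiple(x, base=32):
    """Smallest multiple of `base` greater than or equal to x."""
    return ((x + base - 1) // base) * base

# 768 is already a multiple of 32; a 720-wide frame would round up to 736.
# round_up_to_multiple(768) == 768
# round_up_to_multiple(720) == 736
```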
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Key hyperparameters used in training YOLOv4-Darknet and Faster R-CNN-ResNet50 models, including batch size, learning rate, and optimizer settings applied in sedimentary structure detection tasks.
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Summary of YOLOv4 model performance across three data splits, including TP, FP, FN, precision, recall, F1-score, average IoU, and mean average precision (mAP).
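For reference, the listed metrics derive from raw TP/FP/FN counts and box overlaps; a minimal sketch of the standard definitions, not tied to this particular dataset:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def iou(a, b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    return inter / (area(a) + area(b) - inter)
```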
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Table 7: Detection of infected red blood cells on Dataset A using the original YOLOv4 and modified models
| Modifications | Model | Precision (%) | Recall rate (%) | F1-score (%) | mAP (%) | Training time (h) | Inference time (per image) (ms) | B-FLOPS | Size (MB) |
|---|---|---|---|---|---|---|---|---|---|
| Original | YOLOv4 | 84 | 95 | 89 | 93.87 | 48 | 726.66 | 59.57 | 244.40 |
| Residual block pruning | YOLOv4-RC3 | 84 | 92 | 88 | 91.65 | 35 | 678.53 | 47.59 | 242.40 |
| | YOLOv4-RC4 | 83 | 92 | 87 | 92.84 | 37 | 703.82 | 51.21 | 233.20 |
| | YOLOv4-RC5 | 85 | 89 | 87 | 92.47 | 37 | 704.48 | 57.61 | 222.10 |
| | YOLOv4-RC3_4 | 83 | 89 | 86 | 88.09 | 32 | 676.18 | 37.35 | 221.50 |
| | YOLOv4-RC3_5 | 77 | 77 | 77 | 76.56 | 32.5 | 680.01 | 45.64 | 220.40 |
| Backbone replacement | YOLOv4-ResNet-50L | 70 | 84 | 76 | 79.70 | 28 | 719.50 | 37.33 | 209.30 |
| | YOLOv4-ResNet-50M | 74 | 86 | 80 | 81.43 | 28 | 884.82 | 37.33 | 209.30 |
B-FLOPS: billion floating-point operations; F1-score: balance between precision and recall; mAP: mean average precision.
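The table's F1 column is consistent with its precision and recall columns under F1 = 2PR/(P + R), which can be spot-checked:

```python
def f1(p, r):
    """Harmonic mean of precision and recall (same units as the inputs)."""
    return 2 * p * r / (p + r)

# Original YOLOv4 row: precision 84, recall 95 -> F1 ~ 89.2 (table: 89)
# YOLOv4-RC3 row:      precision 84, recall 92 -> F1 ~ 87.8 (table: 88)
# YOLOv4-RC3_5 row:    precision 77, recall 77 -> F1 = 77   (table: 77)
```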
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
The identification and classification of blood cells are essential for diagnosing and managing various haematological conditions. Haematology analysers typically perform full blood counts but often require follow-up tests such as blood smears. Traditional methods like stained blood smears are laborious and subjective. This study explores the application of artificial neural networks for rapid, automated, and objective classification of major blood cell types from unstained brightfield images. The YOLO v4 object detection architecture was trained on datasets comprising erythrocytes, echinocytes, lymphocytes, monocytes, neutrophils, and platelets imaged using a microfluidic flow system. Binary classification between erythrocytes and echinocytes achieved a network F1 score of 86%. Expanding to four classes (erythrocytes, echinocytes, leukocytes, platelets) yielded a network F1 score of 85%, with some misclassified leukocytes. Further separating leukocytes into lymphocytes, monocytes, and neutrophils, while also increasing the dataset size and tweaking model parameters, resulted in a network F1 score of 84.1%. Most importantly, the neural network's performance was comparable to that of flow cytometry and haematology analysers when tested on donor samples. These findings demonstrate the potential of artificial intelligence for high-throughput morphological analysis of unstained blood cells, enabling rapid screening and diagnosis. Integrating this approach with microfluidics could streamline conventional techniques and provide a fast automated full blood count with morphological assessment without the requirement for sample handling. Further refinements by training on abnormal cells could facilitate early disease detection and treatment monitoring.
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Deep Plastic
Information:
Object Detection Model
Google Colab Links
Note: click File > Save a copy in Drive. If you try to edit my file, it will ask you for permission and send me an email; please make your own copy.
DeepTrash DataSet
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Comparative performance of YOLOv4-Darknet and Faster R-CNN-ResNet50 across key metrics: precision, recall, F1-score, Average IoU, mAP, and inference time.
License: Attribution 4.0 (CC BY 4.0) - https://creativecommons.org/licenses/by/4.0/
Class-wise average precision comparison between YOLOv4-Darknet and Faster R-CNN-ResNet50, including percentage representation in the dataset to contextualize model performance across sedimentary and non-rock classes.
License: Open Data Commons Database Contents License (DbCL) v1.0 - https://opendatacommons.org/licenses/dbcl/1-0/
The ABU Robocon 2021 Pot is a collection of labeled RGB images, expertly organized in the YOLOv4 style, providing a unique and valuable resource for researchers and enthusiasts in the field of robotics and computer vision. This dataset comprises 1552 images in labeled split, with 1304 images meticulously marked to precisely identify the pot class, and an additional 322 images in the negativeimages_raw split, which may have been utilized for further experimentation and training. This comprehensive dataset is poised to empower future Robocon contestants and researchers, equipping them with the tools needed to tackle the distinctive challenges presented by the ABU Robocon Pot in the context of object detection and robotic competition.
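Labels "in the YOLOv4 style" are plain-text files with one normalized box per line. A sketch of the pixel-to-label conversion (the pot coordinates below are made up for illustration):

```python
def to_yolo_label(cls, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box to a YOLO/Darknet label line:
    '<class> <cx> <cy> <w> <h>' with all coordinates normalized to [0, 1]."""
    cx = (x1 + x2) / 2 / img_w   # box center x, as a fraction of image width
    cy = (y1 + y2) / 2 / img_h   # box center y, as a fraction of image height
    w = (x2 - x1) / img_w        # box width fraction
    h = (y2 - y1) / img_h        # box height fraction
    return f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# A pot occupying pixels (100, 200)-(300, 400) in a 640x480 image:
# to_yolo_label(0, 100, 200, 300, 400, 640, 480)
# -> "0 0.312500 0.625000 0.312500 0.416667"
```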