Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Train Yolov5 is a dataset for object detection tasks - it contains Sawit annotations for 1,917 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Flir Train Yolov5 is a dataset for object detection tasks - it contains Objects annotations for 2,000 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
YoloV5 Train Dataset V2 is a dataset for object detection tasks - it contains Letters annotations for 704 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
This dataset was created by Alexey Poddiachyi
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Yolov5 Custom Training is a dataset for object detection tasks - it contains H annotations for 356 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset was created by Dino Wun
The dataset includes image files and the appropriate annotations to train a YOLO v5 detector. It is separated into two versions: 1. with 4 classes only, and 2. with all 43 classes.
Before training, edit the dataset.yaml file and specify the appropriate path there 👇
```yaml
# The root directory of the dataset
# (!) Update the root path according to your location
path: ..\..\Downloads\ts_yolo_v5_format\ts4classes
train: images\train\  # train images (relative to 'path')
val: images\validation\  # val images (relative to 'path')
test: images\test\  # test images (relative to 'path')

# Number of classes and their names
nc: 4
names: ['prohibitory', 'danger', 'mandatory', 'other']
```
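As a quick sanity check before training, here is a minimal sketch (not part of the dataset; it assumes PyYAML is installed and the layout above) that loads dataset.yaml and verifies the split directories and class count:

```python
# Minimal sketch: validate dataset.yaml before a YOLOv5 run.
# Assumes PyYAML is installed and the config matches the example above.
from pathlib import Path
import yaml

cfg = yaml.safe_load(Path("dataset.yaml").read_text())
root = Path(cfg["path"])
for split in ("train", "val", "test"):
    # Strip trailing slashes/backslashes from the split entry
    split_dir = root / str(cfg[split]).rstrip("\\/")
    count = len(list(split_dir.glob("*.jpg"))) if split_dir.exists() else 0
    print(f"{split}: {split_dir} ({count} .jpg images)")

assert cfg["nc"] == len(cfg["names"]), "nc must match the number of class names"
```

Training is then typically launched from the YOLOv5 repository with `python train.py --data dataset.yaml --weights yolov5s.pt`.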
https://www.youtube.com/watch?v=-bU0ZBbG8l4
https://www.udemy.com/course/yolo-v5-label-train-and-test
Have a look at the abilities that you will obtain:
- 📢 Run YOLO v5 to detect objects on image, video, and in real time by camera in the first lectures.
- 📢 Label-Create-Convert your own dataset in YOLO format.
- 📢 Train & Test both in your local machine and in the cloud machine (with custom data and a few lines of code).
![Concept map of the YOLO v5 course](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F3400968%2Fac1893f68be61efb21e376b3c405147c%2Fconcept_map_YOLO_v5.png?generation=1701165575909796&alt=media)
https://www.udemy.com/course/yolo-v5-label-train-and-test
The initial data is the German Traffic Sign Recognition Benchmark (GTSRB).
This dataset was created by Khubchandani
This dataset is a modified version of the xView1 dataset, specifically tailored for seamless integration with YOLOv5 in Google Colab. The xView1 dataset originally consists of high-resolution satellite imagery labeled for object detection tasks. In this adapted version, we have preprocessed the data and organized it to facilitate easy usage with YOLOv5, a popular deep learning framework for object detection.
Images: The dataset includes a collection of high-resolution satellite images covering diverse geographic locations. These images have been resized and preprocessed to align with the requirements of YOLOv5, ensuring efficient training and testing.
Annotations: Object annotations are provided for each image, specifying the bounding boxes and class labels of various objects present in the scenes. The annotations are formatted to match the YOLOv5 input specifications.
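To make the YOLOv5 input specification concrete, here is a minimal sketch (my own helper, not part of the dataset tooling) that converts one pixel-space box to a YOLOv5 label line:

```python
# Minimal sketch: convert a pixel-space box (xmin, ymin, xmax, ymax) to a
# YOLOv5 label line "class cx cy w h", with all coordinates normalized to [0, 1].
def to_yolo_line(cls_id: int, xmin: float, ymin: float, xmax: float, ymax: float,
                 img_w: int, img_h: int) -> str:
    cx = (xmin + xmax) / 2 / img_w   # box center x, normalized
    cy = (ymin + ymax) / 2 / img_h   # box center y, normalized
    w = (xmax - xmin) / img_w        # box width, normalized
    h = (ymax - ymin) / img_h        # box height, normalized
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line(0, 100, 200, 300, 400, 1280, 720))
# -> "0 0.156250 0.416667 0.156250 0.277778"
```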
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Sports Analytics: YoloV5 can be used to automate analytics in sports games, particularly in games like badminton. By identifying individual players and objects like the net and shuttlecock, the model could track player movements, interactions with the shuttlecock, and count the number of times the shuttlecock hits the net.
Training and Coaching: The model can assist coaches in understanding their players' performance better by monitoring their footwork, strategy implementation, speed, and other performance metrics during practice sessions or matches.
Gaming & Virtual Reality: The model could be applied in the development of interactive sports video games or VR simulations, where real-world actions of players are captured and transformed into in-game movements.
Sports Equipment Testing: Companies could use the model during the quality testing phase of sports equipment—like rackets and shuttlecocks—by tracking the movement and response of the equipment under various conditions.
Sports Broadcasting and Journalism: This model could be used to aid sports journalists and broadcasters by automatically generating statistics and key highlights of the game (e.g., number of net hits, shuttlecock speed and trajectory, player positioning) in real-time, making covering, analyzing, and summarizing games more efficient.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains images of an artificial flower platform with different insects sitting on it or flying above it. All images were automatically recorded with the Insect Detect DIY camera trap, a hardware combination of the Luxonis OAK-1, Raspberry Pi Zero 2 W and PiJuice Zero pHAT for automated insect monitoring (bioRxiv preprint).

Classes

The following object classes were annotated in this dataset:
- wasp (mostly Vespula sp.)
- hbee (Apis mellifera)
- fly (mostly Brachycera)
- hovfly (various Syrphidae, e.g. Episyrphus balteatus)
- other (all Arthropods with insufficient occurrences, e.g. various Hymenoptera, true bugs, beetles)
- shadow (shadows of the recorded insects)

View the Health Check for more info on class balance.

Versions
- v4 insect_detect_416_1class: squashed to square (aspect ratio 1:1), downscaled to 416x416 pixel, all classes merged into one class ("insect")
- v5 insect_detect_raw_4K: original images in 4K resolution (3840x2160 pixel)
- v7 insect_detect_320_1class: squashed to square (aspect ratio 1:1), downscaled to 320x320 pixel, all classes merged into one class ("insect")

Deployment

You can use this dataset as a starting point to train your own insect detection models. Check the model training instructions for more information. Open source Python scripts to deploy the trained models can be found at the insect-detect GitHub repo.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
S1 File is the supporting information for Fig 11. In it, results_for_YOLOv5.csv is the data of YOLOv5 during the training process, and results_for_SCB-YOLOV5.csv is the data of SCB-YOLOv5 during the training process. In the data, the first column is the training epoch number, columns 2-4 are the changes in the value of each loss function during training, columns 5-7 are the changes in precision, recall, and mAP@0.5, columns 8-10 are the changes in the value of each loss function during validation, and the last 3 columns are the changes in learning rate. (ZIP)
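For inspecting these files, a hedged sketch (not part of S1 File) that reads a results CSV by column position, following the 1-indexed layout described above; actual header names in the CSV may differ:

```python
# Hedged sketch: plot precision, recall and mAP@0.5 from results_for_YOLOv5.csv.
# Column positions follow the description above (1-indexed columns 5-7).
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results_for_YOLOv5.csv")
epochs = df.iloc[:, 0]                      # column 1: training epoch number
for idx, label in zip((4, 5, 6), ("precision", "recall", "mAP@0.5")):
    plt.plot(epochs, df.iloc[:, idx], label=label)
plt.xlabel("epoch")
plt.legend()
plt.show()
```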
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An object detection dataset used for training, validating and evaluating sea urchin detection models. This dataset contains 9,872 images and over 44,000 annotations belonging to three urchin species from a variety of locations around New Zealand and Australia.
Complete_urchin_dataset.csv contains a full list of images in the dataset and the corresponding bounding boxes and additional metadata, including image source, campaign/deployment names, latitude, longitude, depth, altitude, time stamps and more. High_conf_clipped_dataset.csv is a preprocessed version of the complete dataset that removes annotations with low annotator confidence scores (< 0.7), removes annotations flagged for review, and clips all bounding boxes to fit within the bounds of the images.
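That preprocessing can be reproduced roughly as below; this is a hedged sketch, not the authors' code, and the column names (confidence, flagged, x0, y0, x1, y1, width, height) are assumptions about the CSV schema:

```python
# Hedged sketch of the described preprocessing; column names are assumptions.
import pandas as pd

df = pd.read_csv("Complete_urchin_dataset.csv")
df = df[(df["confidence"] >= 0.7) & (~df["flagged"])]  # drop low-confidence and flagged rows
df["x0"] = df["x0"].clip(lower=0)                      # clip boxes to image bounds
df["y0"] = df["y0"].clip(lower=0)
df["x1"] = df["x1"].clip(upper=df["width"])
df["y1"] = df["y1"].clip(upper=df["height"])
df.to_csv("High_conf_clipped_dataset.csv", index=False)
```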
Run the download_images.py script to download all the images from the URLs in the csv files.
Labels.zip (YOLOv5 formatted txt bounding box label files), yolov5_dataset.yaml (YOLOv5 dataset configuration file) and train/val/test.txt (training, validation and test splits) can be used to train a YOLOv5 object detection model on this dataset.
See https://github.com/kraw084/Urchin-Detector for code, models and more documentation relating to this dataset.
Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/
✊ ✋ ✌️
I want to create an easy game by leveraging the power of AI and Deep Learning. Hence I came up with this idea.
The train_data directory contains images and labels directories, each of which is divided into train and valid subdirectories. The train and valid folders under images contain .jpg images for training and validation; the train and valid folders under labels contain the corresponding .txt annotation files.
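Sketched as a tree, that layout looks like this:

```
train_data/
├── images/
│   ├── train/   # .jpg training images
│   └── valid/   # .jpg validation images
└── labels/
    ├── train/   # .txt training labels
    └── valid/   # .txt validation labels
```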
Images taken from https://www.kaggle.com/alishmanandhar/rock-scissor-paper. I labelled them myself using makesense.ai.
Use this dataset to create a powerful image detection model and develop the famous game of rock paper scissors that we all once enjoyed.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Sonar Train is a dataset for object detection tasks - it contains All annotations for 7,583 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Himalayan Statues Bounding‑Box Dataset
Images: 480 train, 50 val
Label format: YOLO v5/v8 (class cx cy w h, all 0‑1)
Class list:

| id | name   |
|----|--------|
| 0  | statue |
Boxes were generated with a dual‑mask heuristic (background vs. colour contrast) and spot‑checked manually. License: CC‑BY‑4.0.
For training examples see the Ultralytics section below.
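As an illustration of the label format above, here is a minimal sketch (a hypothetical helper, not part of the dataset) that parses one YOLO label line back into pixel coordinates:

```python
# Minimal sketch: convert a YOLO label line ("class cx cy w h", all in [0, 1])
# back to pixel-space corner coordinates for a given image size.
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    cls_id, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    xmin = (cx - w / 2) * img_w
    ymin = (cy - h / 2) * img_h
    return int(cls_id), xmin, ymin, xmin + w * img_w, ymin + h * img_h

print(yolo_to_pixels("0 0.5 0.5 0.25 0.4", 640, 480))
# -> (0, 240.0, 144.0, 400.0, 336.0)
```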
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was originally created by Richard. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/new-workspace-rp0z0/csgo-train-yolo-v5.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
https://public.roboflow.ai/object-detection/chess-full
Provided by Roboflow License: Public Domain
This is a dataset of Chess board photos and various pieces. All photos were captured from a constant angle, a tripod to the left of the board. The bounding boxes of all pieces are annotated as follows: white-king, white-queen, white-bishop, white-knight, white-rook, white-pawn, black-king, black-queen, black-bishop, black-knight, black-rook, black-pawn. There are 2894 labels across 292 images.
![Chess Example](https://i.imgur.com/nkjobw1.png)
Follow this tutorial to see an example of training an object detection model using this dataset or jump straight to the Colab notebook.
At Roboflow, we built a chess piece object detection model using this dataset.
![ChessBoss](https://blog.roboflow.ai/content/images/2020/01/chess-detection-longer.gif)
You can see a video demo of that here. (We did struggle with pieces that were occluded, i.e. the state of the board at the very beginning of a game has many pieces obscured - let us know how your results fare!)
We're releasing the data free on a public license.
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.
Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Workpiece surface defect detection is an indispensable part of intelligent production. The surface information obtained by traditional 2D image detection has some limitations due to the influence of environmental light and part complexity. However, the digital twin model has the characteristics of high fidelity and scalability, and the digital twin surface can be obtained by a device with a scanning accuracy of 0.02 mm to represent the real surface of the workpiece. In this paper, a surface defect detection system for digital twin models is proposed based on an improved YOLOv5 model. Firstly, the digital twin model of the workpiece is reconstructed from the point cloud data obtained by the scanning device, and the surface features with defects are captured. Subsequently, the training dataset is calibrated based on the defect surface, where the defect types include inclusion, perforation, pitting surface, and rolled-in scale. Finally, the improved YOLOv5 model, with a CBAM mechanism and BiFPN module, is used to identify the surface defects of the digital twin model and is compared with the original YOLOv5 model and other common models. The results show that the improved YOLOv5 model can identify and classify surface defects. Compared with the original YOLOv5 model, the mAP value of the improved model increased by 0.2%, and the model has high precision. On the same dataset, the improved YOLOv5 model has higher recognition accuracy than the other models, improving on them by 11.7%, 3.4%, 6.2%, and 33.5%, respectively. As a result, this study provides a practical and systematic detection method for digital twin model surfaces during intelligent production and realizes rapid screening of workpieces with defects.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Personal Protective Equipment Dataset (PPED)
This dataset serves as a benchmark for PPE in chemical plants. We provide datasets and experimental results.
We produced a dataset based on the actual needs and relevant regulations in chemical plants. The standard GB 39800.1-2020, formulated by the Ministry of Emergency Management of the People’s Republic of China, defines the protective requirements for plants and chemical laboratories. The complete dataset is contained in the folder PPED/data.
1.1. Image collection
We took more than 3,300 pictures with the following varied characteristics: different environments, distances, lighting conditions, angles, and numbers of people photographed.
Backgrounds: There are 4 backgrounds: office, near machines, factory, and regular outdoor scenes.
Scale: By taking pictures from different distances, the captured PPEs are classified into small, medium, and large scales.
Light: Both good and poor lighting conditions were studied.
Diversity: Some images contain a single person, and some contain multiple people.
Angle: The pictures can be divided into front and side views.
The raw data comprise more than 3,300 photos taken under all of these conditions. All images are located in the folder PPED/data/JPEGImages.
1.2. Label
We used LabelImg as the labeling tool, with the PASCAL-VOC annotation format. YOLO uses the txt format; the script trans_voc2yolo.py can be used to convert the XML files in PASCAL-VOC format to txt files. Annotations are stored in the folder PPED/data/Annotations.
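For reference, here is a hedged, independent sketch of what such a conversion does; this is not the authors' trans_voc2yolo.py, and the class list is a hypothetical placeholder:

```python
# Independent sketch of a PASCAL-VOC-to-YOLO conversion; CLASSES is hypothetical.
import xml.etree.ElementTree as ET

CLASSES = ["helmet", "goggles", "gloves", "suit"]  # placeholder class names

def voc_to_yolo(xml_path: str) -> list[str]:
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        cls = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        x0, y0, x1, y1 = (float(box.find(k).text)
                          for k in ("xmin", "ymin", "xmax", "ymax"))
        cx, cy = (x0 + x1) / 2 / img_w, (y0 + y1) / 2 / img_h
        w, h = (x1 - x0) / img_w, (y1 - y0) / img_h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
    return lines
```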
1.3. Dataset Features
The pictures were made by us according to the different conditions mentioned above. The file PPED/data/feature.csv is a CSV file which records the features of every image, including lighting conditions, angles, backgrounds, number of people, and scale.
1.4. Dataset Division
The dataset is split 9:1 into a training set and a test set.
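A 9:1 random split can be sketched as follows (a hedged illustration, not the authors' exact procedure):

```python
# Hedged sketch of a 9:1 train/test split over the raw images.
import random
from pathlib import Path

images = sorted(Path("PPED/data/JPEGImages").glob("*.jpg"))
random.seed(0)  # reproducible split
random.shuffle(images)
cut = int(0.9 * len(images))
train_set, test_set = images[:cut], images[cut:]
print(len(train_set), "train /", len(test_set), "test")
```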
We provide baseline results with five models, namely Faster R-CNN (R), Faster R-CNN (M), SSD, YOLOv3-spp, and YOLOv5. All code and results are given in the folder PPED/experiment.
2.1. Environment and Configuration:
Intel Core i7-8700 CPU
NVIDIA GTX1060 GPU
16 GB of RAM
Python: 3.8.10
pytorch: 1.9.0
pycocotools: pycocotools-win
Windows 10
2.2. Applied Models
The source codes and results of the applied models are given in the folder PPED/experiment, with sub-folders corresponding to the model names.
2.2.1. Faster R-CNN
Faster R-CNN
backbone: resnet50+fpn
We downloaded the pre-training weights from https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth.
We modified the dataset path, training classes and training parameters including batch size.
We run train_res50_fpn.py to start training.
Then, the weights are trained by the training set.
Finally, we validate the results on the test set.
backbone: mobilenetv2
The same training method as resnet50+fpn was applied, but the effect was not as good as resnet50+fpn, so it was discarded.
The Faster R-CNN source code used in our experiment is given in the folder PPED/experiment/Faster R-CNN. The weights of the fully-trained Faster R-CNN (R) and Faster R-CNN (M) models are stored in the files PPED/experiment/trained_models/resNetFpn-model-19.pth and mobile-model.pth. The performance measurements of Faster R-CNN (R) and Faster R-CNN (M) are stored in the folders PPED/experiment/results/Faster RCNN(R) and Faster RCNN(M).
2.2.2. SSD
backbone: resnet50
We downloaded pre-training weights from https://download.pytorch.org/models/resnet50-19c8e357.pth.
The same training method as Faster R-CNN is applied.
The SSD source code used in our experiment is given in folder PPED/experiment/ssd. The weights of the fully-trained SSD model are stored in file PPED/experiment/trained_models/SSD_19.pth. The performance measurements of SSD are stored in folder PPED/experiment/results/SSD.
2.2.3. YOLOv3-spp
backbone: DarkNet53
We modified the type information of the XML file to match our application.
We run trans_voc2yolo.py to convert the XML file in VOC format to a txt file.
The weights used are: yolov3-spp-ultralytics-608.pt.
The YOLOv3-spp source code used in our experiment is given in folder PPED/experiment/YOLOv3-spp. The weights of the fully-trained YOLOv3-spp model are stored in file PPED/experiment/trained_models/YOLOvspp-19.pt. The performance measurements of YOLOv3-spp are stored in folder PPED/experiment/results/YOLOv3-spp.
2.2.4. YOLOv5
backbone: CSP_DarkNet
We modified the type information of the XML file to match our application.
We run trans_voc2yolo.py to convert the XML file in VOC format to a txt file.
The weights used are: yolov5s.
The YOLOv5 source code used in our experiment is given in folder PPED/experiment/yolov5. The weights of the fully-trained YOLOv5 model are stored in file PPED/experiment/trained_models/YOLOv5.pt. The performance measurements of YOLOv5 are stored in folder PPED/experiment/results/YOLOv5.
2.3. Evaluation
The computed evaluation metrics as well as the code needed to compute them from our dataset are provided in the folder PPED/experiment/eval.
Faster R-CNN (R and M)
official code: https://github.com/pytorch/vision/blob/main/torchvision/models/detection/faster_rcnn.py
SSD
official code: https://github.com/pytorch/vision/blob/main/torchvision/models/detection/ssd.py
YOLOv3-spp
YOLOv5