Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset consists of 300 synthetic images featuring water-sensitive paper. It was generated using an automated algorithm that creates individual droplets and positions them against a yellow artificial background. Annotations for instance segmentation of each droplet are stored in a text file formatted according to the YOLOv8 annotation format.
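As a reference, here is a minimal sketch of how such an annotation file could be parsed, assuming the standard Ultralytics YOLOv8 segmentation layout (one line per droplet: a class index followed by normalized polygon x/y pairs). The file name is hypothetical.

```python
# Sketch: parse a YOLOv8-style instance segmentation label file.
# Assumes lines of the form "<class_id> x1 y1 x2 y2 ..." with coordinates in [0, 1].
from pathlib import Path

def load_yolov8_segments(label_path):
    """Return a list of (class_id, [(x, y), ...]) tuples, one per droplet."""
    instances = []
    for line in Path(label_path).read_text().splitlines():
        parts = line.split()
        if not parts:
            continue
        class_id = int(parts[0])
        coords = list(map(float, parts[1:]))
        polygon = list(zip(coords[0::2], coords[1::2]))  # (x, y) pairs
        instances.append((class_id, polygon))
    return instances

# Example usage (path is illustrative only):
# droplets = load_yolov8_segments("labels/image_000.txt")
```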
Some key features include:
The dataset is organized in two folders:
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The EgoHands dataset is a collection of 4800 annotated images of human hands from a first-person view originally collected and labeled by Sven Bambach, Stefan Lee, David Crandall, and Chen Yu of Indiana University.
The dataset was captured via frames extracted from video recorded through head-mounted cameras on a Google Glass headset while the wearers performed four activities: building a puzzle, playing chess, playing Jenga, and playing cards. There are 100 labeled frames for each of 48 video clips.
The original EgoHands dataset was labeled with polygons for segmentation and released in a Matlab binary format. We converted it to an object detection dataset using a modified version of this script from @molyswu and have archived it in many popular formats for use with your computer vision models.
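The conversion step itself is simple: a segmentation polygon can be collapsed into an axis-aligned bounding box by taking the minimum and maximum of its vertex coordinates. The sketch below illustrates the idea under the assumption of YOLO-style normalized output; it is not the original conversion script.

```python
def polygon_to_yolo_bbox(polygon, img_w, img_h):
    """Collapse a segmentation polygon into a normalized YOLO bounding box.

    polygon: iterable of (x, y) pixel coordinates.
    Returns (x_center, y_center, width, height), all normalized to [0, 1].
    """
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    return (
        (x_min + x_max) / 2 / img_w,   # x_center
        (y_min + y_max) / 2 / img_h,   # y_center
        (x_max - x_min) / img_w,       # width
        (y_max - y_min) / img_h,       # height
    )
```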
After converting to bounding boxes for object detection, we noticed that there were several dozen unlabeled hands. We added these by hand and improved several hundred of the other labels that did not fully encompass the hands (usually to include omitted fingertips, knuckles, or thumbs). In total, 344 images' annotations were edited manually.
We chose a new random train/test split of 80% training, 10% validation, and 10% testing. Notably, this is not the same split as in the original EgoHands paper.
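A reproducible 80/10/10 split of this kind can be drawn with a fixed seed, roughly as follows (a sketch, not the exact procedure used for this release):

```python
import random

def split_dataset(image_ids, seed=0):
    """Shuffle image IDs and split them 80/10/10 into train/valid/test."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_train = int(0.8 * len(ids))
    n_val = int(0.1 * len(ids))
    return {
        "train": ids[:n_train],
        "valid": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
```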
There are two versions of the converted dataset available:
* specific is labeled with four classes: myleft, myright, yourleft, yourright, representing which hand of which person (the viewer or the opponent across the table) is contained in the bounding box.
* generic contains the same boxes but with a single hand class.
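If you start from the specific version and only need the generic one, the four classes can be collapsed with a simple remapping. This is a sketch; it assumes class indices 0-3 correspond to the order listed above.

```python
# Sketch: collapse the four specific classes into a single "hand" class.
# Assumes indices 0-3 map to myleft, myright, yourleft, yourright.
SPECIFIC_TO_GENERIC = {0: 0, 1: 0, 2: 0, 3: 0}

def to_generic(label_lines):
    """Rewrite YOLO label lines so every box uses the single 'hand' class."""
    out = []
    for line in label_lines:
        parts = line.split()
        parts[0] = str(SPECIFIC_TO_GENERIC[int(parts[0])])
        out.append(" ".join(parts))
    return out
```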
The authors have graciously allowed Roboflow to re-host this derivative dataset. It is released under a Creative Commons Attribution 4.0 license. You may use it for academic or commercial purposes but must cite the original paper.
Please use the following BibTeX:
@inproceedings{egohands2015iccv,
title = {Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions},
author = {Sven Bambach and Stefan Lee and David Crandall and Chen Yu},
booktitle = {IEEE International Conference on Computer Vision (ICCV)},
year = {2015}
}
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The goal of this project is to create a specialized model for detecting and recognizing specific wild animals, including Elephant, Gorilla, Giraffe, Lion, Tiger, and Zebra. We gathered images of these animals and used the Roboflow annotation tool to manually label each animal class. After annotation, the data was exported in the YOLOv8 format.
Next, we trained a custom YOLOv8 model on this dataset to accurately detect and recognize the selected animal species in images. The project leverages YOLOv8’s object detection capabilities to improve detection accuracy for wildlife monitoring and research purposes.
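For readers who want to reproduce the training step, the Ultralytics API makes this a short script. This is only a sketch: the data.yaml path, base weights, and hyperparameters below are placeholders, not the project's actual settings.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune on the animal dataset.
# "data.yaml" must point to the exported dataset (image paths + the six class names).
model = YOLO("yolov8n.pt")
model.train(data="data.yaml", epochs=100, imgsz=640)

# Evaluate on the validation split and run a sample prediction.
metrics = model.val()
results = model.predict("sample_images/elephant.jpg", conf=0.25)
```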
You can find more details about the project on GitHub by clicking on this link. To view the training logs and metrics on wandb, click here.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Explore our dataset of 621 solar panel images with detailed bounding box annotations in YOLO format, perfect for training YOLOv8 models.
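Each YOLO-format label line stores a class index plus a box as normalized center/width/height values; converting a line back to pixel corners for visualization looks roughly like the sketch below (nothing here is specific to this dataset).

```python
def yolo_to_pixel_box(line, img_w, img_h):
    """Convert one YOLO label line to (class_id, x_min, y_min, x_max, y_max) in pixels."""
    class_id, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(class_id), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2
```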
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Norway prawns (Nephrops norvegicus), also known as the Dublin Bay prawn, are common around the Irish coast. They are found in distinct sandy/muddy areas where the sediment is suitable for them to construct their burrows. Nephrops spend a great deal of time in their burrows and their emergence from these is related to time of year, light intensity and tidal strength. The Irish Nephrops fishery is extremely valuable with landings recently worth around €55m at first sale, supporting an important Irish fishing industry.
Nephrops are managed in Functional Units (FUs). The Marine Institute has conducted underwater television (UWTV) surveys since 2002 to independently estimate the abundance, distribution and stock sizes of Nephrops norvegicus for:
Each year during the summer months, around 300 stations are surveyed on average, in three survey legs, covering all the FUs in depths from 20 to 650 metres.
A high definition camera system is towed over the sea bed for 10 minutes, travelling approx. 200 m at 0.8 knots on a purpose-built sledge. The UWTV survey follows the survey protocols agreed by the International Council for the Exploration of the Sea (ICES) Working Group on Nephrops Surveys (WGNEPS), available at https://doi.org/10.17895/ices.pub.8014.
As part of the iMagine project, a selection of images from the Underwater TV survey Functional Units were annotated with bounding boxes and labels in YOLOv8 format to train YOLOv8 object detection models. The training dataset is saved in YOLOv8 format. It is intended to train a YOLOv8 Nephrops burrow object detection model to assess the utility of an object detection model in assisting prawn survey work through the semi-automated annotation of prawn burrow imagery.
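As a rough illustration of the intended semi-automated workflow, a trained model's predictions can be written out as YOLO-format label files for a human annotator to review and correct rather than labelling from scratch. This sketch uses the Ultralytics API; the weights file and image directory names are placeholders.

```python
from ultralytics import YOLO

# Load a trained burrow detector and pre-annotate a folder of survey frames.
model = YOLO("nephrops_burrows.pt")          # placeholder weights file
model.predict(
    source="uwtv_frames/",                   # placeholder image directory
    conf=0.25,
    save_txt=True,                           # write YOLO-format label files
    save_conf=True,                          # keep confidences for triage
)
```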
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The "PPE Dataset" is a robust and diverse collection of images designed for the development and enhancement of machine learning models in the realm of workplace safety. This dataset focuses on the detection and classification of various types of personal protective equipment (PPE) typically used in industrial and construction environments. The goal is to facilitate automated monitoring systems that ensure adherence to safety protocols, thereby contributing to the prevention of workplace accidents and injuries.
The dataset comprises annotated images spanning four primary PPE categories:
* Boots: safety footwear, including steel-toe and insulated boots.
* Helmet: various types of safety helmets and hard hats.
* Person: individuals, both with and without PPE, to enhance person detection alongside PPE recognition.
* Vest: high-visibility vests and reflective safety gear for visibility in low-light conditions.
* Ear-protection: images being added.
* Mask: respiratory masks; images being added.
* Glass: safety glasses; images being added.
* Glove: safety gloves; images being added.
* Safety cones: to be added.

Each class is annotated with precise bounding boxes, ensuring high-quality data for model training.
* Phase 1 - Collection: gathering images from diverse sources, focusing on different environments, lighting conditions, and angles.
* Phase 2 - Annotation: ongoing process of labeling the images with accurate bounding boxes.
* Phase 3 - Model Training: scheduled to commence post-annotation, targeting advanced object detection models like YOLOv8 & YOLO-NAS.
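Once annotation is complete, YOLOv8 training only needs a small dataset configuration file. Below is a hedged sketch of what one might look like for the classes above; the paths and the exact class list are placeholders, since several classes are still being added.

```python
import yaml

# Sketch of a YOLO dataset config for the PPE classes listed above.
# Paths and the final class list are placeholders, not the project's settings.
ppe_config = {
    "path": "datasets/ppe",          # dataset root (placeholder)
    "train": "images/train",
    "val": "images/val",
    "names": ["boots", "helmet", "person", "vest"],  # core classes; more to follow
}

with open("ppe_data.yaml", "w") as f:
    yaml.safe_dump(ppe_config, f, sort_keys=False)

# The resulting file can then be passed to training, e.g.
# YOLO("yolov8n.pt").train(data="ppe_data.yaml", epochs=100)
```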
Contribution and Labeling Guidelines
We welcome contributions from the community! If you wish to contribute images or assist with annotations:
* Image Contributions: please ensure images are high-resolution and showcase clear instances of PPE usage.
* Annotation Guidelines: follow the standard annotation format as per Roboflow's Annotation Guide.

Your contributions will play a vital role in enhancing workplace safety through AI-driven solutions.