6 datasets found
  1. Synthetic Water-Sensitive Paper Droplet Annotation

    • zenodo.org
    Updated Oct 29, 2024
    Cite
    Inês Simões (2024). Synthetic Water-Sensitive Paper Droplet Annotation [Dataset]. http://doi.org/10.5281/zenodo.13995950
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Inês Simões
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The dataset consists of 300 synthetic images of water-sensitive paper. It was generated using an automated algorithm that creates individual droplets and positions them against a yellow artificial background. Instance-segmentation annotations for each droplet are stored in a text file following the YOLOv8 annotation format.

    Some key features include:

    • Distribution of droplets per image: the number of droplets per image is determined by the configuration values so that it follows a normal distribution.
    • Size distribution of droplets: the algorithm draws the size of each droplet from a Rosin-Rammler distribution (see the sampling sketch after this list).
    • Image Resolution: images were created at three different resolutions.
    • Yellow Background: the background of each image is a yellow radial gradient generated for each water-sensitive paper image. The gradient transitions between two randomly chosen tones of yellow, drawn from a list of shades taken from real images of water-sensitive paper.
    • Droplet Color: the colors of the droplets are taken from two distinct real datasets of water-sensitive paper.
    • Droplet Shape: the shapes of the droplets are selected from a list of 25,404 shapes extracted from real water-sensitive paper images.
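
    A minimal sketch of how such a generator might draw the per-image droplet count (normal distribution) and droplet sizes (Rosin-Rammler, which is a Weibull law), assuming NumPy; the parameter values are illustrative and not the dataset's actual configuration:

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_droplet_count(mean=150, std=40):
        # Normal distribution for the number of droplets, clipped to at least one
        return max(1, int(round(rng.normal(mean, std))))

    def sample_droplet_sizes(count, d_char=2.0, spread=2.5):
        # Rosin-Rammler CDF F(d) = 1 - exp(-(d / d_char) ** spread) is a Weibull
        # distribution, so sizes can be drawn with NumPy's Weibull sampler
        return d_char * rng.weibull(spread, size=count)

    n_droplets = sample_droplet_count()
    sizes = sample_droplet_sizes(n_droplets)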

    The dataset is organized in two folders:

    1. image: contains the water-sensitive paper images
    2. label: contains, for each image, the YOLOv8 polygon-format labels of its droplets (a parsing sketch follows the list)
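
    A minimal sketch of reading one of these label files (YOLOv8 segmentation format: class id followed by normalized x,y pairs), assuming plain Python; the file name and image size are hypothetical:

    from pathlib import Path

    def load_yolov8_polygons(label_path, img_w, img_h):
        droplets = []
        for line in Path(label_path).read_text().splitlines():
            parts = line.split()
            if not parts:
                continue
            cls = int(parts[0])
            coords = list(map(float, parts[1:]))
            # Normalized x,y pairs -> pixel coordinates
            polygon = [(x * img_w, y * img_h) for x, y in zip(coords[0::2], coords[1::2])]
            droplets.append((cls, polygon))
        return droplets

    # Example (hypothetical path and image size):
    # droplets = load_yolov8_polygons("label/wsp_0001.txt", img_w=1280, img_h=720)
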
  2. EgoHands Object Detection Dataset - specific

    • public.roboflow.com
    zip
    Updated Apr 22, 2022
    Cite
    IU Computer Vision Lab (2022). EgoHands Object Detection Dataset - specific [Dataset]. https://public.roboflow.com/object-detection/hands/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 22, 2022
    Dataset authored and provided by
    IU Computer Vision Lab
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Bounding Boxes of hands
    Description

    Image: EgoHands Dataset (https://i.imgur.com/eEWi4PT.png)

    About this dataset

    The EgoHands dataset is a collection of 4800 annotated images of human hands from a first-person view originally collected and labeled by Sven Bambach, Stefan Lee, David Crandall, and Chen Yu of Indiana University.

    The dataset was captured via frames extracted from video recorded through head-mounted cameras on a Google Glass headset while performing four activities: building a puzzle, playing chess, playing Jenga, and playing cards. There are 100 labeled frames for each of 48 video clips.

    Our modifications

    The original EgoHands dataset was labeled with polygons for segmentation and released in a Matlab binary format. We converted it to an object detection dataset using a modified version of this script from @molyswu and have archived it in many popular formats for use with your computer vision models.
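
    A minimal sketch of the core step in that conversion (deriving an axis-aligned bounding box from a segmentation polygon), assuming plain Python; the referenced script itself may differ:

    def polygon_to_bbox(polygon):
        """polygon: iterable of (x, y) pixel coordinates; returns (xmin, ymin, xmax, ymax)."""
        xs = [p[0] for p in polygon]
        ys = [p[1] for p in polygon]
        return min(xs), min(ys), max(xs), max(ys)

    # Example with made-up coordinates:
    # polygon_to_bbox([(412, 310), (455, 298), (470, 352), (430, 366)])  # -> (412, 298, 470, 366)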

    After converting to bounding boxes for object detection, we noticed that there were several dozen unlabeled hands. We added these by hand and improved several hundred of the other labels that did not fully encompass the hands (usually to include omitted fingertips, knuckles, or thumbs). In total, 344 images' annotations were edited manually.

    We chose a new random train/test split of 80% training, 10% validation, and 10% testing. Notably, this is not the same split as in the original EgoHands paper.
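
    A minimal sketch of such a random 80/10/10 split, assuming Python's standard library (the exact split shipped with the dataset is not reproduced here):

    import random

    def split_dataset(items, seed=42):
        items = list(items)
        random.Random(seed).shuffle(items)
        n_train, n_val = int(0.8 * len(items)), int(0.1 * len(items))
        return (items[:n_train],                 # train
                items[n_train:n_train + n_val],  # validation
                items[n_train + n_val:])         # test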

    There are two versions of the converted dataset available:

    • specific is labeled with four classes: myleft, myright, yourleft, yourright, representing which hand of which person (the viewer or the opponent across the table) is contained in the bounding box.
    • generic contains the same boxes but with a single hand class.

    Using this dataset

    The authors have graciously allowed Roboflow to re-host this derivative dataset. It is released under a Creative Commons Attribution 4.0 license. You may use it for academic or commercial purposes but must cite the original paper.

    Please use the following BibTeX:

    @inproceedings{egohands2015iccv,
      title     = {Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions},
      author    = {Sven Bambach and Stefan Lee and David Crandall and Chen Yu},
      booktitle = {IEEE International Conference on Computer Vision (ICCV)},
      year      = {2015}
    }

  3. Wild Animals Detection Dataset

    • universe.roboflow.com
    zip
    Updated Oct 27, 2024
    Cite
    Puspendu AI Vision Workspace (2024). Wild Animals Detection Dataset [Dataset]. https://universe.roboflow.com/puspendu-ai-vision-workspace/wild-animals-detection-fspct/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 27, 2024
    Dataset authored and provided by
    Puspendu AI Vision Workspace
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Animals Bounding Boxes
    Description

    The goal of this project is to create a specialized model for detecting and recognizing specific wild animals, including Elephant, Gorilla, Giraffe, Lion, Tiger, and Zebra. We gathered images of these animals and used the Roboflow annotation tool to manually label each animal class. After annotation, the data was exported in the YOLOv8 format.

    Next, we trained a custom YOLOv8 model on this dataset to accurately detect and recognize the selected animal species in images. The project leverages YOLOv8’s object detection capabilities to improve detection accuracy for wildlife monitoring and research purposes.
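
    A minimal sketch of such a training run, assuming the Ultralytics YOLOv8 package; the dataset YAML name and hyperparameters are illustrative, not the project's actual configuration:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")             # start from a pretrained checkpoint
    model.train(data="wild-animals.yaml",  # hypothetical dataset config file
                epochs=100,
                imgsz=640)
    metrics = model.val()                  # evaluate on the validation split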

    You can find more details about the project on GitHub by clicking on this link. To view the training logs and metrics on wandb, click here.

  4. Solar Panel Bounding Boxes

    • gts.ai
    • paperswithcode.com
    json
    Updated Sep 28, 2024
    Cite
    GTS (2024). Solar Panel Bounding Boxes [Dataset]. https://gts.ai/dataset-download/solar-panel-bounding-boxes/
    Explore at:
    Available download formats: json
    Dataset updated
    Sep 28, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Explore our dataset of 621 solar panel images with detailed bounding box annotations in YOLO format, perfect for training YOLOv8 models.
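
    A minimal sketch of reading one YOLO-format label line ("class x_center y_center width height", all normalized to [0, 1]), assuming plain Python; the example values and image size are hypothetical:

    def parse_yolo_bbox(line, img_w, img_h):
        cls, xc, yc, w, h = line.split()
        xc, yc = float(xc) * img_w, float(yc) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        xmin, ymin = xc - w / 2, yc - h / 2
        return int(cls), (xmin, ymin, xmin + w, ymin + h)

    # Example:
    # parse_yolo_bbox("0 0.512 0.431 0.210 0.180", img_w=1024, img_h=768)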

  5. Nephrops (Nephrops norvegicus) Burrow object detection simple training dataset from Irish Underwater TV surveys

    • zenodo.org
    zip
    Updated Oct 24, 2024
    Cite
    Éabha Melvin (2024). Nephrops (Nephrops norvegicus) Burrow object detection simple training dataset from Irish Underwater TV surveys [Dataset]. http://doi.org/10.5281/zenodo.13987958
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 24, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Éabha Melvin
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Time period covered
    Jun 14, 2019
    Description

    Norway prawns (Nephrops norvegicus), also known as the Dublin Bay prawn, are common around the Irish coast. They are found in distinct sandy/muddy areas where the sediment is suitable for them to construct their burrows. Nephrops spend a great deal of time in their burrows and their emergence from these is related to time of year, light intensity and tidal strength. The Irish Nephrops fishery is extremely valuable with landings recently worth around €55m at first sale, supporting an important Irish fishing industry.

    Nephrops are managed in Functional Units (FUs). The Marine Institute has conducted underwater television (UWTV) surveys since 2002 to independently estimate the abundance, distribution and stock sizes of Nephrops norvegicus in each FU.

    Each year during the summer months, on average 300 stations are surveyed in three survey legs, covering all the FUs at depths from 20 to 650 metres.

    A high-definition camera system mounted on a purpose-built sledge is towed over the seabed for 10 minutes, travelling approximately 200 m at 0.8 knots. The UWTV survey follows survey protocols (https://doi.org/10.17895/ices.pub.8014) agreed by the International Council for the Exploration of the Sea (ICES) Working Group on Nephrops Surveys (WGNEPS).

    As part of the iMagine project, a selection of images from the UWTV survey Functional Units was annotated with bounding boxes and labels in YOLOv8 format, and the training dataset is saved in that format. It is intended for training a YOLOv8 Nephrops burrow object detection model, in order to assess the utility of object detection in assisting prawn survey work through semi-automated annotation of prawn burrow imagery.
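
    A minimal sketch of how a trained detector could pre-annotate survey frames in the semi-automated workflow described above, assuming the Ultralytics YOLOv8 package; the weights file and frame directory are hypothetical:

    from ultralytics import YOLO

    model = YOLO("nephrops_burrows.pt")           # hypothetical trained weights
    results = model.predict("uwtv_frames/", conf=0.25)
    for r in results:
        # Each result carries predicted boxes that a reviewer can accept or correct
        print(r.path, r.boxes.xyxy.tolist())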

  6. Ppe For Workplace Dataset

    • universe.roboflow.com
    zip
    Updated Jan 19, 2024
    Cite
    SiaBar (2024). Ppe For Workplace Dataset [Dataset]. https://universe.roboflow.com/siabar/ppe-dataset-for-workplace
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 19, 2024
    Dataset authored and provided by
    SiaBar
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Boots, EarProtection, Glass, Glove Bounding Boxes
    Description

    PPE Dataset for Workplace Safety

    Overview

    The "PPE Dataset" is a robust and diverse collection of images designed for the development and enhancement of machine learning models in the realm of workplace safety. This dataset focuses on the detection and classification of various types of personal protective equipment (PPE) typically used in industrial and construction environments. The goal is to facilitate automated monitoring systems that ensure adherence to safety protocols, thereby contributing to the prevention of workplace accidents and injuries.

    Class Types

    The dataset comprises annotated images spanning four primary PPE categories:

    • Boots: safety footwear, including steel-toe and insulated boots.
    • Helmet: various types of safety helmets and hard hats.
    • Person: individuals, both with and without PPE, to enhance person detection alongside PPE recognition.
    • Vest: high-visibility vests and reflective safety gear for visibility in low-light conditions.
    • Ear-protection: images being added.
    • Mask: respiratory masks; images being added.
    • Glass: safety glasses; images being added.
    • Glove: safety gloves; images being added.
    • Safety cones: to be added.

    Each class is annotated with precise bounding boxes, ensuring high-quality data for model training.
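
    A minimal sketch of a YOLO-style dataset config covering the classes listed above, written from Python; the paths and the final class roster are assumptions, since several classes are still being collected:

    import yaml  # PyYAML

    ppe_config = {
        "path": "ppe-dataset",      # hypothetical dataset root
        "train": "images/train",
        "val": "images/val",
        "names": ["Boots", "Helmet", "Person", "Vest",
                  "Ear-protection", "Mask", "Glass", "Glove"],
    }

    with open("ppe.yaml", "w") as f:
        yaml.safe_dump(ppe_config, f, sort_keys=False)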

    Current Status and Timeline

    • Phase 1 - Collection: gathering images from diverse sources, focusing on different environments, lighting conditions, and angles.
    • Phase 2 - Annotation: ongoing process of labeling the images with accurate bounding boxes.
    • Phase 3 - Model Training: scheduled to commence post-annotation, targeting advanced object detection models like YOLOv8 & YOLO-NAS.

    Contribution and Labeling Guidelines

    We welcome contributions from the community! If you wish to contribute images or assist with annotations:

    • Image Contributions: please ensure images are high-resolution and showcase clear instances of PPE usage.
    • Annotation Guidelines: follow the standard annotation format as per Roboflow's Annotation Guide.

    Your contributions will play a vital role in enhancing workplace safety through AI-driven solutions.

