14 datasets found
  1. Wine Labels Dataset

    • universe.roboflow.com
    zip
    Updated May 7, 2023
    Cite
    Roboflow 100 (2023). Wine Labels Dataset [Dataset]. https://universe.roboflow.com/roboflow-100/wine-labels
    Explore at:
    Available download formats: zip
    Dataset updated
    May 7, 2023
    Dataset provided by
    Roboflow
    Authors
    Roboflow 100
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Wine Labels Bounding Boxes
    Description

    This dataset was originally created by Yilong Zheng. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/wine-label/wine-label-detection.

    This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.

    Access the RF100 GitHub repo: https://github.com/roboflow-ai/roboflow-100-benchmark

  2. Animal Recognition Using Methods Of Fine-Grained Visual Analysis - YOLOv5...

    • zenodo.org
    zip
    Updated Jul 17, 2022
    + more versions
    Cite
    Yu Shiang Tee (2022). Animal Recognition Using Methods Of Fine-Grained Visual Analysis - YOLOv5 Breed Classification Dataset (Tsinghua Dogs) [Dataset]. http://doi.org/10.5281/zenodo.6849958
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 17, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yu Shiang Tee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Tsinghua Dogs Dataset with ground truth labels for breeds in YOLOv5 format.

  3. covid_yolov5_labels

    • kaggle.com
    zip
    Updated Jul 15, 2021
    Cite
    Varun Dutt (2021). covid_yolov5_labels [Dataset]. https://www.kaggle.com/datasets/varundutt9213/covid-yolov5-labels
    Explore at:
    Available download formats: zip (1390535 bytes)
    Dataset updated
    Jul 15, 2021
    Authors
    Varun Dutt
    Description

    Dataset

    This dataset was created by Varun Dutt

  4. Benchmark datasets for detection and identification of insects from camera...

    • zenodo.org
    zip
    Updated Dec 5, 2022
    Cite
    Kim Bjerge; Jamie Alison; Mads Dyrmann; Carsten E. Frigaard; Hjalte M. R. Mann; Toke T. Hoye (2022). Benchmark datasets for detection and identification of insects from camera trap images with deep learning [Dataset]. http://doi.org/10.5281/zenodo.7395752
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 5, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Kim Bjerge; Jamie Alison; Mads Dyrmann; Carsten E. Frigaard; Hjalte M. R. Mann; Toke T. Hoye
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Insect benchmark datasets for training, validation, and test (train1201.zip, val1201.zip, and test1201.zip) with time-lapse images, as described in the paper:

    Bjerge K, Alison J, Dyrmann M, Frigaard C.E., Mann H. M. R., Høye T.T., Accurate detection and identification of insects from camera trap images with deep learning, bioRxiv:10.1101/2022.10.25.513484v1

    Labels are in YOLO format (ultralytics/yolov5 label format).

    The annotated training and validation datasets contain insects of nine different species, as listed below:

    0 Coccinellidae septempunctata
    1 Apis mellifera
    2 Bombus lapidarius
    3 Bombus terrestris
    4 Eupeodes corolla
    5 Episyrphus balteatus
    6 Aglais urticae
    7 Vespula vulgaris
    8 Eristalis tenax

    The test dataset contains additional classes of insects:

    9 Non-Bombus Anthophila
    10 Bombus spp.
    11 Syrphidae
    12 Fly spp.
    13 Unclear insect
    14 Mixed animals:
       Rhopalocera
       Non-Anthophila Hymenoptera
       Non-Syrphidae Diptera
       Non-Coccinellidae Coleoptera
       Coccinellidae
       Other animals

    There are two naming conventions for image (.jpg) and label (.txt) files.

    Background images without insects are named:
    "X_Seq-YYYYMMDDHHMMSS-snapshot".
    E.g.:
    Background image: 12_13-20190704172200-snapshot.jpg
    Empty label file: 12_13-20190704172200-snapshot.txt

    Images annotated with insects are named:
    "SZ_IP-MonthDate_C_Seq-YYYYMMDDHHMMSS".
    E.g.:
    Image file: S1_146-Aug23_1_156-20190822133230.jpg
    Label file: S1_146-Aug23_1_156-20190822133230.txt

    Abbreviations:

    YYYYMMDDHHMMSS – Capture timestamp with year, month, day, hour, minute, and second
    Seq – Sequence number created by the motion program to separate images
    C – Identification of two cameras with Id=0 or Id=1 in system identified by SZ_IP
    MonthDate – Folder name where the original images were stored in the system
    SZ_IP – Identification of five camera systems: S1_123, S2_146, S3_194, S4_199, S5_187 (Two cameras in each system)
    X – An index number related to a specific camera and folder ensuring unique file names of background images from different camera systems.

    The important information in a filename is system (SZ_IP), camera Id (C) and timestamp (YYYYMMDDHHMMSS).
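    For orientation, the sketch below (not part of the dataset) pulls the system, camera id, and timestamp out of an annotated-image filename with a regular expression that follows the naming convention described above; the function and field names are illustrative only.

```python
import re
from datetime import datetime

# Sketch: parse annotated-image filenames of the form
# "SZ_IP-MonthDate_C_Seq-YYYYMMDDHHMMSS", e.g. "S1_146-Aug23_1_156-20190822133230.jpg".
PATTERN = re.compile(
    r"(?P<system>S\d+_\d+)-"         # SZ_IP, e.g. S1_146
    r"(?P<monthdate>[A-Za-z]+\d+)_"  # MonthDate folder, e.g. Aug23
    r"(?P<camera>[01])_"             # C, camera id 0 or 1
    r"(?P<seq>\d+)-"                 # Seq, motion-program sequence number
    r"(?P<timestamp>\d{14})"         # YYYYMMDDHHMMSS
)

def parse_annotated_name(filename: str):
    """Return system, camera id and capture time, or None for background images."""
    m = PATTERN.match(filename)
    if m is None:
        return None  # background images use the "X_Seq-...-snapshot" scheme instead
    fields = m.groupdict()
    fields["timestamp"] = datetime.strptime(fields["timestamp"], "%Y%m%d%H%M%S")
    return fields

print(parse_annotated_name("S1_146-Aug23_1_156-20190822133230.jpg"))
```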

    The three best YOLOv5 models (YOLOv5models.zip) from the paper are available in PyTorch format.

    All models are tested with YOLOv5 release v7.0 (22-11-2022): ultralytics/yolov5: YOLOv5 in PyTorch

    insect1201-bestF1-640v5m.pt: Model no. 6 in Table 2 (F1=0.912)
    insect1201-bestF1-1280v5m6.pt: Model no. 8 in Table 2 (F1=0.925)
    insect1201-bestF1-1280v5m6.pt: Model no. 10 in Table 2 (F1=0.932)

    insects-1201val.yaml: YAML file with label names to train YOLOv5

    trainInsects-1201m.sh: Linux bash shell script with parameters to train YOLOv5m6
    valInsectsF1-1201.sh: Linux bash shell script with parameters to validate the models
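    As a rough, unofficial sketch of how one of the released checkpoints could be run for inference (the YOLOv5 hub API shown here is standard; the image path is just the example filename from the naming convention above, and the confidence threshold is an arbitrary choice):

```python
import torch

# Load one of the released checkpoints (from YOLOv5models.zip) via the standard
# ultralytics/yolov5 hub entry point and run it on a single time-lapse image.
model = torch.hub.load("ultralytics/yolov5", "custom", path="insect1201-bestF1-640v5m.pt")
model.conf = 0.25  # confidence threshold, chosen here for illustration only

results = model("S1_146-Aug23_1_156-20190822133230.jpg")
results.print()                        # per-image summary of detections
detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
print(detections.head())
```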

  5. last pre trained yolov5 epoch100

    • kaggle.com
    zip
    Updated Aug 29, 2020
    + more versions
    Cite
    Mark Antonyo (2020). last pre trained yolov5 epoch100 [Dataset]. https://www.kaggle.com/markantonyo/last-pre-trained-yolov5-epoch100
    Explore at:
    Available download formats: zip (160877393 bytes)
    Dataset updated
    Aug 29, 2020
    Authors
    Mark Antonyo
    Description

    Dataset

    This dataset was created by Mark Antonyo

  6. cropsVSweed

    • huggingface.co
    Updated Oct 11, 2022
    Cite
    samarth (2022). cropsVSweed [Dataset]. https://huggingface.co/datasets/Sa-m/cropsVSweed
    Explore at:
    Dataset updated
    Oct 11, 2022
    Authors
    Samarth Agarwal
    Description

    WeedCrop Image Dataset

    Data Description: It includes 2822 images, annotated in YOLO v5 PyTorch format.

    • The train directory contains 2469 images and their labels in YOLOv5 PyTorch format.
    • The validation directory contains 235 images and their labels in YOLOv5 PyTorch format.
    • The test directory contains 118 images and their labels in YOLOv5 PyTorch format.

    Reference: https://www.kaggle.com/datasets/vinayakshanawad/weedcrop-image-dataset
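    A minimal sketch for sanity-checking the split sizes above on a local copy, assuming the usual YOLOv5 export layout of <split>/images and <split>/labels (the root path and folder names are assumptions, not confirmed metadata):

```python
from pathlib import Path

# Count images and label files per split; expected roughly 2469 / 235 / 118 images.
root = Path("cropsVSweed")  # placeholder path to a local copy of the dataset
for split in ("train", "valid", "test"):
    images = list((root / split / "images").glob("*.jpg"))
    labels = list((root / split / "labels").glob("*.txt"))
    print(f"{split}: {len(images)} images, {len(labels)} label files")
```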

  7. last pre trained model yolov5 epoch200

    • kaggle.com
    zip
    Updated Aug 31, 2020
    + more versions
    Cite
    Mark Antonyo (2020). last pre trained model yolov5 epoch200 [Dataset]. https://www.kaggle.com/datasets/markantonyo/last-pre-trained-model-yolov5-epoch200
    Explore at:
    Available download formats: zip (161885172 bytes)
    Dataset updated
    Aug 31, 2020
    Authors
    Mark Antonyo
    Description

    Dataset

    This dataset was created by Mark Antonyo

  8. Side Profile Tires Dataset

    • paperswithcode.com
    Updated Sep 18, 2024
    + more versions
    Cite
    (2024). Side Profile Tires Dataset [Dataset]. https://paperswithcode.com/dataset/side-profile-tires-dataset
    Explore at:
    Dataset updated
    Sep 18, 2024
    Description

    This dataset consists of meticulously annotated images of tire side profiles, specifically designed for image segmentation tasks. Each tire has been manually labeled to ensure high accuracy, making this dataset ideal for training machine learning models focused on tire detection, classification, or related automotive applications.

    The annotations are provided in the YOLO v5 format, leveraging the PyTorch framework for deep learning applications. The dataset offers a robust foundation for researchers and developers working on object detection, autonomous vehicles, quality control, or any project requiring precise tire identification from images.

    Data Collection and Labeling Process:

    Manual Labeling: Every tire in the dataset has been individually labeled to guarantee that the annotations are highly precise, significantly reducing the margin of error in model training.

    Annotation Format: YOLO v5 PyTorch format, a highly efficient and widely used format for real-time object detection systems.

    Pre-processing Applied:

    Auto-orientation: Pixel data has been automatically oriented, and EXIF orientation metadata has been stripped to ensure uniformity across all images, eliminating issues related to image orientation during processing.

    Resizing: All images have been resized to 416×416 pixels using stretching to maintain compatibility with common object detection frameworks like YOLO. This resizing standardizes the image input size while preserving visual integrity.
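    A rough sketch of how this pre-processing (EXIF auto-orientation plus stretch-resize to 416×416) could be reproduced with Pillow; the file names are placeholders, not files from the dataset:

```python
from PIL import Image, ImageOps

def preprocess(src_path: str, dst_path: str, size=(416, 416)) -> None:
    """Apply EXIF orientation, drop the orientation tag, and stretch-resize."""
    img = Image.open(src_path)
    img = ImageOps.exif_transpose(img)  # bake the EXIF orientation into the pixels
    img = img.resize(size)              # plain stretch, no letterboxing
    img.save(dst_path)

preprocess("tire_side_0001.jpg", "tire_side_0001_416.jpg")
```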

    Applications:

    Automotive Industry: This dataset is suitable for automotive-focused AI models, including tire quality assessment, tread pattern recognition, and autonomous vehicle systems.

    Surveillance and Security: Use cases in monitoring systems where identifying tires is crucial for vehicle recognition in parking lots or traffic management systems.

    Manufacturing and Quality Control: Can be used in tire manufacturing processes to automate defect detection and classification.

    Dataset Composition:

    Number of Images: [Add specific number]

    File Format: JPEG/PNG

    Annotation Format: YOLO v5 PyTorch

    Image Size: 416×416 (standardized across all images)

    This dataset is sourced from Kaggle.

  9. last9 pre trained model yolov5

    • kaggle.com
    zip
    Updated Aug 17, 2020
    Cite
    Mark Antonyo (2020). last9 pre trained model yolov5 [Dataset]. https://www.kaggle.com/markantonyo/last9-pre-trained-model-yolov5
    Explore at:
    Available download formats: zip (656010967 bytes)
    Dataset updated
    Aug 17, 2020
    Authors
    Mark Antonyo
    Description

    Dataset

    This dataset was created by Mark Antonyo

  10. Mechanical Parts Dataset 2022

    • zenodo.org
    Updated Jan 4, 2023
    Cite
    Mübarek Mazhar Çakır (2023). Mechanical Parts Dataset 2022 [Dataset]. http://doi.org/10.5281/zenodo.7499618
    Explore at:
    Dataset updated
    Jan 4, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mübarek Mazhar Çakır
    Description

    Mechanical Parts Dataset

    The dataset consists of a total of 2250 images downloaded from various internet platforms: 714 images with bearings, 632 with bolts, 616 with gears, and 586 with nuts. A total of 10597 labels were created manually, including 2099 labels for the bearing class, 2734 for the bolt class, 2662 for the gear class, and 3102 for the nut class.

    Folder Content

    The dataset is divided into three splits: 80% train, 10% validation, and 10% test. The "Mechanical Parts Dataset" folder contains three separate folders, "train", "test", and "val". Each of these contains folders named "images" and "labels": images are kept in the "images" folder and label information is kept in the "labels" folder.

    Finally, inside the folder there is a YAML file named "mech_parts_data" for the YOLO algorithm. This file contains the number of classes and the class names.
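    For orientation, a YOLO data file of this kind typically looks like the sketch below; only the four class names come from the description above, while the class order and split paths are assumptions for illustration:

```python
import yaml

# Sketch of a YOLOv5-style data file like "mech_parts_data".
data_cfg = {
    "train": "Mechanical Parts Dataset/train/images",
    "val": "Mechanical Parts Dataset/val/images",
    "test": "Mechanical Parts Dataset/test/images",
    "nc": 4,  # number of classes
    "names": ["bearing", "bolt", "gear", "nut"],  # order is an assumption
}
print(yaml.safe_dump(data_cfg, sort_keys=False))
```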

    Images and Labels

    The dataset was prepared in accordance with the YOLOv5 format.
    For example, the label information of the image named "2a0xhkr_jpg.rf.45a11bf63c40ad6e47da384fdf6bb7a1.jpg" is stored in a txt file with the same name. The label information (coordinates) in the txt file is as follows: "class x_center y_center width height".
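    A small sketch (not part of the dataset) that converts one such normalized label line back to pixel corner coordinates:

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Convert "class x_center y_center width height" (normalized) to corner coordinates."""
    cls, xc, yc, w, h = line.split()
    xc, w = float(xc) * img_w, float(w) * img_w
    yc, h = float(yc) * img_h, float(h) * img_h
    return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

# Hypothetical label line and image size, for illustration only.
print(yolo_to_pixels("2 0.5 0.5 0.25 0.4", img_w=640, img_h=480))
```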

    Related paper: doi.org/10.5281/zenodo.7496767

  11. Mechanical Parts Dataset 2022

    • zenodo.org
    Updated Jan 5, 2023
    Cite
    Mübarek Mazhar Çakır (2023). Mechanical Parts Dataset 2022 [Dataset]. http://doi.org/10.5281/zenodo.7504801
    Explore at:
    Dataset updated
    Jan 5, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Mübarek Mazhar Çakır
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Mechanical Parts Dataset

    The dataset consists of a total of 2250 images downloaded from various internet platforms: 714 images with bearings, 632 with bolts, 616 with gears, and 586 with nuts. A total of 10597 labels were created manually, including 2099 labels for the bearing class, 2734 for the bolt class, 2662 for the gear class, and 3102 for the nut class.

    Folder Content

    The dataset is divided into three splits: 80% train, 10% validation, and 10% test. The "Mechanical Parts Dataset" folder contains three separate folders, "train", "test", and "val". Each of these contains folders named "images" and "labels": images are kept in the "images" folder and label information is kept in the "labels" folder.

    Finally, inside the folder there is a YAML file named "mech_parts_data" for the YOLO algorithm. This file contains the number of classes and the class names.

    Images and Labels

    The dataset was prepared in accordance with the YOLOv5 format.
    For example, the label information of the image named "2a0xhkr_jpg.rf.45a11bf63c40ad6e47da384fdf6bb7a1.jpg" is stored in a txt file with the same name. The label information (coordinates) in the txt file is as follows: "class x_center y_center width height".

    Update 05.01.2023

    Pascal VOC and COCO JSON formats have been added.

    Related paper: doi.org/10.5281/zenodo.7496767

  12. Synthetic Gloomhaven Monsters

    • kaggle.com
    zip
    Updated Aug 30, 2020
    Cite
    Eric de Potter (2020). Synthetic Gloomhaven Monsters [Dataset]. https://www.kaggle.com/ericdepotter/synthetic-gloomhaven-monsters
    Explore at:
    Available download formats: zip (0 bytes)
    Dataset updated
    Aug 30, 2020
    Authors
    Eric de Potter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Context

    One of my passions is playing board games with my friends. One of them lives abroad, so we like to stream the game when playing with him. Instead of just having a normal stream, however, I wanted to show some additional information about the monsters on the game board. This turned into a fun project to train CNNs to detect these monsters.

    Content

    To have enough training data, I made a little project in UE4 to generate these training images. For each image there is a mask for every monster that appears in it. The dataset also includes annotations for the train images in COCO format (annotations.json) and labels for the bounding boxes in Darknet format in the labels folder.

    There are training and validation subsets for the images, labels, and masks folders. The structure is as follows for the first training image, which contains an earth_demon and a harrower_infester:

    • The image is stored at images/train/image_1.png
    • The label file is stored at labels/train/label_1.png. This file contains two lines, one for each monster. A line is constructed as follows: class_id center_x center_y width height. Note that the position and dimensions are relative to the image width and height.
    • There are two mask images located at masks/train. One is named image_1_mask_0_harrower_infester.png and the other image_1_mask_1_earth_demon.png.

    The code for generating this dataset and training a MaskRCNN and YoloV5 model can be found at https://github.com/ericdepotter/Gloomhaven-Monster-Recognizer.
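    For orientation, a small sketch (assuming a local copy of the dataset and the layout described above; the root path is a placeholder) that pairs each training image with its label file and per-monster masks:

```python
from pathlib import Path

root = Path(".")  # placeholder: root of a local copy of the dataset
for img_path in sorted((root / "images" / "train").glob("image_*.png")):
    idx = img_path.stem.split("_")[1]  # "image_1" -> "1"
    label_path = root / "labels" / "train" / f"label_{idx}.png"
    mask_paths = sorted((root / "masks" / "train").glob(f"image_{idx}_mask_*.png"))
    print(img_path.name, label_path.name, [m.name for m in mask_paths])
```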

    Acknowledgements

    I took pictures for the images of the monsters myself. The images of the game tiles I obtained from this collection of Gloomhaven assets.

    Inspiration

    This is a classic object detection or object segmentation problem.

  13. Training dataset configuration.

    • plos.figshare.com
    txt
    Updated Nov 15, 2024
    + more versions
    Cite
    Carles Rubio Maturana; Allisson Dantas de Oliveira; Francesc Zarzuela; Edurne Ruiz; Elena Sulleiro; Alejandro Mediavilla; Patricia Martínez-Vallejo; Sergi Nadal; Tomàs Pumarola; Daniel López-Codina; Alberto Abelló; Elisa Sayrol; Joan Joseph-Munné (2024). Training dataset configuration. [Dataset]. http://doi.org/10.1371/journal.pntd.0012614.s001
    Explore at:
    Available download formats: txt
    Dataset updated
    Nov 15, 2024
    Dataset provided by
    PLOS Neglected Tropical Diseases
    Authors
    Carles Rubio Maturana; Allisson Dantas de Oliveira; Francesc Zarzuela; Edurne Ruiz; Elena Sulleiro; Alejandro Mediavilla; Patricia Martínez-Vallejo; Sergi Nadal; Tomàs Pumarola; Daniel López-Codina; Alberto Abelló; Elisa Sayrol; Joan Joseph-Munné
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Urogenital schistosomiasis is considered a Neglected Tropical Disease (NTD) by the World Health Organization (WHO). It is estimated to affect 150 million people worldwide, with a high relevance in resource-poor settings of the African continent. The gold-standard diagnosis is still direct observation of Schistosoma haematobium eggs in urine samples by optical microscopy. Novel diagnostic techniques based on digital image analysis by Artificial Intelligence (AI) tools are a suitable alternative for schistosomiasis diagnosis.

    Methodology: Digital images of 24 urine sediment samples were acquired in non-endemic settings. S. haematobium eggs were manually labeled in digital images by laboratory professionals and used for training YOLOv5 and YOLOv8 models, which would achieve automatic detection and localization of the eggs. Urine sediment images were also employed to perform binary classification of images to detect erythrocytes/leukocytes with the MobileNetv3Large, EfficientNetv2, and NasNetLarge models. A robotized microscope system was employed to automatically move the slide through the X-Y axis and to auto-focus the sample.

    Results: A total number of 1189 labels were annotated in 1017 digital images from urine sediment samples. YOLOv5x training demonstrated a 99.3% precision, 99.4% recall, 99.3% F-score, and 99.4% mAP0.5 for S. haematobium detection. NasNetLarge has an 85.6% accuracy for erythrocyte/leukocyte detection with the test dataset. Convolutional neural network training and comparison demonstrated that YOLOv5x for the detection of eggs and NasNetLarge for the binary image classification to detect erythrocytes/leukocytes were the best options for our digital image database.

    Conclusions: The development of low-cost novel diagnostic techniques based on the detection and identification of S. haematobium eggs in urine by AI tools would be a suitable alternative to conventional microscopy in non-endemic settings. This technical proof-of-principle study allows laying the basis for improving the system, and optimizing its implementation in the laboratories.

  14. Data from: A Fine-Grained Vehicle Detection (FGVD) Dataset for Unconstrained...

    • zenodo.org
    • data.niaid.nih.gov
    bin
    Updated Jan 3, 2023
    Cite
    Prafful Kumar Khoba; Chirag Parikh; Rohit Saluja; Ravi Kiran Sarvadevabhatla; C. V. Jawahar (2023). A Fine-Grained Vehicle Detection (FGVD) Dataset for Unconstrained Roads [Dataset]. http://doi.org/10.5281/zenodo.7499479
    Explore at:
    Available download formats: bin
    Dataset updated
    Jan 3, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Prafful Kumar Khoba; Chirag Parikh; Rohit Saluja; Ravi Kiran Sarvadevabhatla; C. V. Jawahar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes for different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in types, scale, pose, occlusion, and lighting conditions. Current object detectors like YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of a combination of an existing detector and the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
