32 datasets found
  1. PotHole Detector Dataset Augmented (Yolov5 Pothole Detection Dataset for South-East Asia)

    • kaggle.com
    zip
    Updated Nov 4, 2024
    Cite
    VincentMaes (2024). PotHole Detector Dataset Augmented [Dataset]. https://www.kaggle.com/datasets/vincenttgre/pothole-detector-dataset-augmented
    Explore at:
    Available download formats: zip (1507748590 bytes)
    Dataset updated
    Nov 4, 2024
    Authors
    VincentMaes
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Road Damage and Pothole Detection Dataset

    Overview

    This dataset is specifically curated for object detection tasks aimed at identifying and classifying road damage and potholes. The original dataset on which this augmented dataset is based included images labeled with four distinct classes: Pothole, Alligator Crack, Long Crack (longitudinal), and Lat Crack (lateral). For training a road-damage detection model, however, all annotations have been merged into a single class, namely the "Pothole" class, which now also includes the alligator, longitudinal, and lateral cracks.

    Data Augmentation

    To enhance the robustness and generalization capability of models trained on this dataset, extensive data augmentation techniques have been applied. The augmentation pipeline includes:

    • Horizontal Flip (50% probability)
    • Vertical Flip (10% probability)
    • Random Rotation by 90 degrees (50% probability)
    • Rotation (±10 degrees, 50% probability)
    • Random Brightness and Contrast adjustments (50% probability)
    • Gaussian Blur (30% probability)
    • Color Jitter (30% probability)
    • Random Scaling (±10% scale, 50% probability)
    • Perspective Transformations (scale range 0.05 to 0.1, 30% probability)

    These augmentations ensure that models can learn to recognize road damages under various conditions and viewpoints, improving their detection performance.
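
    The list above maps naturally onto a standard augmentation library. Below is a minimal illustrative sketch of an equivalent pipeline using Albumentations; the library choice and exact parameter values are assumptions based on the list, not the author's published code. Bounding boxes are passed in YOLO format so they stay aligned with the transformed images.

    ```python
    import albumentations as A

    # Assumed reconstruction of the augmentation pipeline listed above (not the author's original code).
    augment = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.VerticalFlip(p=0.1),
            A.RandomRotate90(p=0.5),
            A.Rotate(limit=10, p=0.5),               # +/- 10 degrees
            A.RandomBrightnessContrast(p=0.5),
            A.GaussianBlur(p=0.3),
            A.ColorJitter(p=0.3),
            A.RandomScale(scale_limit=0.1, p=0.5),   # +/- 10% scale
            A.Perspective(scale=(0.05, 0.1), p=0.3),
        ],
        bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
    )

    # Usage: image is an HxWx3 numpy array, bboxes is a list of (x_center, y_center, w, h)
    # tuples in normalized YOLO coordinates, and class_labels is a list of ints (all 0 here).
    # out = augment(image=image, bboxes=bboxes, class_labels=class_labels)
    ```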

    Bounding Box Parameters

    Bounding boxes are provided in the YOLO format, ensuring easy integration with popular object detection frameworks. The bounding boxes are adjusted to correspond with the augmented images to maintain annotation accuracy.
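
    As a reminder of what that format looks like, each YOLO label file holds one line per box: a class ID followed by normalized center coordinates and box size. The helper below is a generic illustration (not shipped with the dataset) for converting one such line to pixel corner coordinates.

    ```python
    def yolo_line_to_pixels(line: str, img_w: int, img_h: int):
        """Convert 'class x_center y_center width height' (normalized) to pixel corners."""
        cls, xc, yc, w, h = line.split()
        xc, yc = float(xc) * img_w, float(yc) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        return int(cls), xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2

    # Example: a pothole box centered in a 640x640 image, covering 20% of each side.
    print(yolo_line_to_pixels("0 0.5 0.5 0.2 0.2", 640, 640))
    # -> (0, 256.0, 256.0, 384.0, 384.0)
    ```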

    Classes

    The dataset includes the following class:

    Class ID | Class Name
    0        | Pothole

    Data Split

    The dataset is divided into training, validation, and testing sets with the following proportions:

    • Training: 85%
    • Validation: 7%
    • Testing: 8%

    This split ensures a sufficient amount of data for training the model while maintaining enough data for validation and testing to assess model performance accurately.
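
    For readers reproducing a similar 85/7/8 split on their own data, a hedged sketch follows; the flat images/labels folder layout below is an assumption, not this dataset's actual directory structure.

    ```python
    import random
    import shutil
    from pathlib import Path

    random.seed(0)
    images = sorted(Path("images").glob("*.jpg"))   # assumed flat source folder
    random.shuffle(images)

    n = len(images)
    n_train, n_val = int(0.85 * n), int(0.07 * n)
    splits = {
        "train": images[:n_train],
        "valid": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],           # remaining ~8%
    }

    for split, files in splits.items():
        for sub in ("images", "labels"):
            Path(split, sub).mkdir(parents=True, exist_ok=True)
        for img in files:
            shutil.copy(img, Path(split, "images", img.name))
            label = Path("labels", img.stem + ".txt")  # YOLO label with the same stem
            if label.exists():
                shutil.copy(label, Path(split, "labels", label.name))
    ```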

    Conclusion

    This dataset aims to aid researchers and developers in building and fine-tuning models for road damage detection, contributing to safer and more efficient road maintenance systems.

  2. 6 7pm Augmented Dataset

    • universe.roboflow.com
    zip
    Updated Apr 12, 2023
    Cite
    YOLOV5 SMALL CCTV (2023). 6 7pm Augmented Dataset [Dataset]. https://universe.roboflow.com/yolov5-small-cctv/6-7pm-augmented-dataset/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 12, 2023
    Dataset authored and provided by
    YOLOV5 SMALL CCTV
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Motorcycle Helmet Bounding Boxes
    Description

    6 7pm Augmented Dataset

    Overview
    
    6 7pm Augmented Dataset is a dataset for object detection tasks - it contains Motorcycle Helmet annotations for 5,125 images.
    
    Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
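
    For example, a minimal download sketch using the roboflow Python package; the workspace and project slugs are read off the citation URL above, and the API key is a placeholder you supply from your own account.

    ```python
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")  # placeholder
    project = rf.workspace("yolov5-small-cctv").project("6-7pm-augmented-dataset")
    dataset = project.version(1).download("yolov5")  # images + YOLOv5-format labels
    print(dataset.location)
    ```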
    
    License

    This dataset is available under the CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/
    
  3. siim-cov19-yolov5-augment-8-jul-2021

    • kaggle.com
    zip
    Updated Jul 8, 2021
    + more versions
    Cite
    Swadesh Jana (2021). siim-cov19-yolov5-augment-8-jul-2021 [Dataset]. https://www.kaggle.com/datasets/swadeshjana/siimcov19yolov5augment8jul2021
    Explore at:
    Available download formats: zip (508527536 bytes)
    Dataset updated
    Jul 8, 2021
    Authors
    Swadesh Jana
    Description

    Dataset

    This dataset was created by Swadesh Jana

    Contents

  4. Food Images and Labels Dataset for YoloV5

    • kaggle.com
    zip
    Updated Mar 22, 2023
    Cite
    CALEB STEPHEN URK20AI1009 (2023). Food Images and Labels Dataset for YoloV5 [Dataset]. https://www.kaggle.com/calebstephen/food-images-and-labels-dataset-for-yolov5
    Explore at:
    Available download formats: zip (41436337 bytes)
    Dataset updated
    Mar 22, 2023
    Authors
    CALEB STEPHEN URK20AI1009
    Description

    This dataset contains 810 images across 12 different classes of food. It covers food found generically across the globe, like Pizzas, Burgers, and Fries, as well as food items that are geographically specific to India, such as Idli, Vada, and Chapathi. So that a YOLO model can also recognize very generic items like fruits and common ingredients, the dataset additionally includes Apples, Bananas, Rice, Tomatoes, etc. The dataset was created using Roboflow's dataset creator on the Roboflow website, and the data was augmented using Roboflow's augmentation methods, such as 90-degree flips and different ranges of saturation. The dataset can be used with both YOLOv5 and YOLOv8.

  5. Bccd_yolov5_augmented Dataset

    • universe.roboflow.com
    zip
    Updated Aug 1, 2024
    + more versions
    Cite
    cells (2024). Bccd_yolov5_augmented Dataset [Dataset]. https://universe.roboflow.com/cells-6e2bg/bccd_yolov5_augmented/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 1, 2024
    Dataset authored and provided by
    cells
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cells_group Bounding Boxes
    Description

    Bccd_YOLOv5_augmented

    Overview

    Bccd_YOLOv5_augmented is a dataset for object detection tasks - it contains Cells_group annotations for 364 images.

    Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    License

    This dataset is available under the CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/
    
  6. chess object detection + yolov5 for chess

    • kaggle.com
    zip
    Updated Mar 27, 2022
    Cite
    Ahmed Haytham (2022). chess object detection + yolov5 for chess [Dataset]. https://www.kaggle.com/datasets/ahmedhaytham/chess-object-detection-yolov5-for-chess
    Explore at:
    Available download formats: zip (77793443 bytes)
    Dataset updated
    Mar 27, 2022
    Authors
    Ahmed Haytham
    Description

    Why is it here?

    1. I just uploaded it here to make it easy for me and others to use.
    2. There is no data similar to it on Kaggle.

    The evaluation of my model is in yolov5/runs/train/exp/.

    Chess Pieces > 416x416_aug

    https://public.roboflow.ai/object-detection/chess-full

    Provided by Roboflow License: Public Domain

    Overview

    This is a dataset of Chess board photos and various pieces. All photos were captured from a constant angle, a tripod to the left of the board. The bounding boxes of all pieces are annotated as follows: white-king, white-queen, white-bishop, white-knight, white-rook, white-pawn, black-king, black-queen, black-bishop, black-knight, black-rook, black-pawn. There are 2894 labels across 292 images.

    Chess example image: https://i.imgur.com/nkjobw1.png

    Follow this tutorial to see an example of training an object detection model using this dataset or jump straight to the Colab notebook.

    Use Cases

    At Roboflow, we built a chess piece object detection model using this dataset.

    ChessBoss demo image: https://blog.roboflow.ai/content/images/2020/01/chess-detection-longer.gif

    You can see a video demo of that here. (We did struggle with pieces that were occluded, i.e. the state of the board at the very beginning of a game has many pieces obscured - let us know how your results fare!)

    Using this Dataset

    We're releasing the data free on a public license.

    About Roboflow

    Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.

    Developers reduce 50% of their boilerplate code when using Roboflow's workflow, save training time, and increase model reproducibility.


  7. Yolov5 Only Large Images Dataset

    • universe.roboflow.com
    zip
    Updated Dec 2, 2021
    Cite
    MICROSPOTTER (2021). Yolov5 Only Large Images Dataset [Dataset]. https://universe.roboflow.com/microspotter-tvwww/yolov5---only-large-images/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 2, 2021
    Dataset authored and provided by
    MICROSPOTTER
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    MIC Bounding Boxes
    Description

    YOLOv5 Only Large Images

    Overview

    YOLOv5 Only Large Images is a dataset for object detection tasks - it contains MIC annotations for 211 images.

    Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    License

    This dataset is available under the CC BY 4.0 license: https://creativecommons.org/licenses/by/4.0/
    
  8. Thesis Yolov5 Dataset

    • universe.roboflow.com
    zip
    Updated Aug 4, 2023
    Cite
    Ateneo de Zamboanga University (2023). Thesis Yolov5 Dataset [Dataset]. https://universe.roboflow.com/ateneo-de-zamboanga-university/thesis-yolov5-jvl6z/model/25
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 4, 2023
    Dataset authored and provided by
    Ateneo de Zamboanga University
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Horror Images Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Horror Movie Content Filtering: Thesis YoloV5 can be used by streaming platforms and content providers to identify and categorize horror movies based on the type of horror imagery present, offering tailored content recommendations to users based on their preferred horror subgenres.

    2. Video Game Scene Classification: Game developers can use Thesis YoloV5 to analyze video game scenes in horror games, enabling more immersive and dynamic gameplay experiences by adapting game environments, NPC interactions, or difficulty levels according to the detected horror elements.

    3. Content Moderation for Online Communities: Online forums, social media platforms, and image sharing sites can utilize Thesis YoloV5 to ensure that users adhere to content policies, automatically moderating and flagging inappropriate horror imagery to maintain a safe and inclusive online community.

    4. Augmented Reality Experiences: Thesis YoloV5 can be integrated into AR applications to generate interactive and engaging horror-themed experiences for entertainment, education, or marketing purposes. Users could interact with AI-generated horror characters, solve puzzles based on detected horror elements, or enjoy immersive and personalized storytelling experiences.

    5. Horror-centric Art and Design: Artists, graphic designers, and filmmakers can use Thesis YoloV5 to analyze and reference horror imagery for creating unique visual styles, mood boards, and thematic concepts for art, design projects, or marketing campaigns centered around horror themes.

  9. Experimental results of YOLOv8+WIOU.

    • plos.figshare.com
    xls
    Updated Mar 21, 2024
    Cite
    Meiling Shi; Dongling Zheng; Tianhao Wu; Wenjing Zhang; Ruijie Fu; Kailiang Huang (2024). Experimental results of YOLOv8+WIOU. [Dataset]. http://doi.org/10.1371/journal.pone.0299902.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Meiling Shi; Dongling Zheng; Tianhao Wu; Wenjing Zhang; Ruijie Fu; Kailiang Huang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Accurate identification of small tea buds is a key technology for tea harvesting robots, which directly affects tea quality and yield. However, due to the complexity of the tea plantation environment and the diversity of tea buds, accurate identification remains an enormous challenge. Current methods based on traditional image processing and machine learning fail to effectively extract subtle features and morphology of small tea buds, resulting in low accuracy and robustness. To achieve accurate identification, this paper proposes a small object detection algorithm called STF-YOLO (Small Target Detection with Swin Transformer and Focused YOLO), which integrates the Swin Transformer module and the YOLOv8 network to improve the detection ability of small objects. The Swin Transformer module extracts visual features based on a self-attention mechanism, which captures global and local context information of small objects to enhance feature representation. The YOLOv8 network is an object detector based on deep convolutional neural networks, offering high speed and precision. Based on the YOLOv8 network, modules including Focus and Depthwise Convolution are introduced to reduce computation and parameters, increase receptive field and feature channels, and improve feature fusion and transmission. Additionally, the Wise Intersection over Union loss is utilized to optimize the network. Experiments conducted on a self-created dataset of tea buds demonstrate that the STF-YOLO model achieves outstanding results, with an accuracy of 91.5% and a mean Average Precision of 89.4%. These results are significantly better than other detectors. Results show that, compared to mainstream algorithms (YOLOv8, YOLOv7, YOLOv5, and YOLOx), the model improves accuracy and F1 score by 5-20.22 percentage points and 0.03-0.13, respectively, proving its effectiveness in enhancing small object detection performance. This research provides technical means for the accurate identification of small tea buds in complex environments and offers insights into small object detection. Future research can further optimize model structures and parameters for more scenarios and tasks, as well as explore data augmentation and model fusion methods to improve generalization ability and robustness.

  10. Indian Food Yolov5 Dataset

    • universe.roboflow.com
    zip
    Updated Aug 20, 2022
    Cite
    Smart India Hackathon (2022). Indian Food Yolov5 Dataset [Dataset]. https://universe.roboflow.com/smart-india-hackathon/indian-food-yolov5/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 20, 2022
    Dataset authored and provided by
    Smart India Hackathon
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Food Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Culinary Education Apps: The model can be integrated into culinary education software or apps to help students and enthusiasts learn the names and appearances of various Indian dishes. When used in combination with augmented reality, users can view the image of a dish and get information about it instantly.

    2. Healthy Eating and Diet Planning: Nutritionists and dieticians can use this model to categorize and identify Indian food items. Using this information, they can develop personalized meal plans for their clients and provide detailed nutritional breakdowns of traditional Indian meals based on their components.

    3. Restaurant Automation: The model can be used in an automated ordering system at Indian restaurants. By identifying the dishes being served, the system can correctly account for what's been served to each table and add it to their bill, reducing human error in bill generation.

    4. Multicultural Cooking Shows or Competitions: In multi-cuisine cooking shows, judges can use this model to verify if participants have correctly prepared a specified Indian dish. This can help ensure fair judging and provide a more informed analysis of the dishes.

    5. Food Delivery & Recognition Apps: This model can be used in food delivery apps where customers take a photo of a dish and the app recognizes the dish for them. The feature can recommend restaurants where they can order similar dishes or suggest recipes they can try.

  11. Poker Cards Dataset

    • universe.roboflow.com
    zip
    Updated May 7, 2023
    Cite
    Roboflow 100 (2023). Poker Cards Dataset [Dataset]. https://universe.roboflow.com/roboflow-100/poker-cards-cxcvz/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 7, 2023
    Dataset provided by
    Roboflow (https://roboflow.com/)
    Authors
    Roboflow 100
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Poker Cards Bounding Boxes
    Description

    This dataset was originally created by Team Roboflow and Augmented Startups. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/roboflow-100/poker-cards-cxcvz.

    This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.

    Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark

  12. Balanced Scoliosis X-ray Dataset (YOLOv5 Format)

    • kaggle.com
    zip
    Updated Oct 9, 2025
    Cite
    Muhammad Salman (2025). Balanced Scoliosis X-ray Dataset (YOLOv5 Format) [Dataset]. https://www.kaggle.com/datasets/salmankey/balanced-scoliosis-x-ray-dataset-yolov5-format
    Explore at:
    Available download formats: zip (496021086 bytes)
    Dataset updated
    Oct 9, 2025
    Authors
    Muhammad Salman
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset is a balanced and augmented version of the original Scoliosis Detection Dataset designed for deep learning and computer vision tasks, particularly spinal curvature classification using YOLOv5.

    It contains spine X-ray images categorized into four classes based on the severity of scoliosis:

    1-derece → Mild scoliosis

    2-derece → Moderate scoliosis

    3-derece → Severe scoliosis

    saglikli → Healthy (no scoliosis)

    ⚙️ Data Details

    Train set: ../train/images

    Validation set: ../valid/images

    Test set: ../test/images

    Total Classes: 4

    Balanced Samples: Each class contains approximately 1259 images and labels

    Augmentations Applied:

    Rotation

    Brightness and contrast adjustment

    Horizontal flip

    Random zoom and cropping

    Gaussian noise

    These augmentations were used to improve model robustness and reduce class imbalance.

    🎯 Use Cases

    This dataset is ideal for:

    Scoliosis detection and classification research

    Object detection experiments (YOLOv5, YOLOv8, EfficientDet)

    Transfer learning on medical image datasets

    Model comparison and explainability studies
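
    As an illustration of the detection-experiment use case, here is a hedged YOLOv5 inference sketch via torch.hub; the weights file and test image names are hypothetical placeholders for a model you would first train on this dataset.

    ```python
    import torch

    # Load custom YOLOv5 weights (hypothetical file trained on the four scoliosis classes).
    model = torch.hub.load("ultralytics/yolov5", "custom", path="scoliosis_best.pt")
    results = model("sample_spine_xray.jpg")    # hypothetical test image
    results.print()                             # detections: 1-derece, 2-derece, 3-derece, saglikli
    boxes = results.pandas().xyxy[0]            # boxes and class names as a DataFrame
    print(boxes[["name", "confidence"]])
    ```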

    📊 Source

    Originally sourced and preprocessed using Roboflow, then restructured and balanced manually for research and experimentation.

    Roboflow Project Link: 🔗 View on Roboflow

    🧠 License

    CC BY 4.0 — Free to use and share with attribution.

  13. Scoliosis X-ray Dataset (YOLOv5 Format) disks

    • kaggle.com
    zip
    Updated Nov 7, 2025
    Cite
    Muhammad Salman (2025). Scoliosis X-ray Dataset (YOLOv5 Format) disks [Dataset]. https://www.kaggle.com/datasets/salmankey/scoliosis-x-ray-dataset-yolov5-format-disks
    Explore at:
    Available download formats: zip (236170694 bytes)
    Dataset updated
    Nov 7, 2025
    Authors
    Muhammad Salman
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    🩻 Scoliosis Spine Detection Dataset (YOLOv5 Ready)

    This dataset is a curated and preprocessed version of a Scoliosis Spine X-ray dataset, designed specifically for deep learning–based object detection and classification tasks using frameworks like YOLOv5, YOLOv8, and TensorFlow Object Detection API.

    It contains annotated spinal X-ray images categorized into three classes, representing different spinal conditions.

    🧩 Dataset Configuration

    train: scoliosis2.v16i.tensorflow/images/train
    val: scoliosis2.v16i.tensorflow/images/valid
    test: scoliosis2.v16i.tensorflow/images/test
    
    nc: 3
    names: ['Vertebra', 'scoliosis spine', 'normal spine']
    

    ⚙️ Data Details

    • Train Set: /images/train
    • Validation Set: /images/valid
    • Test Set: /images/test
    • Total Classes: 3
    • Annotations: YOLO format (.txt files with class, x_center, y_center, width, height)
    • Image Format: .jpg / .png

    Classes Description:

    1. Vertebra — Labeled vertebral regions used for bone localization.
    2. Scoliosis Spine — X-rays showing curvature or deformity in the spinal structure.
    3. Normal Spine — Healthy, straight spinal alignment without scoliosis signs.
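
    Given the configuration above, class indices in the YOLO label files should follow the order of names (0 = Vertebra, 1 = scoliosis spine, 2 = normal spine). Below is a small sanity-check sketch that tallies boxes per class in the training split; the data.yaml filename, the PyYAML dependency, and the images/labels path convention are assumptions, not facts stated by the dataset.

    ```python
    from collections import Counter
    from pathlib import Path

    import yaml  # PyYAML (assumed dependency)

    cfg = yaml.safe_load(Path("data.yaml").read_text())  # assumed filename for the config above
    names = cfg["names"]                                  # ['Vertebra', 'scoliosis spine', 'normal spine']

    # YOLO convention: label .txt files mirror the image folders under .../labels/...
    label_dir = Path(cfg["train"].replace("images", "labels"))

    counts = Counter()
    for label_file in label_dir.glob("*.txt"):
        for line in label_file.read_text().splitlines():
            if line.strip():
                counts[int(line.split()[0])] += 1

    for idx, name in enumerate(names):
        print(f"{idx} {name}: {counts[idx]} boxes")
    ```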

    🧠 Augmentations Applied

    To enhance diversity and model robustness, the dataset was augmented using:

    • Rotation
    • Brightness and contrast adjustment
    • Horizontal flip
    • Random zoom and cropping
    • Gaussian noise

    🎯 Use Cases

    This dataset is ideal for:

    • Scoliosis detection and classification research
    • Vertebra localization and spine anomaly detection
    • Medical object detection experiments (YOLOv5, YOLOv8, EfficientDet)
    • Transfer learning on medical X-ray datasets
    • Explainable AI and model comparison studies

    📊 Source

    The dataset was preprocessed and labeled using Roboflow, then manually refined and balanced for research use. Originally derived from a spinal X-ray dataset and adapted for deep learning object detection.

    Roboflow Project Link: 🔗 View on Roboflow

    🧾 License

    CC BY 4.0 — Free to use, modify, and share with attribution.


  14. Performance comparison of different methods.

    • plos.figshare.com
    xls
    Updated Dec 12, 2024
    Cite
    Xiaoli Wang; Siti Sarah Maidin; Malathy Batumalay (2024). Performance comparison of different methods. [Dataset]. http://doi.org/10.1371/journal.pone.0315424.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 12, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Xiaoli Wang; Siti Sarah Maidin; Malathy Batumalay
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    With the development of integrated circuit packaging technology, the layout of printed circuit boards has become complicated. Moreover, the traditional defect detection methods have been difficult to meet the requirements of high precision. Therefore, in order to solve the problem of low efficiency in defect detection of printed circuit boards, a defect detection method based on pseudo-inverse transform and improved YOLOv5 is proposed. Firstly, a defect image restoration model is constructed to improve image clarity. Secondly, Transformer is introduced to improve YOLOv5, and the batch normalization and network loss function are optimized. These methods improve the speed and accuracy of PCB defect detection. Experimental verification showed that the restoration speed of the image restoration model was 37.60%-42.38% higher than other methods. Compared with other models, the proposed PCB defect detection model had an average increase of 10.90% in recall and 12.87% in average detection accuracy. The average detection accuracy of six types of defects in the self-made PCB data set was over 98.52%, and the average detection accuracy was as high as 99.1%. The results demonstrate that the proposed method can enhance the quality of image processing and optimize YOLOv5 to improve the accuracy of detecting defects in printed circuit boards. This method is demonstrably more effective than existing technology, offering significant value and potential for application in industrial contexts. Its promotion could facilitate the advancement of industrial automation manufacturing.

  15. Tennis Tracker Dataset

    • universe.roboflow.com
    zip
    Updated Jan 30, 2025
    Cite
    tennistracker (2025). Tennis Tracker Dataset [Dataset]. https://universe.roboflow.com/tennistracker-dogbm/tennis-tracker-duufq/dataset/20
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 30, 2025
    Dataset authored and provided by
    tennistracker
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    Players Balls Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. "Sports Analysis": The "tennis-tracker" model could be used for detailed game analysis in broadcasting. Being able to differentiate between player-front, player-back, and ball are crucial elements for sports analysts to study player movements, strategies, and game patterns.

    2. "Player Performance Evaluation": Coaches and trainers could use this model to assess players' performance during training or matches. The model's ability to identify players and tennis balls can be used for tracking player movement, speed, consistency, and accuracy, contributing to better training strategies.

    3. "Automated Replay System": This model can be utilized for managing replays in live or recorded matches. It can quickly identify key moments or points of interest (like when a player hits the ball) to create automated highlights or checks for foul play.

    4. "Augmented Reality Tennis Game": Game developers could use this model in the development of AR-based tennis games. The model could identify player and ball positions to create a realistic and interactive gaming experience.

    5. "Crowd Control & Safety Management": During major tournaments, security staff can use this model to monitor crowd behavior. Distinguishing between players, balls, and spectators can help identify potential disruptions or emergencies. It can also ensure player safety, tracking unauthorized individuals entering the court.

  16. Yolov5 With Pytorch_work Boots Dataset

    • universe.roboflow.com
    zip
    Updated Jun 12, 2023
    Cite
    (2023). Yolov5 With Pytorch_work Boots Dataset [Dataset]. https://universe.roboflow.com/project-htamx/yolov5-with-pytorch_work-boots/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 12, 2023
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Safety Shoes Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Workplace Compliance Monitoring: This model can be used in industrial or construction environments where compliance with safety protocol is crucial. The system can automatically identify if workers wear the required safety shoes or not, ensuring immediate action can be taken to reduce safety risks.

    2. Retail Assistance: In a retail setting, the model could help in identifying and categorizing various types of footwear on the shelves. It could specifically highlight safety shoes for customers searching for them, enhancing the shopping experience.

    3. Smart CCTV Surveillance: The model can be leveraged in CCTV footage analysis where it identifies individuals with or without safety footwear in restricted or hazardous areas. It can enable instant notifications for improper attire.

    4. Automated Sorting in Warehouses: In logistics and supply chain warehouses that deal with various kinds of shoes, this model could speed up the packing and sorting processes by correctly identifying and categorizing the safety shoes.

    5. Remote Safety Training: In a virtual or augmented reality training environment, the model could be used for real-time verification if the trainees are wearing their safety shoes correctly, especially in occupations where proper safety training is required.

  17. Data from: Augmented and Diverse Herding Dataset for Autonomous Shepherd Robots

    • portalcientifico.unileon.es
    Updated 2025
    Cite
    Sánchez González, Lidia; Mayoko, Jean Chrysostome (2025). Augmented and Diverse Herding Dataset for Autonomous Shepherd Robots [Dataset]. https://portalcientifico.unileon.es/documentos/688b602217bb6239d2d48d76
    Explore at:
    Dataset updated
    2025
    Authors
    Sánchez González, Lidia; Mayoko, Jean Chrysostome
    Description

    This dataset enables real-time object detection of sheep, wolves, dogs, wild dog, redfox, fox, coyote, cow and humans for robotic shepherding applications. Built from raw YOLOv5-style sources, it integrates class balancing, video-based diversity, and strong augmentations to enhance robustness. A recycling strategy is used for rare classes. Compatible with YOLOv5 to YOLOv12, RT-DETR, and ROS 2 deployments on legged robots, the dataset includes labels, images, statistics, and visualizations, ready for direct use in training detection models for autonomous livestock protection.

  18. Comparison of detection results of different models in PCB dataset.

    • plos.figshare.com
    xls
    Updated Dec 12, 2024
    Cite
    Xiaoli Wang; Siti Sarah Maidin; Malathy Batumalay (2024). Comparison of detection results of different models in PCB dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0315424.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 12, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Xiaoli Wang; Siti Sarah Maidin; Malathy Batumalay
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of detection results of different models in PCB dataset.

  19. NTS-YOLO

    • figshare.com
    application/x-rar
    Updated May 15, 2024
    Cite
    Mengqi Guo (2024). NTS-YOLO [Dataset]. http://doi.org/10.6084/m9.figshare.25816276.v1
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    May 15, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Mengqi Guo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    NTS-YOLO: a nocturnal traffic sign detection method based on improved YOLOv5

    In this paper, a nighttime traffic sign recognition method, "NTS-YOLO", is proposed, which consists of three main parts. Firstly, the paper adopts the unsupervised nighttime image enhancement technique proposed by Ye-Young Kim et al. Secondly, the Convolutional Block Attention Module (CBAM) attention mechanism is introduced on the basis of the YOLOv5 network structure. Lastly, the Optimal Transport Assignment (OTA) loss function is used to optimize the model's performance in the target detection task. With this approach, the accuracy of the predicted bounding boxes can be effectively optimized so that the model predicts target locations and bounding boxes more accurately, improving the robustness and stability of the model in the detection task.

    Other data: 599 nighttime images from the CCTSDB2021 dataset are referenced, of which 80% (479 images) are used as the training set and 20% (120 images) as the validation set. In view of the relatively small number of road sign types at night, 9,170 daytime road scene images from the TT100K dataset are also referenced to increase data diversity; these are divided into a training set (7,208 images) and a validation set (1,962 images) at a ratio of 8:2.

    Links to other publicly accessible locations of the data:
    CCTSDB2021: GitHub - csust7zhangjm/CCTSDB2021
    TT100K: http://cg.cs.tsinghua.edu.cn/traffic-sign/data_model_code/data.zip

    Environment: The experimental environment consists of a high-performance computer configured with an Intel Core i7 processor, 32 GB of RAM, and an NVIDIA GeForce RTX 4060 graphics card. PyTorch 2.0.1 was chosen as the main deep learning framework, and CUDA was utilized to accelerate model training and inference to ensure computational efficiency and data processing power during the experiments.

  20. Navigateit Dataset

    • universe.roboflow.com
    zip
    Updated Jun 8, 2023
    Cite
    ITESM (2023). Navigateit Dataset [Dataset]. https://universe.roboflow.com/itesm-easou/navigateit/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 8, 2023
    Dataset authored and provided by
    ITESM
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Traffic Lights Signs Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Advanced Driver-Assistance Systems (ADAS): This model can be used as part of an ADAS, enhancing the vehicle's perception capabilities. It could read and interpret signs, improving navigation accuracy and ensuring better road safety by offering real-time information on traffic conditions and road signs to the driver.

    2. Navigational Apps: This model can be incorporated into navigational apps. It can identify road signs and provide real-time directives to users, potentially offering more dynamic, contextually aware directions than current GPS systems.

    3. Autonomous Vehicles: The NavigateIt model may serve as a critical component for Autonomous vehicles. It can provide essential information on road signs and traffic conditions, helping the autonomous vehicle's decision-making process concerning speed adjustment, direction, and adherence to traffic rules.

    4. Advanced Mapping Services: Mapping services could use this model to automatically update and correct their map databases, ensuring they include up-to-date traffic sign information.

    5. Augmented Reality (AR) Applications: In an AR setting, the model can be used to provide context-based information to users. For instance, it could identify signs and inform walkers or cyclists about routes, potential hazards, or guidance based on the perceived traffic signs.
