Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Training Yolov11 Coba Coba is a dataset for object detection tasks - it contains Eyes Mouth Titles annotations for 310 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Yolo V11 Testing is a dataset for object detection tasks - it contains Objects annotations for 646 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Train The Yolov11 On Pests is a dataset for object detection tasks - it contains Pests annotations for 8,518 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
High-resolution aerial imagery with 16,000+ oriented bounding boxes for vehicle detection, pre-formatted for Ultralytics YOLOv11.
This dataset is a ready-to-use version of the original Eagle Dataset from the German Aerospace Center (DLR). The original dataset was created to benchmark object detection models on challenging aerial imagery, featuring vehicles at various orientations.
This version has been converted to the YOLOv11-OBB (Oriented Bounding Box) format. The conversion makes the dataset directly compatible with modern deep learning frameworks like Ultralytics YOLO, allowing researchers and developers to train state-of-the-art object detectors with minimal setup.
The dataset is ideal for tasks requiring precise localization of rotated objects, such as vehicle detection in parking lots, traffic monitoring, and urban planning from aerial viewpoints.
The dataset is split into training, validation, and test sets, following a standard structure for computer vision tasks.
Dataset Split, Counts, and Directory Structure:
EagleDatasetYOLO/
├── train/
│ ├── images/ # 159 images
│ └── labels/ # 159 .txt obb labels
├── val/
│ ├── images/ # 53 images
│ └── labels/ # 53 .txt obb labels
├── test/
│ ├── images/ # 106 images
│ └── labels/ # 106 .txt obb labels
├── data.yaml
└── license.md
Annotation Format (YOLOv11-OBB):
Each .txt label file contains one object per line. The format for each object is:
<class_id> <x_center> <y_center> <width> <height> <angle>
- <class_id>: The class index (in this case, 0 for 'vehicle').
- <x_center> <y_center>: The normalized center coordinates of the bounding box.
- <width> <height>: The normalized width and height of the bounding box.
- <angle>: The rotation angle of the box in radians, from -π/2 to π/2.
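For readers integrating these labels into custom tooling, the following is a minimal parsing sketch of the format above; the file path and the 1024-pixel image size are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch: parse a YOLOv11-OBB label file in the format described above.
# The label path and image size below are illustrative assumptions.
from pathlib import Path

def read_obb_labels(label_path):
    """Return (class_id, x_center, y_center, width, height, angle) tuples."""
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if line.strip():
            class_id, xc, yc, w, h, angle = line.split()
            boxes.append((int(class_id), float(xc), float(yc),
                          float(w), float(h), float(angle)))
    return boxes

# Denormalize centers and sizes for a hypothetical 1024x1024 image.
for cls, xc, yc, w, h, angle in read_obb_labels("train/labels/example.txt"):
    print(cls, xc * 1024, yc * 1024, w * 1024, h * 1024, angle)
```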
data.yaml Configuration:
A data.yaml file is included for easy integration with the Ultralytics framework.
path: ../EagleDatasetYOLO
train: train/images
val: val/images
test: test/images
nc: 1
names: ['vehicle']
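With this configuration in place, training can be as short as the hedged sketch below, using the Ultralytics Python API; the package installation and the specific checkpoint are assumptions, not requirements of the dataset.

```python
# Sketch: fine-tune an Ultralytics OBB checkpoint on this dataset.
# Assumes `pip install ultralytics` and that data.yaml resolves as configured above.
from ultralytics import YOLO

model = YOLO("yolo11n-obb.pt")  # small pretrained OBB checkpoint
model.train(data="EagleDatasetYOLO/data.yaml", epochs=100, imgsz=640)
metrics = model.val()  # evaluate on the validation split
```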
This dataset is a conversion of the original work by the German Aerospace Center (DLR). The conversion to YOLOv11-OBB format was performed by Mridankan Mandal.
The dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license (CC BY-NC-SA 4.0).
If you use this dataset in your research, please cite the original creators and acknowledge the conversion work.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains annotated images of Indonesian vehicle license plates, intended for use in object detection tasks. The annotations are formatted for the YOLOv11 object detection model and were sourced from a public Roboflow project.
The dataset was originally published under the title "anpr-model-1" by the Roboflow user "anpr", and was accessed via Roboflow Universe:
🔗 https://universe.roboflow.com/anpr-n2dbe/anpr-model-1/dataset/1
The dataset was not modified after download and retains its original folder structure and label format. It includes:
- images/ folder (training and validation images)
- labels/ folder (YOLOv11-compatible annotations)
- Class label: license-plate
This re-upload serves to preserve the dataset in a citable form with a DOI for academic use and reproducibility.
License and Usage Notice:
No license information was specified on the original source page. This dataset is shared for academic and research purposes only. Users should verify and respect the original dataset’s usage policies and obtain necessary permissions before any commercial or derivative use. The uploader does not claim new rights over the original data.
Attribution:
Please cite both this Zenodo upload and the original dataset by Roboflow user “anpr” if you use this data in your work.
Disclaimer:
The uploader is not responsible for any misuse, legal issues, or ethical concerns arising from the use of this dataset. Use it at your own discretion and responsibility.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
YOLOV11 PAGI is a dataset for object detection tasks - it contains Plastics annotations for 5,890 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
🧃 Iranian Snack & Chips Detection Dataset (YOLO Format) This dataset contains annotated images of popular Iranian supermarket snacks and chips collected and labeled for object detection and instance segmentation tasks. It features 19 different product classes from well-known brands like Ashi Mashi, Cheetoz, Maz Maz, Naderi, Minoo, and Lina.
📁 Dataset Structure:
- train/ – Training images
- valid/ – Validation images
- test/ – Test images
- data.yaml – Configuration file for YOLO models
🧠 Classes (19 Total):
['Ashi Mashi snacks', 'Chee pellet ketchup', 'Chee pellet vinegar', 'Cheetoz chili chips', 'Cheetoz ketchup chips', 'Cheetoz onion and parsley chips', 'Cheetoz salty chips', 'Cheetoz snack 30g', 'Cheetoz snack 90g', 'Cheetoz vinegar chips', 'Cheetoz wheelsnack', 'Maz Maz ketchup chips', 'Maz Maz potato sticks', 'Maz Maz salty chips', 'Maz Maz vinegar chips',
🔧 Recommended Use Cases:
- Product recognition in retail and supermarket scenes
- Fine-tuning YOLO models for regional or branded goods
- Instance segmentation of snacks and chips
📎 Source & License:
- Annotated with Roboflow
- License: CC BY 4.0 – Free to use, modify, and redistribute with attribution
- Created by: Hamed Mahmoudi (halfbloodhamed)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7–YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1 ms to 33.6 ms, which represents a significant efficiency enhancement.
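The paper's exact conversion pipeline is not reproduced here, but one plausible route, sketched under stated assumptions with the rknn-toolkit2 API, looks like this; the model file names and the calibration image list are hypothetical.

```python
# Illustrative sketch (not the paper's exact pipeline): export a YOLO model to
# ONNX, then quantize and convert it for an RK3568 edge device with rknn-toolkit2.
from ultralytics import YOLO
from rknn.api import RKNN

YOLO("yolov11_defect.pt").export(format="onnx")  # weight file name is assumed

rknn = RKNN()
rknn.config(target_platform="rk3568")
rknn.load_onnx(model="yolov11_defect.onnx")
rknn.build(do_quantization=True, dataset="calib_images.txt")  # calibration list assumed
rknn.export_rknn("yolov11_defect.rknn")
rknn.release()
```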
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for Training YOLOv11 AI Model to Detect Risky Objects for Assistive Guidance.
The goal of this project is to create a comprehensive dataset to train an AI model using YOLOv11 (You Only Look Once version 11). The model will detect and identify "risky objects" that blind and visually impaired individuals may encounter in indoor and outdoor environments. This dataset serves as the foundation for an assistive technology tool designed to enhance mobility and safety by providing real-time object detection and guidance.
Objects identified as potentially risky were selected through research and user studies. The dataset focuses on items that could obstruct paths, pose tripping hazards, or cause injury if unnoticed.
Examples include:
- Outdoor Risks:
* Vehicles
* Bicycles
* Potholes
* Curbs
* Barriers
* People
The YOLOv11 model will process visual data from a wearable or smartphone camera, identifying and alerting the user to risks in real-time.
By providing proactive guidance, the system empowers blind and visually impaired individuals to navigate more independently and safely.
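As an illustration of how such a detector might be wired into an alerting loop, here is a hedged sketch using Ultralytics YOLO and OpenCV; the weights file, class names, and alert mechanism are placeholders rather than the project's actual implementation.

```python
# Illustrative real-time loop: detect risky objects in a camera feed and alert.
# Class names and the alert mechanism are placeholders, not the project's code.
import cv2
from ultralytics import YOLO

RISKY = {"vehicle", "bicycle", "pothole", "curb", "barrier", "person"}  # assumed labels
model = YOLO("risky_objects_yolo11.pt")  # hypothetical trained weights

cap = cv2.VideoCapture(0)  # wearable or smartphone camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]
    for box in results.boxes:
        name = model.names[int(box.cls)]
        if name in RISKY and float(box.conf) > 0.5:
            print(f"Alert: {name} ahead")  # replace with audio/haptic feedback
cap.release()
```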
This project aims to leverage advanced AI technology to address the unique challenges faced by blind and visually impaired individuals. By creating a specialized dataset for training YOLOv11, the model can detect risky objects with high precision, enhancing safety and mobility. The ultimate outcome is an AI-powered assistive system that provides greater independence and confidence to its users in their everyday lives.
This project incorporates images from the following public datasets. We extend our gratitude to the creators and contributors of these datasets for making their work freely available to the research community:
We adhere to the terms and conditions of these datasets' licenses and greatly appreciate their contribution to advancing research in AI and assistive technologies.
CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
This dataset is an enhanced version of the original D-Fire dataset, designed to facilitate smoke and fire detection tasks. It has been restructured to include a validation split, making it more accessible and user-friendly.
Introducing Flare Guard — an advanced, open-source solution for real-time fire and smoke detection.
This system uses YOLOv11, an advanced object detection model, to monitor live video feeds and detect fire hazards in real-time. Detected threats trigger instant alerts via Telegram and WhatsApp for rapid response.
The dataset is organized as follows:
- train/
  - images/: Training images
  - labels/: Training labels in YOLO format
- val/
  - images/: Validation images
  - labels/: Validation labels in YOLO format
- test/
  - images/: Test images
  - labels/: Test labels in YOLO format
The dataset includes annotations for the following classes:
- 0: Smoke
- 1: Fire
The dataset comprises over 21,000 images, categorized as follows:
| Category | Number of Images |
|---|---|
| Only fire | 1,164 |
| Only smoke | 5,867 |
| Fire and smoke | 4,658 |
| None | 9,838 |
Total bounding boxes:
The dataset is divided into training, validation, and test sets to support model development and evaluation.
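For training with YOLO-family frameworks, a data.yaml along the following lines would match the structure above; the root path and class-name casing are assumptions based on the layout described.

path: ../DFireDataset   # assumed dataset root
train: train/images
val: val/images
test: test/images
nc: 2
names: ['Smoke', 'Fire']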
If you use this dataset in your research or projects, please cite the original paper:
Pedro Vinícius Almeida Borges de Venâncio, Adriano Chaves Lisboa, Adriano Vilela Barbosa. "An automatic fire detection system based on deep convolutional neural networks for low-power, resource-constrained devices." Neural Computing and Applications, vol. 34, no. 18, 2022, pp. 15349–15368. DOI: 10.1007/s00521-022-07467-z.
Credit for the original dataset goes to the researchers from Gaia, solutions on demand (GAIA). The original dataset and more information can be found in the D-Fire GitHub repository.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of annotated high-resolution aerial imagery of roof materials in Bonn, Germany, in the Ultralytics YOLO instance segmentation dataset format. Aerial imagery was sourced from OpenAerialMap, specifically from the Maxar Open Data Program. Roof material labels and building outlines were sourced from OpenStreetMap. Images and labels are split into training, validation, and test sets, meant for future machine learning models to be trained upon, for both building segmentation and roof type classification.
The dataset is intended for applications such as informing studies on thermal efficiency, roof durability, heritage conservation, or socioeconomic analyses. There are six roof material types: roof tiles, tar paper, metal, concrete, gravel, and glass.
Note: The data is in a .zip due to file upload limits. Please find a more detailed dataset description in the README.md
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Objective This project focuses on developing an object detection model using the YOLOv11 architecture. The primary goal is to accurately detect and classify objects within images across three distinct classes. The model was trained for 250 epochs to achieve high performance in terms of mean Average Precision (mAP), Precision, and Recall.
Dataset Information
- Number of Images: 300
- Number of Annotations: 582
- Classes: 3
- Average Image Size: 0.30 megapixels
- Image Size Range: 0.03 megapixels to 11.83 megapixels
- Median Image Ratio: 648x500 pixels
Preprocessing
- Auto-Orient: Applied to ensure correct image orientation.
- Resize: Images were stretched to a uniform size of 640x640 pixels to maintain consistency across the dataset.

Augmentations
- Outputs per Training Example: 3 augmented outputs were generated for each training example to enhance the diversity of the training data.
- Crop: Random cropping was applied with a minimum zoom of 0% and a maximum zoom of 8%.
- Rotation: Images were randomly rotated between -8° and +8° to improve the model's robustness to different orientations.
Training and Performance
The model was trained for 250 epochs, and the following performance metrics were achieved:
- mAP (mean Average Precision): 90.4%
- Precision: 87.7%
- Recall: 83.4%
These metrics indicate that the model is highly effective in detecting and classifying objects within the images, with a strong balance between precision and recall.
**Key Insights**
- mAP: The high mAP score of 90.4% suggests that the model is accurate in predicting the correct bounding boxes and class labels for objects in the dataset.
- Precision: A precision of 87.7% indicates that the model has a low false positive rate, meaning it is reliable in identifying true objects.
- Recall: The recall of 83.4% shows that the model is capable of detecting most of the relevant objects in the images.

Visualization
The training process was monitored using various metrics, including mAP, Box Loss, Class Loss, and Object Loss. The visualizations show the progression of these metrics over the 250 epochs, demonstrating the model's learning and improvement over time.
Conclusion The project successfully implemented and trained an object detection model using the YOLOv11 architecture. The achieved performance metrics highlight the model's effectiveness and reliability in detecting objects across different classes. This model can be further refined and applied to real-world applications for object detection tasks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
YOLOv11-annotated training, validation, and test dataset for the detection of drones in thermal images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Designed for industrial safety applications, this dataset provides high-quality, well-annotated data focusing on the detection of Personal Protective Equipment (PPE) and is particularly suitable for the training and application of computer vision models. The dataset contains 2,286 images of 640×640 pixels and focuses on the detection of PPE, such as the wearing of helmets and reflective vests. The data comes from a variety of sources, including public platforms such as GitHub, Kaggle, and Roboflow, as well as real-life photographs from different scenarios, to ensure that the data is diverse and can be adapted to a variety of scenarios and applications. The dataset is labeled and categorized according to the official YOLO specification, and the data can be directly applied to mainstream object detection frameworks such as YOLOv8 and YOLOv11, making it an important resource for researchers, developers, and practitioners. This dataset can be used to improve industrial safety monitoring systems and enhance construction site safety.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The BRA-Dataset is an expanded dataset of Brazilian wildlife, developed for object detection tasks, combining real images with synthetic samples generated by Generative Adversarial Networks (GANs). It includes five medium- and large-sized mammal species frequently involved in roadkill incidents on Brazilian highways: lowland tapir (Tapirus terrestris), jaguarundi (Herpailurus yagouaroundi), maned wolf (Chrysocyon brachyurus), puma (Puma concolor), and giant anteater (Myrmecophaga tridactyla). The primary goal is to provide a comprehensive and standardized resource for biodiversity conservation research, wildlife monitoring technologies, and computer vision applications, with an emphasis on automated wildlife detection.
The original dataset by Ferrante et al. (2022) was built from images of wildlife captured through camera traps, field cameras, and structured internet searches, followed by manual curation and bounding box annotation. In this work, the dataset was expanded to 9,238 images, divided into three main groups:
Real images — original photographs collected from the aforementioned sources. Total: 1,823.
Images augmented by classical techniques — generated from real images using transformations such as rotations (RT), horizontal flips (HF), vertical flips (VF), and horizontal (HS) and vertical shifts (VS). Total: 7,300.
Synthetic images generated by GANs — produced with WGAN-GP models trained individually for each species, using pre-processed image subsets. All generated samples underwent qualitative assessment to ensure morphological consistency, proper framing, and visual fidelity before inclusion. Total: 115.
The directory structure is organized into images/ and labels/, each subdivided into train/ and val/, following an 80% training and 20% validation split. Images are provided in .jpg format and annotations in .txt following the YOLO standard (class_id x_center y_center width height, with normalized coordinates). Furthermore, the file naming convention is designed to clearly indicate the species and the type of data augmentation applied.
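As a small worked example of this annotation standard, one label line can be denormalized to pixel corner coordinates as follows; the sample line and image size are assumptions for illustration only.

```python
# Sketch: convert one normalized YOLO label line to pixel corner coordinates.
def yolo_to_corners(line, img_w, img_h):
    class_id, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return int(class_id), x1, y1, x2, y2

print(yolo_to_corners("0 0.5 0.5 0.2 0.3", 1920, 1080))
# -> (0, 768.0, 378.0, 1152.0, 702.0)
```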
The dataset is compatible with various object detection architectures and was evaluated using YOLOv5, YOLOv8, and YOLOv11 in n, s, and m variants, aiming to assess the impact of dataset expansion in scenarios with different computational capabilities and performance requirements.
By combining real data, classical augmentations, and high-quality synthetic samples, the BRA-Dataset provides a valuable resource for wildlife detection, environmental monitoring, and conservation research, especially in contexts where image availability for rare or threatened species is limited.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
In this dataset, I annotated the cow's neck to train a model for neck detection. I trained it with YOLOv11 and achieved 92.2% precision.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The TgFC dataset comprises 5,236 annotated microscopic images (640×640 resolution) of fungal spores from diseased Tectona grandis (teak) leaves, collected in Bangladesh (Sylhet and Rangamati). The images represent three fungal taxa:
- Olivea tectonae (2,219 images)
- Neopestalotiopsis sp. (1,688 images)
- Colletotrichum siamense (1,329 images)

Spores were imaged at 40× magnification using a Zeiss Primostar 3 microscope. Wet mount preparations were made without staining to preserve field morphology. All images were manually annotated with bounding boxes using LabelImg for object detection, based on species-specific spore morphology.

The dataset is structured for YOLO-based object detection and includes separate folders for training (80%), validation (10%), and testing (10%), each containing images/ and labels/ subdirectories. No image preprocessing was performed, allowing flexible adaptation to custom pipelines. The dataset is annotated in YOLO format for object detection but can be adapted for various computer vision and image analysis tasks, including traditional machine learning, deep learning, classification, and segmentation workflows.

Applications and Impact:
- Enables automated spore identification from foliar samples, reducing manual diagnostic workload
- Supports airborne spore detection and spore quantification in environmental samples for outbreak prediction
- Integrates with real-time disease monitoring systems for early intervention in agriculture
- Facilitates training and benchmarking of deep learning models (e.g., YOLO, CNNs) for fungal detection
- Supports transfer learning across fungal species and imaging conditions
- Provides a foundation for developing portable, field-ready diagnostic tools
- Applicable beyond teak, as the included fungi are common in other tropical and subtropical ecosystems
- Open-access format encourages future expansion and collaborative contributions of more annotated images
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Project Overview This project, Helmet and Number Plate Detection for Motorbike Safety, provides a dataset for detecting motorbike riders wearing helmets, riders without helmets, and the presence of number plates. The dataset can be used for road safety monitoring, automated enforcement, and toll access systems.
Current Status:
- Training Complete: Using the YOLOv11 model, we trained with 20,000 images across 300 epochs to achieve high accuracy.
- Public and Open to Contribution: The dataset allows community contributions to continually improve label accuracy and expand the dataset.
External Resources: YOLOv11 Documentation
Data were collected on 24 pigs that were video-monitored day and night under two contrasting conditions: thermoneutral (TN, 22°C) and heat stress (HS, 32°C). All pigs were housed individually and had free access to an automatic electronic feeder delivering pellets four times a day, and to water. After acquisition, videos were processed using YOLOv11, a real-time object detection algorithm based on a convolutional neural network (CNN), to extract the following behavioural traits: drinking, willingness to eat, lying down, standing up, moving around, curiosity towards the littermate housed in the neighbouring pen, and contact between the two animals (cuddling). Traits were extracted on a per-minute basis (each minute corresponds to 150 processed frames) for a continuous period of 16 days, spanning the two thermal conditions (9 days on TN, 6 days on HS, 1 day back to TN). The algorithm was first trained using manual video-analysis labelling at the individual scale. Consistency with the automatic electronic feeder’s data (also provided) was thoroughly checked. The dataset allows quantitative criteria to be analysed to decipher inter-individual differences in animal behaviour and their dynamic adaptation to heat stress. It can be used to train any machine learning method for behaviour prediction from videos in conventional growing pigs.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Falls pose a significant health risk for elderly populations, necessitating advanced monitoring technologies. This study introduces a novel two-stage fall detection system that combines computer vision and machine learning to accurately identify fall events. The system uses the YOLOv11 object detection model to track individuals and estimate their body pose by identifying 17 key body points across video frames. The proposed approach extracts nine critical geometric features, including the center of mass and various body angles. These features are used to train a support vector machine (SVM) model for binary classification, distinguishing between standing and lying with high precision. The system’s temporal validation method analyzes sequential frame changes, ensuring robust fall detection. Experimental results, evaluated on the University of Rzeszow Fall Detection (URFD) dataset and the Multiple Cameras Fall Dataset (MCFD), demonstrate exceptional performance, achieving 88.8% precision, 94.1% recall, an F1-score of 91.4%, and a specificity of 95.6%. The method outperforms existing approaches by effectively capturing complex geometric changes during fall events. The system is applicable to smart homes, wearable devices, and healthcare monitoring platforms, offering a scalable, reliable, and efficient solution to enhance safety and independence for elderly individuals, thereby contributing to advancements in health-monitoring technology.
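The sketch below illustrates the two-stage idea in simplified form: YOLO pose keypoints feed geometric features into an SVM classifier. The two features shown are a reduced stand-in for the paper's nine geometric features, and the checkpoint name and data variables are assumptions.

```python
# Simplified sketch of the two-stage pipeline: pose keypoints -> features -> SVM.
# The two features below stand in for the paper's nine geometric features.
import numpy as np
from sklearn.svm import SVC
from ultralytics import YOLO

pose_model = YOLO("yolo11n-pose.pt")  # 17-keypoint pose checkpoint

def frame_features(frame):
    """Return a small feature vector for the first detected person, or None."""
    res = pose_model(frame, verbose=False)[0]
    if res.keypoints is None or len(res.keypoints.xy) == 0:
        return None
    kpts = res.keypoints.xy[0].cpu().numpy()  # (17, 2) pixel coordinates
    center = kpts.mean(axis=0)                # crude center-of-mass proxy
    spread = kpts.std(axis=0)                 # body extent along x and y
    return np.concatenate([center, spread])

# With labeled frames (X: stacked feature vectors, y: 0 = standing, 1 = lying):
# clf = SVC(kernel="rbf").fit(X, y)
```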