Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
YOLOV11 TRAINING DIAZ is a dataset for object detection tasks - it contains Eyes Mouth Titles annotations for 310 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Training Yolov11 Coba Coba is a dataset for object detection tasks - it contains Eyes Mouth Titles annotations for 310 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Yolo V11 Testing is a dataset for object detection tasks - it contains Objects annotations for 646 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Yolo V 11 Training is a dataset for object detection tasks - it contains P annotations for 310 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data were collected on 24 pigs that were video-monitored day and night under two contrasted conditions: thermoneutral (TN, 22°C) and heat stress (HS, 32°C). All pigs were housed individually and had free access to water and to an automatic electronic feeder delivering pellets four times a day. Environmental conditions (temperature, humidity) in the room were recorded by sensors. After acquisition, videos were processed using YOLOv11, a real-time object detection algorithm based on a convolutional neural network (CNN), to extract the following behavioural traits: drinking, willingness to eat, lying down, standing up, moving around, curiosity towards the littermate housed in the neighbouring pen, and contact between the two animals (cuddling). A one-minute sampling rate was applied (each minute corresponds to 150 processed frames) for a continuous period of 16 days spanning the two thermal conditions (9 days in TN, 6 days in HS, 1 day back in TN). The algorithm was first trained using manual video labelling at the individual scale. Consistency with the automatic electronic feeder's data (also provided) was thoroughly checked. The dataset allows quantitative criteria to be analysed to decipher inter-individual differences in animal behaviour and their dynamic adaptation to heat stress.
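As a rough illustration of the data volume implied by the stated sampling rate (this arithmetic is ours, not part of the dataset description):

```python
def frames_processed(days: int, frames_per_minute: int = 150) -> int:
    """Total frames processed per pig over a monitoring period.

    The dataset description states that each minute corresponds to
    150 processed frames (i.e. 2.5 frames per second).
    """
    minutes = days * 24 * 60
    return minutes * frames_per_minute

# 16-day monitoring period (9 days TN + 6 days HS + 1 day back to TN)
total = frames_processed(16)  # 3,456,000 frames per pig
```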
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains annotated images of Indonesian vehicle license plates, intended for use in object detection tasks. The annotations are formatted for the YOLOv11 object detection model and were sourced from a public Roboflow project.
The dataset was originally published under the title "anpr-model-1" by the Roboflow user "anpr", and was accessed via Roboflow Universe:
🔗 https://universe.roboflow.com/anpr-n2dbe/anpr-model-1/dataset/1
The dataset was not modified after download and retains its original folder structure and label format. It includes:
- `images/` folder (training and validation images)
- `labels/` folder (YOLOv11-compatible annotations)
- Class label: `license-plate`
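The `labels/` files use the standard YOLO txt box format. A minimal parser sketch (the 640×640 image size below is an assumption for illustration, not a property of this dataset):

```python
def parse_yolo_label(line: str, img_w: int, img_h: int):
    """Convert one YOLO txt annotation line to a pixel bounding box.

    YOLO format: `<class_id> <x_center> <y_center> <width> <height>`,
    with all coordinates normalized to [0, 1].
    Returns (class_id, (x_min, y_min, x_max, y_max)) in pixels.
    """
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = (xc - w / 2) * img_w
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return int(cls), (x_min, y_min, x_max, y_max)

# A hypothetical line for the single `license-plate` class (id 0):
cls_id, box = parse_yolo_label("0 0.5 0.5 0.2 0.1", 640, 640)
# cls_id == 0, box == (256.0, 288.0, 384.0, 352.0)
```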
This re-upload serves to preserve the dataset in a citable form with a DOI for academic use and reproducibility.
License and Usage Notice:
No license information was specified on the original source page. This dataset is shared for academic and research purposes only. Users should verify and respect the original dataset’s usage policies and obtain necessary permissions before any commercial or derivative use. The uploader does not claim new rights over the original data.
Attribution:
Please cite both this Zenodo upload and the original dataset by Roboflow user “anpr” if you use this data in your work.
Disclaimer:
The uploader is not responsible for any misuse, legal issues, or ethical concerns arising from the use of this dataset. Use it at your own discretion and responsibility.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental data for the paper "Hierarchical Deep Learning Framework for Automated Marine Vegetation and Fauna Analysis Using ROV Video Data." This dataset supports the study by providing resources essential for reproducing and validating the research findings.

Dataset Contents and Structure:
- Hierarchical Model Weights: `.pth` files containing trained weights for all alpha regularization values used in the hierarchical classification models.
- MaskRCNN-Segmented Objects: `.jpg` files representing segmented objects detected by the MaskRCNN model, accompanied by `maskrcnn-segmented-objects-dataset.parquet`, which includes metadata and classifications. Columns: `masked_image` (path to the segmented image file), `confidence` (confidence score for the prediction), `predicted_species` (predicted species label), `species` (true species label).
- MaskRCNN Weights: trained MaskRCNN model weights, including the hierarchical CNN models integrated with MaskRCNN in the processing pipeline.
- Pre-Trained Models: `.pt` files for all object detectors trained on the Esefjorden Marine Vegetation Segmentation Dataset (EMVSD) in YOLO txt format.
- Segmented Object Outputs: segmentation outputs and datasets for the following models:
  - RT-DETR: `rtdetr-segmented-objects/`, `rtdetr-segmented-objects-dataset.parquet`
  - YOLO-SAG: `yolosag-segmented-objects/`, `yolosag-segmented-objects-dataset.parquet`
  - YOLOv11: `yolov11-segmented-objects/`, `yolov11-segmented-objects-dataset.parquet`
  - YOLOv8: `yolov8-segmented-objects/`, `yolov8-segmented-objects-dataset.parquet`
  - YOLOv9: `yolov9-segmented-objects/`, `yolov9-segmented-objects-dataset.parquet`

Usage Instructions:
1. Download and extract the dataset.
2. Use the Python scripts provided in the associated GitHub repository for evaluation and inference: https://github.com/Ci2Lab/FjordVision

Reproducibility: The dataset includes pre-trained weights, segmentation outputs, and experimental results to facilitate reproducibility. The `.parquet` files and segmented object directories follow a standardized format to ensure consistency.

Licensing: This dataset is released under the CC BY 4.0 license, permitting reuse with proper attribution.

Related Materials:
- GitHub Repository: https://github.com/Ci2Lab/FjordVision
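A minimal sketch of how one might score predictions against ground truth using the parquet columns described above. Plain dicts stand in for parquet rows to keep the sketch dependency-free; with the real files one would load them via `pandas.read_parquet`. The example rows are hypothetical:

```python
def accuracy(rows):
    """Fraction of rows where the predicted species matches the true one.

    Each row mirrors the columns of
    maskrcnn-segmented-objects-dataset.parquet.
    """
    correct = sum(r["predicted_species"] == r["species"] for r in rows)
    return correct / len(rows)

rows = [  # hypothetical example rows
    {"masked_image": "obj_001.jpg", "confidence": 0.91,
     "predicted_species": "kelp", "species": "kelp"},
    {"masked_image": "obj_002.jpg", "confidence": 0.54,
     "predicted_species": "kelp", "species": "seagrass"},
]
accuracy(rows)  # 0.5
```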
CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
This dataset is an enhanced version of the original D-Fire dataset, designed to facilitate smoke and fire detection tasks. It has been restructured to include a validation split, making it more accessible and user-friendly.
Introducing Flare Guard — an advanced, open-source solution for real-time fire and smoke detection.
This system uses YOLOv11, an advanced object detection model, to monitor live video feeds and detect fire hazards in real-time. Detected threats trigger instant alerts via Telegram and WhatsApp for rapid response.
The dataset is organized as follows:
- `train/`
  - `images/`: Training images
  - `labels/`: Training labels in YOLO format
- `val/`
  - `images/`: Validation images
  - `labels/`: Validation labels in YOLO format
- `test/`
  - `images/`: Test images
  - `labels/`: Test labels in YOLO format

The dataset includes annotations for the following classes:
- `0`: Smoke
- `1`: Fire

The dataset comprises over 21,000 images, categorized as follows:

| Category | Number of Images |
|---|---|
| Only fire | 1,164 |
| Only smoke | 5,867 |
| Fire and smoke | 4,658 |
| None | 9,838 |
Total bounding boxes:
The dataset is divided into training, validation, and test sets to support model development and evaluation.
If you use this dataset in your research or projects, please cite the original paper:
Pedro Vinícius Almeida Borges de Venâncio, Adriano Chaves Lisboa, Adriano Vilela Barbosa. "An automatic fire detection system based on deep convolutional neural networks for low-power, resource-constrained devices." Neural Computing and Applications, vol. 34, no. 18, 2022, pp. 15349–15368. DOI: 10.1007/s00521-022-07467-z.
Credit for the original dataset goes to the researchers from Gaia, solutions on demand (GAIA). The original dataset and more information can be found in the D-Fire GitHub repository.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Lettuce Yolov11 is a dataset for instance segmentation tasks - it contains Nutrition Deficiency Yolo annotations for 486 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of annotated high-resolution aerial imagery of roof materials in Bonn, Germany, in the Ultralytics YOLO instance segmentation dataset format. Aerial imagery was sourced from OpenAerialMap, specifically from the Maxar Open Data Program. Roof material labels and building outlines were sourced from OpenStreetMap. Images and labels are split into training, validation, and test sets, intended for training future machine learning models for both building segmentation and roof type classification.

The dataset is intended for applications such as informing studies on thermal efficiency, roof durability, heritage conservation, or socioeconomic analyses. There are six roof material types: roof tiles, tar paper, metal, concrete, gravel, and glass.

Note: The data is in a .zip due to file upload limits. Please find a more detailed dataset description in the README.md.
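In the Ultralytics YOLO segmentation format, each label line carries a class id followed by a normalized polygon outline. A minimal parser sketch (the triangle below is a hypothetical example, not taken from this dataset):

```python
def parse_seg_label(line: str):
    """Parse one Ultralytics YOLO segmentation label line.

    Format: `<class_id> x1 y1 x2 y2 ... xn yn`, where the polygon
    vertex coordinates are normalized to [0, 1].
    Returns (class_id, [(x1, y1), ..., (xn, yn)]).
    """
    parts = line.split()
    cls = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    points = list(zip(coords[0::2], coords[1::2]))
    return cls, points

# Hypothetical triangular outline for class 0:
cls, poly = parse_seg_label("0 0.1 0.1 0.9 0.1 0.5 0.8")
# cls == 0, poly == [(0.1, 0.1), (0.9, 0.1), (0.5, 0.8)]
```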
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of modern industry has significantly raised the demand for workpieces. To ensure the quality of workpieces, workpiece surface defect detection has become an indispensable part of industrial production. Most workpiece surface defect detection technologies rely on cloud computing. However, transmitting large volumes of data via wireless networks places substantial computational burdens on cloud servers, significantly reducing defect detection speed. Therefore, to enable efficient and precise detection, this paper proposes a workpiece surface defect detection method based on YOLOv11 and edge computing. First, the NEU-DET dataset was expanded using random flipping, cropping, and the self-attention generative adversarial network (SA-GAN). Then, the accuracy indicators of the YOLOv7–YOLOv11 models were compared on NEU-DET and validated on the Tianchi aluminium profile surface defect dataset. Finally, the cloud-based YOLOv11 model, which achieved the highest accuracy, was converted to the edge-based YOLOv11-RKNN model and deployed on the RK3568 edge device to improve the detection speed. Results indicate that YOLOv11 with SA-GAN achieved mAP@0.5 improvements of 7.7%, 3.1%, 5.9%, and 7.0% over YOLOv7, YOLOv8, YOLOv9, and YOLOv10, respectively, on the NEU-DET dataset. Moreover, YOLOv11 with SA-GAN achieved an 87.0% mAP@0.5 on the Tianchi aluminium profile surface defect dataset, outperforming the other models again. This verifies the generalisability of the YOLOv11 model. Additionally, quantising and deploying YOLOv11 on the edge device reduced its size from 10,156 kB to 4,194 kB and reduced its single-image detection time from 52.1 ms to 33.6 ms, which represents a significant efficiency enhancement.
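The mAP@0.5 metric cited above counts a detection as correct when its box overlaps a ground-truth box with intersection-over-union of at least 0.5. A minimal sketch of that matching criterion (illustrative only, not the paper's evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x_min, y_min, x_max, y_max). Under mAP@0.5, a
    detection matches a ground-truth box when iou(...) >= 0.5.
    """
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

iou((0, 0, 2, 2), (1, 0, 3, 2))  # 1/3: overlap area 2 of union 6
```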
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
YOLOv11 Instance Segmentation V2 is a dataset for instance segmentation tasks - it contains Kitchen 8jTn annotations for 349 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Designed for industrial safety applications, this dataset provides high-quality, well-annotated data focusing on the detection of Personal Protective Equipment (PPE) and is particularly suitable for the training and application of computer vision models. The dataset contains 3,212 images of 640×640 pixels and focuses on the detection of PPE, such as the wearing of helmets and reflective undershirts. The data comes from a variety of sources, including public platforms such as GitHub, Kaggle, and Roboflow, as well as real-life photographs from different scenarios, to ensure that the data is diverse and can be adapted to a variety of scenarios and applications. The dataset is labeled and categorized according to the official YOLO specification, and the data can be directly applied to mainstream object detection frameworks such as YOLOv8 and YOLOv11, making it an important resource for researchers, developers, and practitioners. This dataset can be used to improve industrial safety monitoring systems and enhance construction site safety.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Objective
This project focuses on developing an object detection model using the YOLOv11 architecture. The primary goal is to accurately detect and classify objects within images across three distinct classes. The model was trained for 250 epochs to achieve high performance in terms of mean Average Precision (mAP), Precision, and Recall.

Dataset Information
- Number of Images: 300
- Number of Annotations: 582
- Classes: 3
- Average Image Size: 0.30 megapixels
- Image Size Range: 0.03 megapixels to 11.83 megapixels
- Median Image Ratio: 648x500 pixels

Preprocessing
- Auto-Orient: Applied to ensure correct image orientation.
- Resize: Images were stretched to a uniform size of 640x640 pixels to maintain consistency across the dataset.

Augmentations
- Outputs per Training Example: 3 augmented outputs were generated for each training example to enhance the diversity of the training data.
- Crop: Random cropping was applied with a minimum zoom of 0% and a maximum zoom of 8%.
- Rotation: Images were randomly rotated between -8° and +8° to improve the model's robustness to different orientations.

Training and Performance
The model was trained for 250 epochs, and the following performance metrics were achieved:
- mAP (mean Average Precision): 90.4%
- Precision: 87.7%
- Recall: 83.4%
These metrics indicate that the model is highly effective in detecting and classifying objects within the images, with a strong balance between precision and recall.
Key Insights
- mAP: The high mAP score of 90.4% suggests that the model is accurate in predicting the correct bounding boxes and class labels for objects in the dataset.
- Precision: A precision of 87.7% indicates that the model has a low false positive rate, meaning it is reliable in identifying true objects.
- Recall: The recall of 83.4% shows that the model is capable of detecting most of the relevant objects in the images.

Visualization
The training process was monitored using various metrics, including mAP, Box Loss, Class Loss, and Object Loss. The visualizations show the progression of these metrics over the 250 epochs, demonstrating the model's learning and improvement over time.
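The precision and recall figures follow the standard definitions. A minimal sketch, with hypothetical true-positive/false-positive/false-negative counts chosen only to illustrate rates similar to those reported (they are not this project's actual counts):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts, not taken from this project's evaluation:
p, r = precision_recall(tp=877, fp=123, fn=174)
# p == 0.877; r is roughly 0.834
```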
Conclusion
The project successfully implemented and trained an object detection model using the YOLOv11 architecture. The achieved performance metrics highlight the model's effectiveness and reliability in detecting objects across different classes. This model can be further refined and applied to real-world applications for object detection tasks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
YOLOv11-annotated training, validation, and test dataset for the detection of drones in thermal images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
YOLOv11-annotated training, validation, and test dataset for the detection of drones in RGB images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Falls pose a significant health risk for elderly populations, necessitating advanced monitoring technologies. This study introduces a novel two-stage fall detection system that combines computer vision and machine learning to accurately identify fall events. The system uses the YOLOv11 object detection model to track individuals and estimate their body pose by identifying 17 key body points across video frames. The proposed approach extracts nine critical geometric features, including the center of mass and various body angles. These features are used to train a support vector machine (SVM) model for binary classification, distinguishing between standing and lying with high precision. The system’s temporal validation method analyzes sequential frame changes, ensuring robust fall detection. Experimental results, evaluated on the University of Rzeszow Fall Detection (URFD) dataset and the Multiple Cameras Fall Dataset (MCFD), demonstrate exceptional performance, achieving 88.8% precision, 94.1% recall, an F1-score of 91.4%, and a specificity of 95.6%. The method outperforms existing approaches by effectively capturing complex geometric changes during fall events. The system is applicable to smart homes, wearable devices, and healthcare monitoring platforms, offering a scalable, reliable, and efficient solution to enhance safety and independence for elderly individuals, thereby contributing to advancements in health-monitoring technology.
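One of the geometric features described above, the center of mass of the detected body points, can be sketched as follows. This is an illustrative computation under our own assumptions, not the paper's feature-extraction code:

```python
def center_of_mass(keypoints):
    """Mean (x, y) position of detected pose keypoints.

    `keypoints` is a list of (x, y) pairs, e.g. the 17 COCO-style
    body points produced by a pose model. A sharp drop in the
    center-of-mass height across consecutive frames is one cue a
    fall detector can threshold on.
    """
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    return sum(xs) / len(xs), sum(ys) / len(ys)

center_of_mass([(0, 0), (2, 0), (1, 3)])  # (1.0, 1.0)
```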
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Emotion Recognition YOLO V11 FYP 2 is a dataset for object detection tasks - it contains Emotion V2Q8 annotations for 800 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This project aims to detect and localize register numbers in document images using a custom-trained YOLOv11 object detection model. It automates the process of identifying register numbers from scanned academic sheets, making data handling faster and more efficient.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The following objects are annotated for detection in the latest version:
1. Aircraft
2. Missile
3. Military-Vehicle
4. Hand-Gun
5. Pistol
6. Rifle
7. Knife
8. Grenade
9. Smoke
10. Fire
11. Soldier
12. Camouflage
13. Drone

The YOLOv12 model was trained on Roboflow using the version 12 dataset.

Older versions have the following extra object classes:
14. Gun
15. Pointing-Gun
16. Non-Pointing Gun
17. Person

The YOLOv11 model was trained on Roboflow using the version 9 dataset.