MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
A custom object detection dataset for vehicle and pedestrian tracking. Includes annotated instances of mycar, person, car, and motorcycle.
This dataset is designed to train and evaluate real-time models (e.g. YOLOv8/YOLOv12) for tasks such as surveillance, traffic monitoring, or autonomous systems.
📦 License: MIT
📸 Source: Collected from private camera footage and public domain datasets
📊 Class Distribution: 4 classes across ~700 images
⚙️ Augmented via Roboflow with blur, scale, flip, and exposure variance
This project supports iterative model-assisted labeling using Roboflow Train and Deploy.
Optimized for model-assisted annotation — detect first, fix later!
The dataset is prepared with the proper directory structure and .txt label files for YOLOv12.
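The .txt label files follow the standard YOLO convention: one object per line, with a class index followed by normalized center coordinates and box size. A minimal parsing sketch (the class ordering below is an assumption; check the dataset's data.yaml for the actual mapping):

```python
# Parse a YOLO-format label file: each line is
# "<class_id> <x_center> <y_center> <width> <height>",
# with coordinates normalized to [0, 1] relative to image size.
# Class order is assumed here; the dataset's data.yaml is authoritative.
CLASS_NAMES = ["mycar", "person", "car", "motorcycle"]

def parse_yolo_labels(text):
    boxes = []
    for line in text.strip().splitlines():
        cls, xc, yc, w, h = line.split()
        boxes.append({
            "class": CLASS_NAMES[int(cls)],
            "x_center": float(xc), "y_center": float(yc),
            "width": float(w), "height": float(h),
        })
    return boxes

if __name__ == "__main__":
    sample = "1 0.500 0.400 0.200 0.300\n2 0.250 0.750 0.100 0.150"
    for box in parse_yolo_labels(sample):
        print(box["class"], box["x_center"])
```

The same format is emitted by Roboflow's "YOLO" export options, so a parser like this works for any of the YOLO-format datasets listed on this page.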
WIDER FACE is a face detection benchmark dataset whose images are selected from the publicly available WIDER dataset. We chose 32,203 images and labeled 393,703 faces with a high degree of variability in scale, pose, and occlusion, as depicted in the sample images. The WIDER FACE dataset is organized into 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as training, validation, and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. As with the MALF and Caltech datasets, we do not release bounding box ground truth for the test images; users are required to submit final prediction files, which we then evaluate.
Original Dataset from: http://shuoyang1213.me/WIDE
Creative Commons License (CC BY-NC-ND)
https://api.github.com/licenses/cc0-1.0
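The 40%/10%/50% per-event split described above can be sketched as a simple random partition within each event class (the event name and image count below are illustrative, not actual WIDER FACE data):

```python
import random

def split_event(images, seed=0):
    """Randomly partition one event class's images into
    40% train / 10% val / 50% test, as in the WIDER FACE protocol."""
    rng = random.Random(seed)
    images = images[:]
    rng.shuffle(images)
    n = len(images)
    n_train = round(0.4 * n)
    n_val = round(0.1 * n)
    return (images[:n_train],
            images[n_train:n_train + n_val],
            images[n_train + n_val:])

if __name__ == "__main__":
    # Hypothetical event class with 100 images
    imgs = [f"parade_{i:04d}.jpg" for i in range(100)]
    train, val, test = split_event(imgs)
    print(len(train), len(val), len(test))  # 40 10 50
```

Splitting per event class (rather than globally) keeps each of the 61 event categories represented in all three subsets.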
YOLOv12 dataset: contains 3,900 training images and 988 validation images, all annotated in YOLO format, with three categories: people, cargo, and forklift. Rotation and negative (inversion) data augmentation was applied.
YOLOv12-CRA source code: YOLOv12 with the SE attention module and RT-DETR detection head added, and the traditional IoU loss replaced by the ATFL loss.
Test video: static/dynamic distance test video.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
🧃 Iranian Snack & Chips Detection Dataset (YOLO Format) This dataset contains annotated images of popular Iranian supermarket snacks and chips collected and labeled for object detection and instance segmentation tasks. It features 19 different product classes from well-known brands like Ashi Mashi, Cheetoz, Maz Maz, Naderi, Minoo, and Lina.
📁 Dataset Structure:
train/ – Training images
valid/ – Validation images
test/ – Test images
data.yaml – Configuration file for YOLO models
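The data.yaml referenced above typically follows the standard Ultralytics layout; a hedged sketch (the relative paths are assumptions, and only the first few of the 19 class names from this entry's list are shown):

```yaml
# Sketch of a typical Ultralytics data.yaml (paths are assumptions)
train: train/images
val: valid/images
test: test/images

nc: 19
names:
  0: Ashi Mashi snacks
  1: Chee pellet ketchup
  2: Chee pellet vinegar
  # ... 3-18: remaining classes as listed in this entry
```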
🧠 Classes (19 Total):
['Ashi Mashi snacks', 'Chee pellet ketchup', 'Chee pellet vinegar', 'Cheetoz chili chips', 'Cheetoz ketchup chips', 'Cheetoz onion and parsley chips', 'Cheetoz salty chips', 'Cheetoz snack 30g', 'Cheetoz snack 90g', 'Cheetoz vinegar chips', 'Cheetoz wheelsnack', 'Maz Maz ketchup chips', 'Maz Maz potato sticks', 'Maz Maz salty chips', 'Maz Maz vinegar chips', …]
🔧 Recommended Use Cases:
Product recognition in retail and supermarket scenes
Fine-tuning YOLO models for regional or branded goods
Instance segmentation of snacks and chips
📎 Source & License:
Annotated with Roboflow
License: CC BY 4.0 – Free to use, modify, and redistribute with attribution
Created by: Hamed Mahmoudi (halfbloodhamed)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset presents a comprehensive Bengali food segmentation dataset designed to support both semantic segmentation and object detection tasks using deep learning techniques. The dataset consists of high-quality images of traditional Bengali dishes captured in diverse real-life settings, annotated with polygon-based masks and categorized into multiple food classes. Annotation and preprocessing were performed using the Roboflow platform, with exports available in both COCO and mask formats. The dataset was used to train UNet for segmentation and YOLOv12 for detection. Augmentation and class balancing techniques were applied to improve model generalization. This dataset provides a valuable benchmark for food recognition, dietary assessment, and culturally contextualized computer vision research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
dataset.zip
A compressed folder that contains:
images/: Raw image data used for training, validation, and testing.
labels/: YOLO-format .txt annotation files for the corresponding images.
out/: Images annotated with segmentation polygons and class labels, useful for visualization and quality control.
trainer.sh
A shell script to initiate YOLOv12 model training. It configures the dataset, model architecture, and training hyperparameters.
data_species.yaml
YAML file specifying the dataset structure and class labels. It defines 7 classes: ["D_pulex", "ballooned", "copepod", "egg", "resting_egg", "D_galeata", "S_vetulus"].
chunker.py
A utility script for preprocessing high-resolution images. It splits large images into chunks for more effective training on high-resolution biological data. Supports .tif, .png, and .jpg formats.
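The chunking idea behind chunker.py can be sketched as pure tile arithmetic; the chunk size and overlap values below are illustrative, and the actual script's parameters and cropping logic may differ:

```python
def chunk_boxes(width, height, chunk=640, overlap=64):
    """Compute (left, top, right, bottom) crop boxes that tile a
    width x height image with overlapping chunks, so objects on
    chunk borders still appear whole in at least one tile."""
    step = chunk - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            right = min(left + chunk, width)
            bottom = min(top + chunk, height)
            boxes.append((left, top, right, bottom))
    return boxes

if __name__ == "__main__":
    # A hypothetical 2000 x 1500 microscope image
    tiles = chunk_boxes(2000, 1500)
    print(len(tiles), tiles[0], tiles[-1])
```

Each returned box can then be passed to an image library's crop call; the overlap between neighboring tiles reduces the chance of splitting an organism across chunk boundaries.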
d_pulex_missclass_analysis.ipynb
A Jupyter notebook used for post-training analysis, focusing on the misclassification of Daphnia pulex.
autosplit_train_res.csv
A CSV file containing the inference results on the test images (used for the analysis of misclassification of Daphnia pulex).
DaphnAI.pt
The pre-trained model weights.
tutorial.ipynb
An interactive, step-by-step Jupyter notebook for inference, visualization, and data extraction, with code and explanations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of 508 images labeled for detecting plant health in vineyards. The images were randomly selected from aerial photographs captured using DJI Mavic drones on different dates between 2023 and 2024, covering various stages of plant growth.
The dataset includes three classes:
Healthy
Mildew
Low-Iron Deficiency
Each image is annotated with bounding boxes and has a resolution of 640 × 640 pixels. The dataset is formatted for YOLOv12 and was manually labeled using Roboflow. It is divided into 356 images for training, 102 images for validation, and 50 images for testing, ensuring a balanced distribution for model development and evaluation.
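The 356/102/50 split described above can be reproduced with a deterministic shuffle; the file names and seed below are hypothetical, since the actual split assignment is not published with the dataset:

```python
import random

def split_dataset(files, n_train=356, n_val=102, seed=42):
    """Deterministically shuffle and split image files into
    train / val / test subsets of fixed sizes."""
    rng = random.Random(seed)
    files = sorted(files)   # sort first so the shuffle is reproducible
    rng.shuffle(files)
    return (files[:n_train],
            files[n_train:n_train + n_val],
            files[n_train + n_val:])

if __name__ == "__main__":
    # 508 hypothetical drone-image file names
    images = [f"vineyard_{i:03d}.jpg" for i in range(508)]
    train, val, test = split_dataset(images)
    print(len(train), len(val), len(test))  # 356 102 50
```

Fixing the seed and sorting before shuffling makes the split reproducible across runs and machines, which matters when comparing model checkpoints.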