The ImageNet dataset is a large-scale image database that contains over 14 million images, each labeled with one of 21,841 categories.
Dataset Description
Tiny ImageNet is a reduced version of the original ImageNet dataset, containing 200 classes (a subset of the 1,000 ImageNet categories).
Homepage: https://www.image-net.org/
Citation
@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and others},
  booktitle={CVPR},
  year={2009}
}
Dataset Description
The ImageNet-1K dataset contains over 1.2 million training images across 1,000 object categories.
Homepage: https://www.image-net.org/
Note: this repo hosts only the validation split. If you wish to download the train split, please use the official website.
Citation
@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and others},
  booktitle={CVPR},
  year={2009}
}
License: https://choosealicense.com/licenses/other/
Dataset Card for ImageNet
Dataset Summary
ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). ImageNet aims to provide on average 1,000 images to illustrate each synset. Images of each concept are… See the full description on the dataset page: https://huggingface.co/datasets/ILSVRC/imagenet-1k.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Horikawa, T. & Kamitani, Y. (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications 8:15037. https://www.nature.com/articles/ncomms15037
In this study, fMRI data was recorded while subjects were viewing object images (image presentation experiment) or were imagining object images (imagery experiment). The image presentation experiment consisted of two distinct types of sessions: training image sessions and test image sessions. In the training image session, a total of 1,200 images from 150 object categories (8 images from each category) were each presented only once (24 runs). In the test image session, a total of 50 images from 50 object categories (1 image from each category) were presented 35 times each (35 runs). All images were taken from ImageNet (http://www.image-net.org/, Fall 2011 release), a large-scale hierarchical image database. During the image presentation experiment, subjects performed a one-back image repetition task (5 trials in each run). In the imagery experiment, subjects were required to visually imagine images from one of the 50 categories (20 runs; 25 categories in each run; 10 samples for each category) that were presented in the test image session of the image presentation experiment. fMRI data in the training image sessions were used to train models (decoders) which predict visual features from fMRI patterns, and those in the test image sessions and the imagery experiment were used to evaluate the model performance. Predicted features for the test image sessions and the imagery experiment were then used to identify seen/imagined object categories from a set of computed features for numerous object images.
Analysis demo code is available at GitHub (KamitaniLab/GenericObjectDecoding).
The present dataset contains fMRI data from five subjects ('sub-01', 'sub-02', 'sub-03', 'sub-04', and 'sub-05'). Each subject's data contains three types of MRI data, each of which was collected over multiple scanning sessions.
Each scanning session consisted of functional (EPI) and anatomical (inplane T2) data. The functional EPI images covered the entire brain (TR, 3000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 50; slice gap, 0 mm) and inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 7020 ms; TE, 69 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files were converted to NIfTI-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in the 'sourcedata' directory (see 'README' in 'sourcedata' for more details).
Preprocessed fMRI data are available in derivatives/preproc-spm. See the original paper (Horikawa & Kamitani, 2017) for the details of preprocessing.
Task event files ('sub-*_ses-*_task-*_run-*_events.tsv') contain the events recorded during fMRI runs (stimulus presentation, subject responses, etc.). In task event files for the perception tasks ('ses-perceptionTraining' and 'ses-perceptionTest'), each column represents:
In task event files for the imagery task ('ses-imageryTest'), each column represents:
The stimulus images are named as 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID for a synset (category) and '19498' is the image ID. The categories are named by their ImageNet/WordNet synset ID (e.g., 'n03626115'). The stimulus and category names are included in the task event files as 'stimulus_name' and 'category_name', respectively. For use in analysis code, the task event files also contain 'stimulus_id' and 'category_id', which are float numbers generated based on the stimulus or category names (e.g., 'n03626115_19498' --> 3626115.019498).
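For reference, the name-to-ID conversion can be reproduced in a few lines of Python. This is a minimal sketch, assuming image numbers never exceed six digits; the function name is illustrative and not part of the dataset's own code:

    def stimulus_name_to_id(name):
        # e.g., 'n03626115_19498' -> 3626115.019498
        synset, image = name.split('_')
        # drop the leading 'n'; the synset number becomes the integer part
        # and the image number is encoded in the six decimal places
        return int(synset[1:]) + int(image) / 1e6

    print(stimulus_name_to_id('n03626115_19498'))  # 3626115.019498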
The mapping between stimulus/category names and IDs:
Because of licensing issues, we do not include the stimulus images in the dataset. A script downloading the images from ImageNet is available at https://github.com/KamitaniLab/GenericObjectDecoding. Image features (CNN unit responses, HMAX, GIST, and SIFT) used in the original study are available at https://figshare.com/articles/Generic_Object_Decoding/7387130.
Modified version of Jessica Li's dataset, where I applied some image-processing operations: I cropped the images so that the dog is in the center of the picture. All the images should have the same resolution.
You'll find here a training folder with 120 folders corresponding to the 120 breeds, each containing images of the corresponding dog breed, and a testing folder structured in the same manner.
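This breed-per-folder layout can be loaded directly with torchvision's ImageFolder. A minimal sketch follows; the directory names 'train' and 'test' are assumptions about how the archives unpack:

    from torchvision import datasets, transforms

    # resize defensively so every image really has the same resolution, then convert to tensors
    tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    train_ds = datasets.ImageFolder('train', transform=tfm)
    test_ds = datasets.ImageFolder('test', transform=tfm)
    print(len(train_ds.classes))  # expected: 120 breeds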
Thanks to Jessica Li, who posted it previously.
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
Dataset used for the paper "Rethinking Dataset Compression: Shifting Focus From Labels to Images".
Dataset created according to the paper "ImageNet: A Large-Scale Hierarchical Image Database".
Basic Usage
from datasets import load_dataset
dataset = load_dataset("he-yang/2025-rethinkdc-imagenet-random-ipc-20")
For more information, please refer to Rethinking-Dataset-Compression.
This dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grained image categorization, a challenging problem as certain dog breeds have near-identical features or differ in colour and age.
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
CAMINA Urban Mobility Detection Dataset
Purpose: Edge-optimized active mobility detection for citizen-led urban analytics
Dataset Overview: This dataset extends the COCO taxonomy to include relevant urban mobility modes: cyclists, e-scooters, SUVs, and delivery vans. The collection comprises 1,834 images with 13,148 annotated instances across 9 classes.
Sources:
- ImageNet (Deng et al., 2009): 1,295 images containing overlapping 'person' and 'bicycle' instances
- E-scooter dataset (Apurv, Tian and Sherony, 2021): 600 randomly selected images
Total raw images: 1,895 (filtered to 1,834 after quality control)
Auto-labeling Strategy:
- YOLO11l: detection of 6 COCO-aligned classes (person, bicycle/cyclist, car, motorcycle, bus, truck)
- YOLOv8m-Worldv2: open-vocabulary detection of emerging classes (e-scooter, SUV, delivery van)
- Rule-based cyclist detection: spatial association logic combining person + bicycle detections (IoU ≥ 0.20); see the sketch after this list
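The rule-based cyclist step can be sketched as an IoU test between person and bicycle boxes. This is a minimal illustration, not the project's actual pipeline: the (xmin, ymin, xmax, ymax) box format and the function names are assumptions, and only the 0.20 threshold comes from the description above:

    def iou(a, b):
        # intersection-over-union of two (xmin, ymin, xmax, ymax) boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    def associate_cyclists(persons, bicycles, thr=0.20):
        # pair each bicycle with its best-overlapping person; pairs at or
        # above the threshold are re-labeled as a single cyclist instance
        cyclists = []
        for bike in bicycles:
            best = max(persons, key=lambda p: iou(p, bike), default=None)
            if best is not None and iou(best, bike) >= thr:
                cyclists.append((best, bike))
        return cyclists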
Annotation Protocol:
1. Automated initial labeling using the hybrid detection pipeline
2. Manual review and correction by trained annotators
3. Quality validation through independent double-checking
4. Active mobility focus: only instances with visible riders included (excludes parked/unattended vehicles)
Class Distribution:
- Person: 6,975 instances
- Cyclist: 2,012 instances
- Car: 2,105 instances
- E-scooter: 728 instances
- SUV: 456 instances
- Bus: 321 instances
- Motorcycle: 307 instances
- Truck: 132 instances
- Delivery van: 112 instances
Citation:
Tamagusko, T., Niroshan, L., Soubam, S., Desnoyer, T., Rogers, B., Istrate, A., & Pilla, F. (2026). Edge-Optimized YOLO Model for Active Mobility Detection in Citizen-Led Urban Analytics. Proceedings of the Transportation Research Arena 2026.
References:
- Apurv, K., Tian, R. and Sherony, R. (2021) "Detection of E-scooter Riders in Naturalistic Scenes," arXiv preprint arXiv:2111.14060
- Deng, J. et al. (2009) "ImageNet: A large-scale hierarchical image database," IEEE CVPR 2009
The Stanford Dogs dataset contains images of 120 breeds of dogs from around the world. This dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It was originally collected for fine-grained image categorization, a challenging problem as certain dog breeds have near-identical features or differ in colour and age.
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
Banner Image from Hannah Lim on Unsplash
This is a combination of two data sets.
Acknowledgements Stanford Dog Dataset: The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
Banner Image from Hannah Lim on Unsplash
Oxford Dog and Cat: The dataset is available to download for commercial/research purposes under a Creative Commons Attribution-ShareAlike 4.0 International License. The copyright remains with the original owners of the images.
Modified version of Jessica Li's dataset, where I changed the annotations to the YOLOv8 format. I also renamed the images and annotations so they are named after the dog breed they belong to.
To use this dataset, choose the dog breeds you want and split them into train, validation and test sets; a 70:20:10 split is most common. After that, add a data.yaml that stores the paths to your train, validation and test data and the class IDs for the different breeds (a minimal sketch follows). If you are looking for a script to change the IDs for you, you can use the one I used, which is in my GitHub project.
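Such a data.yaml can be generated with PyYAML. This is a minimal sketch; the paths and breed names below are placeholders, not the dataset's actual values:

    import yaml  # pip install pyyaml

    data = {
        'train': 'datasets/dogs/train/images',
        'val': 'datasets/dogs/valid/images',
        'test': 'datasets/dogs/test/images',
        'names': {0: 'beagle', 1: 'corgi', 2: 'husky'},  # class IDs -> breeds
    }
    with open('data.yaml', 'w') as f:
        yaml.safe_dump(data, f)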
The dataset contains 20,581 images with 120 dog breeds.
The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
Also thanks to Jessica Li, who posted it previously.
This dataset utilizes images from the Stanford dataset and images found on Unsplash.com. For the images taken from the Stanford dataset, the annotations have been changed to fit the YOLOv8 format. For the images taken from Unsplash, the annotations have been done by me using Roboflow. The dog breeds for which the images have been sampled and annotated by me are Corgi, Husky and Retriever; the other two dog breeds have been sampled from the Stanford dataset. The data is split into train, validation and test sets with a 70:20:10 split. No data augmentation was applied, to keep the dataset fairly small and more beginner-friendly.
To use this dataset, you can implement a YOLOv8 model and point it at the data.yaml. You might need to adapt the image paths in the data.yaml for the training, test and validation sets; a minimal training sketch follows.
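This is a minimal sketch using the Ultralytics YOLOv8 API; the model checkpoint, epoch count and image size below are arbitrary choices rather than values prescribed by the dataset:

    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO('yolov8n.pt')  # small pretrained checkpoint; any YOLOv8 variant works
    model.train(data='data.yaml', epochs=50, imgsz=640)  # train on the provided splits
    metrics = model.val()  # evaluate on the validation split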
The dataset contains 793 images with 5 dog breeds, as well as a data.yaml with the file paths and the classes.
The original data source for the two dog breeds is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.
Also thanks to Jessica Li, who posted it previously.
Bounding-box annotations are given as [xmin, ymin, xmax, ymax]. The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.
If you use this dataset in a publication, please cite the following papers:
Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.
Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.