The ImageNet dataset contains 14,197,122 images annotated according to the WordNet hierarchy. Since 2010 the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld. ILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers”; and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with a width of 50 pixels and a height of 30 pixels”. The ImageNet project does not own the copyright of the images; therefore, only thumbnails and URLs of images are provided.
• Total number of non-empty WordNet synsets: 21,841
• Total number of images: 14,197,122
• Number of images with bounding box annotations: 1,034,908
• Number of synsets with SIFT features: 1,000
• Number of images with SIFT features: 1.2 million
ILSVRC 2012, commonly known as “ImageNet”, is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a “synonym set” or “synset”. There are more than 100,000 synsets in WordNet; the majority of them are nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.
The test split contains 100K images but no labels, because the labels have not been publicly released. We provide support for the test split from 2012, with the minor patch released on October 10, 2019. To download this data manually, a user must perform the following operations:
The resulting tar-ball may then be processed by TFDS.
To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split and export those results to a text file, which must then be uploaded to the ImageNet evaluation server. To prevent overfitting, the maintainers of the evaluation server permit each user at most two submissions per week.
To evaluate accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is “Classification submission (top-5 cls error)”. A sample of an exported text file looks like the following:
771 778 794 387 650
363 691 764 923 427
737 369 430 531 124
755 930 755 59 168
The export format is described in full in “readme.txt” within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled “3.3 CLS-LOC submission format”. Briefly, the text file consists of 100,000 lines, one per image in the test split. Each line of integers corresponds to the rank-ordered top-5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the accompanying labels file (see labels.txt).
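As a sketch, the format described above can be produced with a few lines of Python. The `predictions` list here is a made-up, two-line stand-in for a model's actual rank-ordered top-5 outputs; a real export would contain 100,000 lines.

```python
# Toy stand-in for a model's rank-ordered top-5 predictions:
# one list of five 1-indexed class labels per test image.
predictions = [
    [771, 778, 794, 387, 650],
    [363, 691, 764, 923, 427],
]

# Write one space-separated line per image, as the submission
# format requires.
with open('submission.txt', 'w') as f:
    for top5 in predictions:
        f.write(' '.join(str(label) for label in top5) + '\n')
```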
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('imagenet2012_subset', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012_subset-1pct-5.0.0.png
The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) dataset is a large-scale image classification dataset. It contains over 14 million images from 21,841 categories.
The statistic shows the best classification error rate achieved by computer vision algorithms tested on a large-scale visual recognition challenge, from 2010 to 2017. In 2015, the winning algorithm became the first to surpass the average human classification error rate of five percent, and by 2017 machine learning algorithms were able to achieve a classification error rate of 2.3 percent, making fewer than half the number of classification errors as a human.
https://academictorrents.com/ (no license specified)
A BitTorrent file to download data with the title 'ImageNet Large Scale Visual Recognition Challenge (V2017)'
The ImageCLEF-DA dataset is a benchmark dataset for ImageCLEF 2014 domain adaptation challenges, which contains 12 categories shared by three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P).
https://mtl.yyliu.net/download/
The tieredImageNet dataset is a larger subset of ILSVRC-12 with 608 classes (779,165 images) grouped into 34 higher-level nodes in the ImageNet human-curated hierarchy. This set of nodes is partitioned into 20, 6, and 8 disjoint sets of training, validation, and testing nodes, and the corresponding classes form the respective meta-sets. As argued in Ren et al. (2018), this split near the root of the ImageNet hierarchy results in a more challenging, yet realistic regime with test classes that are less similar to training classes.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Feature maps summary
Here we release the PCA-downsampled deep neural network (DNN) feature maps used in the data resource paper: “A large and rich EEG dataset for modeling human visual object recognition”. We used four DNN architectures (AlexNet, ResNet-50, CORnet-S, MoCo) and extracted their feature map responses to images coming from the THINGS database and from the ILSVRC-2012 challenge.

Useful material
Additional information
For additional information on the DNNs used, the stimuli images and the feature map extraction procedure, please refer to our paper and code.

Additional dataset resources
Please visit the dataset page for the paper, dataset tutorial, code and more.

OSF
For additional data and resources visit our OSF project, where you can find:
• The stimuli images
• A detailed description of the DNN feature maps data files

Citations
If you use any of our data, please cite our paper.
This dataset contains ILSVRC-2012 (ImageNet) validation images augmented with a new set of "Re-Assessed" (ReaL) labels from the "Are we done with ImageNet" paper, see https://arxiv.org/abs/2006.07159. These labels are collected using the enhanced protocol, resulting in multi-label and more accurate annotations.
Important note: about 3,500 examples contain no label; these should be excluded from the averaging when computing the accuracy. One possible way of doing this is with the following NumPy code:

import numpy as np

# `predictions` holds the model's top-1 prediction for each validation
# image; `real_labels[i]` is the (possibly empty) list of ReaL labels
# for image i. Images with an empty label list are skipped.
is_correct = [pred in real_labels[i] for i, pred in enumerate(predictions) if real_labels[i]]
real_accuracy = np.mean(is_correct)
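For concreteness, here is a toy, self-contained version of the same exclusion-and-averaging computation. All labels and predictions below are made up for illustration.

```python
import numpy as np

# Toy ReaL-style labels: each entry is the list of valid labels for one
# image; an empty list means the image received no ReaL label.
real_labels = [[1, 5], [], [3], [2, 7]]
predictions = [5, 0, 4, 2]  # model's top-1 prediction per image

# Skip unlabelled images, then average correctness over the rest.
is_correct = [pred in real_labels[i]
              for i, pred in enumerate(predictions) if real_labels[i]]
real_accuracy = np.mean(is_correct)  # 2 of 3 labelled images correct
```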
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('imagenet2012_real', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012_real-1.0.0.png
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Configuration of VGG-19 on CIFAR-100.
COMPASS-XP is a dataset of matched photographic and X-ray images of single objects, made available for use in Machine Learning & Computer Vision research, in particular in the context of transport security. Objects are imaged in multiple poses, and accompanied by metadata including labels for whether we consider the object to be dangerous in the context of aviation. Object classes overlap with those in the popular ImageNet Large Scale Visual Recognition Challenge class set and the WordNet lexical database, and identifiers for shared classes in both schemes are also provided.
Hardware Configuration
Photographs were captured with a Sony DSC-W800 compact digital camera. X-ray scans were obtained using a Gilardoni FEP ME 536 mailroom X-ray machine, distributed in the UK by Todd Research under the name TR50. The scanner is dual energy and generates several image outputs:
• Low: Raw 8-bit greyscale data from the scanner’s low energy X-ray channel.
• High: Raw 8-bit greyscale data from the scanner’s high energy X-ray channel.
• Density: 8-bit greyscale data representing inferred material density computed from the two channels.
• Grey: RGB PNG image representing a combination of both low and high energy channels with some appearance improvements. Although nominally greyscale, the image does include subtle duotone-style colouration.
• Colour: RGB PNG image with a false-colour palette representing material density.
In practice the grey and colour versions are probably most useful, but for completeness the dataset includes all variants for each scan.
Data Files
Image files are supplied in six subdirectories, corresponding to the five X-ray image variants above plus photos. X-rays are provided in PNG format, while photos are JPEG. Each scan is identified by a numeric index, which is also used to name the files, padded with leading zeros to always be 4 digits long.
Scan metadata is provided in the accompanying tab-delimited text file, meta.txt. This includes the
following columns:
• basename: The zero-padded identifier for the scan. All six image type variants for the same class-instance-pose have the same basename. X-ray files are named basename.png while photos are basename.jpg.
• class: The object class in the scan.
• instance: An integer identifying the object instance. Instances start at 1 for each class.
• pose: An integer identifying the object pose. Poses start at 1 for each instance.
• scan tray: Either A, indicating that the pose was imaged in a weighted tray, or N indicating it was not.
• dangerous: Whether the object was considered dangerous (True/False).
• IN id: Numeric index of the object class in the ILSVRC list of 1000 classes, or empty if the class isn’t present there.
• WN id: WordNet identifier for the object class, or empty if the class isn’t present in WordNet.
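As a sketch of working with this metadata, the tab-delimited meta.txt can be parsed with Python's csv module. The column names follow the description above, but the sample row values (class, IN id, WN id, etc.) are illustrative assumptions, not taken from the real file.

```python
import csv
import io

# Illustrative in-memory stand-in for meta.txt; a real run would open
# the file from the dataset instead.
sample = (
    "basename\tclass\tinstance\tpose\tscan tray\tdangerous\tIN id\tWN id\n"
    "0001\tscrewdriver\t1\t1\tN\tTrue\t784\tn04154565\n"
)

rows = list(csv.DictReader(io.StringIO(sample), delimiter='\t'))
for row in rows:
    xray_file = row['basename'] + '.png'   # any of the five X-ray variants
    photo_file = row['basename'] + '.jpg'  # the matching photograph
    dangerous = row['dangerous'] == 'True'
```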
License
The COMPASS-XP dataset was acquired as part of a research project funded by the UK Government Future Aviation Security Solutions programme. Both the images and their metadata are licensed under the Creative Commons Attribution 4.0 International License and may be freely used for research and commercial purposes, including derivative works, provided the source is acknowledged.
COMPASS-XP Dataset Authors Lewis D. Griffin*, Matthew Caldwell, Jerone T. A. Andrews Computational Security Science Group, UCL * l.griffin@cs.ucl.ac.uk
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of VGG-19 on CIFAR-100.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparison of AlexNet on CIFAR-10.
ImageNet is an image database organized according to the WordNet hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node. We hope ImageNet will become a useful resource for researchers, educators, students and all of you who share our passion for pictures.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Parameters overview of different CNN architectures.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Configuration of AlexNet on CIFAR-10.