42 datasets found
  1. tiny-imagenet

    • huggingface.co
    • datasets.activeloop.ai
    Updated Aug 12, 2022
    Cite
    Hao Zheng (2022). tiny-imagenet [Dataset]. https://huggingface.co/datasets/zh-plus/tiny-imagenet
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 12, 2022
    Authors
    Hao Zheng
    License

    https://choosealicense.com/licenses/undefined/

    Description

    Dataset Card for tiny-imagenet

      Dataset Summary
    

    Tiny ImageNet contains 100000 images of 200 classes (500 for each class) downsized to 64×64 colored images. Each class has 500 training images, 50 validation images, and 50 test images.

      Languages
    

    The class labels in the dataset are in English.

      Dataset Structure

      Data Instances

    { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64 at 0x1A800E8E190>, 'label': 15 }… See the full description on the dataset page: https://huggingface.co/datasets/zh-plus/tiny-imagenet.
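
    A minimal sketch of loading this dataset with the Hugging Face datasets library is shown below; the split name "train" and the "image"/"label" column names follow the dataset card and should be verified against it.

    # Sketch: load tiny-imagenet from the Hugging Face Hub (split and column names assumed).
    from datasets import load_dataset

    ds = load_dataset("zh-plus/tiny-imagenet", split="train")
    example = ds[0]
    # Each example is a dict with a PIL image and an integer class label.
    print(example["image"].size, example["label"])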

  2. imagenet2012_subset

    • tensorflow.org
    Updated Oct 21, 2024
    + more versions
    Cite
    (2024). imagenet2012_subset [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet2012_subset
    Dataset updated
    Oct 21, 2024
    Description

    ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. In its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.

    The test split contains 100K images but no labels because no labels have been publicly released. We provide support for the test split from 2012 with the minor patch released on October 10, 2019. In order to manually download this data, a user must perform the following operations:

    1. Download the 2012 test split available here.
    2. Download the October 10, 2019 patch. There is a Google Drive link to the patch provided on the same page.
    3. Combine the two tar-balls, manually overwriting any images in the original archive with images from the patch. According to the instructions on image-net.org, this procedure overwrites just a few images.

    The resulting tar-ball may then be processed by TFDS.

    To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split and export those results to a text file that is uploaded to the ImageNet evaluation server. The maintainers of the ImageNet evaluation server permit a single user at most two submissions per week in order to prevent overfitting.

    To evaluate the accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:

    771 778 794 387 650
    363 691 764 923 427
    737 369 430 531 124
    755 930 755 59 168
    

    The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file consists of 100,000 lines corresponding to each image in the test split. Each line of integers corresponds to the rank-ordered, top-5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the corresponding labels file. See labels.txt.
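
    As a rough illustration of that format, the sketch below writes one line of five 1-indexed label IDs per test image; the prediction array and output file name are hypothetical.

    # Sketch: export top-5 predictions (1-indexed class IDs) to a submission text file.
    import numpy as np

    # Hypothetical predictions: shape (num_test_images, 5), values in 1..1000.
    top5_preds = np.array([[771, 778, 794, 387, 650],
                           [363, 691, 764, 923, 427]])

    with open("submission.txt", "w") as f:
        for row in top5_preds:
            # One space-separated line of five rank-ordered predictions per image.
            f.write(" ".join(str(int(label)) for label in row) + "\n")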

    To use this dataset:

    import tensorflow_datasets as tfds

    ds = tfds.load('imagenet2012_subset', split='train')
    for ex in ds.take(4):
        print(ex)

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012_subset-1pct-5.0.0.png

  3. imagenet-1k

    • huggingface.co
    Updated Apr 30, 2022
    Cite
    Large Scale Visual Recognition Challenge (2022). imagenet-1k [Dataset]. https://huggingface.co/datasets/ILSVRC/imagenet-1k
    Dataset updated
    Apr 30, 2022
    Dataset authored and provided by
    Large Scale Visual Recognition Challenge
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for ImageNet

      Dataset Summary
    

    ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). ImageNet aims to provide on average 1000 images to illustrate each synset. Images of each concept are… See the full description on the dataset page: https://huggingface.co/datasets/ILSVRC/imagenet-1k.

  4. imagenet-w21-wds

    • huggingface.co
    Updated Sep 19, 2025
    + more versions
    Cite
    PyTorch Image Models (2025). imagenet-w21-wds [Dataset]. https://huggingface.co/datasets/timm/imagenet-w21-wds
    Dataset updated
    Sep 19, 2025
    Dataset authored and provided by
    PyTorch Image Models
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Summary

    This is a copy of the full Winter21 release of ImageNet in webdataset tar format with JPEG images. This release consists of 19167 classes, 2674 fewer than the original 21841-class Fall11 release of the full ImageNet. The classes were removed due to the concerns described at https://www.image-net.org/update-sep-17-2019.php

      Data Splits
    

    The full ImageNet dataset has no defined splits. This release follows that and leaves everything in the train split.… See the full description on the dataset page: https://huggingface.co/datasets/timm/imagenet-w21-wds.
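
    A minimal sketch of streaming this webdataset-format repository with the Hugging Face datasets library is shown below; the per-sample field names are not assumed, so the first sample's keys are printed for inspection.

    # Sketch: stream the webdataset shards from the Hub without downloading everything.
    from datasets import load_dataset

    ds = load_dataset("timm/imagenet-w21-wds", split="train", streaming=True)
    first = next(iter(ds))
    # Inspect the available fields (image and label keys) before further processing.
    print(list(first.keys()))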

  5. imagenet-50-subset

    • huggingface.co
    Updated Jul 7, 2025
    Cite
    Logan Riggs Smith (2025). imagenet-50-subset [Dataset]. https://huggingface.co/datasets/Elriggs/imagenet-50-subset
    Dataset updated
    Jul 7, 2025
    Authors
    Logan Riggs Smith
    Description

    ImageNet-50 Subset

    This dataset contains the first 50 classes from ImageNet-1K with up to 1,000 images per class (where available).

      Dataset Statistics
    

    • Total Classes: 50
    • Total Images: 50000
    • Train/Val Split: 90%/10%
    • Max Images per Class: 1000

      Dataset Structure
    

    imagenet-50-subset/
    ├── train/
    │   ├── n01440764/   # tench
    │   │   ├── n01440764_1234.JPEG
    │   │   └── ...
    │   ├── n01443537/   # goldfish
    │   └── ...
    ├── val/
    │   ├── n01440764/
    │   ├── n01443537/
    │   └── …

    See the full description on the dataset page: https://huggingface.co/datasets/Elriggs/imagenet-50-subset.
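
    Because the data is laid out as class-named folders, it can be read with torchvision's ImageFolder; the sketch below assumes the archive has been extracted to a local imagenet-50-subset/ directory.

    # Sketch: load the extracted folder layout with torchvision (local path is an assumption).
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("imagenet-50-subset/train", transform=transform)
    print(len(train_set), len(train_set.classes))  # number of images and number of classes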

  6. Data from: ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches

    • data.niaid.nih.gov
    Updated Jun 30, 2022
    Cite
    Ambra Demontis (2022). ImageNet-Patch: A Dataset for Benchmarking Machine Learning Robustness against Adversarial Patches [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6568777
    Dataset updated
    Jun 30, 2022
    Dataset provided by
    Fabio Roli
    Ambra Demontis
    Luca Demetrio
    Maura Pintor
    Daniele Angioni
    Angelo Sotgiu
    Battista Biggio
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Adversarial patches are optimized contiguous pixel blocks in an input image that cause a machine-learning model to misclassify it. However, their optimization is computationally demanding and requires careful hyperparameter tuning. To overcome these issues, we propose ImageNet-Patch, a dataset to benchmark machine-learning models against adversarial patches. It consists of a set of patches optimized to generalize across different models and applied to ImageNet data after preprocessing them with affine transformations. This process enables an approximate yet faster robustness evaluation, leveraging the transferability of adversarial perturbations.

    We release our dataset as a set of folders, one per patch target label (e.g., banana), each containing 1000 subfolders corresponding to the ImageNet output classes.

    An example showing how to use the dataset is shown below.

    # code for testing robustness of a model
    import os.path

    from torchvision import datasets, transforms, models
    import torch.utils.data


    class ImageFolderWithEmptyDirs(datasets.ImageFolder):
        """
        This is required for handling empty folders from the ImageFolder Class.
        """

        def find_classes(self, directory):
            classes = sorted(entry.name for entry in os.scandir(directory) if entry.is_dir())
            if not classes:
                raise FileNotFoundError(f"Couldn't find any class folder in {directory}.")
            class_to_idx = {cls_name: i for i, cls_name in enumerate(classes) if
                            len(os.listdir(os.path.join(directory, cls_name))) > 0}
            return classes, class_to_idx


    # extract and unzip the dataset, then write top folder here
    dataset_folder = 'data/ImageNet-Patch'

    available_labels = {
        487: 'cellular telephone',
        513: 'cornet',
        546: 'electric guitar',
        585: 'hair spray',
        804: 'soap dispenser',
        806: 'sock',
        878: 'typewriter keyboard',
        923: 'plate',
        954: 'banana',
        968: 'cup'
    }

    # select folder with specific target
    target_label = 954

    dataset_folder = os.path.join(dataset_folder, str(target_label))
    normalizer = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                      std=[0.229, 0.224, 0.225])
    transforms = transforms.Compose([
        transforms.ToTensor(),
        normalizer
    ])

    dataset = ImageFolderWithEmptyDirs(dataset_folder, transform=transforms)
    model = models.resnet50(pretrained=True)
    loader = torch.utils.data.DataLoader(dataset, shuffle=True, batch_size=5)
    model.eval()

    batches = 10
    correct, attack_success, total = 0, 0, 0
    for batch_idx, (images, labels) in enumerate(loader):
        if batch_idx == batches:
            break
        pred = model(images).argmax(dim=1)
        correct += (pred == labels).sum()
        attack_success += sum(pred == target_label)
        total += pred.shape[0]

    accuracy = correct / total
    attack_sr = attack_success / total

    print("Robust Accuracy: ", accuracy)
    print("Attack Success: ", attack_sr)

  7. MXNet pre-trained model Full ImageNet Network inception-21k.tar.gz

    • academictorrents.com
    bittorrent
    Updated Jul 22, 2016
    Cite
    dmlc (2016). MXNet pre-trained model Full ImageNet Network inception-21k.tar.gz [Dataset]. https://academictorrents.com/details/27330fbd1ec0648e72b2cf5c40aa0d4df1931221
    Available download formats: bittorrent (125141204)
    Dataset updated
    Jul 22, 2016
    Dataset authored and provided by
    dmlc
    License

    https://academictorrents.com/nolicensespecified

    Description

    Full ImageNet Network

    This model is a pretrained model on the full ImageNet dataset [1] with 14,197,087 images in 21,841 classes. The model is trained with only random crop and mirror augmentation. The network is based on the Inception-BN network [2] with added capacity, and it runs roughly 2 times slower than the standard Inception-BN network. We trained this network on a machine with 4 GeForce GTX 980 GPUs. Each training round took 23 hours; the released model is from the 9th round.

    Train Top-1 accuracy over 21,841 classes: 37.19%
    Single image prediction memory requirement: 15 MB

    ILSVRC2012 validation performance:

    |       | Over 1,000 classes | Over 21,841 classes |
    | ----- | ------------------ | ------------------- |
    | Top-1 | 68.3%              | 41.9%               |
    | Top-5 | 89.0%              | 69.6%               |

  8. imagenette

    • tensorflow.org
    • opendatalab.com
    • +1 more
    Updated Jun 1, 2024
    + more versions
    Cite
    (2024). imagenette [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenette
    Dataset updated
    Jun 1, 2024
    Description

    Imagenette is a subset of 10 easily classified classes from the Imagenet dataset. It was originally prepared by Jeremy Howard of FastAI. The objective behind putting together a small version of the Imagenet dataset was mainly that running new ideas/algorithms/experiments on the whole Imagenet takes a lot of time.

    This version of the dataset allows researchers/practitioners to quickly try out ideas and share them with others. The dataset comes in three variants:

    • Full size
    • 320 px
    • 160 px

    Note: The v2 config corresponds to the new 70/30 train/valid split (released on Dec 6, 2019).

    To use this dataset:

    import tensorflow_datasets as tfds

    ds = tfds.load('imagenette', split='train')
    for ex in ds.take(4):
        print(ex)

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenette-full-size-v2-1.0.0.png

  9. Data from: Using convolutional neural networks to efficiently extract immense phenological data from community science images

    • data.niaid.nih.gov
    • search.dataone.org
    • +2 more
    zip
    Updated Jan 4, 2022
    + more versions
    Cite
    Rachel Reeb; Naeem Aziz; Samuel Lapp; Justin Kitzes; J. Mason Heberling; Sara Kuebbing (2022). Using convolutional neural networks to efficiently extract immense phenological data from community science images [Dataset]. http://doi.org/10.5061/dryad.mkkwh7123
    Available download formats: zip
    Dataset updated
    Jan 4, 2022
    Dataset provided by
    University of Pittsburgh
    Carnegie Museum of Natural History
    Authors
    Rachel Reeb; Naeem Aziz; Samuel Lapp; Justin Kitzes; J. Mason Heberling; Sara Kuebbing
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Community science image libraries offer a massive, but largely untapped, source of observational data for phenological research. The iNaturalist platform offers a particularly rich archive, containing more than 49 million verifiable, georeferenced, open access images, encompassing seven continents and over 278,000 species. A critical limitation preventing scientists from taking full advantage of this rich data source is labor. Each image must be manually inspected and categorized by phenophase, which is both time-intensive and costly. Consequently, researchers may only be able to use a subset of the total number of images available in the database. While iNaturalist has the potential to yield enough data for high-resolution and spatially extensive studies, it requires more efficient tools for phenological data extraction. A promising solution is automation of the image annotation process using deep learning. Recent innovations in deep learning have made these open-source tools accessible to a general research audience. However, it is unknown whether deep learning tools can accurately and efficiently annotate phenophases in community science images. Here, we train a convolutional neural network (CNN) to annotate images of Alliaria petiolata into distinct phenophases from iNaturalist and compare the performance of the model with non-expert human annotators. We demonstrate that researchers can successfully employ deep learning techniques to extract phenological information from community science images. A CNN classified two-stage phenology (flowering and non-flowering) with 95.9% accuracy and classified four-stage phenology (vegetative, budding, flowering, and fruiting) with 86.4% accuracy. The overall accuracy of the CNN did not differ from humans (p = 0.383), although performance varied across phenophases. We found that a primary challenge of using deep learning for image annotation was not related to the model itself, but instead in the quality of the community science images. Up to 4% of A. petiolata images in iNaturalist were taken from an improper distance, were physically manipulated, or were digitally altered, which limited both human and machine annotators in accurately classifying phenology. Thus, we provide a list of photography guidelines that could be included in community science platforms to inform community scientists in the best practices for creating images that facilitate phenological analysis.

    Methods Creating a training and validation image set

    We downloaded 40,761 research-grade observations of A. petiolata from iNaturalist, ranging from 1995 to 2020. Observations on the iNaturalist platform are considered “research-grade” if the observation is verifiable (includes image), includes the date and location observed, is growing wild (i.e. not cultivated), and at least two-thirds of community users agree on the species identification. From this dataset, we used a subset of images for model training. The total number of observations in the iNaturalist dataset is heavily skewed towards more recent years. Less than 5% of the images we downloaded (n=1,790) were uploaded between 1995-2016, while over 50% of the images were uploaded in 2020. To mitigate temporal bias, we used all available images between the years 1995 and 2016 and we randomly selected images uploaded between 2017-2020. We restricted the number of randomly-selected images in 2020 by capping the number of 2020 images to approximately the number of 2019 observations in the training set. The annotated observation records are available in the supplement (supplementary data sheet 1). The majority of the unprocessed records (those which hold a CC-BY-NC license) are also available on GBIF.org (2021).

    One of us (R. Reeb) annotated the phenology of training and validation set images using two different classification schemes: two-stage (non-flowering, flowering) and four-stage (vegetative, budding, flowering, fruiting). For the two-stage scheme, we classified 12,277 images and designated images as ‘flowering’ if there was one or more open flowers on the plant. All other images were classified as non-flowering. For the four-stage scheme, we classified 12,758 images. We classified images as ‘vegetative’ if no reproductive parts were present, ‘budding’ if one or more unopened flower buds were present, ‘flowering’ if at least one opened flower was present, and ‘fruiting’ if at least one fully-formed fruit was present (with no remaining flower petals attached at the base). Phenology categories were discrete; if there was more than one type of reproductive organ on the plant, the image was labeled based on the latest phenophase (e.g. if both flowers and fruits were present, the image was classified as fruiting).

    For both classification schemes, we only included images in the model training and validation dataset if the image contained one or more plants whose reproductive parts were clearly visible and we could exclude the possibility of a later phenophase. We removed 1.6% of images from the two-stage dataset that did not meet this requirement, leaving us with a total of 12,077 images, and 4.0% of the images from the four-stage dataset, leaving us with a total of 12,237 images. We then split the two-stage and four-stage datasets into a model training dataset (80% of each dataset) and a validation dataset (20% of each dataset).

    Training a two-stage and four-stage CNN

    We adapted techniques from studies applying machine learning to herbarium specimens for use with community science images (Lorieul et al. 2019; Pearson et al. 2020). We used transfer learning to speed up training of the model and reduce the size requirements for our labeled dataset. This approach uses a model that has been pre-trained using a large dataset and so is already competent at basic tasks such as detecting lines and shapes in images. We trained a neural network (ResNet-18) using the Pytorch machine learning library (Paszke et al. 2019) within Python. We chose the ResNet-18 neural network because it had fewer convolutional layers and thus was less computationally intensive than pre-trained neural networks with more layers. In early testing we reached desired accuracy with the two-stage model using ResNet-18. ResNet-18 was pre-trained using the ImageNet dataset, which has 1,281,167 images for training (Deng et al. 2009). We utilized default parameters for batch size (4), learning rate (0.001), optimizer (stochastic gradient descent), and loss function (cross entropy loss). Because this led to satisfactory performance, we did not further investigate hyperparameters.

    Because the ImageNet dataset has 1,000 classes while our data was labeled with either 2 or 4 classes, we replaced the final fully-connected layer of the ResNet-18 architecture with fully-connected layers containing an output size of 2 for the 2-class problem and 4 for the 4-class problem. We resized and cropped the images to fit ResNet’s input size of 224x224 pixels and normalized the distribution of the RGB values in each image to a mean of zero and a standard deviation of one, to simplify model calculations. During training, the CNN makes predictions on the labeled data from the training set and calculates a loss parameter that quantifies the model’s inaccuracy. The slope of the loss in relation to model parameters is found and then the model parameters are updated to minimize the loss value. After this training step, model performance is estimated by making predictions on the validation dataset. The model is not updated during this process, so that the validation data remains ‘unseen’ by the model (Rawat and Wang 2017; Tetko et al. 1995). This cycle is repeated until the desired level of accuracy is reached. We trained our model for 25 of these cycles, or epochs. We stopped training at 25 epochs to prevent overfitting, where the model becomes trained too specifically for the training images and begins to lose accuracy on images in the validation dataset (Tetko et al. 1995).
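
    A minimal sketch of this transfer-learning setup (pretrained ResNet-18, replaced final layer, 224x224 inputs, SGD with cross-entropy loss) is given below; the dataset path is hypothetical, the normalization constants shown are the standard ImageNet values rather than values stated in the text, and only one pass over the data is shown (the study trained for 25 epochs).

    # Sketch: ResNet-18 transfer learning for a 4-class phenophase classifier.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    num_classes = 4  # vegetative, budding, flowering, fruiting
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    train_set = datasets.ImageFolder("phenology/train", transform=transform)  # hypothetical path
    loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True)

    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # replace the final fully-connected layer

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

    model.train()
    for images, labels in loader:  # one epoch shown
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()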

    We evaluated model accuracy and created confusion matrices using the model’s predictions on the labeled validation data. This allowed us to evaluate the model’s accuracy and which specific categories are the most difficult for the model to distinguish. For using the model to make phenology predictions on the full, 40,761 image dataset, we created a custom dataloader function in Pytorch using the Custom Dataset function, which would allow for loading images listed in a csv and passing them through the model associated with unique image IDs.

    Hardware information

    Model training was conducted using a personal laptop (Ryzen 5 3500U cpu and 8 GB of memory) and a desktop computer (Ryzen 5 3600 cpu, NVIDIA RTX 3070 GPU and 16 GB of memory).

    Comparing CNN accuracy to human annotation accuracy

    We compared the accuracy of the trained CNN to the accuracy of seven inexperienced human scorers annotating a random subsample of 250 images from the full, 40,761 image dataset. An expert annotator (R. Reeb, who has over a year’s experience in annotating A. petiolata phenology) first classified the subsample images using the four-stage phenology classification scheme (vegetative, budding, flowering, fruiting). Nine images could not be classified for phenology and were removed. Next, seven non-expert annotators classified the 241 subsample images using an identical protocol. This group represented a variety of different levels of familiarity with A. petiolata phenology, ranging from no research experience to extensive research experience (two or more years working with this species). However, no one in the group had substantial experience classifying community science images and all were naïve to the four-stage phenology scoring protocol. The trained CNN was also used to classify the subsample images. We compared human annotation accuracy in each phenophase to the accuracy of the CNN using students

  10. NINCO (Out-Of-Distribution detection dataset for ImageNet)

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Aug 22, 2023
    Cite
    Julian Bitterwolf; Maximilian Müller; Matthias Hein (2023). NINCO (Out-Of-Distribution detection dataset for ImageNet) [Dataset]. http://doi.org/10.5281/zenodo.8013288
    Available download formats: application/gzip
    Dataset updated
    Aug 22, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Julian Bitterwolf; Maximilian Müller; Matthias Hein
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The NINCO (No ImageNet Class Objects) dataset is introduced in the ICML 2023 paper In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation. The images in this dataset are free from objects that belong to any of the 1000 classes of ImageNet-1K (ILSVRC2012), which makes NINCO suitable for evaluating out-of-distribution detection on ImageNet-1K.

    The NINCO main dataset consists of 64 OOD classes with a total of 5879 samples. These OOD classes were selected to have no categorical overlap with any classes of ImageNet-1K. Each sample was inspected individually by the authors to not contain ID objects.

    Besides NINCO, included are (in the same .tar.gz file) truly OOD versions of 11 popular OOD datasets with in total 2715 OOD samples.

    Further included are 17 OOD unit-tests, with 400 samples each.

    Code for loading and evaluating on each of the three datasets is provided at https://github.com/j-cb/NINCO.

    When using NINCO, please consider citing (besides the bibtex given below) the following data sources that were used to create NINCO:

    • Hendrycks et al.: ”Scaling out-of-distribution detection for real-world settings”, ICML, 2022.
    • Bossard et al.: ”Food-101 – mining discriminative components with random forests”, ECCV 2014.
    • Zhou et al.: ”Places: A 10 million image database for scene recognition”, IEEE PAMI 2017.
    • Huang et al.: ”Mos: Towards scaling out-of-distribution detection for large semantic space”, CVPR 2021.
    • Li et al.: ”Caltech 101 (1.0)”, 2022.
    • Ismail et al.: ”MYNursingHome: A fully-labelled image dataset for indoor object classification.”, Data in Brief (V. 32) 2020.
    • The iNaturalist project: https://www.inaturalist.org/

    When using NINCO_popular_datasets_subsamples, additionally to the above, please consider citing:

    • Cimpoi et al.: ”Describing textures in the wild”, CVPR 2014.
    • Hendrycks et al.: ”Natural adversarial examples”, CVPR 2021.
    • Wang et al.: ”Vim: Out-of-distribution with virtual-logit matching”, CVPR 2022.
    • Bendale et al.: ”Towards Open Set Deep Networks”, CVPR 2016.
    • Vaze et al.: ”Open-set Recognition: a Good Closed-set Classifier is All You Need?”, ICLR 2022.
    • Wang et al.: ”Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition.” ICML, 2022.
    • Galil et al.: “A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet”, ICLR 2023.

    For citing our paper, we would appreciate using the following bibtex entry (this will be updated once the ICML 2023 proceedings are public):


    @inproceedings{bitterwolf2023ninco,
      title={In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation},
      author={Julian Bitterwolf and Maximilian Mueller and Matthias Hein},
      booktitle={ICML},
      year={2023},
      url={https://proceedings.mlr.press/v202/bitterwolf23a.html}
    }

  11. IKEA-FS

    • kaggle.com
    Updated Nov 23, 2022
    Cite
    Brikwerk (2022). IKEA-FS [Dataset]. https://www.kaggle.com/datasets/brikwerk/ikeafs/code
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 23, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Brikwerk
    Description

    IKEA-FS: A dataset composed of IKEA furniture images for out-of-domain few-shot classification.

    We recommend using IKEA-FS as an out-of-domain testing target for few-shot learning. As such, we publish no training/testing splits for use with this dataset.

    Dataset Structure and Composition

    This dataset is structured as an image folder dataset, meaning each folder represents a class and contains images of the respective class.

    • Total Number of Images: 1092
    • Total Number of Classes: 46
    • Images per Class: ~20-36

    Testing Setup

    • In-domain Training Set: mini-imagenet training set
    • Testing Settings: 5-way 1-shot and 5-way 5-shot

    License

    MIT

  12. Mini-ImageNet

    • opendatalab.com
    zip
    Updated Jan 8, 2024
    Cite
    Google (2024). Mini-ImageNet [Dataset]. https://opendatalab.com/OpenDataLab/Mini-ImageNet
    Available download formats: zip
    Dataset updated
    Jan 8, 2024
    Dataset provided by
    Google
    License

    https://lyy.mpi-inf.mpg.de/mtl/download/

    Description

    The mini-ImageNet dataset was proposed by Vinyals et al. for few-shot learning evaluation. Its complexity is high due to the use of ImageNet images but requires fewer resources and infrastructure than running on the full ImageNet dataset. In total, there are 100 classes with 600 samples of 84×84 color images per class. These 100 classes are divided into 64, 16, and 20 classes respectively for sampling tasks for meta-training, meta-validation, and meta-test.
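
    For context, few-shot evaluation on mini-ImageNet samples N-way K-shot episodes from these class splits; the sketch below builds one such episode from an in-memory mapping of class names to image lists (the data layout and helper name are hypothetical).

    # Sketch: sample one N-way, K-shot episode (plus query images) from class-grouped data.
    import random

    def sample_episode(images_by_class, n_way=5, k_shot=1, n_query=15):
        """images_by_class: dict mapping class name -> list of image paths or arrays."""
        classes = random.sample(list(images_by_class), n_way)
        support, query = [], []
        for label, cls in enumerate(classes):
            picks = random.sample(images_by_class[cls], k_shot + n_query)
            support += [(img, label) for img in picks[:k_shot]]
            query += [(img, label) for img in picks[k_shot:]]
        return support, query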

  13. imagenet-12k-wds

    • huggingface.co
    Updated Dec 15, 2023
    Cite
    PyTorch Image Models (2023). imagenet-12k-wds [Dataset]. https://huggingface.co/datasets/timm/imagenet-12k-wds
    Dataset updated
    Dec 15, 2023
    Dataset authored and provided by
    PyTorch Image Models
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Summary

    This is a filtered copy of the full ImageNet dataset consisting of the top 11821 (of 21841) classes by number of samples. It has been used to pretrain a number of in12k models in timm. The code and metadata for building this dataset from the original full ImageNet can be found at https://github.com/rwightman/imagenet-12k NOTE: This subset was filtered from the original fall11 ImageNet release, which has been replaced by the winter21 release that removes close to 3000… See the full description on the dataset page: https://huggingface.co/datasets/timm/imagenet-12k-wds.

  14. Data from: Generic Object Decoding (fMRI on ImageNet)

    • openneuro.org
    Updated Dec 6, 2019
    Cite
    Tomoyasu Horikawa; Yukiyasu Kamitani (2019). Generic Object Decoding (fMRI on ImageNet) [Dataset]. http://doi.org/10.18112/openneuro.ds001246.v1.2.1
    Dataset updated
    Dec 6, 2019
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tomoyasu Horikawa; Yukiyasu Kamitani
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Generic Object Decoding (fMRI on ImageNet)

    Original paper

    Horikawa, T. & Kamitani, Y. (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications 8:15037. https://www.nature.com/articles/ncomms15037

    Overview

    In this study, fMRI data was recorded while subjects were viewing object images (image presentation experiment) or were imagining object images (imagery experiment). The image presentation experiment consisted of two distinct types of sessions: training image sessions and test image sessions. In the training image session, a total of 1,200 images from 150 object categories (8 images from each category) were each presented only once (24 runs). In the test image session, a total of 50 images from 50 object categories (1 image from each category) were presented 35 times each (35 runs). All images were taken from ImageNet (http://www.image-net.org/, Fall 2011 release), a large-scale hierarchical image database. During the image presentation experiment, subjects performed a one-back image repetition task (5 trials in each run). In the imagery experiment, subjects were required to visually imagine images from 1 of the 50 categories (20 runs; 25 categories in each run; 10 samples for each category) that were presented in the test image session of the image presentation experiment. fMRI data in the training image sessions were used to train models (decoders) which predict visual features from fMRI patterns, and those in the test image sessions and the imagery experiment were used to evaluate the model performance. Predicted features for the test image sessions and imagery experiment are used to identify seen/imagined object categories from a set of computed features for numerous object images.

    Analysis demo code is available at GitHub (KamitaniLab/GenericObjectDecoding).

    Dataset

    MRI files

    The present dataset contains fMRI data from five subjects ('sub-01', 'sub-02', 'sub-03', 'sub-04', and 'sub-05'). Each subject data contains three types of MRI data each of which was collected over multiple scanning sessions.

    • 'ses-perceptionTraining': fMRI data from the training image sessions in the image presentation experiment (24 runs; 3-5 scanning sessions)
    • 'ses-perceptionTest': fMRI data from the test image sessions in the image presentation experiment (35 runs; 4-6 scanning sessions)
    • 'ses-imageryTest': fMRI data from the imagery experiment (20 runs; 3-5 scanning sessions)

    Each scanning session consisted of functional (EPI) and anatomical (inplane T2) data. The functional EPI images covered the entire brain (TR, 3000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 50, slice gap, 0 mm) and inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 7020 ms; TE, 69 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files are converted to Nifti-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in 'sourcedata' directory (See 'README' in 'sourcedata' for more details).

    Preprocessed fMRI data

    Preprocessed fMRI data are available in derivatives/preproc-spm. See the original paper (Horikawa & Kamitani, 2017) for the details of preprocessing.

    Task event files

    Task event files (‘sub-*_ses-*_task-*_run-*_events.tsv’) contain recorded events (stimulus presentation, subject responses, etc.) during fMRI runs. In task event files for the perception task (‘ses-perceptionTraining' and 'ses-perceptionTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest': Rest block without visual stimulus, 'stimulus': Stimulus presentation block)
    • 'stimulus_id': stimulus ID of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'stimulus_name': stimulus file name of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'response_time': time of button press at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • Additional columns 'category_index' and 'image_index' are for internal use.

    In task event files for the imagery task ('ses-imageryTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest' and 'inter_rest': rest period, 'cue': cue presentation period, 'imagery': imagery period, 'evaluation': evaluation of imagery quality period)
    • 'category_id': ImageNet/WordNet synset ID of a synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'category_name': ImageNet/WordNet synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'response_time': time of button press for imagery quality evaluation at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • 'evaluation': vividness of their mental imagery evaluated by the subject (very vivid, fairly vivid, rather vivid, not vivid, or cannot recognize the target)
    • Additional column 'category_index' is for internal use.

    Image/category labels

    The stimulus images are named like 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID for a synset (category) and '19498' is the image ID. The categories are named by their ImageNet/WordNet synset ID (e.g., 'n03626115'). The stimulus and category names are included in the task event files as 'stimulus_name' and 'category_name', respectively. For use in analysis code, the task event files also contain 'stimulus_id' and 'category_id', which are float numbers generated based on the stimulus or category names (e.g., 'n03626115_19498' --> 3626115.019498).
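
    The name-to-ID conversion described above can be reproduced with a few lines of Python; this is a sketch of the stated rule inferred from the single example (the image number occupying six decimal places is an assumption), not the dataset's own code.

    # Sketch: convert a stimulus name such as 'n03626115_19498' to its float ID.
    def stimulus_name_to_id(name: str) -> float:
        synset, image = name.split("_")            # 'n03626115', '19498'
        # Assumes the image number occupies six decimal places, per the example above.
        return int(synset[1:]) + int(image) / 1e6

    print(stimulus_name_to_id("n03626115_19498"))  # 3626115.019498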

    The mapping between stimulus/category names and IDs:

    • stimulus_ImageNetTraining.tsv (perceptionTraining sessions)
      • The first and second column from the left is 'stimulus_name' and 'stimulus_id', respectively.
    • stimulus_ImageNetTest.tsv (perceptionTest sessions)
      • The first and second column from the left is 'stimulus_name' and 'stimulus_id', respectively.
    • category_GODImagery.tsv (imageryTest sessions)
      • The first and second column from the left is 'category_name' and 'category_id', respectively.

    Stimulus images

    Because of licensing issues, we do not include the stimulus images in the dataset. A script downloading the images from ImageNet is available at https://github.com/KamitaniLab/GenericObjectDecoding. Image features (CNN unit responses, HMAX, GIST, and SIFT) used in the original study are available at https://figshare.com/articles/Generic_Object_Decoding/7387130.

    Contact

  15. mini-imagenet

    • huggingface.co
    Updated Dec 6, 2024
    Cite
    PyTorch Image Models (2024). mini-imagenet [Dataset]. https://huggingface.co/datasets/timm/mini-imagenet
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 6, 2024
    Dataset authored and provided by
    PyTorch Image Models
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Description

    A mini version of ImageNet-1k with 100 of 1000 classes present. Unlike some 'mini' variants this one includes the original images at their original sizes. Many such subsets downsample to 84x84 or other smaller resolutions.

      Data Splits

    • Train: 50000 samples from ImageNet-1k train split
    • Validation: 10000 samples from ImageNet-1k train split
    • Test: 5000 samples from ImageNet-1k validation split (all 50 samples per class)… See the full description on the dataset page: https://huggingface.co/datasets/timm/mini-imagenet.

  16. Semantic Image-Text-Classes

    • data.uni-hannover.de
    jsonl, partaa, partab +48
    Updated Jan 20, 2022
    + more versions
    Cite
    TIB (2022). Semantic Image-Text-Classes [Dataset]. https://data.uni-hannover.de/dataset/image-text-classes
    Available download formats: jsonl, tar, and multi-part archive segments (partaa–partbw)
    Dataset updated
    Jan 20, 2022
    Dataset authored and provided by
    TIB
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    This dataset is introduced by the paper "Understanding, Categorizing and Predicting Semantic Image-Text Relations".

    If you are using this dataset in your work, please cite:

    @inproceedings{otto2019understanding,
    title={Understanding, Categorizing and Predicting Semantic Image-Text Relations},
    author={Otto, Christian and Springstein, Matthias and Anand, Avishek and Ewerth, Ralph},
    booktitle={In Proceedings of ACM International Conference on Multimedia Retrieval (ICMR 2019)},
    year={2019}
    }
    

    To create the full tar use the following command in the command line:

    cat train.tar.part* > train_concat.tar

    Then simply untar it via

    tar -xf train_concat.tar

    The jsonl files contain metadata of the following format:

    id, origin, CMI, SC, STAT, ITClass, text, tagged text, image_path
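
    A minimal sketch for reading that metadata is given below; it assumes each line of the jsonl file is a JSON object keyed by the fields listed above (the exact key spellings, e.g. "tagged text" vs. "tagged_text", and the file name should be checked against the data).

    # Sketch: iterate over metadata records in a jsonl file (file name is hypothetical).
    import json

    with open("train.jsonl", "r", encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            print(record.get("id"), record.get("ITClass"), record.get("image_path"))
            break  # just inspect the first record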

    License Information:

    This dataset is composed of various open access sources as described in the paper. We thank all the original authors for their work.

  17. Generic Object Decoding

    • openneuro.org
    Updated Jul 15, 2018
    Cite
    Tomoyasu Horikawa; Yukiyasu Kamitani (2018). Generic Object Decoding [Dataset]. https://openneuro.org/datasets/ds001246/versions/00001
    Dataset updated
    Jul 15, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tomoyasu Horikawa; Yukiyasu Kamitani
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Generic object decoding

    Original paper: Horikawa T & Kamitani Y (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature communications 8:15037.

    Overview

    In this study, fMRI data was recorded while subjects were viewing object images (image presentation experiment) or were imagining object images (imagery experiment). The image presentation experiment consisted of two distinct types of sessions: training image sessions and test image sessions. In the training image session, a total of 1,200 images from 150 object categories (8 images from each category) were each presented only once (24 runs). In the test image session, a total of 50 images from 50 object categories (1 image from each category) were presented 35 times each (35 runs). During the image presentation experiment, subjects performed a one-back image repetition task (5 trials in each run). In the imagery experiment, subjects were required to visually imagine images from 1 of the 50 categories (20 runs; 25 categories in each run; 10 samples for each category) that were presented in the test image session of the image presentation experiment. fMRI data in the training image sessions were used to train models (decoders) which predict visual features from fMRI patterns, and those in the test image sessions and the imagery experiment were used to evaluate the model performance. Predicted features for the test image sessions and imagery experiment are used to identify seen/imagined object categories from a set of computed features for numerous object images.

    Dataset

    MRI files

    The present dataset contains fMRI data from five subjects ('sub-01', 'sub-02', 'sub-03', 'sub-04', and 'sub-05'). Each subject data contains three types of MRI data each of which was collected over multiple scanning sessions.

    • 'ses-perceptionTraining': fMRI data from the training image sessions in the image presentation experiment (24 runs; 3-5 scanning sessions)
    • 'ses-perceptionTest': fMRI data from the test image sessions in the image presentation experiment (35 runs; 4-6 scanning sessions)
    • 'ses-imageryTest': fMRI data from the imagery experiment (20 runs; 3-5 scanning sessions)

    Each scanning session consisted of functional (EPI) and anatomical (inplane T2) data. The functional EPI images covered the entire brain (TR, 3000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 50, slice gap, 0 mm) and inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 7020 ms; TE, 69 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files are converted to Nifti-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in 'sourcedata' directory (See 'README' in 'sourcedata' for more details).

    Task event files

    Task event files (‘sub-*_ses-*_task-*_run-*_events.tsv’) contain recorded events (stimulus presentation, subject responses, etc.) during fMRI runs. In task event files for the perception task (‘ses-perceptionTraining' and 'ses-perceptionTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest': Rest block without visual stimulus, 'stimulus': Stimulus presentation block)
    • ‘image_file’: file name of the stimulus image presented in a stimulus block ('null' in rest blocks)
    • 'response_time': time of button press at the block, elapsed time (sec) from the beginning of each run ('0' means that a subject did not press the button in the block)

    The name of a stimulus image file is formatted like 'n03626115_19498.JPEG', where 'n03626115' is the ImageNet/WordNet ID for a synset (category) and '19498' is the image ID. Because of copyright, we do not include the stimulus images in the dataset. A script downloading the images from ImageNet is available at https://github.com/KamitaniLab/GenericObjectDecoding. Image features (CNN unit responses, HMAX, GIST, and SIFT) used in the original study are available at http://brainliner.jp/data/brainliner/Generic_Object_Decoding.

    In task event files for the imagery task ('ses-imageryTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest' and 'inter_rest': rest period, 'cue': cue presentation period, 'imagery': imagery period, 'evaluation': evaluation of imagery quality period)
    • 'category_id': ImageNet/WordNet synset ID of a synset (category) which the subject was instructed to imagine at the block
    • 'response_time': time of button press for imagery quality evaluation at the block, elapsed time (sec) from the beginning of each run
    • 'evaluation': vividness of their mental imagery evaluated by the subject (very vivid, fairly vivid, rather vivid, not vivid, or cannot recognize the target)

  18. reduced-imagenet

    • huggingface.co
    Updated Apr 13, 2024
    Cite
    Rich Wardle (2024). reduced-imagenet [Dataset]. https://huggingface.co/datasets/richwardle/reduced-imagenet
    Available formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 13, 2024
    Authors
    Rich Wardle
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Imagenet Mini Dataset

    This dataset is a subset of the Imagenet validation set containing 26,000 images. It has been curated to have equal class distributions, with 26 randomly sampled images from each class. All images have been resized to (224, 224) pixels, and are in RGB format.
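
    A minimal sketch of loading this subset and checking the stated 224x224 RGB format is shown below; the split name "train" and the "image" column name are assumptions to verify against the dataset card.

    # Sketch: load the subset from the Hub and confirm image size and mode (split and column names assumed).
    from datasets import load_dataset

    ds = load_dataset("richwardle/reduced-imagenet", split="train")
    img = ds[0]["image"]
    print(img.size, img.mode)  # expected: (224, 224) RGB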

      Citation
    

    If you use this dataset in your research, please cite the original Imagenet dataset: Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale… See the full description on the dataset page: https://huggingface.co/datasets/richwardle/reduced-imagenet.

  19. PartImageNet

    • opendatalab.com
    • huggingface.co
    zip
    Updated Apr 1, 2023
    Cite
    ByteDance (2023). PartImageNet [Dataset]. https://opendatalab.com/OpenDataLab/PartImageNet
    Available download formats: zip (3002730179 bytes)
    Dataset updated
    Apr 1, 2023
    Dataset provided by
    University of Technology Sydney
    Johns Hopkins University
    ByteDance
    Description

    PartImageNet is a large, high-quality dataset with part segmentation annotations. It consists of 158 classes from ImageNet with approximately 24,000 images. The classes are grouped into 11 super-categories, and the part splits are designed according to the super-category as shown below. The number in the brackets after the category name indicates the total number of classes of the category.

  20. Downsampled Open Images V4 Dataset

    • academictorrents.com
    bittorrent
    Updated Dec 19, 2018
    Cite
    None (2018). Downsampled Open Images V4 Dataset [Dataset]. https://academictorrents.com/details/9208d33aceb2ca3eb2beb70a192600c9c41efba1
    Available download formats: bittorrent (85220313799)
    Dataset updated
    Dec 19, 2018
    Authors
    None
    License

    https://academictorrents.com/nolicensespecified

    Description

    This is the downsampled version of the Open Images V4 Dataset. The Open Images V4 dataset contains 15.4M bounding-boxes for 600 categories on 1.9M images and 30.1M human-verified image-level labels for 19794 categories. The dataset is available at this link. The total size of the full dataset is 18TB. There is also a smaller version which contains rescaled images with at most 1024 pixels on the longest side. However, the total size of the rescaled dataset is still large (513GB for training, 12GB for validation and 36GB for testing). I provide a much smaller version of the Open Images Dataset V4, inspired by the Downsampled ImageNet datasets by @PatrykChrabaszcz. These downsampled datasets are much smaller in size so everyone can download them with ease (59GB for training with the 512px version and 16GB for training with the 256px version). Experiments on these downsampled datasets are also much faster than on the original.

    | Dataset | Train Size | Validation Size | Test Size | Test Challenge Size |
