17 datasets found
  1. imagenet-r

    • huggingface.co
    Updated Jun 18, 2024
    Cite
    Weixiong Lin (2024). imagenet-r [Dataset]. https://huggingface.co/datasets/axiong/imagenet-r
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 18, 2024
    Authors
    Weixiong Lin
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    ImageNet-R

    This repo is intended to facilitate the evaluation of various pretrained models. It is constructed from the source files provided by the official implementation.

      Usage
    

    from datasets import load_dataset

    dataset = load_dataset('axiong/imagenet-r')

      Dataset Summary
    

    ImageNet-R(endition) contains art, cartoons, deviantart, graffiti, embroidery, graphics, origami, paintings, patterns, plastic objects, plush objects, sculptures, sketches, tattoos, toys, and video… See the full description on the dataset page: https://huggingface.co/datasets/axiong/imagenet-r.

  2. imagenet_r

    • tensorflow.org
    Updated Jun 1, 2024
    Cite
    (2024). imagenet_r [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet_r
    Explore at:
    Dataset updated
    Jun 1, 2024
    Description

    ImageNet-R is a set of images labelled with ImageNet labels that were obtained by collecting art, cartoons, deviantart, graffiti, embroidery, graphics, origami, paintings, patterns, plastic objects, plush objects, sculptures, sketches, tattoos, toys, and video game renditions of ImageNet classes. ImageNet-R has renditions of 200 ImageNet classes, resulting in 30,000 images. For more details please refer to the paper.

    The label space is the same as that of ImageNet2012. Each example is represented as a dictionary with the following keys:

    • 'image': The image, a (H, W, 3)-tensor.
    • 'label': An integer in the range [0, 1000).
    • 'file_name': A unique string identifying the example within the dataset.

    To use this dataset:

    import tensorflow_datasets as tfds

    ds = tfds.load('imagenet_r', split='train')
    for ex in ds.take(4):
      print(ex)


    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet_r-0.2.0.png
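Since ImageNet-R keeps only 200 of the 1000 ImageNet classes while the TFDS label space stays at [0, 1000), a 200-way classifier typically needs the labels remapped to a contiguous range. A minimal sketch of that remapping, assuming a hypothetical `kept_classes` list (the real list of 200 class ids ships with the dataset):

```python
def remap_labels(labels, kept_classes):
    """Map original ImageNet label ids (0..999) for the classes present in
    ImageNet-R onto contiguous ids 0..len(kept_classes)-1.

    `kept_classes` stands in for the 200 original class ids kept by
    ImageNet-R (hypothetical toy values below)."""
    index = {c: i for i, c in enumerate(sorted(kept_classes))}
    return [index[l] for l in labels]

# Toy example with a made-up 3-class subset of the label space:
print(remap_labels([7, 42, 7, 901], [42, 7, 901]))  # [0, 1, 0, 2]
```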

  3. Data from: Tiny-ImageNet-R

    • zenodo.org
    zip
    Updated Jun 17, 2022
    Cite
    Martin Weiss; Nasim Rahaman (2022). Tiny-ImageNet-R [Dataset]. http://doi.org/10.5281/zenodo.6653675
    Explore at:
    zip (available download formats)
    Dataset updated
    Jun 17, 2022
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Martin Weiss; Nasim Rahaman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Tiny-ImageNet-R is a down-sampled subset of ImageNet-R(enditions). It contains roughly 12,000 samples categorized into 64 classes (a subset of the Tiny-ImageNet classes), spread across multiple visual domains such as art, cartoons, sculptures, origami, graffiti, and embroidery.

  4. MMEB-eval-ImageNet-R-beir-v3

    • huggingface.co
    Updated Jul 27, 2025
    + more versions
    Cite
    Zilin Xiao (2025). MMEB-eval-ImageNet-R-beir-v3 [Dataset]. https://huggingface.co/datasets/MrZilinXiao/MMEB-eval-ImageNet-R-beir-v3
    Explore at:
    Dataset updated
    Jul 27, 2025
    Authors
    Zilin Xiao
    Description

    The MrZilinXiao/MMEB-eval-ImageNet-R-beir-v3 dataset, hosted on Hugging Face and contributed by the HF Datasets community.

  5. Fahad Sarfraz, Elahe Arani, Bahram Zonooz (2024). Dataset: DomainNet,...

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    Fahad Sarfraz, Elahe Arani, Bahram Zonooz (2024). Dataset: DomainNet, ImageNet-R, ImageNet-B, and ImageNet-A. https://doi.org/10.57702/wm2cmadl [Dataset]. https://service.tib.eu/ldmservice/dataset/domainnet--imagenet-r--imagenet-b--and-imagenet-a
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    The dataset used in the paper is a classification dataset, specifically DomainNet, ImageNet-R, ImageNet-B, and ImageNet-A.

  6. ImageNet-W Dataset

    • paperswithcode.com
    Updated Dec 12, 2022
    Cite
    Zhiheng Li; Ivan Evtimov; Albert Gordo; Caner Hazirbas; Tal Hassner; Cristian Canton Ferrer; Chenliang Xu; Mark Ibrahim (2022). ImageNet-W Dataset [Dataset]. https://paperswithcode.com/dataset/imagenet-w
    Explore at:
    Dataset updated
    Dec 12, 2022
    Authors
    Zhiheng Li; Ivan Evtimov; Albert Gordo; Caner Hazirbas; Tal Hassner; Cristian Canton Ferrer; Chenliang Xu; Mark Ibrahim
    Description

    ImageNet-W(atermark) is a test set to evaluate models’ reliance on the newly found watermark shortcut in ImageNet, which is used to predict the carton class. ImageNet-W is created by overlaying transparent watermarks on the ImageNet validation set. Two metrics are used to evaluate watermark shortcut reliance: (1) IN-W Gap: the top-1 accuracy drop from ImageNet to ImageNet-W, (2) Carton Gap: carton class accuracy increase from ImageNet to ImageNet-W. Combining ImageNet-W with previous out-of-distribution variants of ImageNet (e.g., Stylized ImageNet, ImageNet-R, ImageNet-9) forms a comprehensive suite of multi-shortcut evaluation on ImageNet.
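Both metrics described above are plain accuracy differences between the two evaluation sets. A minimal sketch, with hypothetical function names and illustrative accuracy values (not taken from the paper):

```python
def in_w_gap(acc_imagenet, acc_imagenet_w):
    """IN-W Gap: top-1 accuracy drop from ImageNet to ImageNet-W."""
    return acc_imagenet - acc_imagenet_w

def carton_gap(carton_acc_imagenet, carton_acc_imagenet_w):
    """Carton Gap: carton-class accuracy increase from ImageNet to ImageNet-W."""
    return carton_acc_imagenet_w - carton_acc_imagenet

# Hypothetical accuracies, for illustration only:
print(round(in_w_gap(0.76, 0.65), 2))     # 0.11 (larger gap = more watermark reliance)
print(round(carton_gap(0.60, 0.86), 2))   # 0.26
```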

  7. Shahaf E. Finder, Roy Amoyal, Eran Treister, Oren Freifeld (2024). Dataset:...

    • service.tib.eu
    Updated Dec 2, 2024
    + more versions
    Cite
    Shahaf E. Finder, Roy Amoyal, Eran Treister, Oren Freifeld (2024). Dataset: ImageNet-R/A/Sk. https://doi.org/10.57702/980uqjcg [Dataset]. https://service.tib.eu/ldmservice/dataset/imagenet-r-a-sk
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    The dataset used in the paper is not explicitly described, but it is implied to be ImageNet-R/A/Sk, used for ImageNet-R/A/Sk classification.

  8. reduced-imagenet

    • huggingface.co
    Updated Jun 22, 2024
    Cite
    Rich Wardle (2024). reduced-imagenet [Dataset]. https://huggingface.co/datasets/richwardle/reduced-imagenet
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 22, 2024
    Authors
    Rich Wardle
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Imagenet Mini Dataset

    This dataset is a subset of the Imagenet validation set containing 26,000 images. It has been curated to have equal class distributions, with 26 randomly sampled images from each class. All images have been resized to (224, 224) pixels, and are in RGB format.
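The equal-class-distribution claim (26 images from each of the 1000 classes, 26,000 total) is easy to check once the labels are loaded. A minimal sketch, using a toy label list in place of the real dataset:

```python
from collections import Counter

def check_balanced(labels, expected_per_class=26):
    """Return True if every class in `labels` appears exactly
    `expected_per_class` times; for the described subset this is
    1000 classes x 26 images = 26,000 images."""
    counts = Counter(labels)
    return all(n == expected_per_class for n in counts.values())

# Toy example with 3 classes and 2 images each:
print(check_balanced([0, 0, 1, 1, 2, 2], expected_per_class=2))  # True
```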

      Citation
    

    If you use this dataset in your research, please cite the original Imagenet dataset: Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale… See the full description on the dataset page: https://huggingface.co/datasets/richwardle/reduced-imagenet.

  9. CIFAR-100-R dataset

    • zenodo.org
    zip
    Updated Sep 5, 2023
    + more versions
    Cite
    Vahid Reza Khazaie (2023). CIFAR-100-R dataset [Dataset]. http://doi.org/10.5281/zenodo.8316429
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 5, 2023
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Vahid Reza Khazaie
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Towards Realistic Out-of-Distribution Detection: A Novel Evaluation Framework for Improving Generalization in OOD Detection:

    This paper presents a novel evaluation framework for Out-of-Distribution (OOD) detection that aims to assess the performance of machine learning models in more realistic settings. We observed that the real-world requirements for testing OOD detection methods are not satisfied by the current testing protocols. They usually encourage methods to have a strong bias towards a low level of diversity in normal data. To address this limitation, we propose new OOD test datasets (CIFAR-10-R, CIFAR-100-R, and ImageNet-30-R) that can allow researchers to benchmark OOD detection performance under realistic distribution shifts. Additionally, we introduce a Generalizability Score (GS) to measure the generalization ability of a model during OOD detection. Our experiments demonstrate that improving the performance on existing benchmark datasets does not necessarily improve the usability of OOD detection models in real-world scenarios. While leveraging deep pre-trained features has been identified as a promising avenue for OOD detection research, our experiments show that state-of-the-art pre-trained models tested on our proposed datasets suffer a significant drop in performance. To address this issue, we propose a post-processing stage for adapting pre-trained features under these distribution shifts before calculating the OOD scores, which significantly enhances the performance of state-of-the-art pre-trained models on our benchmarks.

  10. ImageNet LSVRC 2012 Training Set (lmdb)

    • academictorrents.com
    bittorrent
    Updated Dec 28, 2019
    + more versions
    Cite
    Deng, J. and Dong, W. and Socher, R. and Li, L.-J. and Li, K. and Fei-Fei, L. (2019). ImageNet LSVRC 2012 Training Set (lmdb) [Dataset]. https://academictorrents.com/details/d58437a61c1adf9801df99c6a82960d076cb7312
    Explore at:
    bittorrent (150,723,780,608 bytes; available download formats)
    Dataset updated
    Dec 28, 2019
    Dataset authored and provided by
    Deng, J. and Dong, W. and Socher, R. and Li, L.-J. and Li, K. and Fei-Fei, L.
    License

    No license specified: https://academictorrents.com/nolicensespecified

    Description

    You have been granted access for non-commercial research/educational use. By accessing the data, you have agreed to the following terms. You (the "Researcher") have requested permission to use the ImageNet database (the "Database") at Princeton University and Stanford University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions: 1. Researcher shall use the Database only for non-commercial research and educational purposes. 2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. 3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify Princeton University and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but…

  11. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L. (2024)....

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L. (2024). Dataset: ImageNet Subsets. https://doi.org/10.57702/oetogsha [Dataset]. https://service.tib.eu/ldmservice/dataset/imagenet-subsets
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    ImageNet Subsets

  12. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L. (2024)....

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L. (2024). Dataset: ImageNet: A Large-Scale Hierarchical Image Database. https://doi.org/10.57702/0elnaxd7 [Dataset]. https://service.tib.eu/ldmservice/dataset/imagenet--a-large-scale-hierarchical-image-database
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    The ImageNet dataset is a large-scale image database that contains over 14 million images, each labeled with one of 21,841 categories.

  13. Data from: UCIT

    • huggingface.co
    Updated May 28, 2025
    Cite
    HaiyangGuo (2025). UCIT [Dataset]. https://huggingface.co/datasets/HaiyangGuo/UCIT
    Explore at:
    Dataset updated
    May 28, 2025
    Authors
    HaiyangGuo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    UCIT benchmark

    This benchmark is used to train and evaluate the Continual Instruction Tuning capabilities of MLLMs and is proposed by HiDe-LLaVA (ACL 2025). This repository contains mainly the training and testing instructions for the datasets used as well as images of ImageNet-R and Flickr30k datasets. For images of other datasets, please refer to the links provided in our GitHub. If you use our benchmarks, please cite our work: @article{guo2025hide, title={Hide-llava:… See the full description on the dataset page: https://huggingface.co/datasets/HaiyangGuo/UCIT.

  14. SE_ResNeXt101_32x4d_imagenet_weights

    • kaggle.com
    zip
    Updated Jul 4, 2019
    Cite
    David R. Pugh (2019). SE_ResNeXt101_32x4d_imagenet_weights [Dataset]. https://www.kaggle.com/davidrpugh/se-resnext101-32x4d-imagenet-weights
    Explore at:
    zip (0 bytes; available download formats)
    Dataset updated
    Jul 4, 2019
    Authors
    David R. Pugh
    Description

    Dataset

    This dataset was created by David R. Pugh

    Contents

  15. Classification performance of the SVM with linear and rbf kernel, when the...

    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Alan Caio R. Marques; Marcos M. Raimundo; Ellen Marianne B. Cavalheiro; Luis F. P. Salles; Christiano Lyra; Fernando J. Von Zuben (2023). Classification performance of the SVM with linear and rbf kernel, when the features are extracted from the penultimate layer of an AlexNet CNN trained with an www.image-net.org dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0192011.t002
    Explore at:
    xls (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Alan Caio R. Marques; Marcos M. Raimundo; Ellen Marianne B. Cavalheiro; Luis F. P. Salles; Christiano Lyra; Fernando J. Von Zuben
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Rows show the performance of each learning machine (SVM with linear kernel and SVM with rbf kernel) on each image view (head, dorsum and profile). Columns show accuracy, average precision and minimum precision performance for each label on top lists. H = head view; D = dorsal view; P = profile view; SVM-L = SVM with linear kernel; SVM-R = SVM with rbf kernel.
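The comparison in this table (SVM-L vs SVM-R on CNN features) can be reproduced in miniature with scikit-learn. A minimal sketch, assuming scikit-learn is available and using synthetic features in place of AlexNet penultimate-layer activations:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic 2-class "features" standing in for AlexNet penultimate-layer activations.
X, y = make_blobs(n_samples=200, centers=2, n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accs = {}
for kernel in ("linear", "rbf"):  # SVM-L vs SVM-R in the table's notation
    accs[kernel] = SVC(kernel=kernel).fit(X_tr, y_tr).score(X_te, y_te)
    print(kernel, accs[kernel])
```

On real CNN features the two kernels can differ markedly; on these well-separated synthetic blobs both reach near-perfect accuracy.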

  16. sd_ImageNet_val

    • kaggle.com
    Updated Jul 9, 2024
    Cite
    wei li long (2024). sd_ImageNet_val [Dataset]. https://www.kaggle.com/datasets/weililong/sd-imagenet-val/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 9, 2024
    Dataset provided by
    Kaggle: http://kaggle.com/
    Authors
    wei li long
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This is only a test dataset; please change the data paths in the code so that everything points at the test set. Example: https://github.com/CreamyLong/stable-diffusion

    ldm/data/imagenet.py

    class ImageNetTrain(ImageNetBase):
    NAME = "ILSVRC2012_validation" #ILSVRC2012_train
    URL = "http://www.image-net.org/challenges/LSVRC/2012/"
    AT_HASH = "a306397ccf9c2ead27155983c254227c0fd938e2"
    FILES = [
      "ILSVRC2012_img_train.tar",
    ]
    SIZES = [
      147897477120,
    ]
    
    def __init__(self, process_images=True, data_root=None, **kwargs):
      self.process_images = process_images
      self.data_root = data_root
      super().__init__(**kwargs)
    
    def _prepare(self):
    
      if self.data_root:
        self.root = os.path.join(self.data_root, self.NAME)
        # print(self.root) #data/myimages/ILSVRC2012_validation
        # exit()
      else:
        # cachedir = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
        # self.root = os.path.join(cachedir, "autoencoders/data", self.NAME)
        print('Do not download ILSVRC2012 online; it is too large')
        exit()
    
      self.datadir = os.path.join(self.root, "data")
      # print(self.datadir) #data/myimages/ILSVRC2012_validation\data
      # exit()
    
    
      self.txt_filelist = os.path.join(self.root, "me_images.txt") 
      print('================',self.txt_filelist) 
      self.expected_length = 1281167
      self.random_crop = retrieve(self.config, "ImageNetTrain/random_crop",  default=True)
    
    
      if not tdu.is_prepared(self.root):
    
        # prep
        print("Preparing dataset {} in {}".format(self.NAME, self.root))
    
    
        datadir = self.datadir
    
        # if not os.path.exists(datadir):
        #   path = os.path.join(self.root, self.FILES[0])
        #   if not os.path.exists(path) or not os.path.getsize(path)==self.SIZES[0]:
        #     import academictorrents as at
        #     atpath = at.get(self.AT_HASH, datastore=self.root)
        #     assert atpath == path
    
        #   print("Extracting {} to {}".format(path, datadir))
        #   os.makedirs(datadir, exist_ok=True)
        #   with tarfile.open(path, "r:") as tar:
        #     tar.extractall(path=datadir)
    
        #   print("Extracting sub-tars.")
        #   subpaths = sorted(glob.glob(os.path.join(datadir, "*.tar")))
        #   for subpath in tqdm(subpaths):
        #     subdir = subpath[:-len(".tar")]
        #     os.makedirs(subdir, exist_ok=True)
        #     with tarfile.open(subpath, "r:") as tar:
        #       tar.extractall(path=subdir)
    
        # filelist = glob.glob(os.path.join(datadir, "**", "*.JPEG"))
        # filelist = glob.glob(os.path.join(datadir, "*.JPEG")) 
    
        filelist = glob.glob(os.path.join(datadir, "*", "*.JPEG"))
        filelist = [os.path.relpath(p, start=datadir) for p in filelist]
        filelist = sorted(filelist)
        filelist = "\n".join(filelist) + "\n"
        with open(self.txt_filelist, "w") as f:
          f.write(filelist)

      tdu.mark_prepared(self.root)
    

    ${XDG_CACHE}/autoencoders/data/ILSVRC2012_{split}/data/
    ├── n01440764
    │   ├── n01440764_10026.JPEG
    │   ├── n01440764_10027.JPEG
    │   ├── ...
    ├── n01443537
    │   ├── n01443537_10007.JPEG
    │   ├── n01443537_10014.JPEG
    │   ├── ...
    ├── ...
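The `_prepare` step above boils down to collecting relative JPEG paths under the data directory and writing them, one per line, to a file list. A minimal standalone sketch of that join-and-write logic (file and directory names here are illustrative):

```python
import os
import tempfile

def write_filelist(relative_paths, txt_path):
    """Write one relative JPEG path per line, sorted and newline-terminated,
    mirroring the file-list step in _prepare above."""
    body = "\n".join(sorted(relative_paths)) + "\n"
    with open(txt_path, "w") as f:
        f.write(body)

paths = ["n01443537/n01443537_10007.JPEG", "n01440764/n01440764_10026.JPEG"]
with tempfile.TemporaryDirectory() as d:
    txt = os.path.join(d, "me_images.txt")
    write_filelist(paths, txt)
    lines = open(txt).read().splitlines()
print(lines)  # sorted relative paths
```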

  17. Style Transfer for Object Detection in Art

    • kaggle.com
    Updated Mar 11, 2021
    Cite
    David Kadish (2021). Style Transfer for Object Detection in Art [Dataset]. https://www.kaggle.com/datasets/davidkadish/style-transfer-for-object-detection-in-art/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 11, 2021
    Dataset provided by
    Kaggle: http://kaggle.com/
    Authors
    David Kadish
    Description

    Context

    Despite recent advances in object detection using deep learning neural networks, these neural networks still struggle to identify objects in art images such as paintings and drawings. This challenge is known as the cross-depiction problem, and it stems in part from the tendency of neural networks to prioritize identification of an object's texture over its shape. In this paper we propose and evaluate a process for training neural networks to localize objects - specifically people - in art images. We generated a large dataset for training and validation by modifying the images in the COCO dataset using AdaIN style transfer (style-coco.tar.xz). This dataset was used to fine-tune a Faster R-CNN object detection network (2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth), which is then tested on the existing People-Art testing dataset (PeopleArt-Coco.tar.xz). The result is a significant improvement on the state of the art and a new way forward for creating datasets to train neural networks to process art images.

    Content

    • 2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth: Trained object detection network (Faster R-CNN using a ResNet152 backbone pretrained on ImageNet) for use with PyTorch
    • PeopleArt-Coco.tar.xz: People-Art dataset with COCO-formatted annotations (original at https://github.com/BathVisArtData/PeopleArt)
    • style-coco.tar.xz: Stylized COCO dataset containing only the person category. Used to train 2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth

    Code

    The code is available on github at https://github.com/dkadish/Style-Transfer-for-Object-Detection-in-Art

    Citing

    If you are using this code or the concept of style transfer for object detection in art, please cite our paper (https://arxiv.org/abs/2102.06529):

    D. Kadish, S. Risi, and A. S. Løvlie, “Improving Object Detection in Art Images Using Only Style Transfer,” Feb. 2021.
