55 datasets found
  1. imagenet2012

    • tensorflow.org
    Updated Jun 1, 2024
    Cite
    (2024). imagenet2012 [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet2012
    Description

    ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet; the majority of them are nouns (80,000+). In ImageNet, we aim to provide on average 1,000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. In its completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.

    The test split contains 100K images but no labels because no labels have been publicly released. We provide support for the test split from 2012 with the minor patch released on October 10, 2019. In order to manually download this data, a user must perform the following operations:

    1. Download the 2012 test split available here.
    2. Download the October 10, 2019 patch. There is a Google Drive link to the patch provided on the same page.
    3. Combine the two tar-balls, manually overwriting any images in the original archive with images from the patch. According to the instructions on image-net.org, this procedure overwrites just a few images.

    The resulting tar-ball may then be processed by TFDS.

    To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split, export those results to a text file, and upload that file to the ImageNet evaluation server. The maintainers of the ImageNet evaluation server permit a single user up to 2 submissions per week in order to prevent overfitting.

    To evaluate the accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:

    771 778 794 387 650
    363 691 764 923 427
    737 369 430 531 124
    755 930 755 59 168
    

    The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file consists of 100,000 lines, one per image in the test split. Each line of integers corresponds to the rank-ordered top-5 predictions for that test image. The integers are 1-indexed, corresponding to the line number in the corresponding labels file. See labels.txt.
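    For illustration, a minimal sketch of writing such a submission file from an array of per-class scores (the predictions array and output file name here are hypothetical placeholders):

    import numpy as np
    
    # Hypothetical (100000, 1000) array of per-class scores, one row per
    # test image in the split's canonical order.
    predictions = np.random.rand(100000, 1000)
    
    with open('classification_submission.txt', 'w') as f:
        for scores in predictions:
            # Rank-ordered top-5 class indices; +1 converts the 0-indexed
            # model outputs to the 1-indexed labels the server expects.
            top5 = np.argsort(scores)[::-1][:5] + 1
            f.write(' '.join(str(i) for i in top5) + '\n')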

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet2012', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012-5.1.0.png

  2. tiny-imagenet

    • huggingface.co
    • datasets.activeloop.ai
    Updated Aug 12, 2022
    Cite
    Hao Zheng (2022). tiny-imagenet [Dataset]. https://huggingface.co/datasets/zh-plus/tiny-imagenet
    Authors
    Hao Zheng
    License

    https://choosealicense.com/licenses/undefined/

    Description

    Dataset Card for tiny-imagenet

      Dataset Summary
    

    Tiny ImageNet contains 100,000 images of 200 classes (500 per class), downsized to 64×64 color images. Each class has 500 training images, 50 validation images, and 50 test images.

      Languages
    

    The class labels in the dataset are in English.

      Dataset Structure

      Data Instances
    

    { 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=64x64 at 0x1A800E8E190>, 'label': 15 }… See the full description on the dataset page: https://huggingface.co/datasets/zh-plus/tiny-imagenet.
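    As a quick-start sketch, the dataset can be loaded with the Hugging Face datasets library (assuming the datasets package is installed; field names follow the instance shown above):

    from datasets import load_dataset
    
    # Download and load the training split from the Hugging Face Hub.
    ds = load_dataset('zh-plus/tiny-imagenet', split='train')
    print(ds[0]['image'].size, ds[0]['label'])  # e.g. (64, 64) and an integer label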

  3. imagenet_resized

    • tensorflow.org
    Updated Jun 1, 2024
    Cite
    (2024). imagenet_resized [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet_resized
    Description

    This dataset consists of the ImageNet dataset resized to a fixed size. The images here are the ones provided by Chrabaszcz et al. using the box resize method.

    For downsampled ImageNet for unsupervised learning see downsampled_imagenet.

    WARNING: The integer labels used are defined by the authors and do not match those from the other ImageNet datasets provided by TensorFlow Datasets. See the original label list, and the labels used by this dataset. Additionally, the original authors 1-index their labels, which we convert to 0-indexed by subtracting one.

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet_resized', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet_resized-8x8-0.1.0.png

  4. imagenet-22k-wds

    • huggingface.co
    Updated Jan 29, 2024
    Cite
    PyTorch Image Models (2024). imagenet-22k-wds [Dataset]. https://huggingface.co/datasets/timm/imagenet-22k-wds
    Dataset authored and provided by
    PyTorch Image Models
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Summary

    This is a copy of the full ImageNet dataset consisting of all of the original 21,841 classes. It also contains labels in a separate field for the '12k' subset described at https://github.com/rwightman/imagenet-12k (see also https://huggingface.co/datasets/timm/imagenet-12k-wds). This dataset is from the original fall11 ImageNet release, which has been replaced by the winter21 release; winter21 removes close to 3,000 synsets containing people, a number of which are of an offensive… See the full description on the dataset page: https://huggingface.co/datasets/timm/imagenet-22k-wds.
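    Since the dataset ships as WebDataset shards, a hedged sketch of streaming it with the Hugging Face datasets library (so the full archive need not be downloaded up front; exact column names may differ):

    from datasets import load_dataset
    
    # Stream samples shard by shard instead of materializing the full dataset.
    ds = load_dataset('timm/imagenet-22k-wds', split='train', streaming=True)
    for sample in ds.take(2):
        print(sample.keys())  # inspect the available image and label fields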

  5. imagenette

    • tensorflow.org
    • opendatalab.com
    • +1 more
    Updated Jun 1, 2024
    Cite
    (2024). imagenette [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenette
    Description

    Imagenette is a subset of 10 easily classified classes from the ImageNet dataset. It was originally prepared by Jeremy Howard of FastAI. The main motivation for putting together a small version of the ImageNet dataset was that running new ideas/algorithms/experiments on the whole ImageNet takes a lot of time.

    This version of the dataset allows researchers/practitioners to quickly try out ideas and share with others. The dataset comes in three variants:

    • Full size
    • 320 px
    • 160 px

    Note: The v2 config corresponds to the new 70/30 train/valid split (released on Dec 6, 2019).

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenette', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenette-full-size-v2-1.0.0.png

  6. imagenet_1k_resized_256

    • huggingface.co
    Updated Feb 26, 2025
    Cite
    Evan (2025). imagenet_1k_resized_256 [Dataset]. https://huggingface.co/datasets/evanarlian/imagenet_1k_resized_256
    Authors
    Evan
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for "imagenet_1k_resized_256"

      Dataset summary
    

    The same ImageNet dataset, but with the shorter side of every image resized to 256. Many pretraining workflows resize images to 256 and then randomly crop to 224x224, which is why 256 was chosen. The resized dataset can also be downloaded much faster and consumes less space than the original. See here for the detailed readme.
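    As a sketch of the workflow the card alludes to (torchvision is an assumption here, not part of the dataset card):

    from torchvision import transforms
    
    # Typical pretraining preprocessing: since the shorter side is already
    # 256 in this dataset, Resize(256) is a no-op and only the crop does work.
    preprocess = transforms.Compose([
        transforms.Resize(256),      # shorter side -> 256
        transforms.RandomCrop(224),  # 224x224 training crop
        transforms.ToTensor(),
    ])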

      Dataset Structure

    Below is an example of one row of data. Note that the labels in… See the full description on the dataset page: https://huggingface.co/datasets/evanarlian/imagenet_1k_resized_256.

  7. imagenet-1k-64x64

    • huggingface.co
    Updated Sep 15, 2024
    Cite
    Benjamin Paine (2024). imagenet-1k-64x64 [Dataset]. https://huggingface.co/datasets/benjamin-paine/imagenet-1k-64x64
    Authors
    Benjamin Paine
    License

    https://choosealicense.com/licenses/other/

    Description

    Repack Information

    This repository contains a complete repack of ILSVRC/imagenet-1k in Parquet format with the following data transformations:

    • Images were center-cropped to square using the minimum of the height/width dimensions.
    • Images were then rescaled to 256×256 using Lanczos resampling; that dataset is available at benjamin-paine/imagenet-1k-256x256.
    • Images were then rescaled to 128×128 using Lanczos resampling; that dataset is available at benjamin-paine/imagenet-1k-128x128.
    • Images were… See the full description on the dataset page: https://huggingface.co/datasets/benjamin-paine/imagenet-1k-64x64.
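    A minimal sketch of the described repack transformation with Pillow (the function name is illustrative, not the repacker's actual code):

    from PIL import Image
    
    def repack(image: Image.Image, size: int) -> Image.Image:
        # Center-crop to a square whose side is min(width, height)...
        w, h = image.size
        s = min(w, h)
        left, top = (w - s) // 2, (h - s) // 2
        square = image.crop((left, top, left + s, top + s))
        # ...then rescale with Lanczos resampling, as described above.
        return square.resize((size, size), Image.LANCZOS)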

  8. mini-imagenet

    • huggingface.co
    Updated Dec 6, 2024
    Cite
    PyTorch Image Models (2024). mini-imagenet [Dataset]. https://huggingface.co/datasets/timm/mini-imagenet
    Dataset authored and provided by
    PyTorch Image Models
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Description

    A mini version of ImageNet-1k with 100 of the 1000 classes present. Unlike some 'mini' variants, this one includes the original images at their original sizes; many such subsets downsample to 84×84 or other smaller resolutions.

      Data Splits

      Train

    50,000 samples from the ImageNet-1k train split

      Validation

    10,000 samples from the ImageNet-1k train split

      Test

    5,000 samples from the ImageNet-1k validation split (all 50 samples per class)… See the full description on the dataset page: https://huggingface.co/datasets/timm/mini-imagenet.

  9. Downsampled Open Images V4 Dataset

    • academictorrents.com
    bittorrent
    Updated Dec 19, 2018
    Cite
    None (2018). Downsampled Open Images V4 Dataset [Dataset]. https://academictorrents.com/details/9208d33aceb2ca3eb2beb70a192600c9c41efba1
    Available download formats: bittorrent (85,220,313,799 bytes)
    Authors
    None
    License

    No license specified (https://academictorrents.com/nolicensespecified)

    Description

    This is the downsampled version of the Open Images V4 Dataset. The Open Images V4 dataset contains 15.4M bounding boxes for 600 categories on 1.9M images and 30.1M human-verified image-level labels for 19,794 categories. The dataset is available at this link. The total size of the full dataset is 18 TB. There is also a smaller version which contains images rescaled to have at most 1024 pixels on the longest side. However, the total size of the rescaled dataset is still large (513 GB for training, 12 GB for validation and 36 GB for testing). I provide a much smaller version of the Open Images Dataset V4, inspired by the Downsampled ImageNet datasets of @PatrykChrabaszcz. These downsampled datasets are much smaller in size, so everyone can download them with ease (59 GB for training with the 512px version and 16 GB for training with the 256px version). Experiments on these downsampled datasets are also much faster than on the original. | Dataset | Train Size | Validation Size | Test Size | Test Challenge Size |

  10. imagenet_r

    • tensorflow.org
    Updated Jun 1, 2024
    Cite
    (2024). imagenet_r [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet_r
    Description

    ImageNet-R is a set of images labelled with ImageNet labels that were obtained by collecting art, cartoons, deviantart, graffiti, embroidery, graphics, origami, paintings, patterns, plastic objects, plush objects, sculptures, sketches, tattoos, toys, and video game renditions of ImageNet classes. ImageNet-R has renditions of 200 ImageNet classes, resulting in 30,000 images. For more details please refer to the paper.

    The label space is the same as that of ImageNet2012. Each example is represented as a dictionary with the following keys:

    • 'image': The image, a (H, W, 3)-tensor.
    • 'label': An integer in the range [0, 1000).
    • 'file_name': A unique string identifying the example within the dataset.

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet_r', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet_r-0.2.0.png

  11. imagenet_a

    • tensorflow.org
    Updated Jun 1, 2024
    Cite
    (2024). imagenet_a [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet_a
    Description

    ImageNet-A is a set of images labelled with ImageNet labels that were obtained by collecting new data and keeping only those images that ResNet-50 models fail to correctly classify. For more details please refer to the paper.

    The label space is the same as that of ImageNet2012. Each example is represented as a dictionary with the following keys:

    • 'image': The image, a (H, W, 3)-tensor.
    • 'label': An integer in the range [0, 1000).
    • 'file_name': A unique string identifying the example within the dataset.

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet_a', split='train')
    for ex in ds.take(4):
        print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet_a-0.1.0.png

  12. imagenet1k-256-wds

    • huggingface.co
    Updated Jun 22, 2024
    Cite
    Adam (2024). imagenet1k-256-wds [Dataset]. https://huggingface.co/datasets/adams-story/imagenet1k-256-wds
    Authors
    Adam
    Description

    This is imagenet1k in webdataset format. Images are stored as jpg files. Every image has been resized to a maximum side length of 256; that means that if an image in the original dataset was 1000 by 500, the new size is 256 by 128. Images with a maximum side length under 256 were not resized. The total size of all dataset files is 57.8 GB; there are 1,281,167 rows in the training split and 50,000 rows in the validation split.
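    The resizing rule can be written down directly; a small sketch (the helper name is hypothetical) that reproduces the 1000-by-500 example above:

    def resized_dims(w: int, h: int, max_side: int = 256) -> tuple:
        # Images whose longest side is already <= max_side are left unchanged.
        longest = max(w, h)
        if longest <= max_side:
            return (w, h)
        scale = max_side / longest
        return (round(w * scale), round(h * scale))
    
    print(resized_dims(1000, 500))  # (256, 128)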

  13. Classification accuracy against PGD-10 attacks on different datasets.

    • plos.figshare.com
    xls
    Updated Jan 7, 2025
    Cite
    Jie-Chao Zhao; Jin Ding; Yong-Zhi Sun; Ping Tan; Ji-En Ma; You-Tong Fang (2025). Classification accuracy against PGD-10 attacks on different datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0317023.t005
    Dataset provided by
    PLOS ONE
    Authors
    Jie-Chao Zhao; Jin Ding; Yong-Zhi Sun; Ping Tan; Ji-En Ma; You-Tong Fang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Classification accuracy against PGD-10 attacks on different datasets.

  14. Model Zoo: A Dataset of Diverse Populations of Resnet-18 Models - Tiny ImageNet

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 28, 2022
    Cite
    Schürholt, Konstantin (2022). Model Zoo: A Dataset of Diverse Populations of Resnet-18 Models - Tiny ImageNet [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7023277
    Dataset provided by
    Knyazev, Boris
    Schürholt, Konstantin
    Borth, Damian
    Giró-i-Nieto, Xavier
    Taskiran, Diyar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract

    In the last years, neural networks have evolved from laboratory environments to the state of the art for many real-world problems. Our hypothesis is that neural network models (i.e., their weights and biases) evolve on unique, smooth trajectories in weight space during training. It follows that a population of such neural network models (referred to as a “model zoo”) would form topological structures in weight space. We think that the geometry, curvature and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such zoos, one could investigate novel approaches for (i) model analysis, (ii) discovering unknown learning dynamics, (iii) learning rich representations of such populations, or (iv) exploiting the model zoos for generative modelling of neural network weights and biases. Unfortunately, the lack of standardized model zoos and available benchmarks significantly increases the friction for further research on populations of neural networks. With this work, we publish a novel dataset of model zoos containing systematically generated and diverse populations of neural network models for further research. In total, the proposed model zoo dataset is based on six image datasets, consists of 27 model zoos generated with varying hyperparameter combinations, and includes 50,360 unique neural network models resulting in over 2,585,360 collected model states. Additionally to the model zoo data, we provide an in-depth analysis of the zoos and provide benchmarks for multiple downstream tasks as mentioned before.

    Dataset

    This dataset is part of a larger collection of model zoos and contains the zoo of 1,000 ResNet-18 models trained on Tiny ImageNet. All zoos, with extensive information and code, can be found at www.modelzoos.cc.

    The complete zoo is 2.6 TB in size. Due to the size, this repository contains the checkpoints of the first 115 models at their last epoch (60). For a link to the full dataset, as well as more information on the zoos and code to access and use them, please see www.modelzoos.cc.

  15. ImageNet-P

    • opendatalab.com
    zip
    Updated Oct 5, 2018
    Cite
    University of California (2018). ImageNet-P [Dataset]. https://opendatalab.com/OpenDataLab/ImageNet-P
    Available download formats: zip (115,142,327,882 bytes)
    Dataset provided by
    University of California
    Oregon State University
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    ImageNet-P consists of noise, blur, weather, and digital distortions. The dataset has validation perturbations; has difficulty levels; has CIFAR-10, Tiny ImageNet, ImageNet 64×64, standard, and Inception-sized editions; and has been designed for benchmarking, not training, networks. ImageNet-P departs from ImageNet-C by having perturbation sequences generated from each ImageNet validation image. Each sequence contains more than 30 frames, so to counteract an increase in dataset size and evaluation time, only 10 common perturbations are used.

  16. Data from: Generic Object Decoding (fMRI on ImageNet)

    • openneuro.org
    Updated Dec 6, 2019
    Cite
    Tomoyasu Horikawa; Yukiyasu Kamitani (2019). Generic Object Decoding (fMRI on ImageNet) [Dataset]. http://doi.org/10.18112/openneuro.ds001246.v1.2.1
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tomoyasu Horikawa; Yukiyasu Kamitani
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Generic Object Decoding (fMRI on ImageNet)

    Original paper

    Horikawa, T. & Kamitani, Y. (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications 8:15037. https://www.nature.com/articles/ncomms15037

    Overview

    In this study, fMRI data was recorded while subjects were viewing object images (image presentation experiment) or imagining object images (imagery experiment). The image presentation experiment consisted of two distinct types of sessions: training image sessions and test image sessions. In the training image session, a total of 1,200 images from 150 object categories (8 images from each category) were each presented only once (24 runs). In the test image session, a total of 50 images from 50 object categories (1 image from each category) were presented 35 times each (35 runs). All images were taken from ImageNet (http://www.image-net.org/, Fall 2011 release), a large-scale hierarchical image database. During the image presentation experiment, subjects performed a one-back image repetition task (5 trials in each run). In the imagery experiment, subjects were required to visually imagine images from 1 of the 50 categories (20 runs; 25 categories in each run; 10 samples for each category) that were presented in the test image session of the image presentation experiment. fMRI data in the training image sessions were used to train models (decoders) which predict visual features from fMRI patterns, and those in the test image sessions and the imagery experiment were used to evaluate the model performance. Predicted features for the test image sessions and imagery experiment are used to identify seen/imagined object categories from a set of computed features for numerous object images.

    Analysis demo code is available at GitHub (KamitaniLab/GenericObjectDecoding).

    Dataset

    MRI files

    The present dataset contains fMRI data from five subjects ('sub-01', 'sub-02', 'sub-03', 'sub-04', and 'sub-05'). Each subject's data contains three types of MRI data, each of which was collected over multiple scanning sessions.

    • 'ses-perceptionTraining': fMRI data from the training image sessions in the image presentation experiment (24 runs; 3-5 scanning sessions)
    • 'ses-perceptionTest': fMRI data from the test image sessions in the image presentation experiment (35 runs; 4-6 scanning sessions)
    • 'ses-imageryTest': fMRI data from the imagery experiment (20 runs; 3-5 scanning sessions)

    Each scanning session consisted of functional (EPI) and anatomical (inplane T2) data. The functional EPI images covered the entire brain (TR, 3000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 50; slice gap, 0 mm), and inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 7020 ms; TE, 69 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files were converted to Nifti-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in the 'sourcedata' directory (see 'README' in 'sourcedata' for more details).

    Preprocessed fMRI data

    Preprocessed fMRI data are available in derivatives/preproc-spm. See the original paper (Horikawa & Kamitani, 2017) for the details of preprocessing.

    Task event files

    Task event files (‘sub-*_ses-*_task-*_run-*_events.tsv’) contain events recorded during fMRI runs (stimulus presentation, subject responses, etc.). In task event files for the perception task (‘ses-perceptionTraining' and 'ses-perceptionTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest': Rest block without visual stimulus, 'stimulus': Stimulus presentation block)
    • 'stimulus_id': stimulus ID of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'stimulus_name': stimulus file name of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'response_time': time of button press at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • Additional columns 'category_index' and 'image_index' are for internal use.

    In task event files for the imagery task ('ses-imageryTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest' and 'inter_rest': rest period, 'cue': cue presentation period, 'imagery': imagery period, 'evaluation': evaluation of imagery quality period)
    • 'category_id': ImageNet/WordNet synset ID of a synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'category_name': ImageNet/WordNet synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'response_time': time of button press for imagery quality evaluation at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • 'evaluation': vividness of their mental imagery evaluated by the subject (very vivid, fairly vivid, rather vivid, not vivid, or cannot recognize the target)
    • Additional column 'category_index' is for internal use.

    Image/category labels

    The stimulus images are named as 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID for a synset (category) and '19498' is the image ID. The categories are named by ImageNet/WordNet synset ID (e.g., 'n03626115'). The stimulus and category names are included in the task event files as 'stimulus_name' and 'category_name', respectively. For use in analysis code, the task event files also contain 'stimulus_id' and 'category_id', which are float numbers generated from the stimulus or category names (e.g., 'n03626115_19498' --> 3626115.019498).
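    A sketch of the name-to-ID conversion implied by the example (a hypothetical helper, not the authors' code; the six-digit zero-padding of the image ID is inferred from the example):

    def stimulus_id(name: str) -> float:
        # 'n03626115_19498' -> 3626115.019498: the synset number becomes the
        # integer part, the zero-padded image ID becomes the fraction.
        synset, image = name.split('_')
        return float('%d.%06d' % (int(synset[1:]), int(image)))
    
    print(stimulus_id('n03626115_19498'))  # 3626115.019498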

    The mapping between stimulus/category names and IDs:

    • stimulus_ImageNetTraining.tsv (perceptionTraining sessions)
      • The first and second columns from the left are 'stimulus_name' and 'stimulus_id', respectively.
    • stimulus_ImageNetTest.tsv (perceptionTest sessions)
      • The first and second columns from the left are 'stimulus_name' and 'stimulus_id', respectively.
    • category_GODImagery.tsv (imageryTest sessions)
      • The first and second columns from the left are 'category_name' and 'category_id', respectively.

    Stimulus images

    Because of licensing issues, we do not include the stimulus images in the dataset. A script downloading the images from ImageNet is available at https://github.com/KamitaniLab/GenericObjectDecoding. Image features (CNN unit responses, HMAX, GIST, and SIFT) used in the original study are available at https://figshare.com/articles/Generic_Object_Decoding/7387130.

    Contact

  17. A single-object version of the ImageNet2012 dataset

    • scidb.cn
    Updated Apr 7, 2022
    Cite
    Taicheng Huang (2022). A single-object version of the ImageNet2012 dataset [Dataset]. http://doi.org/10.57760/sciencedb.01674
    Dataset provided by
    Science Data Bank
    Authors
    Taicheng Huang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The single-object version of the ImageNet2012 dataset was prepared to examine whether the background of objects affects the performance of DCNNs. The dataset includes 544,546 images in the training dataset and 50,000 images in the validation dataset. We removed the background of each image by setting pixels outside the bounding box to 255 (i.e., white). For images containing multiple bounding boxes, we randomly selected one bounding box as our target. Note that the retinal size of objects remained unchanged; only the background was removed from the original images. All images belong to 1,000 categories, which are the same as in the original ImageNet2012 dataset.
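    A minimal sketch of the described background-removal step with NumPy and Pillow (the bounding-box coordinates are hypothetical inputs, not part of the dataset's code):

    import numpy as np
    from PIL import Image
    
    def whiten_background(image, box):
        # box = (left, top, right, bottom); pixels outside it are set to
        # 255 (white), leaving the object at its original retinal size.
        arr = np.array(image)
        keep = np.zeros(arr.shape[:2], dtype=bool)
        left, top, right, bottom = box
        keep[top:bottom, left:right] = True
        arr[~keep] = 255
        return Image.fromarray(arr)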

  18. PC Parts Images Dataset [Classification]

    • kaggle.com
    • gts.ai
    Updated Feb 5, 2024
    Cite
    asaniczka (2024). PC Parts Images Dataset [Classification] [Dataset]. http://doi.org/10.34740/kaggle/dsv/7565076
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    asaniczka
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    The images are in the ImageNet structure, with each class having its own folder containing the respective images. The images have a resolution of 256x256 pixels.

    Dataset Details:

    • Total number of classes: 14
    • Total number of images: 3279
    • Resolution: 256x256 pixels
    • Image format: JPG

    If you find this dataset useful or interesting, please don't forget to show your support by Upvoting! 🙌👍

    Data Collection Methodology:

    To create this dataset:

    • I searched for each PC part on Google Images and extracted the image links.
    • I then downloaded the full-size images from the original source and converted them to JPG format with a resolution of 256 pixels.
    • During the process, most images were downscaled, with only a very few being upscaled.
    • Finally, I manually went over all the images and deleted any that didn't fit well for image classification.

    Potential Task Ideas:

    1. Train an image classification model using popular architectures like ViT, ResNet, or EfficientNet.
    2. Perform transfer learning on this dataset using pre-trained models.
    3. Explore different data augmentation techniques to enhance model performance.
    4. Fine-tune existing models to improve classification accuracy.
    5. Compare the performance of different models on this dataset.
    6. Use the dataset as a benchmark for evaluating new image classification techniques.

    Class Naming Convention:

    All files are named in ImageNet style (a loading sketch follows the note below):

    Kingdom
    ├── class_1
    │   ├── 1.jpg
    │   └── 2.jpg
    ├── class_2
    │   ├── 1.jpg
    │   └── 2.jpg
    └── class_3
        ├── 1.jpg
        └── 2.jpg

    I have not divided the dataset into train/val/test so that you can decide on the split ratios.
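    A sketch of loading such a layout (torchvision's ImageFolder infers labels from directory names; the root path is a hypothetical placeholder):

    from torchvision import datasets, transforms
    
    # Each subdirectory of the root becomes one class; 'pc_parts/' is a
    # hypothetical root laid out like the tree shown above.
    ds = datasets.ImageFolder('pc_parts/', transform=transforms.ToTensor())
    print(ds.classes[:3], len(ds))  # first class names and total image count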
    
    ---
    
    Photo by Andrey Matveev (https://unsplash.com/@zelebb) on Unsplash (https://unsplash.com/photos/a-close-up-of-two-computer-fans-on-a-yellow-background-8hkotoCEI5o)
    
  19. Data from: Deep Learning, Feature Learning, and Clustering Analysis for SEM Image Classification

    • scidb.cn
    Updated Oct 17, 2020
    Cite
    Rossella Aversa; Piero Coronica; Cristiano De Nobili; Stefano Cozzini (2020). Deep Learning, Feature Learning, and Clustering Analysis for SEM Image Classification [Dataset]. http://doi.org/10.11922/sciencedb.j00104.00062
    Dataset provided by
    Science Data Bank
    Authors
    Rossella Aversa; Piero Coronica; Cristiano De Nobili; Stefano Cozzini
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    One table and six figures. Table 1 shows the number of images for each label in the 1μ–2μ data set, adopting the same labelling used in [11, 12, 13], reported here for completeness: 0 = Porous sponges, 1 = Patterned surfaces, 2 = Particles, 3 = Films and coated surfaces, 4 = Powders, 5 = Tips, 6 = Nanowires, 7 = Biological, 8 = MEMS devices and electrodes, 9 = Fibres.

    Figure 1 shows test accuracy as a function of the number of training epochs obtained by training from scratch Inception-v3 (magenta), Inception-v4 (orange), Inception-Resnet (green), and AlexNet (black) on the SEM data set. All the models were trained with the best combination of hyperparameters, according to the memory capability of the available hardware.

    In Figure 2, Main: test accuracy as a function of the number of training epochs obtained when fine-tuning on the SEM data set Inception-v3 (magenta) and Inception-v4 (orange) starting from the ImageNet checkpoint, and Inception-v3 (blue) from the SEM checkpoint, which, as expected, converges very rapidly. Inset: test accuracy as a function of the number of training epochs obtained when performing feature extraction with Inception-v3 (magenta), Inception-v4 (orange), and Inception-Resnet (green) on the SEM data set starting from the ImageNet checkpoint. All the models were trained with the best combination of hyperparameters, according to the memory capability of the available hardware.

    Figure 3 shows the intrinsic dimension of the 1μ–2μ_1001 data set, varying the sample size, computed before autoencoding (green lines) and after autoencoding (red lines). The three brightness levels for each color correspond to the percentage of points used in the linear fit: 90%, 70%, and 50%.

    Figure 4 shows the ddisc heatmap for a manually labelled subset of images. Figure 5 presents heatmaps of the distances obtained via Inception-v3; the image captions specify the methods used and indicate the correlation index with ddisc. Figure 6 shows NMI scores of the clustering obtained by the five hierarchical algorithms considered (solid lines), as a function of k, the number of clusters. The scores of the artificial scenarios are reported as orange (good case) and green (uniform case) dashed lines.

  20. Mnist Dataset

    • universe.roboflow.com
    • tensorflow.org
    • +3 more
    zip
    Updated Aug 8, 2022
    Cite
    Popular Benchmarks (2022). Mnist Dataset [Dataset]. https://universe.roboflow.com/popular-benchmarks/mnist-cjkff/model/2
    Dataset authored and provided by
    Popular Benchmarks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Digits
    Description

    THE MNIST DATABASE of handwritten digits

    Authors:

    • Yann LeCun, Courant Institute, NYU
    • Corinna Cortes, Google Labs, New York
    • Christopher J.C. Burges, Microsoft Research, Redmond

    Dataset Obtained From: http://yann.lecun.com/exdb/mnist/

    All images were sized 28x28 in the original dataset

    The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

    It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting.
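    For parity with the TFDS entries above, a hedged sketch loading MNIST through tensorflow_datasets (this pulls the TFDS copy of MNIST, not the Roboflow one):

    import tensorflow_datasets as tfds
    
    # 'mnist' in TFDS ships the original 60,000/10,000 train/test splits.
    ds, info = tfds.load('mnist', split='train', with_info=True)
    print(info.splits['train'].num_examples, info.splits['test'].num_examples)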

    Version 1 (original-images_trainSetSplitBy80_20):

    • Original, raw images, with the train set split to provide 80% of its images to the training set and 20% of its images to the validation set
    • Trained from Roboflow Classification Model's ImageNet training checkpoint

    Version 2 (original-images_ModifiedClasses_trainSetSplitBy80_20):

    • Original, raw images, with the train set split to provide 80% of its images to the training set and 20% of its images to the validation set
    • Modify Classes, a Roboflow preprocessing feature, was employed to change class names from 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 to one, two, three, four, five, six, seven, eight, nine
    • Trained from the Roboflow Classification Model's ImageNet training checkpoint

    Version 3 (original-images_Original-MNIST-Splits):

    • Original images, with the original splits for MNIST: train (86% of images - 60,000 images) set and test (14% of images - 10,000 images) set only.
    • This version was not trained

    Citation:

    @article{lecun2010mnist,
     title={MNIST handwritten digit database},
     author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
     journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
     volume={2},
     year={2010}
    }
    