5 datasets found
  1. ImageNet: A Large-Scale Hierarchical Image Database

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., Fei-Fei, L. (2024). ImageNet: A Large-Scale Hierarchical Image Database [Dataset]. https://doi.org/10.57702/0elnaxd7. https://service.tib.eu/ldmservice/dataset/imagenet--a-large-scale-hierarchical-image-database
    Description

    The ImageNet dataset is a large-scale image database that contains over 14 million images, each labeled with one of 21,841 categories.

  2. ImageNet statistics and PCA

    • explore.openaire.eu
    • zenodo.org
    Updated Jan 2, 2025
    Cite
    Alice Bizeul (2025). ImageNet statistics and PCA [Dataset]. http://doi.org/10.5281/zenodo.14589122
    Authors
    Alice Bizeul
    Description

    Eigenvalues of ImageNet-1k's covariance matrix (eigenvalues_ipca.npy), the ratio of total variance explained by each of ImageNet-1k's principal components (eigenvalues_ratio_ipca.npy), and ImageNet-1k's principal components (pc_matrix_ipca.npy), computed on the normalized training dataset. For computational reasons, only 10% of the training dataset was used for PCA, and only the top 20k principal components were computed. These items were used in [1]. The ImageNet-1k dataset was presented in [2].

    [1] Alice Bizeul, Thomas M. Sutter, Alain Ryser, Julius von Kügelgen, Bernhard Schölkopf, Julia E. Vogt. Components Beat Patches: Eigenvector Masking for Visual Representation Learning. Oct 2024.

    [2] Deng, Jia, et al. "ImageNet: A large-scale hierarchical image database." 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2009.
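
    A minimal sketch of loading these arrays with NumPy (file names taken from the description above; the 90% threshold is an illustrative choice, not part of the dataset):

    import numpy as np

    # Load the PCA artifacts listed above.
    eigenvalues = np.load("eigenvalues_ipca.npy")            # eigenvalues of the covariance matrix
    variance_ratio = np.load("eigenvalues_ratio_ipca.npy")   # fraction of total variance per component
    pcs = np.load("pc_matrix_ipca.npy")                      # top-20k principal components

    # Example: how many components are needed to explain 90% of the variance?
    k = int(np.searchsorted(np.cumsum(variance_ratio), 0.90)) + 1
    print(f"{k} components explain 90% of the variance")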

  3. 2025-rethinkdc-imagenet-random-ipc-10

    • huggingface.co
    Updated Feb 11, 2025
    + more versions
    Cite
    Yang He @ CFAR A*STAR (2025). 2025-rethinkdc-imagenet-random-ipc-10 [Dataset]. https://huggingface.co/datasets/he-yang/2025-rethinkdc-imagenet-random-ipc-10
    Dataset authored and provided by
    Yang He @ CFAR A*STAR
    Description

    Dataset used for the paper "Rethinking Dataset Compression: Shifting Focus From Labels to Images".

    Dataset created according to the paper "ImageNet: A Large-Scale Hierarchical Image Database".

      Basic Usage
    

    from datasets import load_dataset

    dataset = load_dataset("he-yang/2025-rethinkdc-imagenet-random-ipc-10")
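
    A hedged follow-up sketch: inspecting the returned object before relying on any split or column names (neither is documented in this listing):

    # Print available splits and features instead of guessing column names.
    print(dataset)
    first_split = next(iter(dataset))
    print(dataset[first_split][0].keys())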

    For more information, please refer to the Rethinking-Dataset-Compression repository.

  4. Data from: Generic Object Decoding (fMRI on ImageNet)

    • openneuro.org
    Updated Dec 6, 2019
    + more versions
    Cite
    Tomoyasu Horikawa; Yukiyasu Kamitani (2019). Generic Object Decoding (fMRI on ImageNet) [Dataset]. http://doi.org/10.18112/openneuro.ds001246.v1.2.1
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tomoyasu Horikawa; Yukiyasu Kamitani
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Generic Object Decoding (fMRI on ImageNet)

    Original paper

    Horikawa, T. & Kamitani, Y. (2017) Generic decoding of seen and imagined objects using hierarchical visual features. Nature Communications 8:15037. https://www.nature.com/articles/ncomms15037

    Overview

    In this study, fMRI data were recorded while subjects were viewing object images (image presentation experiment) or imagining object images (imagery experiment). The image presentation experiment consisted of two distinct types of sessions: training image sessions and test image sessions. In the training image sessions, a total of 1,200 images from 150 object categories (8 images from each category) were each presented only once (24 runs). In the test image sessions, a total of 50 images from 50 object categories (1 image from each category) were presented 35 times each (35 runs). All images were taken from ImageNet (http://www.image-net.org/, Fall 2011 release), a large-scale hierarchical image database. During the image presentation experiment, subjects performed a one-back image repetition task (5 trials in each run).

    In the imagery experiment, subjects were required to visually imagine images from 1 of the 50 categories (20 runs; 25 categories in each run; 10 samples for each category) that were presented in the test image sessions of the image presentation experiment.

    fMRI data from the training image sessions were used to train models (decoders) that predict visual features from fMRI patterns, and data from the test image sessions and the imagery experiment were used to evaluate model performance. Predicted features for the test image sessions and the imagery experiment were used to identify seen/imagined object categories from a set of features computed for numerous object images.

    Analysis demo code is available at GitHub (KamitaniLab/GenericObjectDecoding).

    Dataset

    MRI files

    The present dataset contains fMRI data from five subjects ('sub-01', 'sub-02', 'sub-03', 'sub-04', and 'sub-05'). Each subject's data comprise three types of MRI data, each of which was collected over multiple scanning sessions.

    • 'ses-perceptionTraining': fMRI data from the training image sessions in the image presentation experiment (24 runs; 3-5 scanning sessions)
    • 'ses-perceptionTest': fMRI data from the test image sessions in the image presentation experiment (35 runs; 4-6 scanning sessions)
    • 'ses-imageryTest': fMRI data from the imagery experiment (20 runs; 3-5 scanning sessions)

    Each scanning session consisted of functional (EPI) and anatomical (inplane T2) data. The functional EPI images covered the entire brain (TR, 3000 ms; TE, 30 ms; flip angle, 80°; voxel size, 3 × 3 × 3 mm; FOV, 192 × 192 mm; number of slices, 50; slice gap, 0 mm), and inplane T2-weighted anatomical images were acquired with the same slices used for the EPI (TR, 7020 ms; TE, 69 ms; flip angle, 160°; voxel size, 0.75 × 0.75 × 3.0 mm; FOV, 192 × 192 mm). The dataset also includes a T1-weighted anatomical reference image for each subject (TR, 2250 ms; TE, 3.06 ms; TI, 900 ms; flip angle, 9°; voxel size, 1.0 × 1.0 × 1.0 mm; FOV, 256 × 256 mm). The T1-weighted images were scanned only once for each subject in a separate scanning session and are stored in the 'ses-anatomy' directories. The T1-weighted images were defaced by pydeface (https://pypi.python.org/pypi/pydeface). All DICOM files were converted to Nifti-1 files by mri_convert in FreeSurfer. In addition, the dataset contains mask images of manually defined ROIs for each subject in the 'sourcedata' directory (see 'README' in 'sourcedata' for more details).

    Preprocessed fMRI data

    Preprocessed fMRI data are available in derivatives/preproc-spm. See the original paper (Horikawa & Kamitani, 2017) for the details of preprocessing.

    Task event files

    Task event files ('sub-*_ses-*_task-*_run-*_events.tsv') contain the events (stimulus presentation, subject responses, etc.) recorded during fMRI runs; a short sketch of reading one of these files follows the column lists below. In task event files for the perception task ('ses-perceptionTraining' and 'ses-perceptionTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest': Rest block without visual stimulus, 'stimulus': Stimulus presentation block)
    • 'stimulus_id': stimulus ID of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'stimulus_name': stimulus file name of the image presented in a stimulus block ('n/a' in rest blocks)
    • 'response_time': time of button press at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • Additional columns 'category_index' and 'image_index' are for internal use.

    In task event files for imagery task ('ses-imageryTest'), each column represents:

    • 'onset': onset time (sec) of an event
    • 'duration': duration (sec) of the event
    • 'trial_no': trial (block) number of the event
    • 'event_type': type of the event ('rest' and 'inter_rest': rest period, 'cue': cue presentation period, 'imagery': imagery period, 'evaluation': evaluation of imagery quality period)
    • 'category_id': ImageNet/WordNet synset ID of a synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'category_name': ImageNet/WordNet synset (category) which the subject was instructed to imagine at the block ('n/a' in rest blocks)
    • 'response_time': time of button press for imagery quality evaluation at the block, elapsed time (sec) from the beginning of each run ('n/a' when the subject did not press the button in the block)
    • 'evaluation': vividness of their mental imagery evaluated by the subject (very vivid, fairly vivid, rather vivid, not vivid, or cannot recognize the target)
    • Additional column 'category_index' is for internal use.
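
    As referenced above, a minimal sketch of reading a task event file with pandas (the file name is illustrative; 'n/a' entries are parsed as missing values):

    import pandas as pd

    # Read one perception-task event file (illustrative file name).
    events = pd.read_csv(
        "sub-01_ses-perceptionTest01_task-perception_run-01_events.tsv",
        sep="\t", na_values="n/a",
    )

    # Keep only stimulus-presentation blocks and list the presented images.
    stimuli = events[events["event_type"] == "stimulus"]
    print(stimuli[["onset", "duration", "stimulus_name"]].head())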

    Image/category labels

    The stimulus images are named as in 'n03626115_19498', where 'n03626115' is the ImageNet/WordNet ID of a synset (category) and '19498' is the image ID. The categories are named by their ImageNet/WordNet synset ID (e.g., 'n03626115'). The stimulus and category names are included in the task event files as 'stimulus_name' and 'category_name', respectively. For use in analysis code, the task event files also contain 'stimulus_id' and 'category_id', which are float numbers generated from the stimulus or category names (e.g., 'n03626115_19498' --> 3626115.019498).
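
    A minimal sketch of this name-to-ID conversion (an illustrative helper, not part of the dataset; it assumes the image index is zero-padded to six digits, consistent with the example above):

    def name_to_id(stimulus_name: str) -> float:
        # 'n03626115_19498' -> 3626115.019498
        synset, image = stimulus_name.lstrip("n").split("_")
        return float(f"{int(synset)}.{int(image):06d}")

    assert name_to_id("n03626115_19498") == 3626115.019498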

    The mapping between stimulus/category names and IDs:

    • stimulus_ImageNetTraining.tsv (perceptionTraining sessions)
      • The first and second columns from the left are 'stimulus_name' and 'stimulus_id', respectively.
    • stimulus_ImageNetTest.tsv (perceptionTest sessions)
      • The first and second columns from the left are 'stimulus_name' and 'stimulus_id', respectively.
    • category_GODImagery.tsv (imageryTest sessions)
      • The first and second columns from the left are 'category_name' and 'category_id', respectively.
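
    A minimal sketch of loading one of these mappings into a lookup table (assuming the TSV files have no header row; adjust if they do):

    import pandas as pd

    # Build a stimulus_name -> stimulus_id lookup from the training mapping.
    mapping = pd.read_csv("stimulus_ImageNetTraining.tsv", sep="\t", header=None)
    stimulus_ids = dict(zip(mapping[0], mapping[1]))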

    Stimulus images

    Because of licensing issues, we do not include the stimulus images in the dataset. A script downloading the images from ImageNet is available at https://github.com/KamitaniLab/GenericObjectDecoding. Image features (CNN unit responses, HMAX, GIST, and SIFT) used in the original study are available at https://figshare.com/articles/Generic_Object_Decoding/7387130.

    Contact

  5. Stanford Dogs Dataset

    • kaggle.com
    • opendatalab.com
    • +2more
    Updated Nov 13, 2019
    + more versions
    Cite
    Jessica Li (2019). Stanford Dogs Dataset [Dataset]. https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset/suggestions
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jessica Li
    Description

    Context

    The Stanford Dogs dataset contains images of 120 dog breeds from around the world. It was built using images and annotations from ImageNet for the task of fine-grained image categorization, a challenging problem because certain dog breeds have nearly identical features or differ mainly in colour and age.

    Content

    • Number of categories: 120
    • Number of images: 20,580
    • Annotations: Class labels, Bounding boxes

    Acknowledgements

    The original data source is found on http://vision.stanford.edu/aditya86/ImageNetDogs/ and contains additional information on the train/test splits and baseline results.

    If you use this dataset in a publication, please cite the following papers:

    Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel Dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011.

    Secondary: J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. IEEE Computer Vision and Pattern Recognition (CVPR), 2009.

    Inspiration

    • Can you correctly identify dog breeds that have similar features, such as the basset hound and bloodhound?
    • Is this chihuahua young or old?