16 datasets found
  1. Citation Trends for "ImageNet Large Scale Visual Recognition Challenge"

    • shibatadb.com
    Updated Apr 11, 2015
    Cite
    Yubetsu (2015). Citation Trends for "ImageNet Large Scale Visual Recognition Challenge" [Dataset]. https://www.shibatadb.com/article/ktMmmEdy
    Explore at:
    Dataset updated
    Apr 11, 2015
    Dataset authored and provided by
    Yubetsu
    License

    https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt

    Time period covered
    2012 - 2025
    Variables measured
    New Citations per Year
    Description

    Yearly citation counts for the publication titled "ImageNet Large Scale Visual Recognition Challenge".

  2. reduced-imagenet

    • huggingface.co
    Updated Apr 13, 2024
    Cite
    Rich Wardle (2024). reduced-imagenet [Dataset]. https://huggingface.co/datasets/richwardle/reduced-imagenet
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 13, 2024
    Authors
    Rich Wardle
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Imagenet Mini Dataset

    This dataset is a subset of the Imagenet validation set containing 26,000 images. It has been curated to have equal class distributions, with 26 randomly sampled images from each class. All images have been resized to (224, 224) pixels, and are in RGB format.
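
    A minimal way to load this subset is with the Hugging Face datasets library. A sketch, assuming the default "train" split (check the split names on the dataset page):

    import datasets

    # Dataset id taken from the URL above; the split name is an assumption.
    ds = datasets.load_dataset("richwardle/reduced-imagenet", split="train")
    example = ds[0]
    print(example.keys())  # typically an 'image' and a 'label' field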

      Citation
    

    If you use this dataset in your research, please cite the original Imagenet dataset: Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale… See the full description on the dataset page: https://huggingface.co/datasets/richwardle/reduced-imagenet.

  3. Citation Network Graph

    • shibatadb.com
    Updated Apr 11, 2015
    Cite
    Yubetsu (2015). Citation Network Graph [Dataset]. https://www.shibatadb.com/article/ktMmmEdy
    Explore at:
    Dataset updated
    Apr 11, 2015
    Dataset authored and provided by
    Yubetsu
    License

    https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt

    Description

    Network of 33 papers and 117 citation links related to "ImageNet Large Scale Visual Recognition Challenge".

  4. imagenet2012

    • tensorflow.org
    Updated Jun 1, 2024
    + more versions
    Cite
    (2024). imagenet2012 [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenet2012
    Explore at:
    Dataset updated
    Jun 1, 2024
    Description

    ILSVRC 2012, commonly known as 'ImageNet', is an image dataset organized according to the WordNet hierarchy. Each meaningful concept in WordNet, possibly described by multiple words or word phrases, is called a "synonym set" or "synset". There are more than 100,000 synsets in WordNet, the majority of them nouns (80,000+). In ImageNet, we aim to provide on average 1000 images to illustrate each synset. Images of each concept are quality-controlled and human-annotated. Upon completion, we hope ImageNet will offer tens of millions of cleanly sorted images for most of the concepts in the WordNet hierarchy.

    The test split contains 100K images but no labels because no labels have been publicly released. We provide support for the test split from 2012 with the minor patch released on October 10, 2019. In order to manually download this data, a user must perform the following operations:

    1. Download the 2012 test split available here.
    2. Download the October 10, 2019 patch. There is a Google Drive link to the patch provided on the same page.
    3. Combine the two tar-balls, manually overwriting any images in the original archive with images from the patch. According to the instructions on image-net.org, this procedure overwrites just a few images.

    The resulting tar-ball may then be processed by TFDS.
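
    A sketch of step 3 in Python, using only the standard library (the archive filenames here are illustrative placeholders, not the official names):

    import tarfile

    ORIGINAL = "test_original.tar"  # placeholder name for the 2012 test split
    PATCH = "test_patch.tar"        # placeholder name for the Oct 10, 2019 patch

    # Files present in the patch take precedence over the originals.
    with tarfile.open(PATCH) as patch:
        patched = {m.name for m in patch.getmembers() if m.isfile()}

    with tarfile.open("test_combined.tar", "w") as out:
        with tarfile.open(ORIGINAL) as orig:
            for m in orig.getmembers():
                if m.isfile() and m.name not in patched:
                    out.addfile(m, orig.extractfile(m))
        with tarfile.open(PATCH) as patch:
            for m in patch.getmembers():
                if m.isfile():
                    out.addfile(m, patch.extractfile(m))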

    To assess the accuracy of a model on the ImageNet test split, one must run inference on all images in the split, export the results to a text file, and upload that file to the ImageNet evaluation server. To prevent overfitting, the maintainers of the evaluation server permit each user at most 2 submissions per week.

    To evaluate the accuracy on the test split, one must first create an account at image-net.org. This account must be approved by the site administrator. After the account is created, one can submit the results to the test server at https://image-net.org/challenges/LSVRC/eval_server.php. The submission consists of several ASCII text files corresponding to multiple tasks. The task of interest is "Classification submission (top-5 cls error)". A sample of an exported text file looks like the following:

    771 778 794 387 650
    363 691 764 923 427
    737 369 430 531 124
    755 930 755 59 168
    

    The export format is described in full in "readme.txt" within the 2013 development kit available here: https://image-net.org/data/ILSVRC/2013/ILSVRC2013_devkit.tgz. Please see the section entitled "3.3 CLS-LOC submission format". Briefly, the text file consists of 100,000 lines, one per image in the test split. Each line of integers corresponds to the rank-ordered, top-5 predictions for that test image. The integers are 1-indexed, corresponding to line numbers in the associated labels file. See labels.txt.
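
    A sketch of writing a submission in this format, assuming predictions is a (100000, 5) array of 1-indexed class numbers ordered from most to least confident:

    import numpy as np

    # Placeholder predictions; in practice these come from your model.
    predictions = np.random.randint(1, 1001, size=(100000, 5))

    with open("classification_submission.txt", "w") as f:
        for row in predictions:
            f.write(" ".join(str(c) for c in row) + "\n")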

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenet2012', split='train')
    for ex in ds.take(4):
     print(ex)
    

    See the guide for more information on tensorflow_datasets.

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenet2012-5.1.0.png

  5. ImageNet-Subset150

    • huggingface.co
    Updated Apr 12, 2024
    Cite
    isaacNingLee (2024). ImageNet-Subset150 [Dataset]. https://huggingface.co/datasets/ilee0022/ImageNet-Subset150
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 12, 2024
    Authors
    isaacNingLee
    Description

    Dataset Card for Dataset Name

    This dataset is a compiled version of https://github.com/delyan-boychev/pytorch_trainers_interpretability?tab=readme-ov-file as a Hugging Face dataset. All credit goes to the original author.

      Citation
    

    Please cite the original author for use of the dataset. BibTeX: @misc{boychev2023interpretable, title={Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection}… See the full description on the dataset page: https://huggingface.co/datasets/ilee0022/ImageNet-Subset150.

  6. NINCO (Out-Of-Distribution detection dataset for ImageNet)

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip
    Updated Aug 22, 2023
    Cite
    Julian Bitterwolf; Maximilian Müller; Matthias Hein (2023). NINCO (Out-Of-Distribution detection dataset for ImageNet) [Dataset]. http://doi.org/10.5281/zenodo.8013288
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Aug 22, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Julian Bitterwolf; Maximilian Müller; Matthias Hein
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The NINCO (No ImageNet Class Objects) dataset is introduced in the ICML 2023 paper In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation. The images in this dataset are free from objects that belong to any of the 1000 classes of ImageNet-1K (ILSVRC2012), which makes NINCO suitable for evaluating out-of-distribution detection on ImageNet-1K.

    The NINCO main dataset consists of 64 OOD classes with a total of 5879 samples. These OOD classes were selected to have no categorical overlap with any classes of ImageNet-1K. Each sample was inspected individually by the authors to ensure it contains no ID objects.

    Besides NINCO, the same .tar.gz file includes truly OOD versions of 11 popular OOD datasets with 2715 OOD samples in total.

    Also included are 17 OOD unit tests, with 400 samples each.

    Code for loading and evaluating on each of the three datasets is provided at https://github.com/j-cb/NINCO.
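
    As a generic illustration of the evaluation setup (not the authors' code), OOD detection is commonly scored by computing AUROC over a confidence score such as the maximum softmax probability:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    def msp_score(logits):
        # Maximum softmax probability; higher means "more in-distribution".
        z = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return p.max(axis=1)

    # Placeholder logits; in practice, classifier outputs for ImageNet-1K
    # validation images (ID) and NINCO images (OOD).
    id_logits = np.random.randn(500, 1000)
    ood_logits = np.random.randn(500, 1000)

    scores = np.concatenate([msp_score(id_logits), msp_score(ood_logits)])
    labels = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = ID, 0 = OOD
    print("AUROC:", roc_auc_score(labels, scores))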

    When using NINCO, please consider citing (besides the bibtex given below) the following data sources that were used to create NINCO:

    • Hendrycks et al.: ”Scaling out-of-distribution detection for real-world settings”, ICML, 2022.
    • Bossard et al.: ”Food-101 – mining discriminative components with random forests”, ECCV 2014.
    • Zhou et al.: ”Places: A 10 million image database for scene recognition”, IEEE PAMI 2017.
    • Huang et al.: ”Mos: Towards scaling out-of-distribution detection for large semantic space”, CVPR 2021.
    • Li et al.: ”Caltech 101 (1.0)”, 2022.
    • Ismail et al.: ”MYNursingHome: A fully-labelled image dataset for indoor object classification.”, Data in Brief (V. 32) 2020.
    • The iNaturalist project: https://www.inaturalist.org/

    When using NINCO_popular_datasets_subsamples, additionally to the above, please consider citing:

    • Cimpoi et al.: ”Describing textures in the wild”, CVPR 2014.
    • Hendrycks et al.: ”Natural adversarial examples”, CVPR 2021.
    • Wang et al.: ”Vim: Out-of-distribution with virtual-logit matching”, CVPR 2022.
    • Bendale et al.: ”Towards Open Set Deep Networks”, CVPR 2016.
    • Vaze et al.: ”Open-set Recognition: a Good Closed-set Classifier is All You Need?”, ICLR 2022.
    • Wang et al.: ”Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition.” ICML, 2022.
    • Galil et al.: “A framework for benchmarking Class-out-of-distribution detection and its application to ImageNet”, ICLR 2023.

    For citing our paper, we would appreciate using the following bibtex entry (this will be updated once the ICML 2023 proceedings are public):


    @inproceedings{bitterwolf2023ninco,
      title={In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation},
      author={Julian Bitterwolf and Maximilian Mueller and Matthias Hein},
      booktitle={ICML},
      year={2023},
      url={https://proceedings.mlr.press/v202/bitterwolf23a.html}
    }

  7. imagenette

    • tensorflow.org
    • opendatalab.com
    • +1 more
    Updated Jun 1, 2024
    + more versions
    Cite
    (2024). imagenette [Dataset]. https://www.tensorflow.org/datasets/catalog/imagenette
    Explore at:
    Dataset updated
    Jun 1, 2024
    Description

    Imagenette is a subset of 10 easily classified classes from the Imagenet dataset. It was originally prepared by Jeremy Howard of FastAI. The objective behind putting together a small version of the Imagenet dataset was mainly that running new ideas, algorithms, and experiments on the whole Imagenet takes a lot of time.

    This version of the dataset allows researchers and practitioners to quickly try out ideas and share them with others. The dataset comes in three variants:

    • Full size
    • 320 px
    • 160 px

    Note: The v2 config corresponds to the new 70/30 train/valid split (released on Dec 6, 2019).

    To use this dataset:

    import tensorflow_datasets as tfds
    
    ds = tfds.load('imagenette', split='train')
    for ex in ds.take(4):
     print(ex)
    

    See the guide for more information on tensorflow_datasets.
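
    To load one of the size variants, pass a TFDS config name. A sketch, assuming the config naming used in the catalog (verify the exact names there):

    import tensorflow_datasets as tfds

    # Config name is an assumption; check the catalog entry before use.
    ds = tfds.load('imagenette/160px-v2', split='train')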

    Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/imagenette-full-size-v2-1.0.0.png

  8. Mnist Dataset

    • universe.roboflow.com
    • tensorflow.org
    • +3 more
    zip
    Updated Aug 8, 2022
    Cite
    Popular Benchmarks (2022). Mnist Dataset [Dataset]. https://universe.roboflow.com/popular-benchmarks/mnist-cjkff/model/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 8, 2022
    Dataset authored and provided by
    Popular Benchmarks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Digits
    Description

    THE MNIST DATABASE of handwritten digits

    Authors:

    • Yann LeCun, Courant Institute, NYU
    • Corinna Cortes, Google Labs, New York
    • Christopher J.C. Burges, Microsoft Research, Redmond

    Dataset Obtained From: http://yann.lecun.com/exdb/mnist/

    All images were sized 28x28 in the original dataset

    The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

    It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting.

    Version 1 (original-images_trainSetSplitBy80_20):

    • Original, raw images, with the train set split to provide 80% of its images to the training set and 20% of its images to the validation set
    • Trained from Roboflow Classification Model's ImageNet training checkpoint

    Version 2 (original-images_ModifiedClasses_trainSetSplitBy80_20):

    • Original, raw images, with the train set split to provide 80% of its images to the training set and 20% of its images to the validation set
    • Modify Classes, a Roboflow preprocessing feature, was employed to change the class names from 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 to zero, one, two, three, four, five, six, seven, eight, nine (an equivalent split and remapping is sketched below, after these version notes)
    • Trained from the Roboflow Classification Model's ImageNet training checkpoint

    Version 3 (original-images_Original-MNIST-Splits):

    • Original images, with the original splits for MNIST: train set (86% of images; 60,000 images) and test set (14% of images; 10,000 images) only.
    • This version was not trained
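
    A sketch of an equivalent 80/20 split and word-name remapping in Python (a generic illustration, not Roboflow's pipeline):

    import numpy as np
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

    # 80/20 train/validation split of the 60,000 training images.
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(x_train))
    cut = int(0.8 * len(x_train))
    train_idx, valid_idx = idx[:cut], idx[cut:]

    # Remap numeric labels to word names, as in Version 2.
    names = ["zero", "one", "two", "three", "four",
             "five", "six", "seven", "eight", "nine"]
    y_train_named = np.array([names[y] for y in y_train])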

    Citation:

    @article{lecun2010mnist,
     title={MNIST handwritten digit database},
     author={LeCun, Yann and Cortes, Corinna and Burges, CJ},
     journal={ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist},
     volume={2},
     year={2010}
    }
    
  9. CUDD

    • huggingface.co
    Cite
    Anjia Cao, CUDD [Dataset]. https://huggingface.co/datasets/caj/CUDD
    Explore at:
    Authors
    Anjia Cao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset description

    Distilled ImageNet-21K, ImageNet-1K, Tiny-ImageNet, CIFAR100, CIFAR10 by CUDD.

      Uses
    

    Use standard SRe2L (https://github.com/VILA-Lab/SRe2L) evaluation protocol, or see https://github.com/MIV-XJTU/CUDD.

      Citation
    

    @article{ma2025curriculum, title={Curriculum dataset distillation}, author={Ma, Zhiheng and Cao, Anjia and Yang, Funing and Gong, Yihong and Wei, Xing}, journal={IEEE Transactions on Image Processing}, year={2025}… See the full description on the dataset page: https://huggingface.co/datasets/caj/CUDD.

  10. Data from: MedMNIST-C: Comprehensive benchmark and improved classifier...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jul 31, 2024
    Cite
    Francesco Di Salvo; Sebastian Doerrich; Christian Ledig (2024). MedMNIST-C: Comprehensive benchmark and improved classifier robustness by simulating realistic image corruptions [Dataset]. http://doi.org/10.5281/zenodo.11471504
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Francesco Di Salvo; Sebastian Doerrich; Christian Ledig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: The integration of neural-network-based systems into clinical practice is limited by challenges related to domain generalization and robustness. The computer vision community established benchmarks such as ImageNet-C as a fundamental prerequisite to measure progress towards those challenges. Similar datasets are largely absent in the medical imaging community which lacks a comprehensive benchmark that spans across imaging modalities and applications. To address this gap, we create and open-source MedMNIST-C, a benchmark dataset based on the MedMNIST+ collection, covering 12 datasets and 9 imaging modalities. We simulate task and modality-specific image corruptions of varying severity to comprehensively evaluate the robustness of established algorithms against real-world artifacts and distribution shifts. We further provide quantitative evidence that our simple-to-use artificial corruptions allow for highly performant, lightweight data augmentation to enhance model robustness. Unlike traditional, generic augmentation strategies, our approach leverages domain knowledge, exhibiting significantly higher robustness when compared to widely adopted methods. By introducing MedMNIST-C and open-sourcing the corresponding library allowing for targeted data augmentations, we contribute to the development of increasingly robust methods tailored to the challenges of medical imaging. The code is available at github.com/francescodisalvo05/medmnistc-api.
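
    As a generic illustration of a severity-graded corruption (not the MedMNIST-C API; the actual, modality-specific augmentations live in the linked repository):

    import numpy as np

    def gaussian_noise(image, severity=1):
        # Additive Gaussian noise at one of five severity levels (illustrative sigmas).
        sigma = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]
        x = image.astype(np.float32) / 255.0
        x = x + np.random.normal(0.0, sigma, x.shape)
        return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

    img = np.random.randint(0, 256, (28, 28), dtype=np.uint8)  # placeholder image
    corrupted = gaussian_noise(img, severity=3)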

    This work has been accepted at the Workshop on Advancing Data Solutions in Medical Imaging AI @ MICCAI 2024 [preprint].

    Note: Due to space constraints, we have uploaded all datasets except TissueMNIST-C. However, it can be reproduced via our APIs.

    Usage: We recommend using the demo code and tutorials available on our GitHub repository.

    Citation: If you find this work useful, please consider citing us:

    @article{disalvo2024medmnist,
     title={MedMNIST-C: Comprehensive benchmark and improved classifier robustness by simulating realistic image corruptions},
     author={Di Salvo, Francesco and Doerrich, Sebastian and Ledig, Christian},
     journal={arXiv preprint arXiv:2406.17536},
     year={2024}
    }

    Disclaimer: This repository is inspired by MedMNIST APIs and the ImageNet-C repository. Thus, please also consider citing MedMNIST, the respective source datasets (described here), and ImageNet-C.

  11. Data from: Natural Images

    • kaggle.com
    Updated Aug 12, 2018
    Cite
    Prasun Roy (2018). Natural Images [Dataset]. https://www.kaggle.com/prasunroy/natural-images/tasks
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 12, 2018
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Prasun Roy
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Natural Images

    This dataset is created as a benchmark dataset for the work on Effects of Degradations on Deep Neural Network Architectures.
    The source code is publicly available on GitHub.

    Description

    This dataset contains 6,899 images from 8 distinct classes compiled from various sources (see Acknowledgements). The classes include airplane, car, cat, dog, flower, fruit, motorbike and person.

    Acknowledgements

    Citation

    @article{roy2018effects,
    title={Effects of Degradations on Deep Neural Network Architectures},
    author={Roy, Prasun and Ghosh, Subhankar and Bhattacharya, Saumik and Pal, Umapada},
    journal={arXiv preprint arXiv:1807.10108},
    year={2018}
    }

  12. Style Transfer for Object Detection in Art

    • kaggle.com
    Updated Mar 11, 2021
    Cite
    David Kadish (2021). Style Transfer for Object Detection in Art [Dataset]. https://www.kaggle.com/datasets/davidkadish/style-transfer-for-object-detection-in-art/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Mar 11, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    David Kadish
    Description

    Context

    Despite recent advances in object detection using deep learning neural networks, these neural networks still struggle to identify objects in art images such as paintings and drawings. This challenge is known as the cross depiction problem and it stems in part from the tendency of neural networks to prioritize identification of an object's texture over its shape. In this paper we propose and evaluate a process for training neural networks to localize objects - specifically people - in art images. We generated a large dataset for training and validation by modifying the images in the COCO dataset using AdaIN style transfer (style-coco.tar.xz). This dataset was used to fine-tune a Faster R-CNN object detection network (2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth), which is then tested on the existing People-Art testing dataset (PeopleArt-Coco.tar.xz). The result is a significant improvement on the state of the art and a new way forward for creating datasets to train neural networks to process art images.

    Content

    • 2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth: trained object detection network (Faster R-CNN using a ResNet152 backbone pretrained on ImageNet) for use with PyTorch
    • PeopleArt-Coco.tar.xz: People-Art dataset with COCO-formatted annotations (original at https://github.com/BathVisArtData/PeopleArt)
    • style-coco.tar.xz: stylized COCO dataset containing only the person category; used to train 2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth
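
    A sketch of rebuilding a matching detector and loading the checkpoint in PyTorch (the backbone call and num_classes=2 are assumptions; consult the repository linked below for the exact configuration):

    import torch
    from torchvision.models.detection import FasterRCNN
    from torchvision.models.detection.backbone_utils import resnet_fpn_backbone

    # Faster R-CNN with a ResNet152 FPN backbone, as described above.
    backbone = resnet_fpn_backbone("resnet152", pretrained=False)
    model = FasterRCNN(backbone, num_classes=2)  # person + background (assumed)

    state = torch.load(
        "2020-12-10_09-45-15_58672_resnet152_stylecoco_epoch_15.pth",
        map_location="cpu",
    )
    model.load_state_dict(state)  # checkpoint layout may differ; inspect it first
    model.eval()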

    Code

    The code is available on GitHub at https://github.com/dkadish/Style-Transfer-for-Object-Detection-in-Art

    Citing

    If you are using this code or the concept of style transfer for object detection in art, please cite our paper (https://arxiv.org/abs/2102.06529):

    D. Kadish, S. Risi, and A. S. Løvlie, “Improving Object Detection in Art Images Using Only Style Transfer,” Feb. 2021.

  13. Cifar 100 Dataset

    • universe.roboflow.com
    • opendatalab.com
    • +3 more
    zip
    Updated Aug 11, 2022
    + more versions
    Cite
    Popular Benchmarks (2022). Cifar 100 Dataset [Dataset]. https://universe.roboflow.com/popular-benchmarks/cifar100
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 11, 2022
    Dataset authored and provided by
    Popular Benchmarks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Animals, People, Common Objects
    Description

    CIFAR-100

    The CIFAR-10 and CIFAR-100 datasets contain labeled subsets of the 80 million tiny images dataset. They were collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton.

    • More info on CIFAR-100: https://www.cs.toronto.edu/~kriz/cifar.html
    • TensorFlow listing of the dataset: https://www.tensorflow.org/datasets/catalog/cifar100
    • GitHub repo for converting CIFAR-100 tarball files to png format: https://github.com/knjcode/cifar2png

    All images were sized 32x32 in the original dataset

    The CIFAR-10 dataset consists of 60,000 32x32 colour images in 10 classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images [in the original dataset].

    This dataset is just like the CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. The 100 classes in the CIFAR-100 are grouped into 20 superclasses. Each image comes with a "fine" label (the class to which it belongs) and a "coarse" label (the superclass to which it belongs). However, this project does not contain the superclasses.

    • Superclasses version: https://universe.roboflow.com/popular-benchmarks/cifar100-with-superclasses/

    More background on the dataset (classes and superclasses diagram): https://i.imgur.com/5w8A0Vm.png
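
    Fine and coarse labels can be loaded directly via Keras, for example (a sketch; note the Roboflow project above ships only the fine labels):

    import tensorflow as tf

    # label_mode selects the 100 fine classes or the 20 superclasses.
    (x_train, y_fine), _ = tf.keras.datasets.cifar100.load_data(label_mode="fine")
    (_, y_coarse), _ = tf.keras.datasets.cifar100.load_data(label_mode="coarse")
    print(x_train.shape)           # (50000, 32, 32, 3)
    print(y_fine[0], y_coarse[0])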

    Version 1 (original-images_Original-CIFAR100-Splits):

    • Original images, with the original splits for CIFAR-100: train set (83.33% of images; 50,000 images) and test set (16.67% of images; 10,000 images) only.
    • This version was not trained

    Version 2 (original-images_trainSetSplitBy80_20):

    • Original, raw images, with the train set split to provide 80% of its images to the training set (approximately 40,000 images) and 20% of its images to the validation set (approximately 10,000 images)
    • Trained from Roboflow Classification Model's ImageNet training checkpoint
    • Train/test split details: https://blog.roboflow.com/train-test-split/ (split rebalancing diagram: https://i.imgur.com/kSPeKGn.png)

    Citation:

    @TECHREPORT{Krizhevsky09learningmultiple,
      author = {Alex Krizhevsky},
      title = {Learning multiple layers of features from tiny images},
      institution = {},
      year = {2009}
    }
    
  14. Exemplar Microscopy Images of Tissues

    • dknet.org
    • rrid.site
    • +2 more
    Updated Oct 16, 2019
    Cite
    (2019). Exemplar Microscopy Images of Tissues [Dataset]. http://identifiers.org/RRID:SCR_021052
    Explore at:
    Dataset updated
    Oct 16, 2019
    Description

    Reference dataset of multiplexed immunofluorescence microscopy images collected at the HMS Laboratory of Systems Pharmacology. It includes a set of images of different types for development and benchmarking of computational methods for image processing. As of 4/2/2021, EMIT comprises a tissue microarray containing cores from 34 cancer, non-neoplastic disease, and normal tissues, collected from clinical discards under an IRB-supervised protocol. The TMA was imaged using the cyclic immunofluorescence method. Additional extensions of EMIT are currently in the planning stages. The long-term goal is to compose an ImageNet-like resource for highly multiplexed images of tissues and tumors by consolidating high-quality curated datasets.

  15. Exemplar Microscopy Images of Tissues

    • rrid.site
    Updated Jul 27, 2025
    Cite
    (2025). Exemplar Microscopy Images of Tissues [Dataset]. http://identifiers.org/RRID:SCR_021052
    Explore at:
    Dataset updated
    Jul 27, 2025
    Description

    Reference dataset of multiplexed immunofluorescence microscopy images collected at the HMS Laboratory of Systems Pharmacology. It includes a set of images of different types for development and benchmarking of computational methods for image processing. As of 4/2/2021, EMIT comprises a tissue microarray containing cores from 34 cancer, non-neoplastic disease, and normal tissues, collected from clinical discards under an IRB-supervised protocol. The TMA was imaged using the cyclic immunofluorescence method. Additional extensions of EMIT are currently in the planning stages. The long-term goal is to compose an ImageNet-like resource for highly multiplexed images of tissues and tumors by consolidating high-quality curated datasets.

  16. DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning

    • data.niaid.nih.gov
    Updated May 16, 2023
    Cite
    Girgenti, Benjamin (2023). DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7939059
    Explore at:
    Dataset updated
    May 16, 2023
    Dataset provided by
    Girgenti, Benjamin
    Rahimi Azghadi, Mostafa
    White, Ronald D.
    Johns, Jamie
    Philippa, Bronson
    Calvert, Brendan
    Wood, Jake C.
    Banks, Wesley
    Ridd, Peter
    Olsen, Alex
    Kenny, Owen
    Whinney, James
    Konovalov, Dmitry A.
    License

    http://www.apache.org/licenses/LICENSE-2.0

    Description

    DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning

    This repository makes available the source code and public dataset for the work, "DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning", published with open access by Scientific Reports: https://www.nature.com/articles/s41598-018-38343-3. The DeepWeeds dataset consists of 17,509 images capturing eight different weed species native to Australia in situ with neighbouring flora. In our work, the dataset was classified to an average accuracy of 95.7% with the ResNet50 deep convolutional neural network.

    The source code, images and annotations are licensed under the CC BY 4.0 license. The contents of this repository are released under an Apache 2 license.

    Download the dataset images and our trained models

    images.zip (468 MB)

    models.zip (477 MB)

    Due to the size of the images and models, they are hosted outside of the GitHub repository. The images and models must be downloaded into directories named "images" and "models", respectively, at the root of the repository. If you execute the Python script (deepweeds.py) as instructed below, this step will be performed for you automatically.

    TensorFlow Datasets

    Alternatively, you can access the DeepWeeds dataset with TensorFlow Datasets, TensorFlow's official collection of ready-to-use datasets. DeepWeeds was officially added to the TensorFlow Datasets catalog in August 2019.

    Weeds and locations

    The selected weed species are local to pastoral grasslands across the state of Queensland. They include: "Chinee apple", "Snake weed", "Lantana", "Prickly acacia", "Siam weed", "Parthenium", "Rubber vine" and "Parkinsonia". The images were collected from weed infestations at the following sites across Queensland: "Black River", "Charters Towers", "Cluden", "Douglas", "Hervey Range", "Kelso", "McKinlay" and "Paluma". The table and figure below break down the dataset by weed, location and geographical distribution.

    Data organization

    Images are assigned unique filenames that include the date/time the image was photographed and an ID number for the instrument which produced the image. The format is like so: YYYYMMDD-HHMMSS-ID, where the ID is simply an integer from 0 to 3. The unique filenames are strings of 17 characters, such as 20170320-093423-1.

    labels

    The labels.csv file assigns species labels to each image. It is a comma-separated text file in the format:

    Filename,Label,Species
    ...
    20170207-154924-0.jpg,7,Snake weed
    20170610-123859-1.jpg,1,Lantana
    20180119-105722-1.jpg,8,Negative
    ...

    Note: The specific label subsets of training (60%), validation (20%) and testing (20%) for the five-fold cross validation used in the paper are also provided here as CSV files in the same format as "labels.csv".
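
    A sketch of reading labels.csv and the timestamped filenames with pandas (column names as shown above):

    import pandas as pd

    labels = pd.read_csv("labels.csv")  # columns: Filename, Label, Species

    # Class distribution across the eight weed species plus the negative class.
    print(labels["Species"].value_counts())

    # Parse capture time and instrument ID from YYYYMMDD-HHMMSS-ID.jpg filenames.
    stem = labels["Filename"].str.replace(".jpg", "", regex=False)
    labels["captured"] = pd.to_datetime(stem.str[:15], format="%Y%m%d-%H%M%S")
    labels["instrument"] = stem.str.split("-").str[-1].astype(int)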

    models

    We provide the most successful ResNet50 and InceptionV3 models saved in Keras' hdf5 model format. The ResNet50 model, which provided the best results, has also been converted to UFF format in order to construct a TensorRT inference engine.

    • resnet.hdf5
    • inception.hdf5
    • resnet.uff

    deepweeds.py

    This Python script trains and evaluates Keras' base implementation of ResNet50 and InceptionV3 on the DeepWeeds dataset, pre-trained with ImageNet weights. The performance of the networks is cross-validated over 5 folds. The final classification accuracy is taken to be the average across the five folds. Similarly, the final confusion matrix from the associated paper aggregates across the five independent folds. The script also provides the ability to measure inference speeds within the TensorFlow environment.

    The script can be executed to carry out these computations using the following commands.

    To train and evaluate the ResNet50 model with five-fold cross validation, use python3 deepweeds.py cross_validate --model resnet.

    To train and evaluate the InceptionV3 model with five-fold cross validation, use python3 deepweeds.py cross_validate --model inception.

    To measure inference times for the ResNet50 model, use python3 deepweeds.py inference --model models/resnet.hdf5.

    To measure inference times for the InceptionV3 model, use python3 deepweeds.py inference --model models/inception.hdf5.

    Dependencies

    The required Python packages to execute deepweeds.py are listed in requirements.txt.

    tensorrt

    This folder includes C++ source code for creating and executing a ResNet50 TensorRT inference engine on an NVIDIA Jetson TX2 platform. To build and run on your Jetson TX2, execute the following commands:

    cd tensorrt/src
    make -j4
    cd ../bin
    ./resnet_inference

    Citations

    If you use the DeepWeeds dataset in your work, please cite it as:

    IEEE style citation: A. Olsen, D. A. Konovalov, B. Philippa, P. Ridd, J. C. Wood, J. Johns, W. Banks, B. Girgenti, O. Kenny, J. Whinney, B. Calvert, M. Rahimi Azghadi, and R. D. White, “DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning,” Scientific Reports, vol. 9, no. 2058, Feb. 2019. [Online]. Available: https://doi.org/10.1038/s41598-018-38343-3

    BibTeX

    @article{DeepWeeds2019,
      author = {Alex Olsen and Dmitry A. Konovalov and Bronson Philippa and Peter Ridd and Jake C. Wood and Jamie Johns and Wesley Banks and Benjamin Girgenti and Owen Kenny and James Whinney and Brendan Calvert and Mostafa {Rahimi Azghadi} and Ronald D. White},
      title = {{DeepWeeds: A Multiclass Weed Species Image Dataset for Deep Learning}},
      journal = {Scientific Reports},
      year = 2019,
      number = 2058,
      month = 2,
      volume = 9,
      issue = 1,
      day = 14,
      url = "https://doi.org/10.1038/s41598-018-38343-3",
      doi = "10.1038/s41598-018-38343-3"
    }

