The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution from the labeled data) to build a useful prior. All images were acquired from labeled examples on ImageNet.
To use this dataset:
import tensorflow_datasets as tfds

ds = tfds.load('stl10', split='train')
for ex in ds.take(4):
    print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/stl10-1.0.0.png
The datasets used in the paper are MNIST, CIFAR-10, and STL-10, all of which are image classification datasets.
No license specified (https://academictorrents.com/)
The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution from the labeled data) to build a useful prior. We also expect that the higher resolution of this dataset (96x96) will make it a challenging benchmark for developing more scalable unsupervised learning methods.
Overview:
10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.
Images are 96x96 pixels, color.
500 training images (10 pre-defined folds) and 800 test images per class.
100,000 unlabeled images for unsupervised learning.
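The splits above map directly onto common loaders. As a minimal sketch, torchvision's STL10 class exposes the labeled, unlabeled, and per-fold splits (the root path and choice of fold here are illustrative):

from torchvision import datasets

# Labeled training images restricted to pre-defined fold 0
# (each of the 10 folds holds 1,000 of the 5,000 labeled images).
train_fold0 = datasets.STL10(root='./data', split='train', folds=0, download=True)

# The 100,000 unlabeled images for unsupervised pre-training.
unlabeled = datasets.STL10(root='./data', split='unlabeled', download=True)

# The 8,000-image test set (800 per class).
test = datasets.STL10(root='./data', split='test', download=True)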
STL10 - Segmentation
Please consider sponsoring this repo so that we can continue to develop high-quality datasets for AI and ML research.
To become a sponsor:
GitHub Sponsors
Buy me a coffee
You can also sponsor us by downloading our free application, Etiqueta, to your devices:
Etiqueta on iOS or Apple Chip Macs
Etiqueta on Android
This repo contains segmented images for the labeled part of the STL-10 Dataset.
If you are looking for the STL10-Labeled variant of the dataset… See the full description on the dataset page: https://huggingface.co/datasets/semihyagli/STL10-Segmented.
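Assuming the repo follows the standard Hugging Face Hub layout, it can be loaded with the datasets library; a minimal sketch (the available splits and columns are whatever the dataset card defines, so inspect the returned object first):

from datasets import load_dataset

# Pull the segmented STL-10 images from the Hugging Face Hub.
ds = load_dataset('semihyagli/STL10-Segmented')
print(ds)  # shows the available splits and features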
The datasets used in the paper are CIFAR-10 and STL-10, which are commonly used for image classification tasks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract
In recent years, neural networks have evolved from laboratory environments to the state of the art for many real-world problems. Our hypothesis is that neural network models (i.e., their weights and biases) evolve along unique, smooth trajectories in weight space during training. It follows that a population of such neural network models (referred to as a "model zoo") would form topological structures in weight space. We think that the geometry, curvature, and smoothness of these structures contain information about the state of training and can reveal latent properties of individual models. With such zoos, one could investigate novel approaches for (i) model analysis, (ii) discovering unknown learning dynamics, (iii) learning rich representations of such populations, or (iv) exploiting model zoos for generative modelling of neural network weights and biases. Unfortunately, the lack of standardized model zoos and available benchmarks significantly increases the friction for further research on populations of neural networks. With this work, we publish a novel dataset of model zoos containing systematically generated and diverse populations of neural network models. In total, the proposed dataset is based on six image datasets, consists of 24 model zoos generated with varying hyperparameter combinations, and includes 47,360 unique neural network models, resulting in over 2,415,360 collected model states. In addition to the model zoo data, we provide an in-depth analysis of the zoos and benchmarks for the multiple downstream tasks mentioned above.
Dataset
This dataset is part of a larger collection of model zoos and contains the zoos trained on the labelled samples from STL10. All zoos with extensive information and code can be found at www.modelzoos.cc.
This repository contains the raw model zoos as collections of models (file names beginning with "cifar_"). Zoos are trained with small and large CNN models, in three configurations: varying the seed only (seed), varying hyperparameters with fixed seeds (hyp_fix), or varying hyperparameters with random seeds (hyp_rand). Due to the large file size, the preprocessed datasets are hosted in a separate repository. The index_dict.json files contain information on how to read the vectorized models.
For more information on the zoos and code to access and use the zoos, please see www.modelzoos.cc.
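As a rough sketch of how such a zoo might be read, assuming PyTorch checkpoints and the index_dict.json mapping described above (all file names below are hypothetical; see www.modelzoos.cc for the actual layout and loading code):

import json
import torch

# Load one collected model state from the zoo; the path is illustrative only.
state = torch.load('stl10_zoo/model_0001/checkpoint_000025', map_location='cpu')

# index_dict.json describes how layers map into the flattened weight vectors.
with open('stl10_zoo/index_dict.json') as f:
    index_dict = json.load(f)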
The dataset used in this paper is a collection of images from the STL-10 dataset, preprocessed and used for training and evaluating the proposed diffusion spectral entropy and diffusion spectral mutual information methods.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Dataset Card for STL-10 Cleaned (Deduplicated Training Set)
Paper | Code
Dataset Description
This dataset is a modified version of the STL-10 dataset. The primary modification involves deduplicating the training set by removing any images that are exact byte-for-byte matches (based on SHA256 hash) with images present in the original STL-10 test set. The dataset comprises this cleaned training set and the original, unmodified STL-10 test set. The goal is to provide a… See the full description on the dataset page: https://huggingface.co/datasets/Shu1L0n9/CleanSTL-10.
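A minimal sketch of the hash-based deduplication described above (function and variable names are illustrative, not the dataset's actual build script):

import hashlib
from typing import List

def dedup_train(train_images: List[bytes], test_images: List[bytes]) -> List[bytes]:
    """Drop training images that are exact byte-for-byte copies of test images,
    matching on SHA256 hashes of the raw image bytes."""
    test_hashes = {hashlib.sha256(img).hexdigest() for img in test_images}
    return [img for img in train_images
            if hashlib.sha256(img).hexdigest() not in test_hashes]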
The MNIST, KMNIST, FashionMNIST, STL-10, and CIFAR-10 datasets are used for few-shot learning experiments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We introduce PiCCL (Primary Component Contrastive Learning), a self-supervised contrastive learning framework that uses a multiplex Siamese network structure consisting of many identical branches rather than two to maximize learning efficiency. PiCCL is simple and lightweight: it does not use asymmetric networks, intricate pretext tasks, hard-to-compute loss functions, or multimodal data, which are common in multiview contrastive learning frameworks and can hinder performance, simplicity, generalizability, and explainability. PiCCL obtains multiple positive samples by applying the same image augmentation paradigm to the same image numerous times; the network loss is calculated with a custom-designed loss function named PiCLoss (Primary Component Loss) that takes advantage of PiCCL's unique structure while remaining computationally lightweight. To demonstrate its strength, we benchmarked PiCCL against various state-of-the-art self-supervised algorithms on multiple datasets, including CIFAR-10, CIFAR-100, and STL-10. PiCCL achieved top performance in most of our tests, with top-1 accuracy of 94%, 72%, and 97% on the three datasets respectively. But where PiCCL excels is in small-batch learning scenarios: when testing on STL-10 with a batch size of 8, PiCCL still achieved 93% accuracy, outperforming the competition by about 3 percentage points.
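A minimal sketch of the multi-view idea described above: the same augmentation pipeline applied N times to one image yields N positive views. The pipeline below is illustrative, not PiCCL's exact augmentation recipe:

from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(96),           # STL-10 images are 96x96
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def multi_view(image, n_views=8):
    """Return n_views independently augmented copies of one PIL image."""
    return [augment(image) for _ in range(n_views)]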
This dataset was created by siminyu7_qq.
The paper uses a denoising diffusion probabilistic model (DDPM) trained on CIFAR-10, CIFAR-100, STL-10, and Tiny-ImageNet.
The dataset used in the paper is not explicitly described, but the authors mention using the CIFAR-10, CIFAR-100, and STL-10 datasets for training and testing the embedding functions.
The paper uses ResNet models trained on various datasets, including MNIST, Fashion MNIST, CIFAR10, STL10, and CIFAR100.