Author: Yann LeCun, Corinna Cortes, Christopher J.C. Burges
Source: MNIST Website - Date unknown
Please cite:
The MNIST database of handwritten digits, with 784 features; raw data available at: http://yann.lecun.com/exdb/mnist/. It can be split into a training set of the first 60,000 examples and a test set of 10,000 examples.
It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image. It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting. The original black and white (bilevel) images from NIST were size-normalized to fit in a 20x20 pixel box while preserving their aspect ratio. The resulting images contain grey levels as a result of the anti-aliasing technique used by the normalization algorithm. The images were then centered in a 28x28 image by computing the center of mass of the pixels and translating the image so as to position this point at the center of the 28x28 field.
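For illustration, here is a minimal sketch of the center-of-mass centering step (not the original normalization code), assuming the digit has already been size-normalized and padded into a 28x28 NumPy array:

```python
import numpy as np

def center_by_mass(img):
    """Translate a digit so its pixel center of mass sits at the image center."""
    ys, xs = np.nonzero(img)
    w = img[ys, xs].astype(float)                  # grey levels act as weights
    cy = np.average(ys, weights=w)
    cx = np.average(xs, weights=w)
    dy = int(round((img.shape[0] - 1) / 2 - cy))
    dx = int(round((img.shape[1] - 1) / 2 - cx))
    # np.roll wraps around the edges; acceptable here because a padded
    # 20x20 digit inside a 28x28 field only needs small shifts.
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

A bounding-box variant, discussed next, would instead center the midpoint of the nonzero rows and columns.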
With some classification methods (particularly template-based methods, such as SVM and K-nearest neighbors), the error rate improves when the digits are centered by bounding box rather than center of mass. If you do this kind of pre-processing, you should report it in your publications. The MNIST database was constructed from NIST's Special Database 3 (SD-3) and Special Database 1 (SD-1). NIST originally designated SD-3 as their training set and SD-1 as their test set. However, SD-3 is much cleaner and easier to recognize than SD-1. The reason for this lies in the fact that SD-3 was collected among Census Bureau employees, while SD-1 was collected among high-school students. Drawing sensible conclusions from learning experiments requires that the result be independent of the choice of training set and test set among the complete set of samples. Therefore it was necessary to build a new database by mixing NIST's datasets.
The MNIST training set is composed of 30,000 patterns from SD-3 and 30,000 patterns from SD-1. The test set is composed of 5,000 patterns from SD-3 and 5,000 patterns from SD-1. The 60,000-pattern training set contains examples from approximately 250 writers, and the sets of writers of the training set and test set are disjoint. SD-1 contains 58,527 digit images written by 500 different writers. In contrast to SD-3, where blocks of data from each writer appear in sequence, the data in SD-1 is scrambled. Writer identities for SD-1 are available, and we used this information to unscramble the writers. We then split SD-1 in two: characters written by the first 250 writers went into our new training set, and the remaining 250 writers were placed in our test set. Thus we had two sets with nearly 30,000 examples each. The new training set was completed with enough examples from SD-3, starting at pattern #0, to make a full set of 60,000 training patterns. Similarly, the new test set was completed with SD-3 examples starting at pattern #35,000 to make a full set of 60,000 test patterns. Only a subset of 10,000 test images (5,000 from SD-1 and 5,000 from SD-3) is available on this site. The full 60,000-sample training set is available.
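As a convenience, here is a minimal sketch of reading the raw gzipped IDX files distributed on the MNIST page, assuming train-images-idx3-ubyte.gz and train-labels-idx1-ubyte.gz have been downloaded locally:

```python
import gzip
import struct

import numpy as np

def load_idx_images(path):
    # IDX image files: big-endian magic 0x00000803, count, rows, cols, then raw bytes
    with gzip.open(path, 'rb') as f:
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051, f"unexpected magic number {magic}"
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows * cols)

def load_idx_labels(path):
    # IDX label files: big-endian magic 0x00000801, count, then raw bytes
    with gzip.open(path, 'rb') as f:
        magic, n = struct.unpack('>II', f.read(8))
        assert magic == 2049, f"unexpected magic number {magic}"
        return np.frombuffer(f.read(), dtype=np.uint8)

X_train = load_idx_images('train-images-idx3-ubyte.gz')   # shape (60000, 784)
y_train = load_idx_labels('train-labels-idx1-ubyte.gz')   # shape (60000,)
```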
Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental studies support the notion of spike-based neuronal information processing in the brain, with neural circuits exhibiting a wide range of temporally-based coding strategies to rapidly and efficiently represent sensory stimuli. Accordingly, it would be desirable to apply spike-based computation to tackling real-world challenges, and in particular transferring such theory to neuromorphic systems for low-power embedded applications. Motivated by this, we propose a new supervised learning method that can train multilayer spiking neural networks to solve classification problems based on a rapid, first-to-spike decoding strategy. The proposed learning rule supports multiple spikes fired by stochastic hidden neurons, and yet is stable by relying on first-spike responses generated by a deterministic output layer. In addition to this, we also explore several distinct, spike-based encoding strategies in order to form compact representations of presented input data. We demonstrate the classification performance of the learning rule as applied to several benchmark datasets, including MNIST. The learning rule is capable of generalizing from the data, and is successful even when used with constrained network architectures containing few input and hidden layer neurons. Furthermore, we highlight a novel encoding strategy, termed “scanline encoding,” that can transform image data into compact spatiotemporal patterns for subsequent network processing. Designing constrained, but optimized, network structures and performing input dimensionality reduction has strong implications for neuromorphic applications.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
This dataset was created by dillsunnyb11
Released under Database: Open Database License, Contents: Database Contents License
CC0 1.0 Universal (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
Snake Eyes is a dataset of tiny images simulating dice.
Snake Eyes example pictures: https://i.imgur.com/gaD5UtQ.png
Invariance to translation and rotation is an important attribute we would like image classifiers to have in many applications. For many problems, even if there doesn't seem to be a lot of translation in the data, augmenting it with these transformations is often beneficial. There are not many datasets where these transformations are clearly relevant, though. The "Snake Eyes" dataset seeks to provide a problem where rotation and translation are clearly a fundamental aspect of the problem, and not just something intuitively believed to be involved.
Image classifiers are frequently utilized in a pipeline where a bounding box is first extracted from the complete image, and this process might provide centered data to the classifier. Some translation might still be present in the data the classifier sees, though, making the phenomenon relevant to classification nevertheless. A Snake Eyes classifier could clearly benefit from such pre-processing, but the point here is to learn how much a classifier can do by itself. In particular, we would like to demonstrate the "built-in" translation invariance of CNNs.
Snake Eyes contains artificial images simulating the roll of one or two dice. The face patterns were modified to contain at most 3 black spots, making it impossible to solve the problem by merely counting them. The data was synthesized using a Python program, each image produced from a set of floating-point parameters modeling the position and angle of each die.
Snake Eyes face patterns, with distinctive missing pips: https://imgur.com/gIcZVLN.png
The data format is binary, with records of 401 bytes. The first byte contains the class (1 to 12; note it does not start at 0), and the other 400 bytes are the image rows (20x20 pixels). We offer 1 million images, split into 10 files with 100k records each, plus an extra test set with 10,000 images. A minimal reading sketch under this layout follows.
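This sketch assumes only the record layout described above; the file name is hypothetical.

```python
import numpy as np

def load_snake_eyes(path):
    # Each record is 401 bytes: 1 class byte (1-12) followed by 400 pixel bytes (20x20)
    records = np.fromfile(path, dtype=np.uint8).reshape(-1, 401)
    labels = records[:, 0].astype(np.int64)    # classes run 1..12, not 0..11
    images = records[:, 1:].reshape(-1, 20, 20)
    return images, labels

# 'snakeeyes_00.dat' is a hypothetical name for one of the 100k-record files.
images, labels = load_snake_eyes('snakeeyes_00.dat')
```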
We were inspired by the popular "tiny image" datasets often studied in ML research: MNIST, CIFAR-10 and Fashion-MNIST. Our dataset has smaller images, though, only 20x20, and 12 classes. The reduced proportions should help approximate the actual 3D and 6D manifolds of each class with the available number of data points (1 million images).
The data is artificial, with limited and very well-defined patterns, noise-free and properly anti-aliased. This is not about improving from 95% to 97% accuracy and wondering if 99% is possible with a deeper network; we expect that any method will eventually achieve 100% accuracy. What we are interested in seeing is how different methods compare in efficiency, how hard it is to train different models, and how translation and rotation invariance is enforced or achieved.
We are also interested in studying the concept of manifold learning. The data has some intra-class variability due to different possible face combinations with two dice. But most of the variation comes from translation and rotation. We hope to have sampled enough data to really allow for the extraction of these manifolds in 400 dimensions, and to investigate topics such as the role of pre-training, and the relation between modeling the manifold of the whole data and of the separate classes.
Translations alone already create quite non-convex manifolds, but our classes also have the property that some linear combinations are actually a different class (e.g. two images from the "2" face make an image from the "4" class). We are curious to see how this property can make the problem more challenging to different techniques.
We are also secretly hoping to have created the image-detection version of the infamous "spiral" problem for neural networks. We are offering the prize of one ham sandwich, collected at my local café, to the first person who manages to train a neural network to solve this problem, convolutional or not, using just traditional techniques such as logistic or ReLU activation functions and SGD training. 99% accuracy is enough. The resulting network may be susceptible to adversarial instances; that is fine, but we'll be constantly complaining about it in your ear while you eat the sandwich.
To leverage the vast literature solving the original MNIST digit recognition problem in small thumbnails, this firmware dataset maps the first 1024 bytes of malicious, benign, and hacked Internet of Things (IoT) and embedded software binaries (Executable and Linkable Format, ELF) into small grayscale thumbnails. The goal is to provide a drop-in replacement for MNIST techniques, but relevant to weeding out malware using image recognition.
The images are reported in CSV format, with columns for the filename, the label class (both categorical and numerical), and the first 1024 bytes mapped into a grayscale range of 0-255 by first converting each byte to decimal (0-15) and then scaling.
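A minimal sketch of the general idea, assuming we simply read the first 1024 raw bytes (already in the 0-255 range) into a 32x32 thumbnail; the dataset's own CSV derives pixels through the conversion and scaling described above, so this is an approximation rather than a reproduction of the published pipeline:

```python
import numpy as np

def elf_to_thumbnail(path, n_bytes=1024):
    # Read the first n_bytes of the binary; zero-pad files that are shorter.
    with open(path, 'rb') as f:
        raw = f.read(n_bytes)
    buf = np.zeros(n_bytes, dtype=np.uint8)
    buf[:len(raw)] = np.frombuffer(raw, dtype=np.uint8)
    return buf.reshape(32, 32)  # one grayscale "pixel" per byte

# 'sample.elf' is a hypothetical input file.
thumbnail = elf_to_thumbnail('sample.elf')
```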
See additional background on ELF files: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format and https://linux-audit.com/elf-binaries-on-linux-understanding-and-analysis/
The labeled ELF files repository: https://github.com/nimrodpar/Labeled-Elfs
Possible analyses include comparing firmware detection using these image representations against signature-based methods, and contrasting statistical (tree-based) methods with deep learning techniques.
Database Contents License (DbCL) v1.0: http://opendatacommons.org/licenses/dbcl/1.0/
This dataset consists of flattened images, where each image is represented in a row.

- Objective: Establish benchmark results for Arabic digit recognition using different classification techniques.
- Objective: Compare performance of different classification techniques on Arabic and Latin digit recognition problems.
- Valid comparison requires the Arabic and Latin digit databases to be in the same format.
- A modified version of the ADBase (MADBase), with the same size and format as MNIST, was created.
- MADBase is derived from ADBase by size-normalizing each digit to a 20x20 box while preserving the aspect ratio.
- The size-normalization procedure results in gray levels due to the anti-aliasing filter.
- MADBase and MNIST have the same size and format.
- MNIST is a modified version of the NIST digits database.
- MNIST is available for download.

I used this code to turn the 70k Arabic digit images into tabular data for ease of use and to spend less time on preprocessing:

```python
root_dir = "MAHD"
folder_names = ['Part{:02d}'.format(i) for i in range(1, 13)]
train_test_folders = ['MAHDBase_TrainingSet', 'test']
data = [] labels = []
for tt in train_test_folders: for folder_name in folder_names: if tt == train_test_folders[1] and folder_name == 'Part03': break subfolder_path = os.path.join(root_dir, tt, folder_name) print(subfolder_path) print(os.listdir(subfolder_path)) for filename in os.listdir(subfolder_path): # check of the file fromat that it's an image if os.path.splitext(filename)[1].lower() not in '.bmp': continue # Load the image img_path = os.path.join(subfolder_path, filename) img = Image.open(img_path)
# Convert the image to grayscale and flatten it into a 1D array
img_grey = img.convert('L')
img_data = np.array(img_grey).flatten()
# Extract the label from the filename and convert it to an integer
label = int(filename.split('_')[2].replace('digit', '').split('.')[0])
# Add the image data and label to the lists
data.append(img_data)
labels.append(label)
df = pd.DataFrame(data) df['label'] = labels ``` This dataset made by https://datacenter.aucegypt.edu/shazeem with 2 datasets - ADBase - MADBase (✅ the one this dataset derived from , similar in form to mnist)
This dataset was created by kishor datta gupta