MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
This dataset contains images representing the American Sign Language (ASL) alphabet from A to Z. Each alphabet class includes 200 grayscale hand gesture images, totaling 5,200 images across the entire dataset.
Each image is annotated with 21 hand landmark keypoints, enabling efficient use in computer vision, hand pose estimation, sign language recognition, and gesture classification tasks.
This dataset is suitable for:
Deep learning models for ASL recognition
Real-time gesture recognition projects
Educational tools and accessibility technologies
✅ Classes: 26 (A-Z) ✅ Images per class: 200 ✅ Total images: 5,200
Example use cases include training a CNN or integrating with hand-tracking systems like MediaPipe or OpenCV.
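Since the description mentions MediaPipe and OpenCV, a minimal landmark-extraction sketch might look like the following; the file path "A/sample_001.png" is a hypothetical stand-in, and it assumes the mediapipe and opencv-python packages are installed.

```python
# Minimal sketch: extract the 21 hand landmarks from one dataset image
# using MediaPipe Hands and OpenCV. "A/sample_001.png" is a hypothetical path.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

image = cv2.imread("A/sample_001.png")        # grayscale files load as 3-channel BGR by default
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    results = hands.process(rgb)
    if results.multi_hand_landmarks:
        landmarks = results.multi_hand_landmarks[0].landmark
        # 21 keypoints, each with normalized x, y and relative depth z
        keypoints = [(lm.x, lm.y, lm.z) for lm in landmarks]
        print(len(keypoints))  # -> 21
```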
Attribution-ShareAlike 4.0, CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
License information was derived automatically
My ASL dataset was created by combining my self-made dataset with other small and large datasets found across GitHub and Kaggle.
I would like to thank Khaled Jedoui, a Stanford researcher, for helping me compile this dataset. Without him, many of the inconvenient datasets would still lie buried deep in inaccessible research papers.
Sign language is still a relatively new and untouched field with regard to machine learning; as a result, datasets are small-scale and hard to use.
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
Since the 'J' and 'Z' signs are dynamic gestures, unlike the other static signs, I went with a slightly unconventional approach: training the model on so-called 'J-start' and 'J-end' signs. When these are seen in quick succession, my code has hard-coded logic that recognizes the sign as a 'J'. I used a similar approach with 'Z', training the model on a 'Z-start' and a 'Z-end', and when the software detects:
'Z-start' -> 'Z-end' -> 'Z-start' -> 'Z-end'
or
'Z-end' -> 'Z-start' -> 'Z-end' -> 'Z-start'
it returns the letter 'Z'.
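A rough sketch of that hard-coded sequencing logic is given below. The label strings match the description above, but the four-frame window and the `resolve` helper are illustrative assumptions, not the author's actual code.

```python
# Sketch of the hard-coded start/end sequencing logic described above.
# resolve() receives the model's per-frame prediction, e.g. 'J-start',
# 'J-end', 'Z-start', 'Z-end', or a static letter like 'A'.
from collections import deque

history = deque(maxlen=4)  # most recent per-frame predictions

def resolve(label):
    history.append(label)
    seq = list(history)
    # 'J-start' followed by 'J-end' in quick succession -> 'J'
    if seq[-2:] == ["J-start", "J-end"]:
        history.clear()
        return "J"
    # alternating Z-start/Z-end, in either order -> 'Z'
    if seq in (["Z-start", "Z-end", "Z-start", "Z-end"],
               ["Z-end", "Z-start", "Z-end", "Z-start"]):
        history.clear()
        return "Z"
    # static signs pass through; partial dynamic signs return nothing yet
    return label if not label.endswith(("-start", "-end")) else None
```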
For ease of use, I added 'Spacebar' and 'Backspace' signs to help potential users interact with a text form in a GUI.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset consists of 870 images. Each image contains a hand making the shape of an ASL letter, with some variation.
GNU General Public License v2.0 (http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)
If you find this dataset useful, please upvote to make sure it is recommended to others! Thanks! 😃
The dataset is a collection of images of letters from the American Sign Language alphabet, separated into 29 folders that represent the various classes.
The training dataset contains 87,000 images, each 200x200 pixels, across 29 classes: 26 for the letters A-Z and 3 for SPACE, DELETE, and NOTHING. These 3 extra classes are very helpful in real-time applications and classification. The test dataset contains a mere 29 images, to encourage the use of real-world test images.
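With one folder per class, the training split drops straight into a standard folder-per-class image loader. A minimal sketch, assuming torchvision is available and the data was extracted to a hypothetical asl_alphabet_train/ directory:

```python
# Minimal sketch: load the 29-class training split with torchvision.
# "asl_alphabet_train/" is a hypothetical path; batch size is arbitrary.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((200, 200)),  # images are 200x200 already; resize is a safeguard
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("asl_alphabet_train/", transform=transform)
print(len(train_set), len(train_set.classes))  # -> 87000 29
loader = DataLoader(train_set, batch_size=64, shuffle=True)
```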
Image: https://www.nidcd.nih.gov/sites/default/files/Content%20Images/NIDCD-ASL-hands-2014.jpg
What is that person saying? I was inspired to create the dataset to help solve the real-world problem of sign language recognition.
This dataset was created by NASRIN SULTHANA
This dataset was created by AlexandruArama
This dataset was created by BELIMI SNT
This dataset was created by Nitin Kumar00
GNU General Public License v2.0 (http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html)
Sign language is one of the oldest and most natural forms of communication, but most people do not know sign language, and interpreters are very difficult to come by. So, I thought of using neural networks for ASL alphabet classification.
The dataset contains coloured images of hand signs representing different letters of the American Sign Language alphabet.
This dataset was created by Lucia Janíková
This dataset is the PyTorch-friendly version of this dataset on Kaggle.
This dataset was created by Vishnu Murali
This dataset was created by Kuzivakwashe Muvezwa
This dataset was created by Nithil Sethupathi
This dataset was created by Lucas Vieira de Miranda
This dataset was created by Omar Zakaria
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset was created by Huỳnh Anh Dũng
Released under Apache 2.0
This dataset was created by reinel
It contains the following files:
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset can be used to apply the ideas of multi-class classification with the technology of your choice. It is curated for convolutional neural networks; with a good enough algorithm, multi-class classification accuracy can come close to 98%.
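A baseline convolutional network along those lines might look like the sketch below; the 28x28 grayscale input, the layer widths, and the 26-class output are illustrative assumptions, not properties stated by the dataset description.

```python
# Sketch of a small CNN baseline for multi-class ASL classification.
# Input shape (1x28x28), layer widths, and class count are assumptions.
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                    # 28x28 -> 14x14
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                    # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
    nn.Linear(128, 26),                 # one logit per letter class
)
```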