License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sign language is a cardinal element of communication for the deaf community. It has its own grammatical structure and gesticulation, and research on sign language recognition and translation (SLRT) devotes much attention to gesture identification. Sign language comprises manual gestures performed through hand poses and non-manual features expressed through eye, mouth, and gaze movements. This sentence-level, completely labelled Indian Sign Language dataset was developed for Sign Language Translation and Recognition (SLTR) research. The ISL-CSLTR dataset assists the research community in exploring intuitive insights and in building SLTR frameworks that establish communication with the deaf community using advanced deep learning and computer vision methods. The sentence-level dataset was created with two native signers from Navajeevan, Residential School for the Deaf, College of Spl. D.Ed & B.Ed, Vocational Centre, and Child Care & Learning Centre, Ayyalurimetta, Andhra Pradesh, India, and four student volunteers from SASTRA Deemed University, Thanjavur, Tamil Nadu. The ISL-CSLTR corpus consists of a large vocabulary of 700 fully annotated videos, 18,863 sentence-level frames, and 1,036 word-level images covering 100 spoken-language sentences performed by 7 different signers. The corpus is organized by signer variant and time boundaries with fully annotated details, and it is made publicly available. The main objective of creating this sentence-level ISL-CSLTR corpus is to enable further research outcomes in the area of SLTR: the completely labelled video corpus assists researchers in building frameworks for converting spoken-language sentences into sign language and vice versa, addresses the various challenges faced by researchers in SLRT, and significantly improves translation and recognition performance. The videos are annotated with the relevant spoken-language sentences, providing a clear and easy understanding of the corpus data.
Acknowledgements: The research was funded by the Science and Engineering Research Board (SERB), India under the Start-up Research Grant (SRG)/2019–2021 (Grant no. SRG/2019/001338). We also thank all the signers for their contribution to collecting the sign videos and to the successful completion of the ISL-CSLTR corpus, and we thank Navajeevan, Residential School for the Deaf, College of Spl. D.Ed & B.Ed, Vocational Centre, and Child Care & Learning Centre, Ayyalurimetta, Andhra Pradesh, India for their support and contribution.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
A curated Indian Sign Language (ISL) dataset featuring hand gestures, facial expressions, and motion sequences designed for AI/ML sign-language recognition, gesture classification, accessibility tools, and human–computer interaction systems.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Indian Sign Language (ISL) is a complete language with its own grammar, syntax, vocabulary and several unique linguistic attributes. It is used by over 5 million deaf people in India. Currently, there is no publicly available dataset on ISL to evaluate Sign Language Recognition (SLR) approaches. In this work, we present the Indian Lexicon Sign Language Dataset - INCLUDE - an ISL dataset that contains 0.27 million frames across 4,287 videos over 263 word signs from 15 different word categories. INCLUDE is recorded with the help of experienced signers to provide close resemblance to natural conditions. A subset of 50 word signs is chosen across word categories to define INCLUDE-50 for rapid evaluation of SLR methods with hyperparameter tuning. The best performing model achieves an accuracy of 94.5% on the INCLUDE-50 dataset and 85.6% on the INCLUDE dataset. Download Instructions: For ease of access, we have prepared a Shell Script to download all the parts of the dataset and extract them to form the complete INCLUDE dataset. You can find the script here: http://bit.ly/include_dl
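As a loose illustration of carving out an INCLUDE-50-style subset from the extracted dataset (the directory layout below is an assumption, and the official subset is a fixed list chosen across word categories rather than a random sample):

```python
import random
from pathlib import Path

# Assumed layout: INCLUDE/<word_sign>/<clip>.mp4, one folder per word sign.
root = Path("INCLUDE")
word_dirs = sorted(d for d in root.iterdir() if d.is_dir())

random.seed(0)
subset = random.sample(word_dirs, 50)  # stand-in for the curated INCLUDE-50 list

videos = {d.name: sorted(d.glob("*.mp4")) for d in subset}
print(len(videos), "classes,", sum(len(v) for v in videos.values()), "videos")
```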
This dataset contains MP4 video clips of Indian Sign Language (ISL) gestures, intended for use in machine learning, gesture recognition, and accessibility-focused projects. It includes over 3,000 short videos featuring alphabets (A–Z), numbers (0–9), and common words like “Hello” and “Thank You.” To meet upload limits, the dataset is split across multiple ZIP files, each with around 1,000 videos. All files are in .mp4 format, with consistent resolution and duration. This dataset is useful for building ISL recognition models, real-time sign detection, and inclusive communication tools. Unzip all parts to access the full set, for example as sketched below.
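A minimal Python sketch for merging the parts (the ZIP file names here are placeholders; use whatever names the download actually provides):

```python
import zipfile
from pathlib import Path

# Hypothetical part names such as isl_videos_part1.zip, isl_videos_part2.zip, ...
for part in sorted(Path(".").glob("isl_videos_part*.zip")):
    with zipfile.ZipFile(part) as zf:
        zf.extractall("isl_videos")  # all parts unpack into one merged folder

print(len(list(Path("isl_videos").rglob("*.mp4"))), "clips extracted")
```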
This dataset contains 36 classes representing Indian Sign Language (ISL) characters, including digits (0–9) and alphabets (A–Z). Each class has 1,000 images, resulting in a total of 36,000 labeled samples.
The dataset is designed to support research and development in:
Computer Vision: Hand gesture recognition
Deep Learning: Image classification and CNN-based models
Human-Computer Interaction: Enabling communication tools for the deaf and hard-of-hearing community
Sign Language Translation Systems
Dataset Details:
Classes: 36 (0–9, A–Z)
Images per class: 1,000 (some images may need pre-processing)
Format: JPG
Use cases: Training, validation, and testing of sign language recognition models
This dataset can be used to build, train, and benchmark machine learning models for gesture recognition tasks. It contributes to bridging the communication gap by empowering developers and researchers to create real-world applications such as sign language interpreters, accessibility tools, and educational platforms.
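As a minimal sketch of how such a dataset could be loaded and benchmarked (assuming one folder per class, e.g. isl_dataset/A/*.jpg, which is not stated explicitly above):

```python
import tensorflow as tf

# Assumed layout: isl_dataset/<class>/*.jpg with 36 class folders (0-9, A-Z).
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "isl_dataset",
    validation_split=0.2,
    subset="both",         # returns (train, validation) datasets
    seed=42,
    image_size=(64, 64),   # resize target is an arbitrary choice for this sketch
    batch_size=32,
)
num_classes = len(train_ds.class_names)  # expected: 36

# A small baseline CNN for gesture classification.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```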
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset contains RGB images of hand gestures for twenty ISL words, namely ‘afraid’, ‘agree’, ‘assistance’, ‘bad’, ‘become’, ‘college’, ‘doctor’, ‘from’, ‘pain’, ‘pray’, ‘secondary’, ‘skin’, ‘small’, ‘specific’, ‘stand’, ‘today’, ‘warn’, ‘which’, ‘work’, and ‘you’, which are commonly used to convey messages or seek support during medical situations. All the words included in this dataset are static signs. The images were captured from 8 individuals, including 6 males and 2 females, in the age group of 9 to 30 years. The dataset contains 18,000 images in JPG format. The images are labelled using the format ISLword_X_YYYY_Z, where:
• ISLword corresponds to one of the twenty words listed above.
• X is an image number in the range 1 to 900.
• YYYY is an identifier of the participant and is in the range of 1 to 6.
• Z is 01 or 02 and identifies the sample number for each subject.
For example, the file named afraid_1_user1_1 is the image of the first sample of the ISL gesture for the word ‘afraid’ presented by the 1st user.
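A hedged sketch of parsing this naming convention (the pattern is inferred from the example afraid_1_user1_1, where the participant field appears as "user1"; treat the exact separators as an assumption):

```python
import re

# Inferred from the documented format ISLword_X_YYYY_Z and the example filename.
PATTERN = re.compile(
    r"^(?P<word>[a-z]+)_(?P<image>\d+)_user(?P<participant>\d+)_(?P<sample>\d+)$"
)

def parse_label(stem: str) -> dict:
    """Split a filename stem like 'afraid_1_user1_1' into its label fields."""
    match = PATTERN.match(stem)
    if match is None:
        raise ValueError(f"unexpected filename: {stem}")
    return match.groupdict()

print(parse_label("afraid_1_user1_1"))
# {'word': 'afraid', 'image': '1', 'participant': '1', 'sample': '1'}
```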
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
## Overview
Indian Sign Language is a dataset for object detection tasks - it contains Sign_Language annotations for 1,866 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [MIT license](https://opensource.org/licenses/MIT).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This video and gloss-based dataset has been meticulously crafted to enhance the precision and resilience of ISL (Indian Sign Language) gesture recognition and generation systems. Our goal in sharing this dataset is to contribute to the research community, providing a valuable resource for fellow researchers to explore and innovate in the realm of sign language recognition and generation.
Overview of the Dataset: The dataset comprises a diverse array of ISL gesture videos and gloss annotations. A “gloss” in this context is a written or spoken description of the meaning of a sign, allowing sign language to be represented in written form. The dataset includes the corresponding spoken or written language and the gloss for each sign. Key components of the gloss annotations include ISL grammar, which follows a layered approach with specific spatial indices for tense and a lexicon with compounds, and a distinctive word order based on noun, verb, object, adjective, or question part. Marathi sign language follows the subject-object-verb (SOV) form, facilitating comprehension and adaptation to regional languages; this Marathi sign language gloss aims to become a medium for everyday communication among deaf individuals.
The dataset reflects a careful curation process that simulates real-world scenarios. The original videos showcase a variety of gestures performed by a professional signer, capturing a broad spectrum of sign language expressions, recorded against a green screen under controlled lighting conditions. All videos adhere to a uniform pixel resolution, ensuring consistency in data presentation and facilitating streamlined pre-processing and model development, and they are stored in a format compatible with various machine learning and deep learning frameworks so that they integrate seamlessly into the research pipeline.
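Purely as an illustrative sketch (the dataset's actual file layout and field names are not specified above), one record of such a video-plus-gloss corpus might be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class GlossRecord:
    """One hypothetical corpus entry pairing a gesture video with its gloss."""
    video_path: str        # path to the gesture clip
    spoken_text: str       # corresponding spoken/written sentence
    gloss: list[str] = field(default_factory=list)  # glosses in SOV order

# SOV ordering: a spoken sentence like "I drink water" would gloss
# roughly as I WATER DRINK.
record = GlossRecord(
    video_path="clips/drink_water.mp4",
    spoken_text="I drink water",
    gloss=["I", "WATER", "DRINK"],
)
print(record.gloss)
```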
License: Open Database License (ODbL) v1.0, https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
This dataset contains a comprehensive collection of Indian Sign Language (ISL) hand gestures representing the 26 letters of the English alphabet, captured using the MediaPipe pose estimation technology. The dataset includes over 50,000 high-quality images of hand gestures captured from various angles, lighting conditions, and skin tones. The dataset is ideal for researchers and developers working in the field of computer vision, machine learning, and sign language recognition.
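Since the gestures were captured with MediaPipe, a minimal sketch of extracting hand landmarks from one image with the MediaPipe Hands solution might look like this (the file name and thresholds are assumptions):

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

# Static-image mode suits a dataset of still gesture photos.
with mp_hands.Hands(static_image_mode=True,
                    max_num_hands=1,
                    min_detection_confidence=0.5) as hands:
    image = cv2.imread("gesture_A.jpg")  # hypothetical file name
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        # 21 landmarks per hand, with x/y normalized to image dimensions.
        coords = [(lm.x, lm.y, lm.z)
                  for lm in results.multi_hand_landmarks[0].landmark]
        print(len(coords), "landmarks detected")
```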
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recent advancements in sign language recognition technology have significantly improved communication for individuals who are deaf or hard of hearing. Many people who use sign language still face challenges in everyday interactions because sign language is not widely understood, but these technologies continue to make communication more accessible and inclusive.
This dataset includes images of common phrases in both Indian Sign Language (ISL) and American Sign Language (ASL). The images were captured using a standard laptop webcam at a resolution of 680x480 pixels with a 24-bit color depth. The dataset covers 44 different phrases, each represented by 40 images. All images are stored in PNG format. Note that this dataset includes static signs only and does not contain any dynamic sign language gestures.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset includes video files of the hand gestures for eight words (accident, call, doctor, help, hot, lose, pain, thief) from Indian Sign Language (ISL), commonly used to communicate during emergency situations. The data is useful for researchers working on vision-based automatic sign language recognition as well as hand gesture recognition.
All the words included in the dataset, except the word doctor, are dynamic hand gestures. The videos were collected by asking the participants to stand comfortably behind a black-colored board and present the hand gestures in front of the board. A Sony Cyber-shot DSC-W810 digital camera with a 20.1-megapixel resolution was used for capturing the videos.
The videos were collected from 26 individuals, including 12 males and 14 females, in the age group of 22 to 26 years. Two sample videos were captured from each participant in an indoor environment under normal lighting conditions, with the camera placed at a fixed distance. The dataset is presented in two folders: one with the original raw video sequences, and the other with the cropped and downsampled video sequences, as sketched below.
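A hedged sketch of producing a downsampled variant from a raw clip (the paths and the 320x240 target size are assumptions, not the dataset's documented settings):

```python
import cv2

cap = cv2.VideoCapture("raw/accident_user01_1.mp4")       # hypothetical path
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
out = cv2.VideoWriter("downsampled/accident_user01_1.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"), fps, (320, 240))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    out.write(cv2.resize(frame, (320, 240)))  # spatial downsampling only

cap.release()
out.release()
```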
This dataset was created by Harsh0239.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset consists of static hand gestures and lip movements for each character of the English alphabet, eight Hindi vowels, and ten numerals as represented in Indian Sign Language (ISL). The dataset consists of 102,470 images of subjects from different age groups presenting static gestures under varied backgrounds and illumination conditions. The dataset is structured into three folders, namely Kids, Teenagers and Adults. Each folder consists of sub-folders named Full Sleeves and Half Sleeves, indicating the type of clothing the subject wore at the time of image acquisition. Within each sub-folder, images for the English alphabet, Hindi vowels and numerals are stored in sub-folders named after the specific character. For the English alphabet 'E' and numeral '9', two different signs were captured for each (they are used interchangeably); these are contained in the folders E1 and E2 for alphabet 'E', and 9a and 9b for numeral '9'. Wherever an English character is represented by a dynamic sign, the last frame of the sign is captured; this is typically the case with characters like 'J', 'H' and 'Y'. The images are stored in .jpeg format, have resolutions varying from 300 x 500 to 800 x 600, and are less than 100 KB in size. The dataset was captured by a team pursuing research at Chandigarh College of Engineering and Technology, Chandigarh. The subjects were informed about the research, and Informed Participant Consent was obtained prior to image acquisition. This dataset can be used only for research purposes, either as it is or after cropping the static gestures from the images, with due referencing. Any other use of the dataset is strictly prohibited, and any illegal use is subject to the Indian court of law.
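A small sketch of enumerating this hierarchy (the folder names follow the description above; the root folder name is an assumption):

```python
from collections import Counter
from pathlib import Path

# Assumed root; structure per the description:
# <AgeGroup>/<SleeveType>/<Character>/*.jpeg
root = Path("ISL_static_gestures")

counts = Counter()
for age_group in ("Kids", "Teenagers", "Adults"):
    for sleeves in ("Full Sleeves", "Half Sleeves"):
        for class_dir in (root / age_group / sleeves).iterdir():
            if class_dir.is_dir():
                counts[class_dir.name] += len(list(class_dir.glob("*.jpeg")))

print(counts.most_common(5))  # per-character image totals across all groups
```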
This dataset consists of the Indian Sign Language signs for all the alphabets and numbers, following the hand signs given by the ISRTC (Indian Sign Research and Training Center). The images have a black-and-white background for faster computation and better accuracy when training on the dataset.
Please give credit to this dataset if you download it.
License: CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
Indian Sign Language Dataset
Welcome to the Custom Indian Sign Language Dataset! This dataset has been meticulously curated with the aim of enhancing the accuracy and robustness of ISL gesture recognition systems. By sharing this dataset, we aspire to contribute to the research community and empower fellow researchers and data scientists to explore and innovate in the domain of sign language recognition.
Dataset Overview: This dataset comprises a collection of ISL gesture images, meticulously captured and processed to simulate real-world scenarios. The original images feature a diverse range of gestures performed by myself and a few friends, representing a broad spectrum of sign language expressions.
Controlled Noise Addition: To introduce realism and variability akin to real-world communication scenarios, each image underwent controlled noise addition. This process included the incorporation of various background types such as blurry, messy, and colorful, aiming to emulate dynamic environmental conditions. By intentionally introducing controlled noise, we aimed to equip the dataset with the ability to train models that can effectively handle the challenges posed by different backgrounds and lighting conditions.
Image Specifications: All images in this dataset have been standardized to a size of 126x126 pixels. This uniformity ensures consistency in data presentation and facilitates ease of preprocessing and model development. The images are stored in a format that ensures seamless integration into various machine learning frameworks.
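As a hedged illustration of this style of controlled noise addition (not the exact pipeline used; file names are placeholders):

```python
import random
from PIL import Image, ImageFilter

def add_controlled_noise(gesture_path: str, background_paths: list[str]) -> Image.Image:
    """Blend a gesture photo with a random background, blur it slightly,
    and standardize to 126x126 pixels (the dataset's image size)."""
    gesture = Image.open(gesture_path).convert("RGB")
    background = Image.open(random.choice(background_paths)).convert("RGB")
    background = background.resize(gesture.size)

    # Keep the gesture dominant but let a messy/colorful background bleed
    # through, then soften with a mild blur to mimic imperfect capture.
    noisy = Image.blend(background, gesture, alpha=0.8)
    noisy = noisy.filter(ImageFilter.GaussianBlur(radius=1))
    return noisy.resize((126, 126))

# Hypothetical usage:
# img = add_controlled_noise("sign_A.png", ["bg_messy.jpg", "bg_colorful.jpg"])
```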
We encourage you to explore, analyze, and leverage this dataset for your research, experimentation, and development endeavors. By utilizing this dataset, you'll gain access to a unique collection of ISL gesture images that have been thoughtfully curated to enhance the accuracy and adaptability of sign language recognition models.
License: Please note that this dataset is provided under the CC0 1.0 Universal Public Domain Dedication noted above, which outlines the terms and conditions for its use and distribution. Kindly ensure that you review and adhere to the license specifications when utilizing the dataset.
We invite you to join us in advancing the field of sign language recognition by leveraging this custom dataset as a valuable resource for your research and innovation. Together, let's work towards creating more inclusive and accessible technology solutions for the deaf community.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Indian Sign Language Detection is a dataset for object detection tasks - it contains A-Z and 0-9 annotations for 1,748 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Indian Sign Language_2025 is a dataset for object detection tasks - it contains A 8AcR annotations for 1,010 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The dataset includes 150+ animated videos of Indian Sign Language. The dimensions of the videos are 1280×720, with a similar background across videos. This dataset was created in response to a project on ISL. The videos were created using Blender software, taking as reference the signs demonstrated on the Indian Sign Language Research and Training Centre website.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains depth data collected with an Intel RealSense Depth Camera D435i. The data corresponds to the Indian Sign Language (ISL) gestures for the weekdays (Sunday through Saturday).
The data is stored as comma-separated values, with each line corresponding to one sign.
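A minimal sketch of reading such a file (the exact column layout is not documented above, so the parsing below, including the trailing label column, is an assumption):

```python
import csv

rows = []
with open("isl_weekdays_depth.csv", newline="") as f:  # hypothetical filename
    for row in csv.reader(f):
        *values, label = row   # assumed: depth readings, then a weekday label
        rows.append((label, [float(v) for v in values]))

print(len(rows), "signs loaded; first label:", rows[0][0])
```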
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset includes videos of the dynamic hand gestures for agricultural words from Indian Sign Language (ISL). This dataset was used in a work on the recognition of hand gestures for ISL words commonly used by deaf farmers, in which a hybrid deep learning model with a convolutional long short-term memory (ConvLSTM) network was exploited for gesture classification. The model attained an average classification accuracy of 76.21% on the proposed dataset of ISL words from the agricultural domain. The work is published in the journal Expert Systems with Applications: https://www.sciencedirect.com/science/article/abs/pii/S0957417421010009.
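As a rough sketch of that style of hybrid model (the clip length, frame size, and layer sizes below are placeholders, not the published configuration):

```python
import tensorflow as tf

NUM_CLASSES = 10            # placeholder for the number of agricultural words
FRAMES, H, W = 30, 64, 64   # assumed clip length and frame size

# ConvLSTM2D learns spatio-temporal features directly from stacks of frames.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FRAMES, H, W, 3)),
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, return_sequences=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```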