Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Hand Gestures Dataset for Sign Language Recognition (SLR) composed of six hand gestures based on American Sign Language (ASL). The dataset is divided into six categories: "Hello", "Bye", "Yes", "No", "Perfect", and "Thank You". Each category contains 400 images, where each image is preprocessed using the hand tracking module developed by https://www.computervision.zone/projects and saved with the hand skeleton landmarks drawn on it.
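The cvzone hand tracking module referenced above is built on MediaPipe Hands. As a rough sketch (not the authors' exact pipeline), a similar landmark overlay can be reproduced directly with MediaPipe; the image filenames below are hypothetical:

    # Rough approximation of the preprocessing step: detect hand landmarks with
    # MediaPipe Hands and draw the skeleton onto the image before saving it.
    import cv2
    import mediapipe as mp

    mp_hands = mp.solutions.hands
    mp_draw = mp.solutions.drawing_utils

    img = cv2.imread("hello_001.jpg")            # hypothetical sample image path
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_draw.draw_landmarks(img, hand_landmarks, mp_hands.HAND_CONNECTIONS)

    cv2.imwrite("hello_001_landmarks.jpg", img)  # saved with the skeleton overlaid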
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was originally created by Pablo Ochoa, Antonio Luna, Eliezer Álvarez. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/hand-gestures-recognition/hand-gestures-dataset.
This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.
Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Unlock communication with hand gesture recognition: transforming sign language into text and speech, bridging the gap between deaf and hearing people.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This iHGS database comprises two separate datasets: a genuine dataset and a skilled forgery dataset (folders named GENUINE and FORGERY). For the genuine dataset, 100 participants each provided 10 genuine samples in each of two sessions (session 1 and session 2). Participants were allowed to reject a sample if they considered it incomplete or were not satisfied with their movement. A total of 2000 (10 x 2 x 100) samples were gathered for the genuine dataset.
For the skilled forgery dataset, other participants acted as skilled forgers. To imitate a signature, each forger was given, at random, a signature that had been pre-signed by its genuine owner on a piece of paper. They were asked to learn that signature with as much time as they needed. Each forger was then asked to imitate the assigned signature 10 times. Initially, a total of 1000 skilled forgery signatures were collected. However, 20 skilled signature samples from two participants (10 samples each) were corrupted due to a hardware error, so only 980 skilled forgery samples were finally obtained.
EMG-EPN-612 Dataset
This dataset, called EMG-EPN-612, contains EMG signals of 612 people for benchmarking hand gesture recognition systems. It was created by the Artificial Intelligence and Computer Vision Research Lab of the Escuela Politécnica Nacional, Quito-Ecuador. The data was obtained by recording, with the Myo armband, EMG signals on the forearm while users performed five hand gestures: wave-in, wave-out, pinch, open, and fist. EMGs of the relaxed hand are also included. The dataset is divided into two groups of 306 people each: one group is for training or designing hand gesture recognition models, and the other is intended for testing the classification and recognition accuracy of such models. In each of these two groups, each person has 50 EMGs for each of the 5 gestures recorded, plus 50 EMGs for the relaxed hand. More information about this dataset can be found at: https://laboratorio-ia.epn.edu.ec/en/resources/dataset/2020_emg_dataset_612
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., utilizing bounding boxes and tags for precise identification.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
I collected this dataset to train a model on Kathak mudras so that my little sister can practice her dance form by learning the errors and other problems that arise in her technique. This dataset therefore comprises images of various Asamyukta Hasta Mudras from Kathak, a classical Indian dance form. Created to support the development of AI models that recognize and analyze these gestures, the dataset is a valuable resource for cultural and technical communities.
The dataset is structured into three main folders:
Each image is annotated with the corresponding mudra class, making it easy to use with various machine learning frameworks.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises 26,000 images and 300 videos representing American Sign Language (ASL) hand gestures corresponding to the English alphabet (A-Z) and 5 word gestures: "Hello", "Thank You", "Sorry", "Yes", and "No". The static dataset is organized into 26 folders, each labeled with a corresponding letter and containing multiple image samples to ensure variability in hand positioning, lighting, and individual hand differences. Similarly, the dynamic dataset contains 31 folders with 10 sample videos per gesture; each folder also contains the separate frames of each video in .jpg format. The images capture diverse hand gestures, making the dataset suitable for machine learning applications such as sign language recognition, computer vision, and deep learning-based classification tasks. When evaluated using an LSTM-based model, the dataset achieved high accuracy in sign recognition, demonstrating its effectiveness in sequential gesture learning.
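The description does not detail the LSTM architecture used for that evaluation; the following is only a minimal sketch of a stacked-LSTM gesture classifier, where the 30-frame window and 63-dimensional per-frame features (e.g. 21 hand landmarks x 3 coordinates) are assumptions rather than dataset specifications:

    # Minimal sketch of an LSTM-based sequence classifier for the dynamic gestures.
    # Input shape (30 frames x 63 features) is an assumption, not a dataset spec.
    import tensorflow as tf

    NUM_CLASSES = 31          # 26 letters + 5 word gestures (dynamic folders)

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(30, 63)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()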
10,000 image-caption pairs of gestures, mainly from young and middle-aged people. The collection environments include indoor and outdoor scenes, covering a variety of environments, seasons, and collection angles. The description language is English, mainly describing hand characteristics such as hand movements, gestures, image acquisition angle, gender, age, etc.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT Sign languages are natural, gestural languages that use the visual channel to communicate. Deaf people develop them to overcome their inability to communicate orally. Sign language interpreters bridge the gap that deaf people face in society and provide them with an equal opportunity to thrive in all environments. However, deaf people often struggle to communicate on a daily basis, especially in public service spaces such as hospitals, post offices, and municipal buildings, and it is difficult to provide full-time interpreters in all public services and administrations. Therefore, a tool for the automatic recognition of sign language is essential to enable the autonomy of deaf people.
Although surface electromyography (sEMG) is a promising technology for the detection of hand gestures, related research in automatic SL recognition remains limited. To date, most works have focused on the recognition of hand gestures from images, videos, or gloves. The works of BEN HAJ AMOR et al. on EMG signals have shown that these multichannel signals contain rich and detailed information that can be exploited, in particular for handshape recognition and prosthesis control. These successes represent a great step towards the recognition of gestures in sign language.
We build a large database of EMG data, recorded while signing the 28 characters of the Arabic sign language alphabet. This provides a valuable resource for research into how the muscles involved in signing produce the shapes needed to form the letters of the alphabet.
Instructions: The data for this project is provided as zipped NumPy arrays (.npz files). In order to load these files, you will need to have the NumPy package installed.
NumPy's load function allows straightforward loading of these archives. The data is organized as follows:
The data for each label (handshape) is stored in a separate folder. Each folder contains .npz files, and each .npz file contains the data for one record (an 8x400 matrix).
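A minimal loading sketch under the layout described above (one folder per label, one .npz file per record); the root folder name and the array key inside each archive are assumptions, so check archive.files on your copy:

    # Load every record into one array, keeping the folder name as the label.
    import numpy as np
    from pathlib import Path

    root = Path("emg_arabic_sl")          # hypothetical root folder of the dataset
    records, labels = [], []
    for label_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for npz_path in sorted(label_dir.glob("*.npz")):
            archive = np.load(npz_path)
            key = archive.files[0]        # first stored array in the archive (assumed)
            records.append(archive[key])  # expected shape: (8, 400) -> 8 channels x 400 samples
            labels.append(label_dir.name)

    X = np.stack(records)                 # shape: (num_records, 8, 400)
    print(X.shape, len(labels))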
For more details, please refer to the paper.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘Classify gestures by reading muscle activity.’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/kyr7plus/emg-4 on 12 November 2021.
--- Dataset description provided by original source is as follows ---
My friends and I are creating an open source prosthetic control system which would enable prosthetic devices to have multiple degrees of freedom. https://github.com/cyber-punk-me
The system is built from several components. It connects a muscle activity (EMG, electromyography) sensor to the user's Android/Android Things app. The app collects data, then a server builds a TensorFlow model specifically for this user. After that, the model can be downloaded and executed on the device to control motors or other appendages.
This dataset can be used to map user residual muscle gestures to certain actions of a prosthetic such as open/close hand or rotate wrist.
For reference, please watch a video on this topic: Living with a mind-controlled robot arm.
Four classes of motion were recorded from the MYO armband with the help of our app, https://github.com/cyber-punk-me/nukleos. The MYO armband has 8 sensors placed on the skin surface, each measuring the electrical activity produced by the muscles beneath.
Each dataset line has 8 consecutive readings of all 8 sensors, so 64 columns of EMG data. The last column is the gesture that was made while recording the data (classes 0-3). So each line has the following structure:
[8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][GESTURE_CLASS]
Data was recorded at 200 Hz, which means that each line is 8*(1/200) seconds = 40ms of record time.
A classifier given these 64 numbers should predict the gesture class (0-3). The gesture classes were: rock - 0, scissors - 1, paper - 2, ok - 3. The rock, paper, and scissors gestures are like in the game of the same name, and the OK sign is the index finger touching the thumb with the rest of the fingers spread. The gestures were selected pretty much randomly.
Each gesture was recorded 6 times for 20 seconds. Each time, recording started with the gesture already prepared and held, and stopped while the gesture was still being held. In total there are 120 seconds of each gesture being held in a fixed position, all recorded from the same right forearm within a short timespan. Every recording of a given gesture class was concatenated into a .csv file with a corresponding name (0-3).
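A minimal loading sketch, assuming the four concatenated files are named 0.csv through 3.csv and each row holds the 64 EMG readings followed by the gesture class:

    # Load the four gesture files and reshape each row into 8 readings x 8 sensors.
    import numpy as np

    frames, labels = [], []
    for gesture in range(4):                     # 0 rock, 1 scissors, 2 paper, 3 ok
        data = np.loadtxt(f"{gesture}.csv", delimiter=",")
        emg = data[:, :64].reshape(-1, 8, 8)     # 8 consecutive readings x 8 sensors per row
        frames.append(emg)
        labels.append(data[:, 64].astype(int))   # should equal `gesture` on every row

    X = np.concatenate(frames)                   # shape: (num_rows, 8, 8); each row spans 40 ms at 200 Hz
    y = np.concatenate(labels)
    print(X.shape, y.shape)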
Be one of the real cyber punks inventing electronic appendages. Let's help people who really need it. There's a lot of work and cool stuff ahead =)
--- Original source retains full ownership of the source dataset ---
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Drone Gesture Control Dataset is an object detection dataset that mimics DJI's air gesture capability. This dataset consists of hand and body gesture commands that you can use to command your drone to 'take-off', 'land', or 'follow'.
Drone Control demo: https://i.imgur.com/8hFYvsi.gif
The model for this dataset has been trained on Roboflow (see the Dataset tab), with exports to the OpenCV AI Kit, which is running on the drone in this example.
One could also build a MobileNet SSD model using the Roboflow platform and deploy it to the OpenCV AI Kit. Watch the full tutorial here: https://augmentedstartups.info/AI-Drone-Tutorial
Use the fork button to copy this dataset to your own Roboflow account and export it with new preprocessing settings, or additional augmentations to make your model generalize better.
We are at the forefront of Artificial Intelligence in computer vision. We embark on fun and innovative projects in this field and create videos and courses so that everyone can be an expert in this field. Our vision is to create a world full of inventors that can turn their dreams into reality.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Key Point Skeleton Dataset is intended for applications in visual entertainment and augmented/virtual reality (AR/VR), featuring a collection of indoor images with a high resolution of 3024 x 4032 pixels. This dataset focuses on annotating 21 key points of the hand skeleton, capturing specific single-hand or two-hand poses such as forming a heart shape, placing a hand on the cheek, stretching, and more.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Key Point Skeleton Dataset is designed for applications in visual entertainment and augmented/virtual reality (AR/VR), featuring a collection of indoor images with a high resolution of 3024 x 4032 pixels. This dataset focuses on labeling the 21 key points of the hand skeleton, capturing specific single-hand or two-hand poses such as forming a heart shape, placing a hand on the cheek, stretching, and more.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Asian Style Gestures Dataset was created for the visual entertainment industry and contains a collection of internet-collected images at resolutions between 530 x 360 and 2973 x 3968 pixels. The dataset specializes in annotations of hands showing Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and labels for precise identification.