38 datasets found
  1. Hand Gestures Dataset

    • figshare.com
    zip
    Updated Oct 27, 2023
    Cite
    Lama Ahmed (2023). Hand Gestures Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.24449197.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 27, 2023
    Dataset provided by
    figshare
    Authors
    Lama Ahmed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Hand Gestures Dataset for Sign Language Recognition (SLR), composed of six hand gestures based on American Sign Language (ASL). The dataset is divided into six categories: "Hello", "Bye", "Yes", "No", "Perfect", and "Thank You". Each category contains 400 images; each image is preprocessed using the hand tracking module developed at https://www.computervision.zone/projects and saved with the hand skeleton landmarks drawn on the image.
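    The folder-per-class layout described above (six gesture folders, 400 images each) can be indexed in a few lines of Python. A minimal sketch; the root path and the .jpg extension are assumptions about a local copy, not part of the dataset's documentation:

```python
from pathlib import Path

# The six class names come from the dataset description; the folder
# layout (one directory per class) is assumed.
CLASSES = ["Hello", "Bye", "Yes", "No", "Perfect", "Thank You"]

def index_dataset(root: Path) -> dict:
    """Map each gesture class to the sorted image files in its folder."""
    index = {}
    for cls in CLASSES:
        folder = root / cls
        index[cls] = sorted(folder.glob("*.jpg")) if folder.is_dir() else []
    return index
```

    For a complete copy, `index_dataset(Path("hand_gestures_dataset"))` should report 400 files for each of the six classes.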

  2. Hand Gestures Dataset

    • universe.roboflow.com
    zip
    Updated May 7, 2023
    Cite
    Roboflow 100 (2023). Hand Gestures Dataset [Dataset]. https://universe.roboflow.com/roboflow-100/hand-gestures-jps7z/dataset/1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 7, 2023
    Dataset provided by
    Roboflow
    Authors
    Roboflow 100
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Hand Gestures Bounding Boxes
    Description

    This dataset was originally created by Pablo Ochoa, Antonio Luna, Eliezer Álvarez. To see the current project, which may have been updated since this version, please go here: https://universe.roboflow.com/hand-gestures-recognition/hand-gestures-dataset.

    This dataset is part of RF100, an Intel-sponsored initiative to create a new object detection benchmark for model generalizability.

    Access the RF100 Github repo: https://github.com/roboflow-ai/roboflow-100-benchmark

  3. Data from: Hand Gesture Recognition for Sign Language Translation

    • gts.ai
    json
    Updated Jun 19, 2024
    Cite
    GTS (2024). Hand Gesture Recognition for Sign Language Translation [Dataset]. https://gts.ai/case-study/page/9/
    Explore at:
    Available download formats: json
    Dataset updated
    Jun 19, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Unlock communication with hand gesture recognition: transforming sign language into text and speech, bridging the gap between the deaf and hearing communities.

  4. In-air Hand Gesture Signature Database (iHGS Database)

    • figshare.com
    Updated Apr 26, 2023
    Cite
    Wee How Khoh; Ying Han Pang; Hui Yen Yap (2023). In-air Hand Gesture Signature Database (iHGS Database) [Dataset]. http://doi.org/10.6084/m9.figshare.16643314.v1
    Explore at:
    Dataset updated
    Apr 26, 2023
    Dataset provided by
    figshare
    Authors
    Wee How Khoh; Ying Han Pang; Hui Yen Yap
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This iHGS database comprises two separate datasets: a genuine dataset and a skilled forgery dataset (folders named GENUINE and FORGERY). For the genuine dataset, each of 100 participants provided 10 genuine samples in each of two sessions (session 1 and session 2). Participants were allowed to reject a sample if they considered it incomplete or were not satisfied with their movement. A total of 2000 (10 × 2 × 100) samples were gathered for the genuine dataset.

    As for the skilled forgery dataset, it was collected with other participants acting as skilled forgers. To imitate a signature, each forger was given, at random, a signature pre-signed on a piece of paper by its genuine owner, and was allowed as much time as needed to learn it. Each forger then imitated the assigned signature 10 times. Initially, a total of 1000 skilled forgery signatures were collected; however, 20 samples from two participants (10 each) were corrupted by a hardware error, so 980 skilled forgery samples were finally obtained.

  5. EMG-EPN-612 Dataset

    • explore.openaire.eu
    • data.niaid.nih.gov
    • +1more
    Updated Sep 11, 2020
    + more versions
    Cite
    Marco E. Benalcazar; Lorena Barona; Leonardo Valdivieso; Xavier Aguas; Jonathan Zea (2020). EMG-EPN-612 Dataset [Dataset]. http://doi.org/10.5281/zenodo.4023305
    Explore at:
    Dataset updated
    Sep 11, 2020
    Authors
    Marco E. Benalcazar; Lorena Barona; Leonardo Valdivieso; Xavier Aguas; Jonathan Zea
    Description

    This dataset, called EMG-EPN-612, contains EMG signals from 612 people for benchmarking hand gesture recognition systems. It was created by the Artificial Intelligence and Computer Vision Research Lab at the Escuela Politécnica Nacional, Quito, Ecuador. The data were obtained by recording, with the Myo armband, EMG signals on the forearm while users performed five hand gestures: wave-in, wave-out, pinch, open, and fist. EMGs of the relaxed hand are also included. The dataset is divided into two groups of 306 people each: one group is for training or designing hand gesture recognition models, and the other is intended for testing the classification and recognition accuracy of those models. In each group, every person has 50 EMGs for each of the five gestures, plus 50 EMGs of the relaxed hand. More information about this dataset can be found at: https://laboratorio-ia.epn.edu.ec/en/resources/dataset/2020_emg_dataset_612

  6. Asian Style Gestures Dataset

    • hr.shaip.com
    • maadaa.ai
    • +9more
    json
    Updated Dec 25, 2024
    Cite
    Shaip (2024). Skup podataka o gestama u azijskom stilu [Dataset]. https://hr.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 25, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., utilizing bounding boxes and tags for precise identification.

  7. Asanyukta Kathak Mudra

    • kaggle.com
    Updated Aug 5, 2024
    + more versions
    Cite
    ANIRUDDHA KUMAR (2024). Asanyukta Kathak Mudra [Dataset]. http://doi.org/10.34740/kaggle/ds/5499681
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 5, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ANIRUDDHA KUMAR
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Kathak Mudra Gestures Dataset

    Overview

    I collected this dataset to train a model on Kathak mudras so that my little sister can practice her dance form by learning about the errors and other problems that arise in it. This dataset comprises images of various Asamyukta Hasta Mudras from Kathak, a classical Indian dance form. Created to support the development of AI models that recognize and analyze these gestures, the dataset is a valuable resource for cultural and technical communities.


    Dataset Summary

    • Total Images: 1560
      • Training Images: 1260
      • Validation Images: 179
      • Test Images: 130
    • Classes: 26 Asamyukta Hasta Mudras
      • Hamsapaksha (40 images)
      • Aral (37 images)
      • Ardhachandra (38 images)
      • Ardhpataka (36 images)
      • Bhramara (43 images)
      • Chandrakala (44 images)
      • Chatur (40 images)
      • Hansaasya (45 images)
      • Kangul (37 images)
      • Kapitth (43 images)
      • Kartarimukh (37 images)
      • Katak (36 images)
      • Mayur (36 images)
      • Mrighasheesh (43 images)
      • Mukul (37 images)
      • Mushti (38 images)
      • Padamkosh (38 images)
      • Pataka (36 images)
      • Sarpsheesh (38 images)
      • Shikhar (37 images)
      • Shuktund (37 images)
      • Sinhamukh (36 images)
      • Soochi (36 images)
      • Tamrachud (36 images)
      • Tripataka (40 images)
      • Trishool (37 images)

    Dataset Structure

    The dataset is structured into three main folders:

    • Train: Contains 1260 images used for training the model.
    • Valid: Contains 179 images used for validating the model during training.
    • Test: Contains 130 images used for testing the model's performance.

    Each image is annotated with the corresponding mudra class, making it easy to use with various machine learning frameworks.

    Applications and Use Cases

    • Gesture Recognition: Train machine learning models to recognize and classify different Kathak mudras.
    • Cultural Preservation: Use AI to preserve and document traditional dance forms.
    • Educational Tools: Develop tools to assist Kathak practitioners in learning and perfecting their gestures.
    • Art and Technology Integration: Explore the intersection of classical arts and modern technology.

    How to Use the Dataset

    1. Download the Dataset: Clone or download the dataset from Kaggle.
    2. Set Up Your Environment: Ensure you have the necessary dependencies installed.
    3. Train Your Model: Use the training images to train your model. Validation images can be used to tune hyperparameters and avoid overfitting.
    4. Test Your Model: Use the test images to evaluate your model's performance.
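    Before training, the split sizes listed above can be verified against a local copy of the folder structure. A minimal sketch; the root path and the .jpg extension are assumptions, only the Train/Valid/Test names and counts come from the dataset summary:

```python
from pathlib import Path

# Expected split sizes, taken from the dataset summary above.
EXPECTED = {"Train": 1260, "Valid": 179, "Test": 130}

def count_images(root: Path) -> dict:
    """Count .jpg images per split, searching class subfolders recursively."""
    counts = {}
    for split in EXPECTED:
        folder = root / split
        counts[split] = sum(1 for _ in folder.rglob("*.jpg")) if folder.is_dir() else 0
    return counts

def check_splits(root: Path) -> bool:
    """Return True when every split holds the documented number of images."""
    return count_images(root) == EXPECTED
```

    A mismatch between `count_images` and `EXPECTED` usually means an incomplete download rather than a labeling problem.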
  8. SignAlphaSet

    • data.mendeley.com
    Updated May 8, 2025
    Cite
    Bindu Garg (2025). SignAlphaSet [Dataset]. http://doi.org/10.17632/8fmvr9m98w.3
    Explore at:
    Dataset updated
    May 8, 2025
    Authors
    Bindu Garg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset comprises 26,000 images and 300 videos representing American Sign Language (ASL) hand gestures for the English alphabet (A-Z) plus five gestures: "Hello", "Thank You", "Sorry", "Yes", and "No". The static dataset is organized into 26 folders, each labeled with a corresponding letter and containing multiple image samples to ensure variability in hand positioning, lighting, and individual hand differences. The dynamic dataset contains 31 folders with 10 sample videos per gesture; each folder also contains the separate frames of each video in .jpg format. The images capture diverse hand gestures, making the dataset suitable for machine learning applications such as sign language recognition, computer vision, and deep-learning-based classification tasks. When evaluated with an LSTM-based model, the dataset achieved high accuracy in sign recognition, demonstrating its effectiveness for sequential gesture learning.
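    Because the dynamic gestures are stored as per-video frame folders and were evaluated with an LSTM, the per-frame feature vectors must be stacked into fixed-length sequences before they reach the model. A minimal sketch, assuming features have already been extracted per frame; the function name and the zero-padding scheme are illustrative, not part of the dataset:

```python
import numpy as np

def to_sequence_batch(videos, timesteps):
    """Stack per-video frame features into an LSTM-ready batch.

    Each element of `videos` is a (num_frames, num_features) array; the
    result is (num_videos, timesteps, num_features), zero-padded or
    truncated so every sequence has the same length.
    """
    num_features = videos[0].shape[1]
    batch = np.zeros((len(videos), timesteps, num_features), dtype=np.float32)
    for i, video in enumerate(videos):
        t = min(timesteps, video.shape[0])
        batch[i, :t] = video[:t]
    return batch
```

    The resulting 3-D array matches the (batch, timesteps, features) input shape that common LSTM layers expect.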

  9. 10,000 Image Caption Data of Gestures

    • nexdata.ai
    • m.nexdata.ai
    Updated May 20, 2024
    Cite
    Nexdata (2024). 10,000 Image Caption Data of Gestures [Dataset]. https://www.nexdata.ai/datasets/1287?source=Github
    Explore at:
    Dataset updated
    May 20, 2024
    Dataset provided by
    nexdata technology inc
    Authors
    Nexdata
    Variables measured
    Data size, Data format, Text length, Accuracy rate, Age distribution, Race distribution, Gender distribution, Collecting diversity, Description language, Collecting environment, and 1 more
    Description

    10,000 image caption data of gestures, collected mainly from young and middle-aged people. The collection environments include both indoor and outdoor scenes, across various seasons and collection angles. The description language is English, mainly describing hand characteristics such as hand movements, gestures, image acquisition angles, gender, and age.

  10. Data from: AN EMG DATASET FOR ARABIC SIGN LANGUAGE ALPHABET LETTERS AND...

    • data.mendeley.com
    Updated Nov 10, 2023
    + more versions
    Cite
    Amina Ben Haj Amor (2023). AN EMG DATASET FOR ARABIC SIGN LANGUAGE ALPHABET LETTERS AND NUMBERS [Dataset]. http://doi.org/10.17632/ft9bhdgybs.3
    Explore at:
    Dataset updated
    Nov 10, 2023
    Authors
    Amina Ben Haj Amor
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT: Sign languages are natural, gestural languages that use the visual channel to communicate. Deaf people develop them to overcome their inability to communicate orally. Sign language interpreters bridge the gap that deaf people face in society and provide them with an equal opportunity to thrive in all environments. However, deaf people often struggle to communicate on a daily basis, especially in public service spaces such as hospitals, post offices, and municipal buildings, and it is difficult to provide full-time interpreters in all public services and administrations. A tool for automatic sign language recognition is therefore essential to the autonomy of deaf people.

    Although surface electromyography (sEMG) is a promising technology for detecting hand gestures, related research in automatic SL recognition remains limited. To date, most works have focused on recognizing hand gestures from images, videos, or gloves. The works of BEN HAJ AMOR et al. on EMG signals have shown that these multichannel signals contain rich, detailed information that can be exploited, in particular for handshape recognition and prosthesis control. These successes represent a great step towards the recognition of gestures in sign language.

    We built a large database of EMG data, recorded while signing the 28 characters of the Arabic sign language alphabet. This provides a valuable resource for research into how the muscles involved in signing produce the handshapes needed to form the letters of the alphabet.

    Instructions: The data for this project is provided as zipped NumPy arrays (.npz files) with custom headers. To load these files, you will need the NumPy package installed.

    NumPy's load function allows straightforward loading of the datasets. The data is organized as follows: the data for each label (handshape) is stored in a separate folder, and each folder contains .npz files. An .npz file contains the data for one record (an 8×400 matrix).

    For more details, please refer to the paper.
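    The loading instructions above can be sketched with NumPy's `np.load`, which handles .npz archives directly. The folder path and the assumption that each archive holds a single array are illustrative; only the one-folder-per-handshape layout and the 8×400 record shape come from the description:

```python
import numpy as np
from pathlib import Path

def load_records(label_dir: Path):
    """Load every .npz record in one handshape folder.

    Each record is expected to hold one 8x400 matrix (8 EMG channels
    x 400 samples); we take the first array stored in each archive.
    """
    records = []
    for path in sorted(label_dir.glob("*.npz")):
        with np.load(path) as archive:
            records.append(archive[archive.files[0]])
    return records
```

    Stacking the returned list with `np.stack` then yields a (num_records, 8, 400) array ready for feature extraction.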

  11. Asian Style Gestures Dataset

    • so.shaip.com
    json
    Updated Dec 25, 2024
    Cite
    Shaip (2024). Xogta Tilmaamaha Habka Aasiya [Dataset]. https://so.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 25, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.

  12. ‘Classify gestures by reading muscle activity.’ analyzed by Analyst-2

    • analyst-2.ai
    Updated Nov 13, 2021
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com) (2021). ‘Classify gestures by reading muscle activity.’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-classify-gestures-by-reading-muscle-activity-b729/e3e7a925/?iid=003-150&v=presentation
    Explore at:
    Dataset updated
    Nov 13, 2021
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘Classify gestures by reading muscle activity.’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/kyr7plus/emg-4 on 12 November 2021.

    --- Dataset description provided by original source is as follows ---

    Context

    My friends and I are creating an open source prosthetic control system which would enable prosthetic devices to have multiple degrees of freedom. https://github.com/cyber-punk-me


    The system is built of several components. It connects a muscle activity (EMG, electromyography) sensor to a user's Android/Android Things app. The app collects data, then a server builds a TensorFlow model specifically for this user. After that, the model can be downloaded and executed on the device to control motors or other appendages.

    This dataset can be used to map user residual muscle gestures to certain actions of a prosthetic such as open/close hand or rotate wrist.

    For reference, please watch a video on this topic: Living with a mind-controlled robot arm

    Content

    Four classes of motion were recorded from the MYO armband with the help of our app, https://github.com/cyber-punk-me/nukleos. The MYO armband has 8 sensors placed on the skin surface, each measuring the electrical activity produced by the muscles beneath.

    Each dataset line has 8 consecutive readings of all 8 sensors, i.e. 64 columns of EMG data. The last column is the resulting gesture that was made while recording the data (classes 0-3). So each line has the following structure:

    [8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][8sensors][GESTURE_CLASS]
    

    Data was recorded at 200 Hz, which means that each line covers 8 * (1/200) s = 40 ms of record time.

    A classifier given these 64 numbers would predict a gesture class (0-3). The gesture classes were: rock = 0, scissors = 1, paper = 2, ok = 3. Rock, paper, and scissors gestures are like in the game of the same name, and the OK sign is the index finger touching the thumb with the rest of the fingers spread. Gestures were selected pretty much randomly.

    Each gesture was recorded 6 times for 20 seconds. Each time recording started with the gesture being already prepared and held. Recording stopped while the gesture was still being held. In total there is 120 seconds of each gesture being held in fixed position. All of them recorded from the same right forearm in a short timespan. Every recording of a certain gesture class was concatenated into a .csv file with a corresponding name (0-3).
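    The 65-column row format described above unpacks into a time-by-sensor window with plain NumPy. A minimal sketch; the function name is illustrative, while the 8 readings × 8 sensors layout, the 200 Hz rate, and the class codes come from the description:

```python
import numpy as np

def parse_row(row):
    """Split one 65-value CSV row into an (8, 8) EMG window and its class.

    The first 64 values are 8 consecutive readings of all 8 sensors; at
    200 Hz this window covers 8 * (1/200) s = 40 ms. The last value is
    the gesture class (0 = rock, 1 = scissors, 2 = paper, 3 = ok).
    """
    row = np.asarray(row, dtype=float)
    emg = row[:64].reshape(8, 8)   # rows: readings in time, cols: sensors
    return emg, int(row[64])
```

    Applied to every line of one of the class .csv files (e.g. via `np.loadtxt(..., delimiter=",")`), this yields the fixed-size windows a classifier would consume.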

    Inspiration

    Be one of the real cyber punks inventing electronic appendages. Let's help people who really need it. There's a lot of work and cool stuff ahead =)

    --- Original source retains full ownership of the source dataset ---

  13. Drone Control Dataset

    • universe.roboflow.com
    zip
    Updated Aug 11, 2021
    + more versions
    Cite
    Augmented Startups (2021). Drone Control Dataset [Dataset]. https://universe.roboflow.com/augmented-startups/drone-control-af8y8
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 11, 2021
    Dataset provided by
    Augmented AI
    Authors
    Augmented Startups
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Actions Bounding Boxes
    Description

    Overview

    The Drone Gesture Control Dataset is an object detection dataset that mimics DJI's air gesture capability. It consists of hand and body gesture commands with which you can command your drone to 'take-off', 'land', and 'follow'.

    Example Footage

    Example footage: https://i.imgur.com/8hFYvsi.gif

    Model Training and Inference

    The model for this dataset has been trained on Roboflow (via the Dataset tab), with exports to the OpenCV AI Kit, which is running on the drone in this example.

    One could also build a MobileNet SSD model on the Roboflow platform and deploy it to the OpenCV AI Kit. Watch the full tutorial here: https://augmentedstartups.info/AI-Drone-Tutorial

    Using this Dataset

    Use the fork button to copy this dataset to your own Roboflow account and export it with new preprocessing settings, or additional augmentations to make your model generalize better.

    About Augmented Startups

    We are at the forefront of Artificial Intelligence in computer vision. We embark on fun and innovative projects in this field and create videos and courses so that everyone can be an expert in this field. Our vision is to create a world full of inventors that can turn their dreams into reality.

  14. Asian Style Gestures Dataset

    • mg.shaip.com
    • de.shaip.com
    • +7more
    json
    Updated Dec 8, 2024
    Cite
    Shaip (2024). Asian Style Gestures Dataset [Dataset]. https://mg.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 8, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., utilizing bounding boxes and tags for precise identification.

  15. Asian Style Gestures Dataset

    • yo.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Eto Iṣagbesori Ara Asia [Dataset]. https://yo.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.

  16. Asian Style Gestures Dataset

    • sm.shaip.com
    json
    Updated Dec 5, 2024
    Cite
    Shaip (2024). Fa'amaumauga o Gaioiga a Asia [Dataset]. https://sm.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 5, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.

  17. Asian Style Gestures Dataset

    • zu.shaip.com
    json
    Updated Dec 24, 2024
    Cite
    Shaip (2024). Isethi yedatha yokuthinta kwesitayela sase-Asia [Dataset]. https://zu.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 24, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.

  18. Hand Key Point Skeleton Dataset

    • so.shaip.com
    json
    Updated Dec 25, 2024
    Cite
    Shaip (2024). Xogta Qalfoofka Farta Gacanta [Dataset]. https://so.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 25, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Hand Key Point Skeleton Dataset is designed for visual entertainment and augmented/virtual reality (AR/VR) applications, featuring collected indoor images with a high resolution of 3024 x 4032 pixels. This dataset focuses on labeling 21 key points of the hand skeleton, capturing specific one-hand or two-hand poses such as forming a heart shape, resting a hand on the cheek, stretching, and more.

  19. Hand Key Point Skeleton Dataset

    • yo.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Ọwọ Key Point Egungun Dataset [Dataset]. https://yo.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Hand Key Point Skeleton Dataset is designed for visual entertainment and augmented/virtual reality (AR/VR) applications, featuring collected indoor images with a high resolution of 3024 x 4032 pixels. This dataset focuses on labeling 21 key points of the hand skeleton, capturing specific one-hand or two-hand poses such as forming a heart shape, resting a hand on the cheek, stretching, and more.

  20. Asian Style Gestures Dataset

    • hu.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Ázsiai stílusú gesztusok adatkészlete [Dataset]. https://hu.shaip.com/offerings/gesture-pose-and-activity-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Asian Style Gestures Dataset is curated for the visual entertainment industry, featuring a collection of internet-collected images with resolutions ranging from 530 x 360 to 2973 x 3968 pixels. This dataset specializes in annotations of hands displaying Asian style gestures, such as nods, hearts, rock, OK, putting hands together, clasping hands, etc., using bounding boxes and tags for precise identification.
