1 dataset found
  1. American Sign Language dataset for semantic communications

    • zenodo.org
    • ieee-dataport.org
    zip
    Updated Jan 12, 2025
    Cite
    Vasileios Kouvakis; Lamprini Mitsiou; Stylianos E. Trevlakis; Alexandros-Apostolos A. Boulogeorgos; Theodoros Tsiftsis (2025). American Sign Language dataset for semantic communications [Dataset]. http://doi.org/10.21227/2c1z-8j21
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 12, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Vasileios Kouvakis; Lamprini Mitsiou; Stylianos E. Trevlakis; Alexandros-Apostolos A. Boulogeorgos; Theodoros Tsiftsis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    The dataset was developed as part of the NANCY project (https://nancy-project.eu/) to support computer vision tasks. It is specifically designed for sign language recognition, with a focus on representing joint and finger positions. The dataset comprises images of hands representing the American Sign Language (ASL) alphabet, except for the letters "J" and "Z," which involve motion and therefore cannot be captured by static images. A distinctive feature of the dataset is its color-coding: each finger is associated with a distinct color, which makes it easier to extract features and distinguish between fingers, offering a significant advantage over traditional grayscale datasets such as MNIST. The images are RGB, which supports more effective learning and high recognition performance even with a relatively modest amount of training data. Although RGB images introduce additional complexity, such as larger data representations and storage requirements, the gains in accuracy and feature extraction make them a valuable choice. The dataset is well suited to gesture recognition, sign language interpretation, and other tasks requiring detailed analysis of joint and finger positions. A minimal loading sketch is shown below.
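    The following sketch illustrates one way to iterate over the dataset after unpacking the zip. It assumes a hypothetical per-letter folder layout (e.g. data/asl/A/, data/asl/B/, ...) with PNG files; the actual archive from Zenodo or IEEE DataPort may be organized differently, so DATA_DIR and the glob pattern should be adjusted after inspecting it.

        from pathlib import Path

        from PIL import Image

        # Hypothetical layout: one folder per letter, e.g. data/asl/A/img_001.png.
        # Adjust DATA_DIR and the glob pattern to match the unpacked archive.
        DATA_DIR = Path("data/asl")

        # Static ASL alphabet: 24 classes. "J" and "Z" are excluded because they
        # involve motion and cannot be captured by a single static image.
        LETTERS = [c for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ" if c not in ("J", "Z")]

        def load_samples(data_dir: Path):
            """Yield (RGB image, letter label) pairs from a per-letter folder layout."""
            for letter in LETTERS:
                for img_path in sorted((data_dir / letter).glob("*.png")):
                    # Keep images in RGB: each finger carries a distinct color by
                    # design, so converting to grayscale would discard the color-coding.
                    yield Image.open(img_path).convert("RGB"), letter

        if __name__ == "__main__":
            image, label = next(load_samples(DATA_DIR))
            print(f"first sample: letter {label}, size {image.size}, mode {image.mode}")

    Keeping the loader as a generator avoids holding the full image set in memory; a training pipeline would typically wrap it (or the underlying folder layout) in the dataset abstraction of whichever framework is used.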

    The NANCY project has received funding from the Smart Networks and Services Joint Undertaking (SNS JU) under the European Union's Horizon Europe research and innovation programme under Grant Agreement No 101096456.

