Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The LSE-Health-UVigo dataset is a collection of 273 videos focused on health-related topics, presented in Spanish Sign Language (Lengua de Signos Española, LSE). The dataset offers comprehensive annotations and alignments for various linguistic elements within the videos.
The dataset was acquired in studio conditions with blue chroma-key, no shadow effects and uniform illumination, at 25 fps and FHD. The added value of the dataset is the rich and rigorous hand-made annotations. Expert interpreters and deaf people were in charge of annotating the dataset following strict criteria explained below. A previous version of this dataset, with fewer videos and annotations, was distributed for the 2022 Sign Spotting Challenge at ECCV. The description of the former dataset, LSE_eSaude_UVIGO (ECCV'22), can be found here, including the downloadable train/val/test splits for the two organized tracks (MSSL, multiple shot supervised learning, and OSLWL, one shot learning and weak labels). The results of the challenge, together with the description of the dataset, protocols and baseline models, as well as a discussion of the top-winning solutions and future directions on the topic, can be found in the ECCV 2022 paper.
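As a quick sanity check of these acquisition parameters, the sketch below reads one video with OpenCV and reports its frame rate and resolution; the file name is a placeholder, since no concrete file listing is given here.

import cv2

# Placeholder path: substitute any video file from the dataset.
video_path = "lse_health_uvigo_sample.mp4"

cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise IOError(f"Could not open {video_path}")

fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

# Expected values per the dataset description: 25 fps, FHD (1920x1080).
print(f"fps={fps:.2f}, resolution={width}x{height}")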
Researchers and practitioners in machine translation, linguistics, healthcare, and sign language interpretation may find this dataset valuable for:
LSE-Health-UVigo has been annotated with the ELAN program. Annotators used three Tiers:
The annotation criteria shared by all four annotators were as follows:
For Glosses and fingerspelled words:
For Translation: The general criterion for segmentation (performed by a professional interpreter also involved in the recordings) was to adapt the text in OL (oral language) to resemble the signed LSE (Spanish Sign Language) as closely as possible, and to segment complete phrases, or smaller particles when that yields more semantically coherent segments. Additionally, due to discursive and grammatical differences between OL and LSE, 8 specific types of annotations were defined and marked in brackets:
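As a minimal illustration of how these ELAN annotations can be consumed programmatically, the sketch below lists the tiers of an .eaf file and reads the annotations of one tier with the pympi-ling library; the file path and the tier name are assumptions for illustration, since the exact tier labels are not reproduced here.

import pympi  # pip install pympi-ling

# Placeholder path: substitute an .eaf annotation file from the dataset.
eaf = pympi.Elan.Eaf("example_annotation.eaf")

# Print the tier names defined by the annotators (three tiers, per the description).
print(eaf.get_tier_names())

# Assumed tier name for illustration; replace with an actual tier from the file.
tier = "Glosses"
for start_ms, end_ms, value in eaf.get_annotation_data_for_tier(tier):
    print(f"{start_ms / 1000:.2f}s - {end_ms / 1000:.2f}s: {value}")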
This dataset is a collaborative effort of the following research groups and entities:
Gratitude is extended to them for their contributions and support.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository lists the datasets used for developing the experiments of the paper titled: Snapture - A Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition (Open access) by Hassan Ali, Doreen Jirak and Stefan Wermter. The study was conducted in the Knowledge Technology (WTM) group at the University of Hamburg.
This dataset was recorded at the Knowledge Technology (WTM) group at the University of Hamburg and can be requested here. The dataset is public and was not collected as part of this study.
This dataset was recorded as part of the ChaLearn Looking at People Challenge and can be downloaded from here. The dataset is public and was not collected as part of this study. The attached montalbano_segments.csv file can be used to create gesture segmentations of the test subset of this dataset.
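The sketch below shows one way such a segmentation file could be consumed with pandas; the column names are assumptions for illustration only, since the actual schema of montalbano_segments.csv is not described here.

import pandas as pd

# Load the attached segmentation file distributed with this repository.
segments = pd.read_csv("montalbano_segments.csv")

# Inspect the actual schema before relying on specific columns.
print(segments.columns.tolist())
print(segments.head())

# Hypothetical columns (video id, start/end frame, gesture label) for grouping
# the test-subset segments per video; adjust to the real column names.
# for video_id, group in segments.groupby("video_id"):
#     for _, row in group.iterrows():
#         print(video_id, row["start_frame"], row["end_frame"], row["label"])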
This work was partially supported by the DFG under project CML (TRR 169), by the BMWK under project KI-SIGS, and by the EU under project TERAIS.
To cite our paper, you can copy the following into your .bib file:
@Article{Ali2023,
  author  = {Ali, Hassan and Jirak, Doreen and Wermter, Stefan},
  title   = {Snapture---a Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition},
  journal = {Cognitive Computation},
  year    = {2023},
  month   = {Jul},
  day     = {17},
  issn    = {1866-9964},
  doi     = {10.1007/s12559-023-10174-z},
  url     = {https://doi.org/10.1007/s12559-023-10174-z}
}