The 20BN-SOMETHING-SOMETHING dataset is a large collection of labeled video clips showing humans performing pre-defined basic actions with everyday objects. The dataset was created by a large number of crowd workers and enables machine learning models to develop a fine-grained understanding of basic actions that occur in the physical world. It contains 108,499 videos, with 86,017 in the training set, 11,522 in the validation set, and 10,960 in the test set, across 174 labels.
⚠️ Attention: This is the outdated V1 of the dataset. V2 is available here.
This is a subset of the full JESTER dataset, which is a large collection of densely labeled video clips that show humans performing pre-defined hand gestures in front of a laptop camera or webcam. The dataset was created by a large number of crowd workers and allows for training robust machine learning models to recognize human hand gestures. For the complete dataset, go here.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Jester Event-Based Gesture Dataset (Converted Subset)
Description
This dataset is a custom event-based version of a subset of the original JESTER hand gesture dataset. The JESTER dataset is a large-scale, densely labeled video dataset depicting humans performing predefined hand gestures in front of webcams or laptop cameras. It was originally created by crowd workers and is widely used for training robust machine learning models for gesture recognition.
Conversion… See the full description on the dataset page: https://huggingface.co/datasets/Ishara5/20bn-jester-event.
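If this converted subset is hosted as a standard Hugging Face dataset, it can presumably be loaded with the `datasets` library. A minimal sketch follows; the repository ID comes from the URL above, while the split name `"train"` is an assumption to verify on the dataset page:

```python
# A minimal sketch, assuming the repository at the URL above is a
# standard Hugging Face dataset. The split name "train" is an
# assumption; check the dataset page for the actual configuration.
from datasets import load_dataset

ds = load_dataset("Ishara5/20bn-jester-event", split="train")
print(ds)     # features and number of rows
print(ds[0])  # inspect the fields of the first example
```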
This dataset was created by SunilSatyaramPatel.
License: https://choosealicense.com/licenses/other/
The Something-Something dataset (version 2) is a collection of 220,847 labeled video clips of humans performing pre-defined, basic actions with everyday objects. It is designed to train machine learning models in fine-grained understanding of basic actions such as putting something into something, turning something upside down, and covering something with something.
The Jester Gesture Recognition dataset includes 148,092 labeled video clips of humans performing basic, pre-defined hand gestures in front of a laptop camera or webcam. It is designed for training machine learning models to recognize human hand gestures such as sliding two fingers down, swiping left or right, and drumming fingers.
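As a rough illustration of how such a clip-level dataset is typically consumed, here is a minimal PyTorch-style loader. The directory layout (one folder of JPEG frames per clip, named by video ID) and the semicolon-separated `video_id;label` annotation format are assumptions about the Jester distribution, not guarantees from this description:

```python
# A minimal sketch of a clip-level loader. ASSUMPTIONS: each clip is a
# folder of JPEG frames named by its video ID, and annotations are
# semicolon-separated "video_id;label" lines; verify against the actual
# Jester distribution before use.
import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class JesterClips(Dataset):
    def __init__(self, frames_root: str, annotation_csv: str):
        self.frames_root = Path(frames_root)
        with open(annotation_csv, newline="") as f:
            rows = list(csv.reader(f, delimiter=";"))
        # Build a stable label-to-index mapping from the annotation file.
        labels = sorted({label for _, label in rows})
        self.label_to_idx = {label: i for i, label in enumerate(labels)}
        self.samples = [(vid, self.label_to_idx[label]) for vid, label in rows]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        video_id, label = self.samples[idx]
        # Frames are read in filename order, which assumes zero-padded
        # or otherwise lexically sortable frame names.
        frame_paths = sorted((self.frames_root / video_id).glob("*.jpg"))
        frames = [Image.open(p).convert("RGB") for p in frame_paths]
        return frames, label
```

In practice one would add frame sampling and tensor transforms before batching, since clips vary in length.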