13 datasets found
  1. IEMOCAP (the Interactive Emotional Dyadic Motion Capture)

    • opendatalab.com
    zip
    Updated Apr 23, 2023
    Cite
    University of Southern California (2023). IEMOCAP(the Interactive Emotional Dyadic Motion Capture) [Dataset]. https://opendatalab.com/OpenDataLab/IEMOCAP_the_Interactive_Emotional_etc
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 23, 2023
    Dataset provided by
    University of Southern California
    License

    https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf

    Description

    The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal and multispeaker database collected at the SAIL lab at USC. It contains approximately 12 hours of audiovisual data, including video, speech, motion capture of the face, and text transcriptions. It consists of dyadic sessions in which actors perform improvisations or scripted scenarios specifically selected to elicit emotional expressions. The IEMOCAP database is annotated by multiple annotators with categorical labels, such as anger, happiness, sadness and neutrality, as well as dimensional labels such as valence, activation and dominance. The detailed motion-capture information, the interactive setting used to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases in the community for the study and modeling of multimodal and expressive human communication.

  2. A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations

    • figshare.com
    • data.niaid.nih.gov
    pdf
    Updated Jun 2, 2023
    Cite
    Guy Gaziv; Lior Noy; Yuvalal Liron; Uri Alon (2023). A reduced-dimensionality approach to uncovering dyadic modes of body motion in conversations [Dataset]. http://doi.org/10.1371/journal.pone.0170786
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Guy Gaziv; Lior Noy; Yuvalal Liron; Uri Alon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Face-to-face conversations are central to human communication and a fascinating example of joint action. Beyond verbal content, one of the primary ways in which information is conveyed in conversations is body language. Body motion in natural conversations has been difficult to study precisely due to the large number of coordinates at play. There is a need for fresh approaches to analyze and understand the data, in order to ask whether dyads show basic building blocks of coupled motion. Here we present a method for analyzing body motion during joint action using depth-sensing cameras, and use it to analyze a sample of scientific conversations. Our method consists of three steps: defining modes of body motion of individual participants, defining dyadic modes made of combinations of these individual modes, and lastly defining motion motifs as dyadic modes that occur significantly more often than expected given the single-person motion statistics. As a proof-of-concept, we analyze the motion of 12 dyads of scientists measured using two Microsoft Kinect cameras. In our sample, we find that out of many possible modes, only two were motion motifs: synchronized parallel torso motion in which the participants swayed from side to side in sync, and still segments where neither person moved. We find evidence of dyad individuality in the use of motion modes. For a randomly selected subset of 5 dyads, this individuality was maintained for at least 6 months. The present approach to simplify complex motion data and to define motion motifs may be used to understand other joint tasks and interactions. The analysis tools developed here and the motion dataset are publicly available.
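
    A minimal sketch of the motif test described above, assuming each participant's motion has already been reduced to one discrete mode label per time window; the mode labels, window count and permutation scheme here are illustrative and not the authors' exact pipeline:

        import numpy as np
        from collections import Counter

        def motif_pvalues(modes_a, modes_b, n_perm=1000, seed=None):
            """Permutation test: does a dyadic mode (a pair of individual modes) occur
            more often than expected from the single-person mode statistics alone?"""
            rng = np.random.default_rng(seed)
            modes_a, modes_b = np.asarray(modes_a), np.asarray(modes_b)
            observed = Counter(zip(modes_a, modes_b))
            exceed = Counter()
            for _ in range(n_perm):
                null_counts = Counter(zip(modes_a, rng.permutation(modes_b)))
                for pair, obs in observed.items():
                    if null_counts.get(pair, 0) >= obs:
                        exceed[pair] += 1
            # small p-values mark candidate "motion motifs"
            return {pair: (exceed[pair] + 1) / (n_perm + 1) for pair in observed}

        # toy usage with hypothetical mode labels per time window
        a = ["sway", "still", "sway", "still", "lean", "sway"]
        b = ["sway", "still", "sway", "lean", "still", "sway"]
        print(motif_pvalues(a, b, n_perm=200, seed=0))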

  3. Motion Plan Changes Predictably in Dyadic Reaching

    • plos.figshare.com
    ai
    Updated Jun 4, 2023
    Cite
    Atsushi Takagi; Niek Beckers; Etienne Burdet (2023). Motion Plan Changes Predictably in Dyadic Reaching [Dataset]. http://doi.org/10.1371/journal.pone.0167314
    Explore at:
    Available download formats: ai
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Atsushi Takagi; Niek Beckers; Etienne Burdet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Parents can effortlessly assist their child to walk, but the mechanism behind such physical coordination is still unknown. Studies have suggested that physical coordination is achieved by interacting humans who update their movement or motion plan in response to the partner’s behaviour. Here, we tested rigidly coupled pairs in a joint reaching task to observe such changes in the partners’ motion plans. However, the joint reaching movements were surprisingly consistent across different trials. A computational model that we developed demonstrated that the two partners had a distinct motion plan, which did not change with time. These results suggest that rigidly coupled pairs accomplish joint reaching movements by relying on a pre-programmed motion plan that is independent of the partner’s behaviour.

  4. Data from: A relative-motion method for parsing spatio-temporal behaviour of dyads using GPS relocation data

    • data.niaid.nih.gov
    Updated Oct 7, 2021
    Cite
    Ludovica Luisa Vissat (2021). A relative-motion method for parsing spatio-temporal behaviour of dyads using GPS relocation data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4139756
    Explore at:
    Dataset updated
    Oct 7, 2021
    Dataset provided by
    Ludovica Luisa Vissat
    Jason K. Blackburn
    Wayne M. Getz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this paper, we introduce a novel method for classifying and computing the frequencies of movement modes of intra- and interspecific dyads, focusing in particular on distance-mediated approach, retreat, following and side-by-side movement modes. Besides distance, other factors such as time of day, season, sex, or age can be included in the analysis to assess if they cause frequencies of movement modes to deviate from random. By subdividing the data according to selected factors, our method allows us to identify those responsible for (or correlated with) significant differences in the behaviour of dyadic pairs. We demonstrate and validate our method using both simulated and empirical data. Our simulated data were obtained from a relative-motion, biased random-walk (RM-BRW) model with attraction and repulsion components. Our empirical data were GPS relocation data collected from African elephants in Etosha National Park, Namibia. The simulated data were primarily used to validate our method, while the empirical data were used to illustrate the types of behavioural assessment that our methodology reveals. Our method facilitates automated, observer-bias-free analysis of the locomotive interactions of dyads using GPS relocation data, which are becoming increasingly ubiquitous as telemetry and related technologies improve. It should open up a whole new vista of behavioural-interaction analyses to movement and behavioural ecologists.
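
    A rough illustration of the distance-mediated classification described above, assuming two time-aligned GPS tracks already projected to metric coordinates; the thresholds and mode names are placeholders rather than the authors' calibrated values, and the "following" mode (which also needs heading information) is omitted:

        import numpy as np

        def classify_dyad_modes(track_a, track_b, near_dist=25.0, change_tol=1.0, move_tol=0.5):
            """Label each time step of a dyad as approach / retreat / side-by-side / neither,
            based on how the inter-individual distance changes (illustrative only).
            track_a, track_b: (T, 2) arrays of x/y positions in metres at shared timestamps."""
            a, b = np.asarray(track_a, float), np.asarray(track_b, float)
            dist = np.linalg.norm(a - b, axis=1)
            d_dist = np.diff(dist)                        # change in separation per step
            moving = ((np.linalg.norm(np.diff(a, axis=0), axis=1) > move_tol)
                      & (np.linalg.norm(np.diff(b, axis=0), axis=1) > move_tol))

            modes = np.full(d_dist.shape, "neither", dtype=object)
            modes[d_dist < -change_tol] = "approach"
            modes[d_dist > change_tol] = "retreat"
            modes[moving & (np.abs(d_dist) <= change_tol) & (dist[1:] <= near_dist)] = "side-by-side"
            return modes

        # toy usage: B closes in on A, then the two move in parallel
        a = np.column_stack([np.arange(12.0) * 2, np.zeros(12)])
        b = np.column_stack([np.arange(12.0) * 2, np.r_[np.linspace(30, 10, 6), np.full(6, 10.0)]])
        print(classify_dyad_modes(a, b))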

  5. Validation of Markerless MoCap in dyadic team sports tasks

    • dataverse.nl
    txt, zip
    Updated Jul 3, 2025
    Cite
    Alexander Oonk; Matthias Kemp; Koen Lemmink; Tom Buurke (2025). Validation of Markerless MoCap in dyadic team sports tasks [Dataset]. http://doi.org/10.34894/LZPY3B
    Explore at:
    Available download formats: txt (1991), zip (2244845006), zip (263080252), zip (272850086)
    Dataset updated
    Jul 3, 2025
    Dataset provided by
    DataverseNL
    Authors
    Alexander Oonk; Matthias Kemp; Koen Lemmink; Tom Buurke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The current project holds raw and processed data from 12 couples performing multi-person and single-person sport-related movements. The movements were captured with 8 Sony cameras used with Theia3D (a markerless motion capture system) and 16 Vicon cameras (a marker-based motion capture system). The dataset is described in more detail in the article "Examining the Concurrent Validity of Markerless Motion Capture in Dual-Athlete Team Sports Movements" by Oonk et al. (2025).
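
    For readers who want a feel for what a concurrent-validity comparison on such paired recordings can look like, here is a minimal sketch computing RMSE and Pearson correlation for one joint's trajectory from the two systems; the array shapes and the absence of resampling, gap-filling and filtering are simplifying assumptions, not the processing used by Oonk et al.:

        import numpy as np

        def agreement(markerless_xyz, markerbased_xyz):
            """RMSE (in the data's units) and mean per-axis Pearson r between two
            time-synchronised (T, 3) joint-position trajectories (illustrative only)."""
            a = np.asarray(markerless_xyz, float)
            b = np.asarray(markerbased_xyz, float)
            rmse = np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
            r = np.mean([np.corrcoef(a[:, k], b[:, k])[0, 1] for k in range(a.shape[1])])
            return rmse, r

        # toy usage with synthetic, slightly noisy trajectories
        t = np.linspace(0, 2 * np.pi, 200)
        reference = np.column_stack([np.sin(t), np.cos(t), 0.1 * t])
        estimate = reference + np.random.default_rng(0).normal(0, 0.01, reference.shape)
        print(agreement(estimate, reference))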

  6. Dyadic output for the “Blau” object generated from BSANet.rda

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Michael Genkin; Cheng Wang; George Berry; Matthew E. Brashears (2023). Dyadic output for the “Blau” object generated from BSANet.rda. [Dataset]. http://doi.org/10.1371/journal.pone.0204990.t010
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Michael Genkin; Cheng Wang; George Berry; Matthew E. Brashears
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dyadic output for the “Blau” object generated from BSANet.rda.

  7. transcribed-VAD_labeled-emotive-audio

    • kaggle.com
    Updated Jul 6, 2025
    Cite
    Stephan Schweitzer (2025). transcribed-VAD_labeled-emotive-audio [Dataset]. https://www.kaggle.com/datasets/stephanschweitzer/transcribed-vad-labeled-emotive-audio/discussion
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 6, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Stephan Schweitzer
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Processing Pipeline

    • Transcription: Generated using the OpenAI Whisper speech recognition model (see the sketch after this list)
    • VAD Scoring: Voice Activity Detection and emotion dimensionality (Valence, Arousal, Dominance) computed using the audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim model.

    • Metadata: Includes audio duration, transcript length, character counts, and VAD scores
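
    As a rough sketch of the transcription step, assuming the open-source openai-whisper package (the model size and audio file name below are placeholder choices):

        # pip install openai-whisper
        import whisper

        model = whisper.load_model("base")                   # placeholder model size
        result = model.transcribe("amused_1-15_0001.wav")    # hypothetical input clip
        print(result["language"])
        print(result["text"])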

    Data Format

    An example record from the metadata file is as follows:

        {
          "file_id": "emovdb_amused_1-15_0001_1933",
          "original_path": "..\data_collection\tts_data\processed\emovdb\amused_1-15_0001.wav",
          "dataset": "emovdb",
          "status": "success",
          "error": null,
          "processed_audio_path": "None",
          "transcript_path": "processed_datasets\transcripts\emovdb_amused_1-15_0001_1933.json",
          "vad_path": "processed_datasets\vad_scores\emovdb_amused_1-15_0001_1933.json",
          "text": "Author of the Danger Trail, Phillips Deals, etc.",
          "language": "en",
          "audio_duration": 4.384671201814059,
          "text_length": 48,
          "valence": 0.7305971384048462,
          "arousal": 0.704948365688324,
          "dominance": 0.6887099146842957,
          "vad_confidence": 0.9830486676764241
        }
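
    A small sketch of consuming such records, assuming they are stored as a JSON array in a single metadata file (the file name "metadata.json" and the confidence threshold are assumptions):

        import json

        with open("metadata.json", encoding="utf-8") as f:
            records = json.load(f)

        # keep successfully processed clips with reasonably confident VAD estimates
        kept = [r for r in records
                if r["status"] == "success" and r.get("vad_confidence", 0.0) >= 0.8]

        mean_valence = sum(r["valence"] for r in kept) / len(kept)
        print(f"{len(kept)} clips, mean valence {mean_valence:.3f}")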

    License and Attribution

    This work is licensed under CC BY-NC-SA 4.0.

    Required Citations:

    • CREMA-D: Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4), 377-390.

    • EmoV-DB: Adigwe, A., Tits, N., Haddad, K. E., Ostadabbas, S., & Dutoit, T. (2018). The emotional voices database: Towards controlling the emotion dimension in voice generation systems. arXiv preprint arXiv:1806.09514.

    • IEMOCAP: Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., Chang, J. N., Lee, S., & Narayanan, S. S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359.

    • RAVDESS: Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.

    Limitations

    • Non-commercial use only
    • Subject to the licensing terms of source datasets
  8. GENEA Challenge 2023 Dataset Files

    • data.niaid.nih.gov
    Updated Jul 31, 2023
    Cite
    Youngwoo Yoon (2023). GENEA Challenge 2023 Dataset Files [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8199132
    Explore at:
    Dataset updated
    Jul 31, 2023
    Dataset provided by
    Taras Kucherenko
    Rajmund Nagy
    Youngwoo Yoon
    Description

    This Zenodo repository contains the main dataset for the GENEA Challenge 2023, which is based on the Talking With Hands 16.2M dataset.

    Notation:

    Please take note of the following nomenclature when reading this document:

    main agent refers to the speaker in the dyadic interaction for which the systems generated motions.

    interlocutor refers to the speaker in front of the main agent.

    Contents:

    The “genea2023_trn" and "genea2023_val" zip files contain audio files (in WAV format), time-aligned transcriptions (in TSV format), and motion files (in BVH format) for the training and validation datasets, respectively.

    The "genea2023_test" zip file contains audio files (in WAV format) and transcriptions (in TSV format) for the test set, but no motion. The corresponding test motion is available at:

    https://zenodo.org/record/8146027

    Each zip file also contains a "metadata.csv" file that contains information for all files regarding the speaker ID and whether or not the motion files contain finger motion.

    Note that the speech audio in the data sometimes has been replaced by silence for the purpose of anonymisation.

    In the test set, files with indices from 0 to 40 correspond to "matched" interactions (the core test set), where main agent and interlocutor data come from the same conversation, whilst file indices from 41 to 69 correspond to "mismatched" interactions (the extended test set), where main agent and interlocutor data come from different conversations.

    Folder structure:

    main-agent/ (main agent): Encapsulates BVH, TSV, WAV data subfolders for the main agent.

    interloctr/ (interlocutor): Encapsulates BVH, TSV, WAV data subfolders for the interlocutor.

    bvh/ (motion): Time-aligned 3D full-body motion-capture data in BVH format from a speaking and gesticulating actor. Each file is a single person, but each data sample contains files for both the main agent and the interlocutor.

    wav/ (audio): Recorded audio data in WAV format from a speaking and gesticulating actor with a close-talking microphone. Parts of the audio recordings have been muted to omit personally identifiable information.

    tsv/ (text): Word-level time-aligned text transcriptions of the above audio recordings in TSV format (tab-separated values). For privacy reasons, the transcriptions do not include references to personally identifiable information, similar to the audio files.
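
    To make the layout concrete, the following hedged sketch pairs main-agent and interlocutor files per sample; it assumes the file stems match across the bvh/, wav/ and tsv/ subfolders, which should be verified against the actual archives:

        from pathlib import Path

        root = Path("genea2023_trn")  # unzipped training split; the path is an assumption

        def files_for(agent: str, stem: str) -> dict:
            """Collect the BVH/WAV/TSV paths belonging to one sample for one agent."""
            return {ext: root / agent / ext / f"{stem}.{ext}" for ext in ("bvh", "wav", "tsv")}

        for bvh_file in sorted((root / "main-agent" / "bvh").glob("*.bvh")):
            stem = bvh_file.stem
            sample = {
                "main-agent": files_for("main-agent", stem),
                "interloctr": files_for("interloctr", stem),  # folder spelling as in the dataset
            }
            # report whether the interlocutor side of this sample is complete on disk
            print(stem, all(p.exists() for p in sample["interloctr"].values()))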

    Data processing scripts:

    We provide a number of optional scripts for encoding and processing the challenge data:

    Audio: Scripts for extracting basic audio features, such as spectrograms, prosodic features, and mel-frequency cepstral coefficients (MFCCs) can be found at this link.

    Text: A script to encode text transcriptions to word vectors using FastText is available here: tsv2wordvectors.py

    Motion: If you wish to encode the joint angles from the BVH files to and from an exponential map representation, you can use scripts by Simon Alexanderson based on the PyMo library, which are available here:

    bvh2features.py

    features2bvh.py
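
    The snippet below is only a guess at what such a conversion can look like when using the PyMo library directly (sklearn-style API from github.com/omimo/PyMo); the actual bvh2features.py script may differ:

        # assumes the PyMo package is importable; this is not the challenge's own script
        from pymo.parsers import BVHParser
        from pymo.preprocessing import MocapParameterizer

        mocap = BVHParser().parse("main-agent/bvh/example.bvh")   # hypothetical file path

        # re-parameterise joint rotations as exponential maps
        expmap = MocapParameterizer("expmap").fit_transform([mocap])[0]
        print(expmap.values.shape)   # frames x channels table of the converted motion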

    Attribution:

    If you use this material, please cite our latest paper on the GENEA Challenge 2023. At the time of writing (2023-07-25) this is our ACM ICMI 2023 paper:

    Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI ’23). ACM.

    Also, please cite the paper about the original dataset from Meta Research:

    Gilwoo Lee, Zhiwei Deng, Shugao Ma, Takaaki Shiratori, Siddhartha S. Srinivasa, and Yaser Sheikh. 2019. Talking With Hands 16.2M: A large-scale dataset of synchronized body-finger motion and audio for conversational motion analysis and synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV ’19). IEEE, 763–772.

    The motion and audio files are based on the Talking With Hands 16.2M dataset at https://github.com/facebookresearch/TalkingWithHands32M/. The material is available under a CC BY NC 4.0 Attribution-NonCommercial 4.0 International license, with the text provided in LICENSE.txt.

    To find more GENEA Challenge 2023 material on the web, please see:

    https://genea-workshop.github.io/2023/challenge/

    If you have any questions or comments, please contact:

    The GENEA Challenge organisers

  9. GENEA Challenge 2023 submitted BVH files

    • zenodo.org
    • data.niaid.nih.gov
    csv, txt, zip
    Updated Jul 25, 2023
    Cite
    Taras Kucherenko; Rajmund Nagy; Youngwoo Yoon (2023). GENEA Challenge 2023 submitted BVH files [Dataset]. http://doi.org/10.5281/zenodo.8146028
    Explore at:
    Available download formats: txt, csv, zip
    Dataset updated
    Jul 25, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Taras Kucherenko; Rajmund Nagy; Youngwoo Yoon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Zenodo repository contains 3D motion in the Biovision Hierarchy (BVH) format for all test-set motion submitted by teams participating in the GENEA Challenge 2023.

    Contents:

    The "genea_2023_test_bvh" zip file corresponds to the BVH files themselves (2GB).

    The repository also contains the time stamps for cutting out the evaluation segments we used (matched and mismatched) for the two kinds of studies we conducted (monadic and dyadic) in the files "monadic_segment_selection_info.csv" and "dyadic_segment_selection_info.csv".

    Attribution:

    If you use this material, please cite our latest paper on the GENEA Challenge 2023. At the time of writing (2023-07-11) this is our ACM ICMI 2023 paper:

    Taras Kucherenko, Rajmund Nagy, Youngwoo Yoon, Jieyeon Woo, Teodor Nikolov, Mihail Tsakov, and Gustav Eje Henter. 2023. The GENEA Challenge 2023: A large-scale evaluation of gesture generation models in monadic and dyadic settings. In Proceedings of the ACM International Conference on Multimodal Interaction (ICMI ’23). ACM.

    Condition NA in the data contains motion from the Talking With Hands 16.2M dataset at https://github.com/facebookresearch/TalkingWithHands32M/. These are licensed under a CC BY NC 4.0 international license. The remaining material is available under a CC BY 4.0 international license, with the text provided in LICENSE.txt.

    To find more GENEA Challenge 2023 material on the web, please see:

    * https://genea-workshop.github.io/2023/challenge/

    If you have any questions or comments, please contact:

    * The GENEA Challenge organisers

  10. GENEA Challenge 2020

    • service.tib.eu
    Updated Jan 3, 2025
    Cite
    (2025). GENEA Challenge 2020 [Dataset]. https://service.tib.eu/ldmservice/dataset/genea-challenge-2020
    Explore at:
    Dataset updated
    Jan 3, 2025
    Description

    The GENEA Challenge 2020 dataset is a large-scale open challenge for data-driven automatic co-speech gesture generation. It consists of 50 hours of full-body motion capture, including fingers, of different persons engaging in a dyadic conversation.

  11. Table_3_Movement Synchrony Over Time: What Is in the Trajectory of Dyadic Interactions in Workplace Coaching?.XLSX

    • frontiersin.figshare.com
    xlsx
    Updated Jun 4, 2023
    Cite
    Tünde Erdös; Paul Jansen (2023). Table_3_Movement Synchrony Over Time: What Is in the Trajectory of Dyadic Interactions in Workplace Coaching?.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2022.845394.s003
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Tünde Erdös; Paul Jansen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Coaching is increasingly viewed as a dyadic exchange of verbal and non-verbal interactions driving clients' progress. Yet, little is known about how the trajectory of dyadic interactions plays out in workplace coaching.
    Method: This paper provides a multiple-step exploratory investigation of movement synchrony (MS) of dyads in workplace coaching. We analyzed a publicly available dataset of 173 video-taped dyads. Specifically, we averaged MS per session/dyad to explore the temporal patterns of MS across (a) the cluster of dyads that completed 10 sessions, and (b) a set of 173 dyadic interactions with a varied number of sessions. Additionally, we linked that pattern to several demographic predictors. The results indicate a differential downward trend of MS.
    Results: Demographic factors do not predict best-fitting MS curve types, and only client age and coach experience show a small but significant correlation.
    Discussion: We provide contextualized interpretations of these findings and propose conceptual considerations and recommendations for future coaching process research and practice.
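
    A toy sketch of the session-averaging and trend-fitting idea; the synchrony values, session count and the linear-versus-quadratic comparison below are illustrative and are not the study's estimation procedure:

        import numpy as np

        # hypothetical per-window movement-synchrony values for one dyad over 10 sessions
        rng = np.random.default_rng(1)
        sessions = [0.55 - 0.02 * s + rng.normal(0, 0.05, 120) for s in range(10)]

        # step 1: average MS per session
        ms_per_session = np.array([w.mean() for w in sessions])
        x = np.arange(1, len(sessions) + 1)

        # step 2: compare simple curve types for the across-session trajectory
        for degree, label in [(1, "linear"), (2, "quadratic")]:
            coeffs = np.polyfit(x, ms_per_session, degree)
            sse = float(np.sum((ms_per_session - np.polyval(coeffs, x)) ** 2))
            print(f"{label}: coefficients {np.round(coeffs, 4)}, SSE {sse:.5f}")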

  12. Let’s decide together: Differences between individual and joint delay discounting

    • plos.figshare.com
    pdf
    Updated Jun 3, 2023
    Cite
    Diana Schwenke; Maja Dshemuchadse; Cordula Vesper; Martin Georg Bleichner; Stefan Scherbaum (2023). Let’s decide together: Differences between individual and joint delay discounting [Dataset]. http://doi.org/10.1371/journal.pone.0176003
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Diana Schwenke; Maja Dshemuchadse; Cordula Vesper; Martin Georg Bleichner; Stefan Scherbaum
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study addressed the question of whether social collaboration has an effect on delay discounting, the tendency to prefer sooner but smaller over later but larger rewards. We applied a novel paradigm in which participants made choices between two gains in an individual and in a dyadic decision-making condition. We observed how participants reached mutual consent via joystick movement coordination and found lower discounting and higher decision efficiency in the dyadic condition. To establish the mechanism underlying this dyadic variation, we further tested whether these differences emerge from social facilitation or inner-group interchange.
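
    As background on the construct itself, a common (though not necessarily the authors') way to quantify delay discounting is the hyperbolic model V = A / (1 + kD), where a larger k means steeper discounting; a minimal sketch:

        def discounted_value(amount: float, delay: float, k: float) -> float:
            """Hyperbolic discounting: subjective value of `amount` delivered after `delay`.
            Larger k means steeper discounting, so sooner-but-smaller rewards win more often."""
            return amount / (1.0 + k * delay)

        # toy choice: 50 now versus 100 in 30 days, for a steep and a shallow discounter
        for k in (0.1, 0.01):
            later = discounted_value(100, 30, k)
            choice = "sooner-smaller" if 50 > later else "later-larger"
            print(f"k={k}: delayed option worth {later:.1f} now -> chooses {choice}")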

  13. Reliability coefficients for temperature values and number of pixels for each ROI

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Anastasia Topalidou; Garik Markarian; Soo Downe (2023). Reliability coefficients for temperature values and number of pixels for each ROI. [Dataset]. http://doi.org/10.1371/journal.pone.0226755.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Anastasia Topalidou; Garik Markarian; Soo Downe
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ICC (intraclass correlation coefficient), SEM (standard error of measurement) and MDC (minimal detectable change) for the two freehand-polygon image analyses undertaken using the ThermaCam Researcher Professional 2.10 software.
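
    For context, the SEM and MDC follow directly from the ICC and the sample standard deviation via the standard formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM; a minimal sketch with made-up numbers:

        import math

        def sem_and_mdc95(sd: float, icc: float) -> tuple[float, float]:
            """Standard error of measurement and 95% minimal detectable change
            derived from a sample SD and an intraclass correlation coefficient."""
            sem = sd * math.sqrt(1.0 - icc)
            mdc95 = 1.96 * math.sqrt(2.0) * sem
            return sem, mdc95

        # made-up example values, not taken from this dataset
        sem, mdc = sem_and_mdc95(sd=0.8, icc=0.92)
        print(f"SEM = {sem:.3f} degrees C, MDC95 = {mdc:.3f} degrees C")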

