44 datasets found
  1. IEMOCAP

    • huggingface.co
    Updated Aug 9, 2024
    Cite
    AbstractTTS group (2024). IEMOCAP [Dataset]. https://huggingface.co/datasets/AbstractTTS/IEMOCAP
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Aug 9, 2024
    Dataset authored and provided by
    AbstractTTS group
    Description

    The AbstractTTS/IEMOCAP dataset is hosted on Hugging Face and was contributed by the HF Datasets community.
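
    Datasets hosted on Hugging Face, such as this one, can usually be pulled with the `datasets` library; the sketch below shows the general pattern. The repository ID comes from the citation above, but the split names, column names, and any access gating are assumptions that should be checked against the dataset card.

    # Minimal sketch: loading a Hugging Face-hosted IEMOCAP variant.
    # The repo ID is taken from the citation above; split and column names are
    # assumptions, and the repo may be gated (requiring `huggingface-cli login`).
    from datasets import load_dataset

    ds = load_dataset("AbstractTTS/IEMOCAP")

    print(ds)                        # available splits and their sizes
    split = next(iter(ds.values()))  # first split, whatever it is named
    print(split.features)            # e.g. audio, transcription, emotion label
    print(split[0])                  # inspect one example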

  2. IEMOCAP

    • huggingface.co
    Updated Jun 8, 2025
    Cite
    Aaryan Bhagat (2025). IEMOCAP [Dataset]. https://huggingface.co/datasets/Berzerker/IEMOCAP
    Explore at:
    Dataset updated
    Jun 8, 2025
    Authors
    Aaryan Bhagat
    Description

    Processed IEMOCAP dataset released by Dai, W., Zheng, D., Yu, F., Zhang, Y., & Hou, Y. (2025). A Novel Approach for Multimodal Emotion Recognition: Multimodal Semantic Information Fusion. arXiv preprint arXiv:2502.08573. https://arxiv.org/abs/2502.08573

  3. IEMOCAP_Text

    • huggingface.co
    Updated Mar 26, 2025
    Cite
    Zahra Dehghani (2025). IEMOCAP_Text [Dataset]. https://huggingface.co/datasets/Zahra99/IEMOCAP_Text
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Mar 26, 2025
    Authors
    Zahra Dehghani
    Description

    Dataset Card for "IEMOCAP_Text"

    This dataset is derived from the IEMOCAP dataset; for more information, see the IEMOCAP webpage. It covers the five most common classes: angry, happy, excitement, neutral, and sad. Following prior work in this field, the excitement and happy classes are merged. The dataset contains 5,531 utterances and is split by session.
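
    The class merge described above could be reproduced roughly as follows; this is only a sketch, and it assumes a string-valued column named "label", which may not match the actual schema of Zahra99/IEMOCAP_Text.

    # Sketch of the excitement/happy merge described in the card. Assumes a
    # string column named "label"; the real dataset may use a ClassLabel or a
    # different column name, so inspect the features of each split first.
    from datasets import load_dataset

    ds = load_dataset("Zahra99/IEMOCAP_Text")

    def merge_excitement(example):
        if example["label"] == "excitement":
            example["label"] = "happy"
        return example

    ds = ds.map(merge_excitement)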

  4. Summary of previous research on video-based emotion recognition using...

    • plos.figshare.com
    xls
    Updated Jun 11, 2023
    Cite
    Nikodem Rybak; Daniel J. Angus (2023). Summary of previous research on video-based emotion recognition using IEMOCAP database. [Dataset]. http://doi.org/10.1371/journal.pone.0251186.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Nikodem Rybak; Daniel J. Angus
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of previous research on video-based emotion recognition using IEMOCAP database.

  5. Test splits for CREMA-D, emoDB, IEMOCAP, MELD, RAVDESS

    • zenodo.org
    bin, csv +2
    Updated Nov 30, 2023
    Cite
    Hagen Wierstorf; Anna Derington; Hagen Wierstorf; Anna Derington (2023). Test splits for CREMA-D, emoDB, IEMOCAP, MELD, RAVDESS [Dataset]. http://doi.org/10.5281/zenodo.10229583
    Explore at:
    Available download formats: csv, txt, bin, text/x-python
    Dataset updated
    Nov 30, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Hagen Wierstorf; Anna Derington; Hagen Wierstorf; Anna Derington
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Nov 30, 2023
    Description

    Test splits for the categorical emotion datasets CREMA-D, emoDB, IEMOCAP, MELD, and RAVDESS, as used inside audEERING.

    For each dataset, a CSV file is provided listing the file names included in the test split.

    The test splits were designed to balance gender and emotional categories as well as possible.
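
    Because each split is published as a CSV of file names, restricting a local copy of one of these corpora to its test split might look like the sketch below; the CSV file name, its column name, and the local paths are assumptions, since the Zenodo record defines the actual layout.

    # Sketch: selecting the test files of a local IEMOCAP copy from a split CSV.
    # The CSV name ("iemocap.csv"), its "file" column, and the data root are
    # assumptions; check the files actually shipped in the Zenodo record.
    import csv
    from pathlib import Path

    data_root = Path("/data/IEMOCAP")  # local copy of the corpus (assumed path)

    with open("iemocap.csv", newline="") as f:
        test_files = {Path(row["file"]).name for row in csv.DictReader(f)}

    test_paths = [p for p in data_root.rglob("*.wav") if p.name in test_files]
    print(f"{len(test_paths)} audio files in the test split")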

  6. IEMOCAP (the Interactive Emotional Dyadic Motion Capture)

    • opendatalab.com
    zip
    Updated Apr 23, 2023
    Cite
    University of Southern California (2023). IEMOCAP(the Interactive Emotional Dyadic Motion Capture) [Dataset]. https://opendatalab.com/OpenDataLab/IEMOCAP_the_Interactive_Emotional_etc
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 23, 2023
    Dataset provided by
    University of Southern California
    License

    IEMOCAP Data Release Form: https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf

    Description

    The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal and multispeaker database collected at the SAIL lab at USC. It contains approximately 12 hours of audiovisual data, including video, speech, facial motion capture, and text transcriptions. It consists of dyadic sessions in which actors perform improvisations or scripted scenarios specifically selected to elicit emotional expressions. The IEMOCAP database is annotated by multiple annotators with categorical labels, such as anger, happiness, sadness, and neutrality, as well as dimensional labels such as valence, activation, and dominance. The detailed motion capture information, the interactive setting used to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases in the community for the study and modeling of multimodal and expressive human communication.
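
    To make the annotation scheme above concrete, an utterance-level record combining categorical votes with valence/activation/dominance ratings might be modeled as in the sketch below; the field names, ID format, and rating scale are illustrative assumptions, not the corpus's official schema.

    # Illustrative only: one way to model an IEMOCAP-style utterance annotation.
    # Field names and the 1-5 rating scale are assumptions, not the official schema.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class UtteranceAnnotation:
        utterance_id: str             # e.g. "Ses01F_impro01_F000" (assumed ID format)
        session: int                  # dyadic session number
        scripted: bool                # scripted scenario vs. improvisation
        categorical_votes: List[str]  # one categorical label per annotator
        valence: float                # dimensional ratings (assumed 1-5 scale)
        activation: float
        dominance: float

    ann = UtteranceAnnotation(
        utterance_id="Ses01F_impro01_F000",
        session=1,
        scripted=False,
        categorical_votes=["neutral", "frustration"],
        valence=2.5, activation=3.0, dominance=3.0,
    )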

  7. Statistics of the MELD, EmoryNLP, DailyDialog, and IEMOCAP.

    • plos.figshare.com
    xls
    Updated Jan 24, 2025
    Cite
    Muhammad Hussain; Caikou Chen; Sami S. Albouq; Khlood Shinan; Fatmah Alanazi; Muhammad Waseem Iqbal; M. Usman Ashraf (2025). Statistics of the MELD, EmoryNLP, DailyDialog, and IEMOCAP. [Dataset]. http://doi.org/10.1371/journal.pone.0312867.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jan 24, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Muhammad Hussain; Caikou Chen; Sami S. Albouq; Khlood Shinan; Fatmah Alanazi; Muhammad Waseem Iqbal; M. Usman Ashraf
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Statistics of the MELD, EmoryNLP, DailyDialog, and IEMOCAP.

  8. IEMOCAP & MELD

    • kaggle.com
    Updated Jun 20, 2024
    Cite
    Rumaiya (2024). IEMOCAP & MELD [Dataset]. https://www.kaggle.com/datasets/rumaiya/iemocap
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jun 20, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Rumaiya
    Description

    Dataset

    This dataset was created by Rumaiya

    Contents

  9. Training and test instances for the IEMOCAP corpus.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Panikos Heracleous; Akio Yoneyama (2023). Training and test instances for the IEMOCAP corpus. [Dataset]. http://doi.org/10.1371/journal.pone.0220386.t026
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panikos Heracleous; Akio Yoneyama
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Training and test instances for the IEMOCAP corpus.

  10. iemocap-features

    • kaggle.com
    Updated Apr 25, 2024
    Cite
    AyaOsama21 (2024). iemocap-features [Dataset]. https://www.kaggle.com/datasets/ayaosama21/iemocap-features
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    AyaOsama21
    Description

    Dataset

    This dataset was created by AyaOsama21

    Contents

  11. Processed Multimodal Features for Emotion Analysis (IEMOCAP-based)

    • ieee-dataport.org
    Updated May 5, 2025
    Cite
    Nischal Mandal (2025). Processed Multimodal Features for Emotion Analysis (IEMOCAP-based) [Dataset]. https://ieee-dataport.org/documents/processed-multimodal-features-emotion-analysis-iemocap-based
    Explore at:
    Dataset updated
    May 5, 2025
    Authors
    Nischal Mandal
    Description

    audio statistics

  12. iemocap

    • huggingface.co
    Updated Jul 6, 2025
    Cite
    Massive Text Embedding Benchmark (2025). iemocap [Dataset]. https://huggingface.co/datasets/mteb/iemocap
    Explore at:
    Dataset updated
    Jul 6, 2025
    Dataset authored and provided by
    Massive Text Embedding Benchmark
    Description

    The mteb/iemocap dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  13. Recalls for speech emotion recognition using IEMOCAP and DNN.

    • figshare.com
    xls
    Updated Jun 20, 2023
    Cite
    Panikos Heracleous; Akio Yoneyama (2023). Recalls for speech emotion recognition using IEMOCAP and DNN. [Dataset]. http://doi.org/10.1371/journal.pone.0220386.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 20, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panikos Heracleous; Akio Yoneyama
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recalls for speech emotion recognition using IEMOCAP and DNN.

  14. Precision of speech emotion recognition using IEMOCAP and CNN.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Panikos Heracleous; Akio Yoneyama (2023). Precision of speech emotion recognition using IEMOCAP and CNN. [Dataset]. http://doi.org/10.1371/journal.pone.0220386.t007
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panikos Heracleous; Akio Yoneyama
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Precision of speech emotion recognition using IEMOCAP and CNN.

  15. F1-scores for speech emotion recognition using a common model set and CNN.

    • plos.figshare.com
    xls
    Updated Jun 11, 2023
    Cite
    Panikos Heracleous; Akio Yoneyama (2023). F1-scores for speech emotion recognition using a common model set and CNN. [Dataset]. http://doi.org/10.1371/journal.pone.0220386.t025
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 11, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panikos Heracleous; Akio Yoneyama
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    F1-scores for speech emotion recognition using a common model set and CNN.

  16. IEMOCAP

    • huggingface.co
    Updated Aug 3, 2024
    Cite
    Carol Wasef (2024). IEMOCAP [Dataset]. https://huggingface.co/datasets/cairocode/IEMOCAP
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Aug 3, 2024
    Authors
    Carol Wasef
    Description

    The cairocode/IEMOCAP dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  17. Confusion matrix [%] using IEMOCAP and DNN with MFCC/SDC features.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Panikos Heracleous; Akio Yoneyama (2023). Confusion matrix [%] using IEMOCAP and DNN with MFCC/SDC features. [Dataset]. http://doi.org/10.1371/journal.pone.0220386.t010
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Panikos Heracleous; Akio Yoneyama
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Confusion matrix [%] using IEMOCAP and DNN with MFCC/SDC features.

  18. IEMOCAP

    • huggingface.co
    Updated Apr 8, 2025
    Cite
    Wiktor Jakubowski (2025). IEMOCAP [Dataset]. https://huggingface.co/datasets/WiktorJakubowski/IEMOCAP
    Explore at:
    Dataset updated
    Apr 8, 2025
    Authors
    Wiktor Jakubowski
    Description

    The WiktorJakubowski/IEMOCAP dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  19. IEMOCAP_feature_dataset

    • kaggle.com
    Updated Oct 24, 2023
    Cite
    Bảo Xuyên Nguyễn Lê (2023). IEMOCAP_feature_dataset [Dataset]. https://www.kaggle.com/boxuynnguynl/iemocap-feature-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Oct 24, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Bảo Xuyên Nguyễn Lê
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Bảo Xuyên Nguyễn Lê

    Released under Apache 2.0

    Contents

  20. transcribed-VAD_labeled-emotive-audio

    • kaggle.com
    Updated Jul 6, 2025
    Cite
    Stephan Schweitzer (2025). transcribed-VAD_labeled-emotive-audio [Dataset]. https://www.kaggle.com/datasets/stephanschweitzer/transcribed-vad-labeled-emotive-audio
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 6, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Stephan Schweitzer
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Processing Pipeline

    • Transcription: generated using the OpenAI Whisper speech recognition model (a rough sketch of this step follows this list)
    • VAD Scoring: emotion dimensions (Valence, Arousal, Dominance) computed using the audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim model

    • Metadata: Includes audio duration, transcript length, character counts, and VAD scores
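
    A rough reconstruction of the transcription step referenced in the first bullet above, using the openai-whisper package, is sketched here; the model size, file paths, and record fields are assumptions, and the valence/arousal/dominance step is left as a stub because the audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim model card provides its own loading code.

    # Sketch of the transcription half of the pipeline above. Model size and paths
    # are assumptions; the VAD (valence/arousal/dominance) step is stubbed out,
    # since the audeering model card ships its own custom loading code.
    import json
    import whisper  # pip install openai-whisper

    asr = whisper.load_model("base")  # the model size actually used is not stated

    def process_clip(wav_path: str) -> dict:
        result = asr.transcribe(wav_path, language="en")
        text = result["text"].strip()
        return {
            "file_id": wav_path,
            "status": "success",
            "text": text,
            "language": result.get("language", "en"),
            "text_length": len(text),
            # valence/arousal/dominance would be filled in here from the
            # audeering/wav2vec2-large-robust-12-ft-emotion-msp-dim model
        }

    print(json.dumps(process_clip("amused_1-15_0001.wav"), indent=2))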

    Data Format

    An example record in the metadata file looks like this:

    {
      "file_id": "emovdb_amused_1-15_0001_1933",
      "original_path": "..\data_collection\tts_data\processed\emovdb\amused_1-15_0001.wav",
      "dataset": "emovdb",
      "status": "success",
      "error": null,
      "processed_audio_path": "None",
      "transcript_path": "processed_datasets\transcripts\emovdb_amused_1-15_0001_1933.json",
      "vad_path": "processed_datasets\vad_scores\emovdb_amused_1-15_0001_1933.json",
      "text": "Author of the Danger Trail, Phillips Deals, etc.",
      "language": "en",
      "audio_duration": 4.384671201814059,
      "text_length": 48,
      "valence": 0.7305971384048462,
      "arousal": 0.704948365688324,
      "dominance": 0.6887099146842957,
      "vad_confidence": 0.9830486676764241
    }
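
    Given records of this shape, collecting VAD scores for the successfully processed clips might look like the following sketch; the metadata file name and the assumption that it holds a single JSON array are guesses about the actual layout.

    # Sketch: reading metadata records of the shape shown above. The file name
    # "metadata.json" and the top-level JSON-array layout are assumptions.
    import json

    with open("metadata.json") as f:
        records = json.load(f)

    vad_scores = [
        (r["file_id"], r["valence"], r["arousal"], r["dominance"])
        for r in records
        if r.get("status") == "success"
    ]
    print(f"{len(vad_scores)} clips with VAD scores")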

    License and Attribution

    This work is licensed under CC BY-NC-SA 4.0.

    Required Citations:

    • CREMA-D: Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset. IEEE Transactions on Affective Computing, 5(4), 377-390.

    • EmoV-DB: Adigwe, A., Tits, N., Haddad, K. E., Ostadabbas, S., & Dutoit, T. (2018). The emotional voices database: Towards controlling the emotion dimension in voice generation systems. arXiv preprint arXiv:1806.09514.

    • IEMOCAP: Busso, C., Bulut, M., Lee, C. C., Kazemzadeh, A., Mower, E., Kim, S., Chang, J. N., Lee, S., & Narayanan, S. S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359.

    • RAVDESS: Livingstone, S. R., & Russo, F. A. (2018). The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 13(5), e0196391.

    Limitations

    • Non-commercial use only
    • Subject to the licensing terms of source datasets