8 datasets found
  1. CMU-MOSEI Dataset

    • paperswithcode.com
    Cite
    AmirAli Bagher Zadeh; Paul Pu Liang; Soujanya Poria; Erik Cambria; Louis-Philippe Morency, CMU-MOSEI Dataset [Dataset]. https://paperswithcode.com/dataset/cmu-mosei
    Authors
    AmirAli Bagher Zadeh; Paul Pu Liang; Soujanya Poria; Erik Cambria; Louis-Philippe Morency
    Description

    CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) is the largest dataset of sentence-level sentiment analysis and emotion recognition in online videos. CMU-MOSEI contains over 12 hours of annotated video from over 1000 speakers and 250 topics.

  2. CMU_MOSEI

    • kaggle.com
    Updated Dec 13, 2024
    Cite
    Samar Warsi (2024). CMU_MOSEI [Dataset]. https://www.kaggle.com/datasets/samarwarsi/cmu-mosei
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Samar Warsi
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    CMU-MOSEI is a comprehensive multimodal dataset designed to analyze emotions and sentiment in online videos. It's a valuable resource for researchers and developers working on automatic emotion recognition and sentiment analysis.

    Key Features:

    • Over 23,500 video clips from 1,000+ speakers, covering diverse topics and monologues.

    • Multimodal data:
      • Acoustic: features extracted from audio (CMU_MOSEI_COVAREP.csd)
      • Labels: annotations for sentiment intensity and emotion categories (CMU_MOSEI_Labels.csd)
      • Language: phonetic, word-level, and word-vector representations (CMU_MOSEI_*.csd files under the languages folder)
      • Visual: features extracted from facial expressions (CMU_MOSEI_Visual*.csd files under the visuals folder)

    • Balanced for gender: the dataset ensures equal representation of male and female speakers.

    Unlocking Insights: By exploring the various modalities within CMU-MOSEI, researchers can investigate the relationship between speech, facial expressions, and emotions expressed in online videos.

    Download: The dataset is freely available for download at: http://immortal.multicomp.cs.cmu.edu/CMU-MOSEI/

    Start exploring the world of emotions in videos with CMU-MOSEI!
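    The file layout above can be sketched as a small modality-to-pattern lookup used to decide which .csd files to fetch for a given set of modalities. The glob patterns come from the description; the helper name and the example file listing are hypothetical, chosen only for illustration:

```python
from fnmatch import fnmatch

# Glob-style patterns per modality, following the file layout in the description.
CSD_PATTERNS = {
    "acoustic": "CMU_MOSEI_COVAREP.csd",
    "labels": "CMU_MOSEI_Labels.csd",
    "language": "languages/CMU_MOSEI_*.csd",
    "visual": "visuals/CMU_MOSEI_Visual*.csd",
}

def select_csd_files(available, modalities):
    """Return the available .csd paths that match the requested modalities."""
    patterns = [CSD_PATTERNS[m] for m in modalities]
    return [path for path in available if any(fnmatch(path, p) for p in patterns)]

# Hypothetical listing of downloaded files (concrete names for illustration only).
files = [
    "CMU_MOSEI_COVAREP.csd",
    "CMU_MOSEI_Labels.csd",
    "languages/CMU_MOSEI_words.csd",
    "visuals/CMU_MOSEI_VisualFeatures.csd",
]
print(select_csd_files(files, ["acoustic", "labels"]))
# -> ['CMU_MOSEI_COVAREP.csd', 'CMU_MOSEI_Labels.csd']
```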

  3. cmu-mosei-comp-seq

    • huggingface.co
    Updated Jul 4, 2025
    Cite
    Reeha Parkar (2025). cmu-mosei-comp-seq [Dataset]. https://huggingface.co/datasets/reeha-parkar/cmu-mosei-comp-seq
    Authors
    Reeha Parkar
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    CMU-MOSEI: Computational Sequences (Unofficial Mirror)

    This repository provides a mirror of the official computational sequence files from the CMU-MOSEI dataset, which are required for multimodal sentiment and emotion research. The original download links are currently down, so this mirror is provided for the research community.

    Note: This is an unofficial mirror. All data originates from Carnegie Mellon University and original authors. If you are a dataset creator and want this… See the full description on the dataset page: https://huggingface.co/datasets/reeha-parkar/cmu-mosei-comp-seq.

  4. CMU-MOSEI dataset information.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 16, 2023
    + more versions
    Cite
    Ji Mingyu; Zhou Jiawei; Wei Ning (2023). CMU-MOSEI dataset information. [Dataset]. http://doi.org/10.1371/journal.pone.0273936.t002
    Dataset provided by
    PLOS ONE
    Authors
    Ji Mingyu; Zhou Jiawei; Wei Ning
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CMU-MOSEI dataset information.

  5. CMU-MOSEI

    • scidb.cn
    Updated Aug 16, 2024
    Cite
    申锦涛 (2024). CMU-MOSEI [Dataset]. http://doi.org/10.57760/sciencedb.09218
    Dataset provided by
    Science Data Bank
    Authors
    申锦涛
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The CMU-MOSEI dataset expands on CMU-MOSI, comprising 22,856 video clips drawn from 5,000 videos, with greater speaker diversity and broader topic coverage. Each segment is an independent multimodal example with an even proportion of image, text, and audio, labeled with a sentiment score in [-3, +3].
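    The continuous [-3, +3] sentiment scale described above is commonly discretized for evaluation in the CMU-MOSEI literature (e.g. 7-class and binary accuracy). A minimal sketch of one such binning; the function names are my own, and the exact binary split convention varies across papers:

```python
def to_seven_class(score: float) -> int:
    """Round a continuous sentiment score in [-3, 3] to one of 7 integer classes."""
    return max(-3, min(3, round(score)))

def to_binary(score: float) -> int:
    """Collapse a sentiment score to negative (0) vs. non-negative (1)."""
    return 1 if score >= 0 else 0

# A clip scored 1.4 falls into class +1 and counts as non-negative.
print(to_seven_class(1.4), to_binary(1.4))  # -> 1 1
```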

  6. CDCnet

    • data.mendeley.com
    Updated Aug 5, 2024
    Cite
    congbing he (2024). CDCnet [Dataset]. http://doi.org/10.17632/mf8cdvzjr7.1
    Authors
    congbing he
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These are the feature vectors obtained after feature extraction from the multimodal datasets IEMOCAP, MELD, CMU-MOSEI, Twitter2019, CrisisMMD, and DMD.

  7. Comparative experiments of multimodal sentiment analysis models on the...

    • figshare.com
    xls
    Updated Jun 16, 2023
    Cite
    Ji Mingyu; Zhou Jiawei; Wei Ning (2023). Comparative experiments of multimodal sentiment analysis models on the dataset CMU-MOSEI. [Dataset]. http://doi.org/10.1371/journal.pone.0273936.t005
    Dataset provided by
    PLOS ONE
    Authors
    Ji Mingyu; Zhou Jiawei; Wei Ning
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparative experiments of multimodal sentiment analysis models on the dataset CMU-MOSEI.

  8. Comparative experiments of multimodal sentiment analysis models on the...

    • plos.figshare.com
    xls
    Updated Jun 16, 2023
    + more versions
    Cite
    Ji Mingyu; Zhou Jiawei; Wei Ning (2023). Comparative experiments of multimodal sentiment analysis models on the dataset CMU-MOSEI. [Dataset]. http://doi.org/10.1371/journal.pone.0273936.t005
    Dataset provided by
    PLOS ONE
    Authors
    Ji Mingyu; Zhou Jiawei; Wei Ning
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparative experiments of multimodal sentiment analysis models on the dataset CMU-MOSEI.

