Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
CMU-MOSEI: Computational Sequences (Unofficial Mirror)
This repository provides a mirror of the official computational sequence files from the CMU-MOSEI dataset, which are required for multimodal sentiment and emotion research. The original download links are currently down, so this mirror is provided for the research community.
Note: This is an unofficial mirror. All data originates from Carnegie Mellon University and original authors. If you are a dataset creator and want this… See the full description on the dataset page: https://huggingface.co/datasets/reeha-parkar/cmu-mosei-comp-seq.
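For anyone consuming the mirrored .csd computational sequence files, a minimal loading sketch with the CMU-MultimodalSDK (mmsdk) might look like the following; the file names and recipe keys are assumptions and should be matched to the files actually present in the mirror.

```python
# Minimal sketch: loading mirrored .csd computational sequences with the
# CMU-MultimodalSDK. File names and recipe keys below are assumptions --
# adjust them to whatever the mirror actually ships.
from mmsdk import mmdatasdk

recipe = {
    "glove_vectors": "CMU_MOSEI_TimestampedWordVectors.csd",  # text features (assumed file name)
    "COVAREP": "CMU_MOSEI_COVAREP.csd",                       # acoustic features (assumed file name)
    "OpenFace_2": "CMU_MOSEI_VisualOpenFace2.csd",            # visual features (assumed file name)
}

# mmdataset loads each computational sequence and keys it by its recipe name.
dataset = mmdatasdk.mmdataset(recipe)

# Word-level alignment to the text stream is the usual first preprocessing step.
dataset.align("glove_vectors")
```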
This dataset was created by Armin Kgarj
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CMU-MOSEI dataset information.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
CMU-MOSEI Custom Unaligned Dataset
Dataset Description
This dataset is a custom preprocessed version of the CMU-MOSEI (Multimodal Opinion Sentiment and Emotion Intensity) dataset, with variable-length temporal sequences preserved for multimodal emotion recognition research. Unlike traditional fixed-alignment preprocessing that truncates sequences to uniform lengths, this dataset maintains the natural temporal dynamics of multimodal… See the full description on the dataset page: https://huggingface.co/datasets/reeha-parkar/custom-unaligned-CMU-MOSEI.
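As a hedged illustration of what working with unaligned, variable-length sequences typically involves, the sketch below pads each modality per batch instead of truncating to a fixed length; the field names ("text", "audio", "vision", "label") are assumptions, not this dataset's documented schema.

```python
# Minimal sketch: batching variable-length multimodal sequences without truncating
# them to a fixed alignment. Field names are assumptions about the schema.
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_variable_length(batch):
    """Pad each modality to the longest sequence in the batch and keep the true lengths."""
    out = {}
    for modality in ("text", "audio", "vision"):
        seqs = [torch.as_tensor(sample[modality], dtype=torch.float32) for sample in batch]
        out[f"{modality}_lengths"] = torch.tensor([len(s) for s in seqs])
        out[modality] = pad_sequence(seqs, batch_first=True)  # shape (batch, max_len, feat_dim)
    out["label"] = torch.tensor([sample["label"] for sample in batch], dtype=torch.float32)
    return out

# Usage (hypothetical dataset object yielding dicts with those keys):
# loader = torch.utils.data.DataLoader(dataset, batch_size=16, collate_fn=collate_variable_length)
```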
GNU General Public License 2.0 (GPL-2.0) http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos
People share their opinions, stories, and reviews through online video-sharing websites every day. Studying sentiment and subjectivity in these opinion videos is receiving growing attention from academia and industry. While sentiment analysis has been successful for text, it remains an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are the lack of a proper dataset, methodology, baselines, and statistical analysis of how information from different modality sources relates to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos, called the Multimodal Opinion-level Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-millisecond annotated audio features. Furthermore, we present baselines for future studies in this direction, as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Comparative experiments of multimodal sentiment analysis models on the CMU-MOSEI dataset.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The CMU-MOSEI dataset expands on CMU-MOSI with a much larger volume of data, comprising 22,856 video clips drawn from 5,000 videos, a more diverse set of speakers, and a broader range of topics. Each clip is an independent multimodal example with evenly represented visual, text, and audio modalities, and is likewise labeled with a sentiment score in [-3, +3].
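Since the continuous [-3, +3] score is usually discretized for evaluation, the sketch below shows one common convention for deriving binary and seven-class targets; papers differ on how zero is handled, so treat this as an assumption rather than the dataset's official protocol.

```python
# Minimal sketch: discretizing CMU-MOSEI's continuous [-3, +3] sentiment scores.
import numpy as np

def to_binary(score: float) -> int:
    """Negative vs. non-negative split (one common Acc-2 convention)."""
    return int(score >= 0)

def to_seven_class(score: float) -> int:
    """Round and clip to the seven integer classes -3..+3, shifted to 0..6."""
    return int(np.clip(round(score), -3, 3)) + 3

scores = [-2.4, -0.2, 0.0, 1.7, 3.0]
print([to_binary(s) for s in scores])       # [0, 0, 1, 1, 1]
print([to_seven_class(s) for s in scores])  # [1, 3, 3, 5, 6]
```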
🧠 CMU-MOSEI Balanced Subset by Modality
This dataset is a compact, balanced subset of CMU-MOSEI, representing only the samples specified in balanced_emotion_by_mean.csv. Each modality (audio, text, vision, labels) has been extracted separately and contains only the relevant data based on the specified video_ids. This makes it ideal for lightweight multimodal learning, benchmarking, and fine-grained feature analysis.
📁 Folder Structure: dataset_root/ ├── acoustics/ │ └──… See the full description on the dataset page: https://huggingface.co/datasets/shinnew/CMU-MOSEI_sample.
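As a rough sketch of how such a subset can be reproduced from the CSV, the snippet below filters files by video_id; the column name, folder layout, file extension, and clip-id format are all assumptions based on the truncated description above.

```python
# Minimal sketch: keeping only the clips whose video_id appears in
# balanced_emotion_by_mean.csv. Column/folder/file-name conventions are assumptions.
from pathlib import Path
import pandas as pd

selected_ids = set(pd.read_csv("balanced_emotion_by_mean.csv")["video_id"])  # assumed column name

dataset_root = Path("dataset_root")
kept_files = [
    path
    for modality in ("acoustics", "text", "vision", "labels")  # assumed modality folders
    for path in (dataset_root / modality).glob("*.npy")        # assumed per-clip .npy files
    if path.stem.split("[")[0] in selected_ids                 # MOSEI clip ids often look like "videoid[segment]"
]
print(f"{len(kept_files)} files match the balanced subset")
```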
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This refers to the feature vectors obtained after feature extraction from the multimodal datasets IEMOCAP, MELD, CMU-MOSEI, Twitter2019, CrisisMMD, and DMD.
This dataset was created by Armin Kgarj