AbstractTTS/IEMOCAP dataset hosted on Hugging Face and contributed by the HF Datasets community
IEMOCAP data release form: https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf
The Interactive Emotional Dyadic Motion Capture (IEMOCAP) database is an acted, multimodal, and multispeaker database collected at the SAIL lab at USC. It contains approximately 12 hours of audiovisual data, including video, speech, motion capture of the face, and text transcriptions. It consists of dyadic sessions in which actors perform improvisations or scripted scenarios specifically selected to elicit emotional expressions. The IEMOCAP database is annotated by multiple annotators into categorical labels, such as anger, happiness, sadness, and neutrality, as well as dimensional labels such as valence, activation, and dominance. The detailed motion capture information, the interactive setting used to elicit authentic emotions, and the size of the database make this corpus a valuable addition to the existing databases in the community for the study and modeling of multimodal and expressive human communication.
This dataset contains the IEMOCAP emotion speech database metadata as a dataframe, along with the path to each .wav file. The dataframe columns are:
| Column | Type | Values | Description |
|---|---|---|---|
| 'session' | int | 1, 2, 3, 4, 5 | dialogue sessions in the database |
| 'method' | str | 'script', 'impro' | the method of emotion elicitation |
| 'gender' | str | 'M', 'F' | gender of the speaker |
| 'emotion' | str | 'neu', 'fru', 'sad', 'sur', 'ang', 'hap', 'exc', 'fea', 'dis', 'oth' | annotated emotion |
| 'n_annotators' | int | - | number of annotators |
| 'agreement' | int | 2, 3, 4 | number of annotators who agreed on this label |
| 'path' | str | 'path/to/file' | path to the .wav file, relative to the "IEMOCAP_full_release/" directory |
For more info on IEMOCAP, visit the IEMOCAP Database homepage: https://sail.usc.edu/iemocap/
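To illustrate how this metadata can be used, here is a minimal sketch in Python that filters the dataframe by emotion label and annotator agreement and resolves the relative .wav paths. The column names follow the table above; the file name `iemocap_metadata.csv` and the root directory are hypothetical assumptions, not part of the dataset card.

```python
import os
import pandas as pd

# Hypothetical file names; adjust to wherever the metadata and audio actually live.
METADATA_CSV = "iemocap_metadata.csv"        # dataframe with the columns listed above
IEMOCAP_ROOT = "/data/IEMOCAP_full_release"  # the 'path' column is relative to this directory

df = pd.read_csv(METADATA_CSV)

# Keep the four categorical labels most commonly used in SER work,
# and require that at least 3 annotators agreed on the label.
subset = df[df["emotion"].isin(["neu", "sad", "ang", "hap"]) & (df["agreement"] >= 3)]

# Resolve the relative .wav paths against the release directory.
wav_files = [os.path.join(IEMOCAP_ROOT, p) for p in subset["path"]]
print(f"{len(wav_files)} utterances selected")
```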
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Summary of previous research on video-based emotion recognition using the IEMOCAP database.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Test splits for the categorical emotion datasets CREMA-D, emoDB, IEMOCAP, MELD, and RAVDESS, as used internally at audEERING.
For each dataset, a CSV file is provided listing the file names included in the test split.
The test splits were designed to balance gender and emotion categories as evenly as possible.
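As a rough illustration, the sketch below shows how such a split file might be applied; it assumes each CSV is a single header-less column of file names, and the name `iemocap.test.csv` is a hypothetical placeholder.

```python
import pandas as pd

# Hypothetical split-file name; one CSV per dataset, listing the test-split file names.
test_split = pd.read_csv("iemocap.test.csv", header=None)
test_files = set(test_split[0])

def in_test_split(file_name: str) -> bool:
    # Route a file to the test set if it appears in the split list.
    return file_name in test_files
```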
yspark0519/iemocap-phi dataset hosted on Hugging Face and contributed by the HF Datasets community
This dataset was created by Venkatesh726
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Training and test instances for the IEMOCAP corpus.
Statistics of the MELD, EmoryNLP, DailyDialog, and IEMOCAP datasets.
windcrossroad/IEMOCAP dataset hosted on Hugging Face and contributed by the HF Datasets community
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Recalls for speech emotion recognition using IEMOCAP and a DNN.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
A Chinese and English speech emotion captioning dataset based on the ESD and IEMOCAP datasets.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Precision of speech emotion recognition using IEMOCAP and a CNN.
Noename/iemocap-emotion-scores dataset hosted on Hugging Face and contributed by the HF Datasets community
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
Confusion matrix [%] using IEMOCAP and a DNN with MFCC/SDC features.
uclgroup8/iemocap-embeddings-light dataset hosted on Hugging Face and contributed by the HF Datasets community
uclgroup8/iemocap-embeddings dataset hosted on Hugging Face and contributed by the HF Datasets community
yspark0519/iemocap-augment dataset hosted on Hugging Face and contributed by the HF Datasets community
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
F1-scores for speech emotion recognition using a common model set and a CNN.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically)
These datasets are commonly used for building and training models based on multitask learning, where both emotion recognition and sentiment analysis are jointly optimized. Models developed using these datasets can significantly enhance human-computer interaction by improving the emotional sensitivity of systems.
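As a concrete illustration of such a multitask setup, the following is a minimal PyTorch sketch of a shared acoustic encoder with separate emotion and sentiment heads whose losses are summed. All layer sizes, label counts, and feature dimensions are illustrative assumptions, not taken from any particular paper or dataset above.

```python
import torch
import torch.nn as nn

class MultitaskSER(nn.Module):
    """Shared encoder with two task heads: emotion classification and sentiment."""
    def __init__(self, n_features=40, n_emotions=4, n_sentiments=3):
        super().__init__()
        # Shared encoder over per-frame acoustic features (e.g. 40-dim MFCCs).
        self.encoder = nn.GRU(n_features, 128, batch_first=True)
        self.emotion_head = nn.Linear(128, n_emotions)      # e.g. neu/sad/ang/hap
        self.sentiment_head = nn.Linear(128, n_sentiments)  # e.g. neg/neu/pos

    def forward(self, x):
        _, h = self.encoder(x)   # h: (1, batch, 128), last hidden state
        h = h.squeeze(0)
        return self.emotion_head(h), self.sentiment_head(h)

model = MultitaskSER()
x = torch.randn(8, 100, 40)      # batch of 8 utterances, 100 frames each
emo_logits, sent_logits = model(x)

# Joint optimization: the two task losses are summed before backpropagation.
emo_loss = nn.functional.cross_entropy(emo_logits, torch.randint(0, 4, (8,)))
sent_loss = nn.functional.cross_entropy(sent_logits, torch.randint(0, 3, (8,)))
loss = emo_loss + sent_loss
loss.backward()
```

Sharing the encoder lets gradients from both tasks shape a single representation, which is the usual motivation for jointly optimizing emotion recognition and sentiment analysis.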