100+ datasets found
  1. memotion

    • huggingface.co
    Updated Oct 25, 2024
    + more versions
    Cite
    Yiming Liu (2024). memotion [Dataset]. https://huggingface.co/datasets/Leonardo6/memotion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 25, 2024
    Authors
    Yiming Liu
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The Leonardo6/memotion dataset is hosted on Hugging Face and contributed by the HF Datasets community.

  2. Memotion 7k AI vs Human Annotations

    • kaggle.com
    Updated Jan 27, 2025
    Cite
    Sirojiddin Boboqulov (2025). Memotion 7k AI vs Human Annotations [Dataset]. https://www.kaggle.com/datasets/sirojiddinboboqulov/memotion-7k-ai-vs-human-annotations/versions/2
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Sirojiddin Boboqulov
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset Title: Memotion-Expanded: Multimodal Sarcasm Detection with AI-Human Annotation Comparisons

    Author: Sirojiddin Bobokulov, University of Bremen
    License: CC-BY 4.0 (inherits original Memotion 7k permissions)

    Dataset Description

    This dataset extends the Memotion 7k benchmark with AI-generated annotations and explanations for sarcasm detection. It contains 6,905 entries comparing:

    Human annotations (original multimodal labels from Memotion 7k)

    AI annotations for both multimodal (text + image) and unimodal (text-only) conditions

    Model explanations (≤20 words) justifying AI predictions

    Key Attributes

    | Column | Description | Example |
    | --- | --- | --- |
    | image_name | Filename of meme image | image_1.jpg |
    | text_corrected | Transcribed/cleaned meme text | "LOOK THERE MY FRIEND LIGHTYEAR..." |
    | multimodal_annotation_ai | AI sarcasm label (multimodal) | general |
    | multimodal_explanation_ai | AI rationale (text + image) | "Making fun of Facebook trends..." |
    | unimodal_annotation_ai | AI sarcasm label (text-only) | twisted_meaning |
    | unimodal_explanation_ai | AI rationale (text-only) | "Uses exaggeration to mock..." |
    | multimodal_annotation_humans | Original human label | general |

    Sarcasm Categories:

    general, twisted_meaning, very_twisted, non-sarcastic

    Key Contributions

    AI-Human Comparison: Directly compare multimodal AI vs. unimodal AI vs. human sarcasm judgments.

    Explanation Alignment: Study how AI rationales (e.g., "exaggeration to mock") align with human annotations.

    Modality Impact: Analyze performance differences between text-only vs. text+image conditions.

    Use Cases

    Sarcasm Detection: Train models using human/AI annotations as silver labels.

    Explainability Research: Evaluate if AI explanations match human sarcasm perception.

    Modality Studies: Quantify how visual context affects sarcasm detection accuracy.

    Dataset Structure

    Sample entry:

    {
     "image_name": "image_1.jpg",
     "text_corrected": "LOOK THERE MY FRIEND...FACEBOOK imgflip.com",
     "multimodal_annotation_ai": "general",
     "multimodal_explanation_ai": "Making fun of Facebook trends and followers",
     "unimodal_annotation_ai": "twisted_meaning",
     "unimodal_explanation_ai": "The text uses exaggeration to sarcastically mock...",
     "multimodal_annotation_humans": "general"
    }

    Ethical Considerations

    Image Licensing: Host only anonymized image URLs (no direct redistribution).

    Bias Mitigation: Annotate model confidence scores for low-frequency categories.
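
    To make the AI-human comparison concrete, here is a minimal sketch in pandas; the CSV filename is hypothetical (substitute the file shipped with the Kaggle download), while the column names follow the attribute table above.

    import pandas as pd

    # Hypothetical filename -- substitute the CSV from the Kaggle download.
    df = pd.read_csv("memotion_ai_human.csv")

    # Share of entries where the multimodal AI label matches the human label.
    multimodal_agreement = (df["multimodal_annotation_ai"]
                            == df["multimodal_annotation_humans"]).mean()

    # Same comparison for the text-only (unimodal) AI condition.
    unimodal_agreement = (df["unimodal_annotation_ai"]
                          == df["multimodal_annotation_humans"]).mean()

    print(f"multimodal AI vs. human agreement: {multimodal_agreement:.2%}")
    print(f"unimodal AI vs. human agreement: {unimodal_agreement:.2%}")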

    Citations

    Original Memotion 7k:

    @inproceedings{chhavi2020memotion,
     title={Memotion Analysis 1.0 @SemEval 2020},
     author={Sharma, Chhavi et al.},
     booktitle={Proceedings of SemEval-2020},
     year={2020}
    }

    This Extension:

    @dataset{bobokulov2024memotion_ai,
     author = {Sirojiddin Bobokulov},
     title = {Memotion-Expanded: AI-Human Sarcasm Annotation Comparisons},
     year = {2024},
     note = {Extended from Memotion 7k (https://www.kaggle.com/datasets/williamscott701/memotion-dataset-7k)}
    }

  3. SemEval-2020 Task-8 Dataset

    • paperswithcode.com
    Cite
    Chhavi Sharma; Deepesh Bhageria; William Scott; Srinivas PYKL; Amitava Das; Tanmoy Chakraborty; Viswanath Pulabaigari; Bjorn Gamback, SemEval-2020 Task-8 Dataset [Dataset]. https://paperswithcode.com/dataset/memotion-analysis
    Explore at:
    Authors
    Chhavi Sharma; Deepesh Bhageria; William Scott; Srinivas PYKL; Amitava Das; Tanmoy Chakraborty; Viswanath Pulabaigari; Bjorn Gamback
    Description

    A multimodal dataset for sentiment analysis on internet memes.

  4. emotion-417k

    • huggingface.co
    Updated Jan 29, 2025
    + more versions
    Cite
    XuehangCang (2025). emotion-417k [Dataset]. https://huggingface.co/datasets/XuehangCang/emotion-417k
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 29, 2025
    Authors
    XuehangCang
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset Card for "emotion"

    Dataset Summary

    Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.

    Dataset Structure

    Data Instances

    An example looks as follows:

    { "text": "im feeling quite sad and sorry for myself but ill snap out of it soon", "label": 0 }

    Data Fields

    The data fields are:

    text: a string feature.… See the full description on the dataset page: https://huggingface.co/datasets/XuehangCang/emotion-417k.
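
    As a quick-start sketch, the dataset should load with the standard Hugging Face datasets API; the label-name order below is an assumption, so check ds["train"].features before relying on it.

    from datasets import load_dataset

    # Repository ID taken from the dataset page cited above.
    ds = load_dataset("XuehangCang/emotion-417k")

    # Assumed order of the six basic emotions -- verify via ds["train"].features.
    label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

    example = ds["train"][0]
    print(example["text"], "->", label_names[example["label"]])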

  5. Emotion Detection Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    Cite
    Computer Vision Projects (2025). Emotion Detection Dataset [Dataset]. https://universe.roboflow.com/computer-vision-projects-zhogq/emotion-detection-y0svj
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    Computer Vision Projects
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Emotion Detection Model for Facial Expressions

    Project Description:

    In this project, we developed an Emotion Detection Model using a curated dataset of 715 facial images, aiming to accurately recognize and categorize expressions into five distinct emotion classes. The emotion classes include Happy, Sad, Fearful, Angry, and Neutral.

    Objectives:

      • Train a robust machine learning model capable of accurately detecting and classifying facial expressions in real-time.
      • Implement emotion detection to enhance user experience in applications such as human-computer interaction, virtual assistants, and emotion-aware systems.

    Methodology:

    1. Data Collection and Preprocessing:

      • Assembled a diverse dataset of 715 images featuring individuals expressing different emotions.
      • Employed Roboflow for efficient data preprocessing, handling image augmentation and normalization.

    2. Model Architecture:

      • Utilized a convolutional neural network (CNN) architecture to capture spatial hierarchies in facial features.
      • Implemented a multi-class classification approach to categorize images into the predefined emotion classes.
    3. Training and Validation:

      • Split the dataset into training and validation sets for model training and evaluation.
      • Fine-tuned the model parameters to optimize accuracy and generalization.
    4. Model Evaluation:

      • Evaluated the model's performance on an independent test set to assess its ability to generalize to unseen data.
      • Analyzed confusion matrices and classification reports to understand the model's strengths and areas for improvement.
    5. Deployment and Integration:

      • Deployed the trained emotion detection model for real-time inference.
      • Integrated the model into applications, allowing users to interact with systems based on detected emotions.

    Results: The developed Emotion Detection Model demonstrates high accuracy in recognizing and classifying facial expressions across the defined emotion classes. This project lays the foundation for integrating emotion-aware systems into various applications, fostering more intuitive and responsive interactions.
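
    For orientation, a minimal sketch of the kind of CNN classifier described above; this is not the project's exact architecture, and the input resolution and layer sizes are assumptions.

    from tensorflow.keras import layers, models

    NUM_CLASSES = 5  # Happy, Sad, Fearful, Angry, Neutral

    # Small CNN with a softmax head for multi-class emotion classification.
    model = models.Sequential([
        layers.Input(shape=(96, 96, 3)),  # assumed input resolution
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])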

  6. SMILE Twitter Emotion dataset

    • figshare.com
    txt
    Updated Apr 21, 2016
    Cite
    Bo Wang; Adam Tsakalidis; Maria Liakata; Arkaitz Zubiaga; Rob Procter; Eric Jensen (2016). SMILE Twitter Emotion dataset [Dataset]. http://doi.org/10.6084/m9.figshare.3187909.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    Apr 21, 2016
    Dataset provided by
    figshare
    Authors
    Bo Wang; Adam Tsakalidis; Maria Liakata; Arkaitz Zubiaga; Rob Procter; Eric Jensen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was collected and annotated for the SMILE project (http://www.culturesmile.org). The collection of tweets mentioning 13 Twitter handles associated with British museums was gathered between May 2013 and June 2015. It was created for the purpose of classifying emotions expressed on Twitter towards arts and cultural experiences in museums. It contains 3,085 tweets annotated with five emotions: anger, disgust, happiness, surprise, and sadness. Please see our paper "SMILE: Twitter Emotion Classification using Domain Adaptation" for more details of the dataset. License: The annotations are provided under a CC-BY license, while Twitter retains the ownership and rights of the content of the tweets.

  7. Facial Emotion Recognition Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    uni (2025). Facial Emotion Recognition Dataset [Dataset]. https://universe.roboflow.com/uni-o612z/facial-emotion-recognition
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    uni
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Facial Emotion Recognition

    ## Overview
    
    Facial Emotion Recognition is a dataset for object detection tasks - it contains Emotions annotations for 4,540 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  8. Data from: Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips...

    • data.niaid.nih.gov
    • produccioncientifica.ugr.es
    Updated Nov 10, 2023
    Cite
    Francisco M. Garcia-Moreno (2023). Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8431527
    Explore at:
    Dataset updated
    Nov 10, 2023
    Dataset provided by
    Francisco M. Garcia-Moreno
    Marta Badenes-Sastre
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation

    Description:

    Introducing the Gilman-Adhered FilmClip Emotion Dataset (GAFED) - a cutting-edge compilation of video clips curated explicitly based on the guidelines set by Gilman et al. (2017). This dataset is meticulously structured, leveraging both the realms of film and psychological research. The objective is clear: to induce specific emotional responses with utmost precision and reproducibility. Perfectly tuned for researchers, therapists, and educators, GAFED facilitates an in-depth exploration into the human emotional spectrum using the medium of film.

    Dataset Highlights:

    Gilman's Guidelines: GAFED's foundation is built upon the rigorous criteria and insights provided by Gilman et al., ensuring methodological accuracy and relevance in emotional elicitation.

    Film Titles: Each selected film's title provides an immersive backdrop to the emotions sought to be evoked.

    Emotion Label: A focused emotional response is designated for each clip, reinforcing the consistency in elicitation.

    Clip Duration: Standardized duration of every clip ensures a uniform exposure, leading to consistent response measurements.

    Curated with Precision: Every film clip in GAFED has been reviewed and handpicked, echoing Gilman et al.'s principles, thereby cementing their efficacy in triggering the intended emotion.

    Emotion-Eliciting Video Clips within Dataset:

    | Film | Targeted Emotion | Duration (seconds) |
    | --- | --- | --- |
    | The Lover | Baseline | 43 |
    | American History X | Anger | 106 |
    | Cry Freedom | Sadness | 166 |
    | Alive | Happiness | 310 |
    | Scream | Fear | 395 |

    The crowning feature of GAFED is its identification of "key moments". These crucial timestamps serve as a bridge between cinema and emotion, guiding researchers to intervals teeming with emotional potency.

    Key Emotional Moments within Dataset:

    | Film | Targeted Emotion | Key moment timestamps (seconds) |
    | --- | --- | --- |
    | American History X | Anger | 36, 57, 68 |
    | Cry Freedom | Sadness | 112, 132, 154 |
    | Alive | Happiness | 227, 270, 289 |
    | Scream | Fear | 23, 42, 79, 226, 279, 299, 334 |

    Based on: Gilman, T. L., et al. (2017). A film set for the elicitation of emotion in research. Behavior Research Methods, 49(6).

    GAFED isn't merely a dataset; it's an amalgamation of cinema and psychology, encapsulating the vastness of human emotion. Tailored to perfection and adhering to Gilman et al.'s insights, it stands as a beacon for researchers exploring the depths of human emotion through film.

  9. BanglaEmotion Dataset

    • paperswithcode.com
    Updated Jul 17, 2019
    + more versions
    Cite
    (2019). BanglaEmotion Dataset [Dataset]. https://paperswithcode.com/dataset/banglaemotion
    Explore at:
    Dataset updated
    Jul 17, 2019
    Description

    BanglaEmotion is a manually annotated Bangla emotion corpus, which incorporates the diversity of fine-grained emotion expressions in social-media text. Fine-grained emotion labels are considered, namely Sadness, Happiness, Disgust, Surprise, Fear and Anger - which are, according to Paul Ekman (1999), the six basic emotion categories. For this task, a large amount of raw text data was collected from user comments on two different Facebook groups (Ekattor TV and Airport Magistrates) and from the public posts of a popular blogger and activist, Dr. Imran H Sarker. These comments are mostly reactions to ongoing socio-political issues and to the economic successes and failures of Bangladesh. A total of 32,923 comments were scraped from the three sources mentioned above. Out of these, 6,314 comments were annotated into the six categories. The distribution of the annotated corpus is as follows:

    sad = 1,341; happy = 1,908; disgust = 703; surprise = 562; fear = 384; angry = 1,416

    A balanced set is also derived from the above data and split into training and test sets; a 5:1 proportion is used for training and evaluation. More information on the dataset and the experiments on it can be found in our paper (related links below).
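
    A minimal sketch of that balanced 5:1 split with scikit-learn, assuming the comments and labels have been loaded into parallel lists (the placeholder corpus below is illustrative only):

    from sklearn.model_selection import train_test_split

    # Placeholder corpus -- in practice, load the annotated comments and their
    # six-way labels (sad, happy, disgust, surprise, fear, angry) from the release.
    comments = [f"comment {i}" for i in range(600)]
    labels = ["sad", "happy", "disgust", "surprise", "fear", "angry"] * 100

    # 5:1 train-to-test proportion, stratified so the six categories stay balanced.
    train_x, test_x, train_y, test_y = train_test_split(
        comments, labels, test_size=1 / 6, stratify=labels, random_state=42)

    print(len(train_x), len(test_x))  # 500 100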

  10. Emotion Recognition Dataset

    • universe.roboflow.com
    zip
    Updated Feb 18, 2025
    Cite
    VietnameseGerman University (2025). Emotion Recognition Dataset [Dataset]. https://universe.roboflow.com/vietnamesegerman-university-mavjh/emotion-recognition-rjl9w
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 18, 2025
    Dataset authored and provided by
    VietnameseGerman University
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Mental Health Monitoring: The emotion recognition model could be used in a mental health tracking app to analyze users' facial expressions during video diaries or calls, providing insights into their emotional state over time.

    2. Customer Service Improvement: Businesses could use this model to monitor customer interactions in stores, analysing the facial expressions of customers to gauge their satisfaction level or immediate reaction to products or services.

    3. Educational and Learning Enhancement: This model could be used in an interactive learning platform to observe students' emotional responses to different learning materials, enabling tailored educational experiences.

    4. Online Content Testing: Marketing or content creation teams could utilize this model to test viewers' emotional reactions to different advertisements or content pieces, improving the impact of their messaging.

    5. Social Robotics: The emotion recognition model could be incorporated in social robots or AI assistants to identify human emotions and respond accordingly, improving their overall user interaction and experience.

  11. Facial Emotion Dataset

    • universe.roboflow.com
    zip
    Updated Oct 9, 2024
    Cite
    Kashif (2024). Facial Emotion Dataset [Dataset]. https://universe.roboflow.com/kashif-wtg2e/facial-emotion-nsmeg
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 9, 2024
    Dataset authored and provided by
    Kashif
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Facial Emotion

    ## Overview
    
    Facial Emotion is a dataset for object detection tasks - it contains Emotions annotations for 642 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. Data from: A Multimodal Dataset for Mixed Emotion Recognition

    • zenodo.org
    Updated May 25, 2024
    + more versions
    Cite
    Pei Yang; Niqi Liu; Xinge Liu; Yezhi Shu; Wenqi Ji; Ziqi Ren; Jenny Sheng; Minjing Yu; Ran Yi; Dan Zhang; Yong-Jin Liu (2024). A Multimodal Dataset for Mixed Emotion Recognition [Dataset]. http://doi.org/10.5281/zenodo.11194571
    Explore at:
    Dataset updated
    May 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Pei Yang; Niqi Liu; Xinge Liu; Yezhi Shu; Wenqi Ji; Ziqi Ren; Jenny Sheng; Minjing Yu; Ran Yi; Dan Zhang; Yong-Jin Liu
    Description

    ABSTRACT: Mixed emotions have attracted increasing interest recently, but existing datasets rarely focus on mixed emotion recognition from multimodal signals, hindering the affective computing of mixed emotions. On this basis, we present a multimodal dataset with four kinds of signals recorded while watching mixed and non-mixed emotion videos. To ensure effective emotion induction, we first implemented a rule-based video filtering step to select the videos that could elicit stronger positive, negative, and mixed emotions. Then, an experiment with 80 participants was conducted, in which the data of EEG, GSR, PPG, and frontal face videos were recorded while they watched the selected video clips. We also recorded the subjective emotional rating on PANAS, VAD, and amusement-disgust dimensions. In total, the dataset consists of multimodal signal data and self-assessment data from 73 participants. We also present technical validations for emotion induction and mixed emotion classification from physiological signals and face videos. The average accuracy of the 3-class classification (i.e., positive, negative, and mixed) can reach 80.96% when using SVM and features from all modalities, which indicates the possibility of identifying mixed emotional states.
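
    For orientation, a minimal sketch of the kind of 3-class SVM evaluation the abstract reports; feature extraction from EEG, GSR, PPG, and face video is elided, so the feature matrix below is synthetic.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(219, 128))  # placeholder fused multimodal features
    y = np.repeat([0, 1, 2], 73)     # positive / negative / mixed trial labels

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean 3-class accuracy: {scores.mean():.2%}")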

  13. emotion analysis based on text Dataset

    • cubig.ai
    Updated Feb 25, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    CUBIG (2025). emotion analysis based on text Dataset [Dataset]. https://cubig.ai/store/products/139/emotion-analysis-based-on-text-dataset
    Explore at:
    Dataset updated
    Feb 25, 2025
    Dataset authored and provided by
    CUBIG
    License

    https://cubig.ai/store/terms-of-service

    Measurement technique
    Synthetic data generation using AI techniques for model training, Privacy-preserving data transformation via differential privacy
    Description

    1) Data introduction
    • The Emotion-analysis dataset is data for analyzing the emotions expressed in text.

    2) Data utilization
    (1) Emotion-analysis data has characteristics that:
    • It contains a variety of texts that convey emotions ranging from happiness to anger to sadness. The goal is to build an efficient model for detecting emotions in text.
    (2) Emotion-analysis data can be used to:
    • Sentiment classification models: This dataset can be used to train machine learning models that classify text based on sentiment, which helps companies and researchers understand public opinion and sentiment trends.
    • Market research: Researchers can analyze sentiment data to understand consumer preferences and market trends and support data-driven decision making.

  14. Video Emotion Recognition Dataset

    • unidata.pro
    json, mp4
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Unidata L.L.C-FZ, Video Emotion Recognition Dataset [Dataset]. https://unidata.pro/datasets/video-emotion-recognition-dataset/
    Explore at:
    Available download formats: json, mp4
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Video dataset capturing diverse facial expressions and emotions from 1,000+ people, suitable for emotion recognition AI training.

  15. Data from: Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words

    • data.niaid.nih.gov
    • zenodo.org
    Updated Nov 11, 2022
    + more versions
    Cite
    SeungHeon Doh (2022). Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6989809
    Explore at:
    Dataset updated
    Nov 11, 2022
    Dataset provided by
    Hyeon-Jeong Suk
    Gyunpyo Lee
    Juhan Nam
    Hyung seok Jun
    Taesu Kim
    SeungHeon Doh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Hi, KIA dataset is a shared short wake-up word database focusing on perceived emotion in speech. It contains 488 wake-up word utterances.

    For more detailed information about the dataset, please refer to our paper: Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words

    File Description

    wav/: wav files.

    Filenames follow the pattern f{gender}_{pid}_{scene}_{trial}_{emotion}.wav; the first letter was used to express the emotion (a parsing sketch follows this file list).

    annotation/: Information related to annotation and human validation of the entire speech

    split: 8fold data split with {train, valid, test}.csv

    handcraft: Features used for data EDA and baseline performance

    best_weights: wav2vec2.0 context network fine-tuning weights for re-implementation. Due to file size, we attach only folds M1 and F5.
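
    A small sketch of parsing that filename convention in Python (the example filename below is hypothetical):

    from pathlib import Path

    def parse_hikia_filename(path: str) -> dict:
        # Pattern: f{gender}_{pid}_{scene}_{trial}_{emotion}.wav
        gender, pid, scene, trial, emotion = Path(path).stem.split("_")
        return {"gender": gender, "pid": pid, "scene": scene,
                "trial": trial, "emotion": emotion}

    # Hypothetical example consistent with the stated pattern.
    print(parse_hikia_filename("wav/f1_03_2_1_ang.wav"))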

    Reference

    Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words [ArXiv]

    @inproceedings{kim2022hi,
     title={Hi, KIA: A Speech Emotion Recognition Dataset for Wake-Up Words},
 author={Taesu Kim and SeungHeon Doh and Gyunpyo Lee and Hyung seok Jun and Juhan Nam and Hyeon-Jeong Suk},
     booktitle={Proceedings of the 14th Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)},
     year={2022}
    }
    
  16. Emotion Baby Dataset

    • universe.roboflow.com
    zip
    Updated May 8, 2023
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    -qhqpu (2023). Emotion Baby Dataset [Dataset]. https://universe.roboflow.com/-qhqpu/emotion-baby
    Explore at:
    Available download formats: zip
    Dataset updated
    May 8, 2023
    Dataset authored and provided by
    -qhqpu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotion Bounding Boxes
    Description

    Emotion Baby

    ## Overview
    
    Emotion Baby is a dataset for object detection tasks - it contains Emotion annotations for 451 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  17. Emotion Classification Yolo Dataset

    • universe.roboflow.com
    zip
    Updated Jun 12, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Thanans training (2024). Emotion Classification Yolo Dataset [Dataset]. https://universe.roboflow.com/thanans-training/emotion-classification-yolo
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 12, 2024
    Dataset authored and provided by
    Thanans training
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions
    Description

    Emotion Classification YOLO

    ## Overview
    
    Emotion Classification YOLO is a dataset for classification tasks - it contains Emotions annotations for 6,880 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  18. Data from: Masked Emotion FilmClip Dataset (MEFD): Emotion Elicitation with...

    • produccioncientifica.ugr.es
    • data.niaid.nih.gov
    • +1more
    Updated 2020
    Cite
    Francisco M. Garcia-Moreno; Marta Badenes-Sastre; Francisco M. Garcia-Moreno; Marta Badenes-Sastre (2020). Masked Emotion FilmClip Dataset (MEFD): Emotion Elicitation with Facial Coverings [Dataset]. https://produccioncientifica.ugr.es/documentos/668fc432b9e7c03b01bd613d
    Explore at:
    Dataset updated
    2020
    Authors
    Francisco M. Garcia-Moreno; Marta Badenes-Sastre; Francisco M. Garcia-Moreno; Marta Badenes-Sastre
    Description

    Masked Emotion FilmClip Dataset (MEFD): Emotion Elicitation with Facial Coverings

    The Masked Emotion FilmClip Dataset (MEFD) stands as an avant-garde assembly of emotion-inducing video clips tailored for a unique niche - the elicitation of emotions in individuals wearing facial masks. This dataset emerges in response to the global need to understand emotional cues and expressions in the backdrop of widespread facial mask usage. Assembled by leveraging the synergies between cinematography and psychological research, MEFD serves as an invaluable trove for researchers, especially those in AI, seeking to decode emotions even when a significant portion of the face is concealed. Dataset Highlights:

    Facial Masks: All subjects in the video clips are seen wearing facial masks, replicating real-world scenarios and augmenting the dataset's relevance.

    Film Titles: The title of each selected film enriches the context of the emotional narrative.

    Emotion Label: Clear emotion classification associated with every clip, ensuring replicability in emotional elicitation.

    Clip Duration: Precise duration details ensuring standardized exposure and consistent emotion elicitation.

    Curated with Expertise: Clips have undergone rigorous evaluation by seasoned psychologists and film critics, affirming their effectiveness in eliciting the designated emotion.

    Consent and Ethics: The dataset respects and upholds privacy and ethical standards. Every participant provided informed consent. This endeavor has received the green light from the Ethics Committee at the University of Granada, documented under the reference: 2100/CEIH/2021.

    Emotion-Eliciting Video Clips within Dataset:

    | Film | Targeted Emotion | Duration (seconds) |
    | --- | --- | --- |
    | The Lover | Baseline | 43 |
    | American History X | Anger | 106 |
    | Cry Freedom | Sadness | 166 |
    | Alive | Happiness | 310 |
    | Scream | Fear | 395 |

    A paramount feature of MEFD is its emphasis on "key moments". These timestamps, a product of collective expertise from psychologists and film critics, guide the researcher to intervals of heightened emotional resonance within the clips, especially challenging to discern with masked faces.

    Key Emotional Moments within Dataset:

    | Film | Targeted Emotion | Key moment timestamps (seconds) |
    | --- | --- | --- |
    | American History X | Anger | 36, 57, 68 |
    | Cry Freedom | Sadness | 112, 132, 154 |
    | Alive | Happiness | 227, 270, 289 |
    | Scream | Fear | 23, 42, 79, 226, 279, 299, 334 |

    Data structure (SADNESS_XXX.CSV):

    | timestamp | emotion |
    | --- | --- |
    | 1625062890.938222 | NEUTRAL (start time of the neutral video) |
    | 1625062932.567609 | SADNESS (start time of the emotion video) |

    Notes: For subject id 15, the FEAR label started too fast and there is very little neutral data.

    The ethical consent for this dataset was provided by La Comisión de Ética en Investigación de la Universidad de Granada, as documented in the approval titled: 'DETECCIÓN AUTOMÁTICA DE LAS EMOCIONES BÁSICAS Y SU INFLUENCIA EN LA TOMA DE DECISIONES MEDIANTE WEARABLES Y MACHINE LEARNING' registered under 2100/CEIH/2021. MEFD is more than just a dataset; it is a testament to human resilience and adaptability. As facial masks become ubiquitous, understanding the nuances of masked emotional expressions becomes imperative. MEFD rises to this challenge, bridging gaps and pioneering a new frontier in emotion research.

  19. AFFEC Multimodal Dataset

    • zenodo.org
    json, zip
    Updated Mar 18, 2025
    Cite
    Meisam Jamshidi Seikavandi; Laurits Dixen; Paolo Burelli (2025). AFFEC Multimodal Dataset [Dataset]. http://doi.org/10.5281/zenodo.14794876
    Explore at:
    Available download formats: zip, json
    Dataset updated
    Mar 18, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Meisam Jamshidi Seikavandi; Laurits Dixen; Paolo Burelli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    May 1, 2024
    Description

    Dataset: AFFEC - Advancing Face-to-Face Emotion Communication Dataset

    Overview

    The AFFEC (Advancing Face-to-Face Emotion Communication) dataset is a multimodal dataset designed for emotion recognition research. It captures dynamic human interactions through electroencephalography (EEG), eye-tracking, galvanic skin response (GSR), facial movements, and self-annotations, enabling the study of felt and perceived emotions in real-world face-to-face interactions. The dataset comprises 84 simulated emotional dialogues, 72 participants, and over 5,000 trials, annotated with more than 20,000 emotion labels.

    Dataset Structure

    The dataset follows the Brain Imaging Data Structure (BIDS) format and consists of the following components:

    Root Folder:

    • sub-* : Individual subject folders (e.g., sub-aerj, sub-mdl, sub-xx2)
    • dataset_description.json: General dataset metadata
    • participants.json and participants.tsv: Participant demographics and attributes
    • task-fer_events.json: Event annotations for the FER task
    • README.md: This documentation file

    Subject Folders (sub-):

    Each subject folder contains:

    • Behavioral Data (beh/): Physiological recordings (eye tracking, GSR, facial analysis, cursor tracking) in JSON and TSV formats.
    • EEG Data (eeg/): EEG recordings in .edf and corresponding metadata in .json.
    • Event Files (*.tsv): Trial event data for the emotion recognition task.
    • Channel Descriptions (*_channels.tsv): EEG channel information.

    Data Modalities and Channels

    1. Eye Tracking Data

    • Channels: 16 (fixation points, left/right eye gaze coordinates, gaze validity)
    • Sampling Rate: 62 Hz
    • Trials: 5632
    • File Example: sub-

    2. Pupil Data

    • Channels: 21 (pupil diameter, eye position, pupil validity flags)
    • Sampling Rate: 149 Hz
    • Trials: 5632
    • File Example: sub-

    3. Cursor Tracking Data

    • Channels: 4 (cursor X, cursor Y, cursor state)
    • Sampling Rate: 62 Hz
    • Trials: 5632
    • File Example: sub-

    4. Face Analysis Data

    • Channels: Over 200 (2D/3D facial landmarks, gaze detection, facial action units)
    • Sampling Rate: 40 Hz
    • Trials: 5680
    • File Example: sub-

    5. Electrodermal Activity (EDA) and Physiological Sensors

    • Channels: 40 (GSR, body temperature, accelerometer data)
    • Sampling Rate: 50 Hz
    • Trials: 5438
    • File Example: sub-

    6. EEG Data

    • Channels: 63 (EEG electrodes following the 10-20 placement scheme)
    • Sampling Rate: 256 Hz
    • Reference: Left earlobe
    • Trials: 5632
    • File Example: sub-

    7. Self-Annotations

    • Trials: 5807
    • Annotations Per Trial: 4
    • Event Markers: Onset time, duration, trial type, emotion labels
    • File Example: task-fer_events.json

    Experimental Setup

    Participants engaged in a Facial Emotion Recognition (FER) task, where they watched emotionally expressive video stimuli while their physiological and behavioral responses were recorded. Participants provided self-reported ratings for both perceived and felt emotions, differentiating between the emotions they believed the video conveyed and their internal affective experience.

    The dataset enables the study of individual differences in emotional perception and expression by incorporating Big Five personality trait assessments and demographic variables.

    Usage Notes

    • The dataset is formatted in ASCII/UTF-8 encoding.
    • Each modality is stored in JSON, TSV, or EDF format as per BIDS standards.
    • Researchers should cite this dataset appropriately in publications.
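
    A minimal sketch of reading the BIDS-style metadata with pandas; sub-aerj is one of the example subject folders listed above, and the exact events filename is an assumption to verify against the release.

    import pandas as pd

    # Root-level participant demographics and attributes.
    participants = pd.read_csv("participants.tsv", sep="\t")
    print(participants.head())

    # Trial events for one subject; the filename follows the usual BIDS
    # convention and should be checked against the actual folder contents.
    events = pd.read_csv("sub-aerj/sub-aerj_task-fer_events.tsv", sep="\t")
    print(events[["onset", "duration", "trial_type"]].head())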

    Applications

    AFFEC is well-suited for research in:

    • Affective Computing
    • Human-Agent Interaction
    • Emotion Recognition and Classification
    • Multimodal Signal Processing
    • Neuroscience and Cognitive Modeling
    • Healthcare and Mental Health Monitoring

    Acknowledgments

    This dataset was collected with the support of brAIn lab, IT University of Copenhagen.
    Special thanks to all participants and research staff involved in data collection.

    License

    This dataset is shared under the Creative Commons CC0 License.

    Contact

    For questions or collaboration inquiries, please contact [brainlab-staff@o365team.itu.dk].

  20. Speech Emotion Recognition Dataset

    • unidata.pro
    amr, wav/mpeg
    Updated Feb 26, 2025
    Cite
    Unidata L.L.C-FZ (2025). Speech Emotion Recognition Dataset [Dataset]. https://unidata.pro/datasets/speech-emotion-recognition/
    Explore at:
    Available download formats: amr, wav/mpeg
    Dataset updated
    Feb 26, 2025
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Voice dataset featuring identical English texts spoken in four emotional tones, for speech emotion recognition and AI training.
