100+ datasets found
  1. Emotic Dataset

    • kaggle.com
    Updated Jun 25, 2024
    Cite
    RAHUL YADAV (2024). Emotic Dataset [Dataset]. https://www.kaggle.com/datasets/rahulyadav01/emotic
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 25, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    RAHUL YADAV
    Description

    The annotations of the EMOTIC dataset combine two types of emotion representation systems: Discrete Categories and Continuous Dimensions.
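As a minimal sketch, the two representation systems can be combined in a single annotation record. The field names, example category strings, and the 1-10 dimension scale below are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass
class EmoticAnnotation:
    """One person's annotation, combining both representation systems (sketch)."""
    categories: list          # discrete emotion categories, e.g. ["Happiness"]
    valence: float            # continuous dimensions, assumed 1-10 scale
    arousal: float
    dominance: float

    def __post_init__(self):
        # Guard the assumed 1-10 range of the continuous dimensions.
        for v in (self.valence, self.arousal, self.dominance):
            if not 1.0 <= v <= 10.0:
                raise ValueError(f"dimension value {v} outside assumed 1-10 scale")

ann = EmoticAnnotation(categories=["Happiness", "Pleasure"],
                       valence=8.0, arousal=6.0, dominance=7.0)
print(ann.categories, ann.valence)
```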

    Acknowledgements

    Original images and annotations were collected by:

    R. Kosti, J.M. Álvarez, A. Recasens and A. Lapedriza, "Context based emotion recognition using emotic dataset", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2019.

  2. emotion-417k

    • huggingface.co
    Updated Jan 29, 2025
    + more versions
    Cite
    XuehangCang (2025). emotion-417k [Dataset]. https://huggingface.co/datasets/XuehangCang/emotion-417k
    Explore at:
    Croissant
    Dataset updated
    Jan 29, 2025
    Authors
    XuehangCang
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Dataset Card for "emotion"

      Dataset Summary
    

    Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.

      Dataset Structure

      Data Instances

    An example looks as follows. { "text": "im feeling quite sad and sorry for myself but ill snap out of it soon", "label": 0 }

      Data Fields
    

    The data fields are:

    text: a string feature.… See the full description on the dataset page: https://huggingface.co/datasets/XuehangCang/emotion-417k.
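A small helper can decode the integer labels into names. The 0-5 order below follows the convention commonly used for the original `emotion` dataset (sadness = 0, which matches the example record above), but it is an assumption to verify against the dataset card:

```python
# Assumed id-to-name mapping; confirm against the dataset card before relying on it.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def decode(example):
    """Attach a human-readable label name to a {"text", "label"} record."""
    return {**example, "label_name": ID2LABEL[example["label"]]}

sample = {"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
          "label": 0}
print(decode(sample)["label_name"])  # "sadness" under the assumed mapping
```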

  3. Balanced Emotic Dataset (75×75, RGB)

    • kaggle.com
    Updated Apr 29, 2025
    Cite
    dolly prajapati 182 (2025). Balanced Emotic Dataset (75×75, RGB) [Dataset]. https://www.kaggle.com/datasets/dollyprajapati182/balance-emotic/suggestions
    Explore at:
    Croissant
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    dolly prajapati 182
    Description

    The Balanced Emotic Dataset is a uniformly processed, class-balanced, and augmented version of the original Emotic Dataset. It is tailored for deep learning and machine learning applications in Facial Emotion Recognition (FER). It addresses class imbalance and standardizes input dimensions to boost model performance and ensure fair evaluation across classes.

    🎯 Purpose The goal of this dataset is to balance the representation of seven basic emotions, enabling the training of fairer and more robust FER models. Each emotion class contains an equal number of images, facilitating consistent model learning and evaluation across all classes.

    🧾 Dataset Characteristics Source: Based on the Emotic Dataset

    Image Format: RGB .png

    Image Size: 75 × 75 pixels

    Emotion Classes:

    angry, disgusted, fearful, happy, neutral, sad, surprised

    ⚙️ Preprocessing Pipeline Each image in the dataset has been preprocessed using the following steps:

    ✅ Converted to RGB

    ✅ Resized to 75×75 pixels

    ✅ Augmented using:

    Random rotation

    Horizontal flip

    Brightness adjustment

    Contrast enhancement

    Sharpness modification

    This results in a clean, uniform, and diverse dataset ideal for FER tasks.

    Testing (10%): 898 images

    Training (80% of remainder): 6472 images

    Validation (20% of remainder): 1618 images
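The three counts above are mutually consistent: assuming truncating integer arithmetic, they can be reproduced from the 8988-image total (the sum of the three splits):

```python
def split_counts(total, test_frac=0.10, train_frac=0.80):
    """Reproduce the listing's split sizes with truncating integer arithmetic."""
    n_test = int(total * test_frac)        # 10% held out for testing
    remainder = total - n_test
    n_train = int(remainder * train_frac)  # 80% of the remainder for training
    n_val = remainder - n_train            # the rest for validation
    return n_test, n_train, n_val

print(split_counts(8988))  # (898, 6472, 1618), matching the listed sizes
```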

    ✅ Advantages ⚖️ Balanced Classes: Equal images across all seven emotions

    🧠 Model-Friendly: Uniform RGB, resized format reduces preprocessing overhead

    🚀 Augmented: Improves model generalization and robustness

    📦 Split Ready: Train/Val/Test folders structured per class

    📊 Great for Benchmarking: Ideal for training CNNs, Transformers, and ensemble models for FER

  4. OMG-Emotion Dataset

    • paperswithcode.com
    Updated Apr 27, 2020
    + more versions
    Cite
    Pablo Barros; Nikhil Churamani; Egor Lakomkin; Henrique Siqueira; Alexander Sutherland; Stefan Wermter (2020). OMG-Emotion Dataset [Dataset]. https://paperswithcode.com/dataset/omg-emotion
    Explore at:
    Dataset updated
    Apr 27, 2020
    Authors
    Pablo Barros; Nikhil Churamani; Egor Lakomkin; Henrique Siqueira; Alexander Sutherland; Stefan Wermter
    Description

    The One-Minute Gradual-Emotional Behavior (OMG-Emotion) dataset is composed of YouTube videos that are around a minute in length and are annotated for continuous emotional behavior. The videos were selected using a crawler that searches for specific keywords associated with long-term emotional behaviors, such as "monologues", "auditions", "dialogues" and "emotional scenes".

    It contains 567 emotion videos with an average length of 1 minute, collected from a variety of YouTube channels. The videos were separated into clips based on utterances, and each utterance was annotated by at least five independent subjects using the Amazon Mechanical Turk tool.
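Multi-annotator ratings like these are often reduced to a single gold label by averaging. The sketch below shows that common choice; the `aggregate` helper, the rating scale, and the averaging rule are assumptions, not OMG-Emotion's documented aggregation procedure:

```python
from statistics import mean

def aggregate(ratings, min_annotators=5):
    """Average per-utterance ratings, enforcing the at-least-five-annotators rule."""
    if len(ratings) < min_annotators:
        raise ValueError("each utterance needs at least five annotations")
    return mean(ratings)

# Five hypothetical annotator ratings for one utterance (scale is illustrative):
print(aggregate([3, 4, 4, 5, 4]))  # 4
```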

  5. Facial Emotion Detection Dataset

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer (2025). Facial Emotion Detection Dataset [Dataset]. http://doi.org/10.17866/rd.salford.22495669.v2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Facial Emotion Detection Dataset is a collection of images of individuals with two different emotions - happy and sad. The dataset was captured using a mobile phone camera and contains photos taken from different angles and backgrounds.

    The dataset contains a total of 637 photos with an additional dataset of 127 from previous work. Out of the total, 402 images are of happy faces, and 366 images are of sad faces. Each individual had a minimum of 10 images of both expressions.

    The project faced challenges in terms of time constraints and participant availability, which limited the number of individuals who took part. Despite these limitations, the dataset can be used for deep learning projects and real-time emotion detection models. Future work can expand the dataset by capturing more images to improve model accuracy. The dataset can also be used to create a custom object detection model that evaluates other types of emotional expressions.

  6. Emotion Baby Dataset

    • universe.roboflow.com
    zip
    Updated May 8, 2023
    + more versions
    Cite
    -qhqpu (2023). Emotion Baby Dataset [Dataset]. https://universe.roboflow.com/-qhqpu/emotion-baby
    Explore at:
    Available download formats: zip
    Dataset updated
    May 8, 2023
    Dataset authored and provided by
    -qhqpu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotion Bounding Boxes
    Description

    Emotion Baby

    ## Overview
    
    Emotion Baby is a dataset for object detection tasks - it contains Emotion annotations for 451 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  7. Music Emotion Dataset with 2496 Songs for Music Emotion Recognition (Memo2496)

    • figshare.com
    bin
    Updated Feb 14, 2025
    Cite
    Qilin Li (2025). Music Emotion Dataset with 2496 Songs for Music Emotion Recognition (Memo2496) [Dataset]. http://doi.org/10.6084/m9.figshare.25827034.v3
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 14, 2025
    Dataset provided by
    figshare
    Authors
    Qilin Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Music emotion recognition delineates and categorises the spectrum of emotions expressed within musical compositions by conducting a comprehensive analysis of fundamental attributes, including melody, rhythm, and timbre. This task is pivotal for the tailoring of music recommendations, the enhancement of music production, the facilitation of psychotherapeutic interventions, and the execution of market analyses, among other applications. The cornerstone is the establishment of a music emotion recognition dataset annotated with reliable emotional labels, furnishing machine learning algorithms with essential training and validation tools, thereby underpinning the precision and dependability of emotion detection. The Music Emotion Dataset with 2496 Songs (Memo2496), comprising 2496 instrumental musical pieces annotated with valence-arousal (VA) labels and acoustic features, is introduced to advance music emotion recognition and affective computing. The dataset is meticulously annotated by 30 music experts proficient in music theory and devoid of cognitive impairments, ensuring an unbiased perspective. The annotation methodology and experimental paradigm are grounded in previously validated studies, guaranteeing the integrity and high calibre of the data annotations.

    Memo2496 R1, updated by Qilin Li @12Feb2025:
    1. Removed some unannotated raw music data; the music contained in the MusicRawData.zip file is now all annotated.
    2. The 'Music Raw Data.zip' file on FigShare has been updated to contain 2496 songs, consistent with the corpus described in the manuscript. The metadata fields "Title", "Contributing Artists", "Genre", and/or "Album" have been removed to ensure the songs remain anonymous.
    3. Adjusted the file structure; the files on FigShare are now placed in folders named 'Music Raw Data', 'Annotations', 'Features', and 'Data Processing Utilities' to reflect the format of the Data Records section in the manuscript.

    Memo2496 R2, updated by Qilin Li @14Feb2025:
    The download platform source of each song has been added to 'songs_info_all.csv' so that users can search within the platform itself if necessary. This approach aims to balance the privacy requirements of the data with the potential needs of the dataset's users.

  8. ‘Emotion Dataset for Emotion Recognition Tasks’ analyzed by Analyst-2

    • analyst-2.ai
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com), ‘Emotion Dataset for Emotion Recognition Tasks’ analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-emotion-dataset-for-emotion-recognition-tasks-865f/92eb9d37/?iid=000-362&v=presentation
    Explore at:
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of ‘Emotion Dataset for Emotion Recognition Tasks’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/parulpandey/emotion-dataset on 13 February 2022.

    --- Dataset description provided by original source is as follows ---

    A dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper below.

    The authors constructed a set of hashtags to collect a separate dataset of English tweets from the Twitter API belonging to eight basic emotions, including anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. The data has already been preprocessed based on the approach described in their paper.

    An example of 'train' looks as follows. { "label": 0, "text": "im feeling quite sad and sorry for myself but ill snap out of it soon" }

    Starter Notebook

    Exploratory Data Analysis of the emotion dataset

    Acknowledgements

    @inproceedings{saravia-etal-2018-carer,
      title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
      author = "Saravia, Elvis and
       Liu, Hsien-Chi Toby and
       Huang, Yen-Hao and
       Wu, Junlin and
       Chen, Yi-Shin",
      booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
      month = oct # "-" # nov,
      year = "2018",
      address = "Brussels, Belgium",
      publisher = "Association for Computational Linguistics",
      url = "https://www.aclweb.org/anthology/D18-1404",
      doi = "10.18653/v1/D18-1404",
      pages = "3687--3697",
      abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
    }
    

    --- Original source retains full ownership of the source dataset ---

  9. Emotion Recognition Dataset

    • universe.roboflow.com
    zip
    Updated Feb 18, 2025
    Cite
    VietnameseGerman University (2025). Emotion Recognition Dataset [Dataset]. https://universe.roboflow.com/vietnamesegerman-university-mavjh/emotion-recognition-rjl9w
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 18, 2025
    Dataset authored and provided by
    VietnameseGerman University
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Mental Health Monitoring: The emotion recognition model could be used in a mental health tracking app to analyze users' facial expressions during video diaries or calls, providing insights into their emotional state over time.

    2. Customer Service Improvement: Businesses could use this model to monitor customer interactions in stores, analysing the facial expressions of customers to gauge their satisfaction level or immediate reaction to products or services.

    3. Educational and Learning Enhancement: This model could be used in an interactive learning platform to observe students' emotional responses to different learning materials, enabling tailored educational experiences.

    4. Online Content Testing: Marketing or content creation teams could utilize this model to test viewers' emotional reactions to different advertisements or content pieces, improving the impact of their messaging.

    5. Social Robotics: The emotion recognition model could be incorporated in social robots or AI assistants to identify human emotions and respond accordingly, improving their overall user interaction and experience.

  10. Emotion-dataset

    • huggingface.co
    Updated Dec 18, 2024
    + more versions
    Cite
    Alt Data (2024). Emotion-dataset [Dataset]. https://huggingface.co/datasets/altdata/Emotion-dataset
    Explore at:
    Dataset updated
    Dec 18, 2024
    Dataset authored and provided by
    Alt Data
    Description

    The altdata/Emotion-dataset dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  11. Data from: A Multimodal Dataset for Mixed Emotion Recognition

    • zenodo.org
    Updated May 25, 2024
    + more versions
    Cite
    Pei Yang; Niqi Liu; Xinge Liu; Yezhi Shu; Wenqi Ji; Ziqi Ren; Jenny Sheng; Minjing Yu; Ran Yi; Dan Zhang; Yong-Jin Liu (2024). A Multimodal Dataset for Mixed Emotion Recognition [Dataset]. http://doi.org/10.5281/zenodo.11194571
    Explore at:
    Dataset updated
    May 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Pei Yang; Niqi Liu; Xinge Liu; Yezhi Shu; Wenqi Ji; Ziqi Ren; Jenny Sheng; Minjing Yu; Ran Yi; Dan Zhang; Yong-Jin Liu
    Description

    ABSTRACT: Mixed emotions have attracted increasing interest recently, but existing datasets rarely focus on mixed emotion recognition from multimodal signals, hindering the affective computing of mixed emotions. On this basis, we present a multimodal dataset with four kinds of signals recorded while watching mixed and non-mixed emotion videos. To ensure effective emotion induction, we first implemented a rule-based video filtering step to select the videos that could elicit stronger positive, negative, and mixed emotions. Then, an experiment with 80 participants was conducted, in which the data of EEG, GSR, PPG, and frontal face videos were recorded while they watched the selected video clips. We also recorded the subjective emotional rating on PANAS, VAD, and amusement-disgust dimensions. In total, the dataset consists of multimodal signal data and self-assessment data from 73 participants. We also present technical validations for emotion induction and mixed emotion classification from physiological signals and face videos. The average accuracy of the 3-class classification (i.e., positive, negative, and mixed) can reach 80.96% when using an SVM and features from all modalities, which indicates the possibility of identifying mixed emotional states.

  12. Dataset on Emotion with Naturalistic Stimuli (DENS)

    • openneuro.org
    Updated Jul 8, 2023
    + more versions
    Cite
    Sudhakar Mishra; Md. Asif; Uma Shanker Tiwary; Narayanan Srinivasan (2023). Dataset on Emotion with Naturalistic Stimuli (DENS) [Dataset]. http://doi.org/10.18112/openneuro.ds003751.v1.0.6
    Explore at:
    Dataset updated
    Jul 8, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sudhakar Mishra; Md. Asif; Uma Shanker Tiwary; Narayanan Srinivasan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This is the "Emotion" dataset, recorded using a naturalistic paradigm.

    In brief, it contains EEG, ECG and EMG data from 40 subjects emotionally stimulated using naturalistic emotion stimuli. The stimuli are multimedia videos providing context to understand the situated conceptualization of emotions. For details, see the Details about the experiment section.

    Citing this dataset

    Please cite as follows: Sudhakar Mishra, Md. Asif, Uma Shanker Tiwary and Narayanan Srinivasan (2023). Dataset on Emotion with Naturalistic Stimuli (DENS). OpenNeuro. [Dataset] doi: https://doi.org/10.18112/openneuro.ds003751.v1.0.5

    For more information, see the dataset_description.json file.

    License

    This eeg_emotion dataset is made available under the Open Database License: See the LICENSE file. A human readable information can be found at:

    https://opendatacommons.org/licenses/odbl/summary/

    Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/

    Dataset Description

    The dataset_description file describes the metadata for the dataset. Participant details are described in the participants.json and participants.tsv files. Each subject directory contains two directories: beh and eeg. A .tsv file inside the beh folder records the feedback given by the subject on self-assessment scales (valence, arousal, dominance, liking, familiarity, relevance and emotion category), along with the time-stamps of mouse clicks and other details. The eeg folder inside the subject directory contains the raw EEG data in .set & .fdt format, along with information about task events in the _task-emotion_events.tsv file. The stimuli directory contains the stimuli used during the experiment. In addition, the feedback excel sheet participant_details.xlsx filled in by participants is also included. The code directory contains the Python code for data collection, the Python code for data validation and the MATLAB file for pre-processing the raw data.

  13. multimodal-emotion-dataset

    • huggingface.co
    Updated Apr 18, 2025
    Cite
    Prajwal (2025). multimodal-emotion-dataset [Dataset]. https://huggingface.co/datasets/prajubhao/multimodal-emotion-dataset
    Explore at:
    Croissant
    Dataset updated
    Apr 18, 2025
    Authors
    Prajwal
    Description

    Multimodal Emotion Recognition Dataset

    This dataset contains 1000 samples of short text messages with associated emojis and sentiment labels. It is designed for fine-tuning models on multimodal emotion recognition tasks, where both text and emojis contribute to predicting the emotion.

      Dataset Fields
    

    text: a short sentence or phrase.
    emojis: a list of emojis used in the text.
    sentiment: sentiment label with three possible values: positive, negative, neutral.
    … See the full description on the dataset page: https://huggingface.co/datasets/prajubhao/multimodal-emotion-dataset.
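A quick sanity check for records with these three fields might look like the sketch below; the `validate_record` helper and its checks are illustrative, and only the three sentiment values come from the description:

```python
ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}  # from the description

def validate_record(rec):
    """Check one record against the described text/emojis/sentiment schema."""
    return (isinstance(rec.get("text"), str)
            and isinstance(rec.get("emojis"), list)
            and rec.get("sentiment") in ALLOWED_SENTIMENTS)

rec = {"text": "What a great day!", "emojis": ["😄"], "sentiment": "positive"}
print(validate_record(rec))  # True
```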

  14. Facial Emotion Recognition Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    + more versions
    Cite
    uni (2025). Facial Emotion Recognition Dataset [Dataset]. https://universe.roboflow.com/uni-o612z/facial-emotion-recognition
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    uni
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Facial Emotion Recognition

    ## Overview
    
    Facial Emotion Recognition is a dataset for object detection tasks - it contains Emotions annotations for 4,540 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  15. Behavioral Emotion Recognition Dataset (BERD)

    • zenodo.org
    Updated Dec 30, 2024
    Cite
    Youngwug Cho; Myeongul Jung; Jung-Eun Bae; Kwanguk Kim (2024). Behavioral Emotion Recognition Dataset (BERD) [Dataset]. http://doi.org/10.5281/zenodo.12577086
    Explore at:
    Dataset updated
    Dec 30, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Youngwug Cho; Myeongul Jung; Jung-Eun Bae; Kwanguk Kim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset has been updated to a new version (https://zenodo.org/records/14568959).

    The expertise dataset has been added.

    Abstract: The Behavioral Emotion Recognition Dataset (BERD) was developed as part of the research study titled "Behavioral Research Methodologies for Bodily Emotion Recognition." This dataset comprises motion capture data collected from participants performing emotional body movements under various experimental conditions. It is designed to facilitate the development and evaluation of automatic emotion recognition (AER) systems using bodily movement data. The dataset offers insights into the effects of participant acting expertise, motion capture device types, and emotional stimuli on bodily emotion recognition accuracy.

    Key Features:

    1. (Devices) Motion Capture Devices:

    • Marker-based motion capture (Optitrack system with 18 infrared cameras).
    • Pose estimation using RGB videos.
    • Kinect motion capture.
    • Mobile phone motion capture (iPhone 12 with ARKit).

    2. (Stimulus) Emotional Stimulus:

    • Word instructions (e.g., "happy," "sad").
    • Picture stimuli (Karolinska Directed Emotional Faces dataset).
    • Video stimuli (validated emotional film clips).

    3. Emotions Recorded:

    • Seven categories: happy, sad, surprised, angry, disgusted, fearful, and neutral.

    4. Data Format:

    • Skeletal data represented as 3D joint coordinates.
    • Sampling rate: 30 frames per second.
    • File format: CSV.
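Given the 30 fps sampling rate and CSV format stated above, a clip's duration follows from its frame count. The column layout below (a header row, then one row per frame with x/y/z per joint) is an assumed schema for illustration, not BERD's documented structure:

```python
import csv
import io

FPS = 30  # sampling rate stated above

def clip_duration_seconds(csv_text):
    """Duration = number of data rows (frames) divided by the 30 fps rate."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    n_frames = len(rows) - 1  # minus the assumed header row
    return n_frames / FPS

# 90 frames of a single hypothetical joint -> 3 seconds of motion.
demo = "joint0_x,joint0_y,joint0_z\n" + "\n".join("0.1,0.2,0.3" for _ in range(90))
print(clip_duration_seconds(demo))  # 3.0
```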

    Potential Applications:

    • Developing deep learning models for bodily emotion recognition.
    • Studying the impact of data collection conditions on emotion recognition accuracy.

    Citation: If you use this dataset in your research, please cite it as follows:

  16. Data from: Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation

    • data.niaid.nih.gov
    • produccioncientifica.ugr.es
    Updated Nov 10, 2023
    Cite
    Francisco M. Garcia-Moreno (2023). Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8431527
    Explore at:
    Dataset updated
    Nov 10, 2023
    Dataset provided by
    Francisco M. Garcia-Moreno
    Marta Badenes-Sastre
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation

    Description:

    Introducing the Gilman-Adhered FilmClip Emotion Dataset (GAFED) - a cutting-edge compilation of video clips curated explicitly based on the guidelines set by Gilman et al. (2017). This dataset is meticulously structured, leveraging both the realms of film and psychological research. The objective is clear: to induce specific emotional responses with utmost precision and reproducibility. Perfectly tuned for researchers, therapists, and educators, GAFED facilitates an in-depth exploration into the human emotional spectrum using the medium of film.

    Dataset Highlights:

    Gilman's Guidelines: GAFED's foundation is built upon the rigorous criteria and insights provided by Gilman et al., ensuring methodological accuracy and relevance in emotional elicitation.

    Film Titles: Each selected film's title provides an immersive backdrop to the emotions sought to be evoked.

    Emotion Label: A focused emotional response is designated for each clip, reinforcing the consistency in elicitation.

    Clip Duration: Standardized duration of every clip ensures a uniform exposure, leading to consistent response measurements.

    Curated with Precision: Every film clip in GAFED has been reviewed and handpicked, echoing Gilman et al.'s principles, thereby cementing their efficacy in triggering the intended emotion.

    Emotion-Eliciting Video Clips within Dataset:

    | Film               | Targeted Emotion | Duration (seconds) |
    |--------------------|------------------|--------------------|
    | The Lover          | Baseline         | 43                 |
    | American History X | Anger            | 106                |
    | Cry Freedom        | Sadness          | 166                |
    | Alive              | Happiness        | 310                |
    | Scream             | Fear             | 395                |
    The crowning feature of GAFED is its identification of "key moments". These crucial timestamps serve as a bridge between cinema and emotion, guiding researchers to intervals teeming with emotional potency.

    Key Emotional Moments within Dataset:

    | Film               | Targeted Emotion | Key moment timestamps (seconds) |
    |--------------------|------------------|---------------------------------|
    | American History X | Anger            | 36, 57, 68                      |
    | Cry Freedom        | Sadness          | 112, 132, 154                   |
    | Alive              | Happiness        | 227, 270, 289                   |
    | Scream             | Fear             | 23, 42, 79, 226, 279, 299, 334  |

    Based on: Gilman, T. L., et al. (2017). A film set for the elicitation of emotion in research. Behavior Research Methods, 49(6).

    GAFED isn't merely a dataset; it's an amalgamation of cinema and psychology, encapsulating the vastness of human emotion. Tailored to perfection and adhering to Gilman et al.'s insights, it stands as a beacon for researchers exploring the depths of human emotion through film.

  17. Moroccan Dialect Emotion Recognition Dataset

    • ieee-dataport.org
    Updated Jul 15, 2024
    Cite
    Mhamed-amine Soumiaa (2024). Moroccan Dialect Emotion Recognition Dataset [Dataset]. https://ieee-dataport.org/documents/moroccan-dialect-emotion-recognition-dataset
    Explore at:
    Dataset updated
    Jul 15, 2024
    Authors
    Mhamed-amine Soumiaa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Morocco
    Description

    Sad

  18. Chinese Natural Speech Complex Emotion Dataset

    • scidb.cn
    Updated Feb 24, 2025
    Cite
    Xiaolong Wu; Mingxing Xu; Askar Hamdulla; Thomas Fang Zheng (2025). Chinese Natural Speech Complex Emotion Dataset [Dataset]. http://doi.org/10.57760/sciencedb.20968
    Explore at:
    CroissantCroissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Xiaolong Wu; Mingxing Xu; Askar Hamdulla; Thomas Fang Zheng
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Although Chinese speech affective computing has received increasing attention, existing datasets still suffer from defects such as a lack of naturalness, a single pronunciation style, and unreliable annotation, which seriously hinder research in this field. To address these issues, this paper introduces the Chinese Natural Speech Complex Emotion Dataset (CNSCED) to provide natural data resources for Chinese speech affective computing. CNSCED was collected from publicly broadcast civil-dispute and interview television programs in China, reflecting the authentic emotional characteristics of Chinese people in daily life. The dataset comprises 14 hours of speech from 454 speakers of various ages, totaling 15,777 samples.

    Given the inherent complexity and ambiguity of natural emotions, the paper proposes an emotion-vector annotation method: a vector of six meta-emotion dimensions (angry, sad, aroused, happy, surprise, and fear), each with its own intensity, describes any single or complex emotion. CNSCED defines two subtasks: complex emotion classification and complex emotion intensity regression. The authors evaluated the dataset using deep neural network models and provide a baseline result. To the best of their knowledge, CNSCED is the first public Chinese natural-speech complex-emotion dataset, and it is free to use for scientific research.
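    The emotion-vector annotation idea can be sketched in a few lines of code. Note that the dimension ordering, intensity scale, and threshold below are illustrative assumptions, not CNSCED's actual schema:

```python
# Sketch of a six-dimensional emotion-vector annotation: each sample carries
# an intensity per meta-emotion, from which both a classification target
# (emotions present) and a regression target (intensities) follow.

META_EMOTIONS = ("angry", "sad", "aroused", "happy", "surprise", "fear")

def present_emotions(vector, threshold=0.0):
    """Complex-emotion classification: dimensions with intensity above threshold."""
    return [e for e, v in zip(META_EMOTIONS, vector) if v > threshold]

# A complex emotion mixing anger and arousal:
annotation = (0.7, 0.0, 0.4, 0.0, 0.0, 0.0)
labels = present_emotions(annotation)               # classification subtask
intensities = dict(zip(META_EMOTIONS, annotation))  # regression subtask
```

    Because a single vector can have several nonzero dimensions, this representation covers both pure emotions and the mixed, ambiguous states that motivated the dataset.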

  19. Video Emotion Recognition Dataset

    • unidata.pro
    json, mp4
    Updated Mar 19, 2025
    Cite
    Unidata L.L.C-FZ (2025). Video Emotion Recognition Dataset [Dataset]. https://unidata.pro/datasets/video-emotion-recognition-dataset/
    Explore at:
    json, mp4Available download formats
    Dataset updated
    Mar 19, 2025
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Video dataset capturing diverse facial expressions and emotions from 1000+ people, suitable for emotion recognition AI training

  20. Sign Language Emotion datasets

    • ieee-dataport.org
    Updated Oct 22, 2022
    Cite
    Jiangtao Zhang (2022). Sign Language Emotion datasets [Dataset]. https://ieee-dataport.org/documents/sign-language-emotion-datasets
    Explore at:
    Dataset updated
    Oct 22, 2022
    Authors
    Jiangtao Zhang
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    low-negative
