100+ datasets found
  1. emotion

    • huggingface.co
    Updated Jul 14, 2020
    + more versions
    Cite
    DAIR.AI (2020). emotion [Dataset]. https://huggingface.co/datasets/dair-ai/emotion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 14, 2020
    Dataset provided by
    DAIR.AI
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for "emotion"

      Dataset Summary
    

    Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.

      Supported Tasks and Leaderboards
    

    More Information Needed

      Languages
    

    More Information Needed

      Dataset Structure

      Data Instances

    An example looks as follows. { "text": "im feeling quite sad and sorry for myself but… See the full description on the dataset page: https://huggingface.co/datasets/dair-ai/emotion.

  2. Go Emotions: Google Emotions Dataset

    • opendatabay.com
    Updated Jun 8, 2025
    Cite
    Datasimple (2025). Go Emotions: Google Emotions Dataset [Dataset]. https://www.opendatabay.com/data/ai-ml/c98ae93a-abde-4a9f-ad5e-97ed418f598f
    Explore at:
    Available download formats
    Dataset updated
    Jun 8, 2025
    Dataset authored and provided by
    Datasimple
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Reviews & Ratings
    Description

    The Google AI GoEmotions dataset consists of comments from Reddit users with labels of their emotional coloring. GoEmotions is designed to train neural networks to perform deep analysis of the tonality of texts. Most of the existing emotion classification datasets cover certain areas (for example, news headlines and movie subtitles), are small in size and use a scale of only six basic emotions (anger, surprise, disgust, joy, fear, and sadness). The expansion of the emotional spectrum considered in datasets could make it possible to create more sensitive chatbots, models for detecting dangerous behavior on the Internet, as well as improve customer support services.

    The categories of emotions were identified by Google together with psychologists and include 12 positive, 11 negative, 4 ambiguous emotions, and 1 neutral, which makes the dataset suitable for solving tasks that require subtle differentiation between different emotions.

    Source: https://arxiv.org/pdf/2005.00547.pdf Github: https://github.com/google-research/google-research/tree/master/goemotions

    Original Data Source: Go Emotions: Google Emotions Dataset

  3. emotions-dataset

    • huggingface.co
    Updated May 25, 2025
    Cite
    boltuix (2025). emotions-dataset [Dataset]. https://huggingface.co/datasets/boltuix/emotions-dataset
    Explore at:
    Dataset updated
    May 25, 2025
    Authors
    boltuix
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    🌟 Emotions Dataset — Infuse Your AI with Human Feelings! 😊😢😔

    Tap into the Soul of Human Emotions 💖 The Emotions Dataset is your key to unlocking emotional intelligence in AI. With 131,306 text entries labeled across 13 vivid emotions 😊😢😔, this dataset empowers you to build empathetic chatbots 🤖, mental health tools 🩺, social media analyzers 📱, and more!

    The Emotions Dataset is a carefully curated collection designed to elevate emotion classification, sentiment… See the full description on the dataset page: https://huggingface.co/datasets/boltuix/emotions-dataset.

  4. Emotion Recognition Dataset

    • universe.roboflow.com
    Updated Feb 18, 2025
    Cite
    VietnameseGerman University (2025). Emotion Recognition Dataset [Dataset]. https://universe.roboflow.com/vietnamesegerman-university-mavjh/emotion-recognition-rjl9w
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 18, 2025
    Dataset authored and provided by
    VietnameseGerman University
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Mental Health Monitoring: The emotion recognition model could be used in a mental health tracking app to analyze users' facial expressions during video diaries or calls, providing insights into their emotional state over time.

    2. Customer Service Improvement: Businesses could use this model to monitor customer interactions in stores, analyzing the facial expressions of customers to gauge their satisfaction level or immediate reaction to products or services.

    3. Educational and Learning Enhancement: This model could be used in an interactive learning platform to observe students' emotional responses to different learning materials, enabling tailored educational experiences.

    4. Online Content Testing: Marketing or content creation teams could utilize this model to test viewers' emotional reactions to different advertisements or content pieces, improving the impact of their messaging.

    5. Social Robotics: The emotion recognition model could be incorporated in social robots or AI assistants to identify human emotions and respond accordingly, improving their overall user interaction and experience.

  5. CARER Dataset

    • paperswithcode.com
    Updated May 16, 2023
    Cite
    Elvis Saravia; Hsien-Chi Toby Liu; Yen-Hao Huang; Junlin Wu; Yi-Shin Chen (2023). CARER Dataset [Dataset]. https://paperswithcode.com/dataset/emotion
    Explore at:
    Dataset updated
    May 16, 2023
    Authors
    Elvis Saravia; Hsien-Chi Toby Liu; Yen-Hao Huang; Junlin Wu; Yi-Shin Chen
    Description

    CARER is an emotion dataset collected through noisy labels, annotated via distant supervision as in (Go et al., 2009).

    The subset of data provided here corresponds to the six emotions variant described in the paper. The six emotions are anger, fear, joy, love, sadness, and surprise.

  6. Cat Emotions Dataset

    • universe.roboflow.com
    Updated Nov 18, 2023
    Cite
    CATS (2023). Cat Emotions Dataset [Dataset]. https://universe.roboflow.com/cats-xofvm/cat-emotions
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 18, 2023
    Dataset authored and provided by
    CATS
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions
    Description

    Cat Emotions

    ## Overview
    
    Cat Emotions is a dataset for classification tasks - it contains Emotions annotations for 671 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  7. 1,003 People - Emotional Video Data

    • m.nexdata.ai
    • nexdata.ai
    Updated Nov 18, 2023
    Cite
    Nexdata (2023). 1,003 People - Emotional Video Data [Dataset]. https://m.nexdata.ai/datasets/speechrecog/977?source=Github
    Explore at:
    Dataset updated
    Nov 18, 2023
    Dataset authored and provided by
    Nexdata
    Variables measured
    Format, Contributor, Accuracy Rate, Content category, Recording device, Recording condition, Features of annotation
    Description

    Emotional Video Data, including multiple races, multiple indoor scenes, multiple age groups, multiple languages, and multiple emotions (11 types of facial emotions, 15 types of inner emotions). For each sentence in each video, the annotations cover emotion types (both facial and inner emotions), start and end timestamps, and text transcription. This dataset can be used for tasks such as emotion recognition and sentiment analysis, enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are GDPR, CCPA, and PIPL compliant.

  8. GoEmotions Dataset

    • paperswithcode.com
    • tensorflow.org
    • +4more
    + more versions
    Cite
    Dorottya Demszky; Dana Movshovitz-Attias; Jeongwoo Ko; Alan Cowen; Gaurav Nemade; Sujith Ravi, GoEmotions Dataset [Dataset]. https://paperswithcode.com/dataset/goemotions
    Explore at:
    Authors
    Dorottya Demszky; Dana Movshovitz-Attias; Jeongwoo Ko; Alan Cowen; Gaurav Nemade; Sujith Ravi
    Description

    GoEmotions is a corpus of 58k carefully curated comments extracted from Reddit, with human annotations to 27 emotion categories or Neutral.

    Number of examples: 58,009. Number of labels: 27 + Neutral. Maximum sequence length in training and evaluation datasets: 30.

    On top of the raw data, the dataset also includes a version filtered based on rater agreement, which contains a train/test/validation split:

    Size of training dataset: 43,410. Size of test dataset: 5,427. Size of validation dataset: 5,426.

    The emotion categories are: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise.
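    As a quick cross-check of the figures quoted above, the category list and the filtered split sizes can be tallied in a few lines of Python (all numbers copied from this description):

```python
# Sanity-check the GoEmotions statistics quoted in this entry:
# 27 emotion categories plus Neutral, and the rater-agreement-filtered
# train/test/validation split.
labels = [
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise",
]
splits = {"train": 43_410, "test": 5_427, "validation": 5_426}

assert len(labels) == 27          # + Neutral makes 28 classes in total
filtered_total = sum(splits.values())
print(filtered_total)             # 54263 (fewer than the 58,009 raw examples)
```

    The filtered split totals 54,263 examples, which is consistent with it being a subset of the 58,009 raw comments.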

  9. 'Emotion Dataset for Emotion Recognition Tasks' analyzed by Analyst-2

    • analyst-2.ai
    Cite
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com), 'Emotion Dataset for Emotion Recognition Tasks' analyzed by Analyst-2 [Dataset]. https://analyst-2.ai/analysis/kaggle-emotion-dataset-for-emotion-recognition-tasks-865f/92eb9d37/?iid=000-362&v=presentation
    Explore at:
    Dataset authored and provided by
    Analyst-2 (analyst-2.ai) / Inspirient GmbH (inspirient.com)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Analysis of 'Emotion Dataset for Emotion Recognition Tasks' provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/parulpandey/emotion-dataset on 13 February 2022.

    --- Dataset description provided by original source is as follows ---

    A dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper below.

    The authors constructed a set of hashtags to collect a separate dataset of English tweets from the Twitter API belonging to eight basic emotions, including anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. The data has already been preprocessed based on the approach described in their paper.

    An example of 'train' looks as follows. { "label": 0, "text": "im feeling quite sad and sorry for myself but ill snap out of it soon" }
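    To make the integer label in the example readable, it can be mapped back to an emotion name. The id-to-name order below follows the dair-ai/emotion dataset card (0 = sadness through 5 = surprise); treat it as an assumption and verify it against the dataset's own label feature before relying on it:

```python
import json

# Assumed label order for the six-emotion variant (per the
# dair-ai/emotion dataset card); verify before production use.
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]

example = json.loads(
    '{"label": 0, "text": "im feeling quite sad and sorry for myself '
    'but ill snap out of it soon"}'
)
print(EMOTIONS[example["label"]])  # sadness
```

    The decoded name matches the text of the example, which expresses sadness.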

    Starter Notebook

    Exploratory Data Analysis of the emotion dataset

    Acknowledgements

    @inproceedings{saravia-etal-2018-carer,
      title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
      author = "Saravia, Elvis and
       Liu, Hsien-Chi Toby and
       Huang, Yen-Hao and
       Wu, Junlin and
       Chen, Yi-Shin",
      booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
      month = oct # "-" # nov,
      year = "2018",
      address = "Brussels, Belgium",
      publisher = "Association for Computational Linguistics",
      url = "https://www.aclweb.org/anthology/D18-1404",
      doi = "10.18653/v1/D18-1404",
      pages = "3687--3697",
      abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
    }
    

    --- Original source retains full ownership of the source dataset ---

  10. Data from: Kinematic dataset of actors expressing emotions

    • physionet.org
    Updated Jul 7, 2020
    + more versions
    Cite
    Mingming Zhang; Lu Yu; Keye Zhang; Bixuan Du; Bin Zhan; Shaohua Chen; Xiuhao Jiang; Shuai Guo; Jiafeng Zhao; Yang Wang; Bin Wang; Shenglan Liu; Wenbo Luo (2020). Kinematic dataset of actors expressing emotions [Dataset]. http://doi.org/10.13026/kg8b-1t49
    Explore at:
    Dataset updated
    Jul 7, 2020
    Authors
    Mingming Zhang; Lu Yu; Keye Zhang; Bixuan Du; Bin Zhan; Shaohua Chen; Xiuhao Jiang; Shuai Guo; Jiafeng Zhao; Yang Wang; Bin Wang; Shenglan Liu; Wenbo Luo
    License

    https://github.com/MIT-LCP/license-and-dua/tree/master/drafts

    Description

    We produced a kinematic dataset to assist in recognizing cues from all parts of the body that indicate human emotions (happy, sad, angry, fearful, disgust, surprise) and neutral. The present dataset was created using a portable wireless motion capture system. Twenty-two semi-professional actors (50% female) completed performances. A total of 1402 recordings at 125 Hz were collected, consisting of the position and rotation data of 72 anatomical nodes. We hope this dataset will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
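    The per-recording data volume implied by these numbers can be estimated with simple arithmetic. The sketch below assumes 3 position values and 3 rotation values per node; the actual file layout (for example, quaternion rotations with 4 values) may differ:

```python
# Rough per-frame and per-clip sizes for the motion-capture data:
# 72 anatomical nodes, sampled at 125 Hz.
NODES = 72
POS_CHANNELS = 3        # x, y, z position (assumed)
ROT_CHANNELS = 3        # rotation angles per node (assumed; 4 if quaternions)
SAMPLE_RATE_HZ = 125

values_per_frame = NODES * (POS_CHANNELS + ROT_CHANNELS)
print(values_per_frame)            # 432 values per frame

# A hypothetical 10-second performance:
frames = 10 * SAMPLE_RATE_HZ
print(frames * values_per_frame)   # 540000 values per clip
```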

  11. Data from: Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips...

    • produccioncientifica.ugr.es
    • data.niaid.nih.gov
    Updated 2020
    Cite
    Francisco M. Garcia-Moreno; Marta Badenes-Sastre (2020). Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation [Dataset]. https://produccioncientifica.ugr.es/documentos/668fc432b9e7c03b01bd6128?lang=eu
    Explore at:
    Dataset updated
    2020
    Authors
    Francisco M. Garcia-Moreno; Marta Badenes-Sastre
    Description

    Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation

    Description:

    Introducing the Gilman-Adhered FilmClip Emotion Dataset (GAFED) - a cutting-edge compilation of video clips curated explicitly based on the guidelines set by Gilman et al. (2017). This dataset is meticulously structured, leveraging both the realms of film and psychological research. The objective is clear: to induce specific emotional responses with utmost precision and reproducibility. Perfectly tuned for researchers, therapists, and educators, GAFED facilitates an in-depth exploration into the human emotional spectrum using the medium of film.

    Dataset Highlights:

    Gilman's Guidelines: GAFED's foundation is built upon the rigorous criteria and insights provided by Gilman et al., ensuring methodological accuracy and relevance in emotional elicitation.

    Film Titles: Each selected film's title provides an immersive backdrop to the emotions sought to be evoked.

    Emotion Label: A focused emotional response is designated for each clip, reinforcing the consistency in elicitation.

    Clip Duration: Standardized duration of every clip ensures a uniform exposure, leading to consistent response measurements.

    Curated with Precision: Every film clip in GAFED has been reviewed and handpicked, echoing Gilman et al.'s principles, thereby cementing their efficacy in triggering the intended emotion.

    Emotion-Eliciting Video Clips within Dataset:

        Film                 Targeted Emotion   Duration (seconds)
        The Lover            Baseline           43
        American History X   Anger              106
        Cry Freedom          Sadness            166
        Alive                Happiness          310
        Scream               Fear               395

    The crowning feature of GAFED is its identification of "key moments". These crucial timestamps serve as a bridge between cinema and emotion, guiding researchers to intervals teeming with emotional potency.

    Key Emotional Moments within Dataset:

        Film                 Targeted Emotion   Key moment timestamps (seconds)
        American History X   Anger              36, 57, 68
        Cry Freedom          Sadness            112, 132, 154
        Alive                Happiness          227, 270, 289
        Scream               Fear               23, 42, 79, 226, 279, 299, 334

    Based on: Gilman, T. L., et al. (2017). A film set for the elicitation of emotion in research. Behavior Research Methods, 49(6).

    GAFED isn't merely a dataset; it's an amalgamation of cinema and psychology, encapsulating the vastness of human emotion. Tailored to perfection and adhering to Gilman et al.'s insights, it stands as a beacon for researchers exploring the depths of human emotion through film.

  12. Emotion Detection Dataset

    • universe.roboflow.com
    Updated Mar 26, 2025
    Cite
    Computer Vision Projects (2025). Emotion Detection Dataset [Dataset]. https://universe.roboflow.com/computer-vision-projects-zhogq/emotion-detection-y0svj
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    Computer Vision Projects
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Emotion Detection Model for Facial Expressions

    Project Description:

    In this project, we developed an Emotion Detection Model using a curated dataset of 715 facial images, aiming to accurately recognize and categorize expressions into five distinct emotion classes. The emotion classes include Happy, Sad, Fearful, Angry, and Neutral.

    Objectives: - Train a robust machine learning model capable of accurately detecting and classifying facial expressions in real-time. - Implement emotion detection to enhance user experience in applications such as human-computer interaction, virtual assistants, and emotion-aware systems.

    Methodology:

    1. Data Collection and Preprocessing:

      • Assembled a diverse dataset of 715 images featuring individuals expressing different emotions.
      • Employed Roboflow for efficient data preprocessing, handling image augmentation and normalization.
    2. Model Architecture:

      • Utilized a convolutional neural network (CNN) architecture to capture spatial hierarchies in facial features.
      • Implemented a multi-class classification approach to categorize images into the predefined emotion classes.
    3. Training and Validation:

      • Split the dataset into training and validation sets for model training and evaluation.
      • Fine-tuned the model parameters to optimize accuracy and generalization.
    4. Model Evaluation:

      • Evaluated the model's performance on an independent test set to assess its ability to generalize to unseen data.
      • Analyzed confusion matrices and classification reports to understand the model's strengths and areas for improvement.
    5. Deployment and Integration:

      • Deployed the trained emotion detection model for real-time inference.
      • Integrated the model into applications, allowing users to interact with systems based on detected emotions.

    Results: The developed Emotion Detection Model demonstrates high accuracy in recognizing and classifying facial expressions across the defined emotion classes. This project lays the foundation for integrating emotion-aware systems into various applications, fostering more intuitive and responsive interactions.

  13. OMG-Emotion Dataset

    • paperswithcode.com
    Updated Apr 27, 2020
    + more versions
    Cite
    Pablo Barros; Nikhil Churamani; Egor Lakomkin; Henrique Siqueira; Alexander Sutherland; Stefan Wermter (2020). OMG-Emotion Dataset [Dataset]. https://paperswithcode.com/dataset/omg-emotion
    Explore at:
    Dataset updated
    Apr 27, 2020
    Authors
    Pablo Barros; Nikhil Churamani; Egor Lakomkin; Henrique Siqueira; Alexander Sutherland; Stefan Wermter
    Description

    The One-Minute Gradual-Emotional Behavior (OMG-Emotion) dataset is composed of YouTube videos which are around a minute in length and are annotated taking into consideration a continuous emotional behavior. The videos were selected using a crawler technique that uses specific keywords based on long-term emotional behaviors such as "monologues", "auditions", "dialogues" and "emotional scenes".

    It contains 567 emotion videos with an average length of 1 minute, collected from a variety of YouTube channels. The videos were separated into clips based on utterances, and each utterance was annotated by at least five independent subjects using the Amazon Mechanical Turk tool.

  14. emotion

    • huggingface.co
    Updated Mar 11, 2023
    Cite
    Massive Text Embedding Benchmark (2023). emotion [Dataset]. https://huggingface.co/datasets/mteb/emotion
    Explore at:
    Dataset updated
    Mar 11, 2023
    Dataset authored and provided by
    Massive Text Embedding Benchmark
    License

    https://choosealicense.com/licenses/unknown/

    Description

    EmotionClassification: an MTEB (Massive Text Embedding Benchmark) dataset

    Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.

    Task category: t2c

    Domains: Social, Written

    Reference: https://www.aclweb.org/anthology/D18-1404

      How to evaluate on this task
    

    You can evaluate an embedding model on this dataset using the following code:

    import mteb
    task = mteb.get_tasks(["EmotionClassification"])… See the full description on the dataset page: https://huggingface.co/datasets/mteb/emotion.

  15. Data from: K-EmoCon, a multimodal sensor dataset for continuous emotion...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 19, 2024
    + more versions
    Cite
    Auk Kim (2024). K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3762961
    Explore at:
    Dataset updated
    Feb 19, 2024
    Dataset provided by
    Soowon Kang
    Alice Oh
    Auk Kim
    Uichin Lee
    Ahsan Habib Khandoker
    Leontios Hadjileontiadis
    Yong Jeong
    Cheul Young Park
    Narae Cha
    Description

    ABSTRACT: Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains with the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild as they were collected in constrained environments. Therefore, studying emotions in the context of social interactions requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at intervals of every 5 seconds while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multiperspective assessment of emotions during social interactions.

    Changelog (last updated: Jul 7, 2020)

    • Version 1.0.0 (Jul 7, 2020):

      • Updated emotion_annotations.tar.gz:
        • Updated aggregated external annotations to support the reproduction of technical validation results.
    • Version 0.2.0 (May 11, 2020):

      • Added data_quality_tables.tar.gz
      • Updated emotion_annotations.tar.gz
        • Newly added aggregated external annotations.
      • Updated metadata.tar.gz:
        • Added a new column to data_availability.csv to show the availability of aggregated external annotations.
    • Version 0.1.0 (Apr 25, 2020):

      • Added debate_audios.tar.gz
      • Added debate_recordings.tar.gz
      • Added e4_data.tar.gz
      • Added emotion_annotations.tar.gz (self, partner, external)
      • Added metadata.tar.gz
      • Added neurosky_polar_data.tar.gz

  16. Music and Animal Basic Emotions

    • kaggle.com
    • data.mendeley.com
    • +1more
    Updated Oct 11, 2023
    Cite
    Jocelyn Dumlao (2023). Music and Animal Basic Emotions [Dataset]. https://www.kaggle.com/datasets/jocelyndumlao/music-and-animal-basic-emotions
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 11, 2023
    Dataset provided by
    Kaggle
    Authors
    Jocelyn Dumlao
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The dataset includes sound files for music and animal basic emotions

    Categories

    Emotion

    Acknowledgements & Source

    Kseniia Sapozhnikova

    Data Source

    View Details

    Image Source: 6 Emotions PowerPoint Template - PPT Slides

    Please don't forget to upvote if you find this useful.

  17. SMILE Twitter Emotion dataset

    • figshare.com
    Updated Apr 21, 2016
    Cite
    Bo Wang; Adam Tsakalidis; Maria Liakata; Arkaitz Zubiaga; Rob Procter; Eric Jensen (2016). SMILE Twitter Emotion dataset [Dataset]. http://doi.org/10.6084/m9.figshare.3187909.v2
    Explore at:
    Available download formats: txt
    Dataset updated
    Apr 21, 2016
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    Bo Wang; Adam Tsakalidis; Maria Liakata; Arkaitz Zubiaga; Rob Procter; Eric Jensen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was collected and annotated for the SMILE project (http://www.culturesmile.org). This collection of tweets mentioning 13 Twitter handles associated with British museums was gathered between May 2013 and June 2015. It was created for the purpose of classifying emotions expressed on Twitter towards arts and cultural experiences in museums. It contains 3,085 tweets, with 5 emotions, namely anger, disgust, happiness, surprise and sadness. Please see our paper "SMILE: Twitter Emotion Classification using Domain Adaptation" for more details of the dataset. License: the annotations are provided under a CC-BY license, while Twitter retains the ownership and rights of the content of the tweets.

  18. eSEEd: emotional State Estimation based on Eye-tracking dataset

    • explore.openaire.eu
    • data.niaid.nih.gov
    • +1more
    Updated Jan 5, 2022
    + more versions
    Cite
    Vasileios Skaramagkas; Emmanouil Ktistakis; Dimitris Manousos; Eleni Kazantzaki; Nikolaos S. Tachos; Evanthia Tripoliti; Dimitrios I. Fotiadis; Manolis Tsiknakis (2022). eSEEd: emotional State Estimation based on Eye-tracking dataset [Dataset]. http://doi.org/10.5281/zenodo.5775673
    Explore at:
    Dataset updated
    Jan 5, 2022
    Authors
    Vasileios Skaramagkas; Emmanouil Ktistakis; Dimitris Manousos; Eleni Kazantzaki; Nikolaos S. Tachos; Evanthia Tripoliti; Dimitrios I. Fotiadis; Manolis Tsiknakis
    Description

    We present eSEEd, an emotional State Estimation based on Eye-tracking database. Eye movements of 48 participants were recorded as they watched 10 emotion-evoking videos, each of them followed by a neutral video. Participants rated five emotions (tenderness, anger, disgust, sadness, neutral) on a scale from 0 to 10, later translated in terms of emotional arousal and valence levels. Furthermore, each participant filled in 3 self-assessment questionnaires. An extensive analysis of the participants' answers to the questionnaires' self-assessment scores, as well as their ratings during the experiments, is presented. Moreover, eye and gaze features were extracted from the low-level recorded eye metrics, and their correlations with the participants' ratings are investigated. Finally, analysis and results are presented for machine learning approaches for the classification of various arousal and valence levels based solely on eye and gaze features. The dataset is made publicly available, and we encourage other researchers to use it for testing new methods and analytic pipelines for the estimation of an individual's affective state.

    To use this dataset, please cite: Skaramagkas, V.; Ktistakis, E.; Manousos, D.; Kazantzaki, E.; Tachos, N.S.; Tripoliti, E.; Fotiadis, D.I.; Tsiknakis, M. eSEE-d: Emotional State Estimation Based on Eye-Tracking Dataset. Brain Sci. 2023, 13, 589. https://doi.org/10.3390/brainsci13040589

    This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 826429 (Project: SeeFar). This paper reflects only the author's view, and the Commission is not responsible for any use that may be made of the information it contains.

  19. Music Emotion Dataset with 2496 Songs for Music Emotion Recognition (Memo2496)

    • figshare.com
    bin
    Updated Feb 14, 2025
    Cite
    Qilin Li (2025). Music Emotion Dataset with 2496 Songs for Music Emotion Recognition (Memo2496) [Dataset]. http://doi.org/10.6084/m9.figshare.25827034.v3
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 14, 2025
    Dataset provided by
    figshare
    Authors
    Qilin Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Music emotion recognition delineates and categorises the spectrum of emotions expressed within musical compositions by conducting a comprehensive analysis of fundamental attributes, including melody, rhythm, and timbre. This task is pivotal for tailoring music recommendations, enhancing music production, facilitating psychotherapeutic interventions, and executing market analyses, among other applications. The cornerstone is the establishment of a music emotion recognition dataset annotated with reliable emotional labels, furnishing machine learning algorithms with essential training and validation tools and thereby underpinning the precision and dependability of emotion detection. The Music Emotion Dataset with 2496 Songs (Memo2496), comprising 2496 instrumental musical pieces annotated with valence-arousal (VA) labels and acoustic features, is introduced to advance music emotion recognition and affective computing. The dataset is meticulously annotated by 30 music experts proficient in music theory and devoid of cognitive impairments, ensuring an unbiased perspective. The annotation methodology and experimental paradigm are grounded in previously validated studies, guaranteeing the integrity and high calibre of the data annotations.

    Memo2496 R1, updated by Qilin Li @12Feb2025:
    1. Removed some unannotated raw music data; the music contained in 'MusicRawData.zip' is now all annotated.
    2. The 'Music Raw Data.zip' file on FigShare has been updated to contain 2496 songs, consistent with the corpus described in the manuscript. The metadata fields "Title", "Contributing Artists", "Genre", and/or "Album" have been removed to ensure the songs remain anonymous.
    3. Adjusted the file structure: the files on FigShare are now placed in folders named 'Music Raw Data', 'Annotations', 'Features', and 'Data Processing Utilities' to reflect the format of the Data Records section in the manuscript.

    Memo2496 R2, updated by Qilin Li @14Feb2025: The source platform from which each song was downloaded has been added to 'songs_info_all.csv' to enable users to search within the platform itself if necessary. This approach aims to balance the privacy requirements of the data with the potential needs of the dataset's users.
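    Valence-arousal labels like Memo2496's are often bucketed into the four quadrants of Russell's circumplex for coarse classification. The sketch below assumes a CSV with `song_id`, `valence`, and `arousal` columns on a scale centred at 5; the column names, scale, and quadrant names are illustrative assumptions, not the dataset's actual schema:

    ```python
    # Bucketing valence-arousal (VA) annotations into Russell-style quadrants.
    # The CSV layout and value range here are assumptions for illustration.
    import csv
    import io

    sample = """song_id,valence,arousal
    0001,7.2,6.8
    0002,3.1,7.5
    0003,2.4,2.9
    """.replace("    ", "")  # strip indentation from the inline sample

    def quadrant(valence, arousal, centre=5.0):
        # High/low valence x high/low arousal -> one of four coarse labels.
        v, a = valence >= centre, arousal >= centre
        return {
            (True, True): "happy/excited",
            (False, True): "angry/tense",
            (False, False): "sad/depressed",
            (True, False): "calm/content",
        }[(v, a)]

    rows = csv.DictReader(io.StringIO(sample))
    labels = {r["song_id"]: quadrant(float(r["valence"]), float(r["arousal"]))
              for r in rows}
    print(labels)
    ```

    Continuous VA regression (predicting the raw scores) is the other common formulation; the quadrant mapping is just the simplest way to turn the annotations into classification targets.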

  20. Art&Emotions Dataset

    • zenodo.org
    • data.niaid.nih.gov
    csv, zip
    Updated Aug 29, 2023
    Cite
    Alessio Bosca (2023). Art&Emotions Dataset [Dataset]. http://doi.org/10.5281/zenodo.8296750
    Explore at:
    Available download formats: csv, zip
    Dataset updated
    Aug 29, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alessio Bosca
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Art&Emotion experiment description


    The Art & Emotions dataset was collected in the scope of the EU-funded research project SPICE (https://cordis.europa.eu/project/id/870811) with the goal of investigating the relationship between art and emotions and collecting written data (User Generated Content) in the domain of arts in all the languages of the SPICE project (fi, en, es, he, it). The data was collected through a set of Google Forms (one for each language) and was used in the project (along with the other datasets collected by museums in the different project use cases) to train and test Emotion Detection Models.

    The experiment consists of 12 artworks, chosen from a group of artworks provided by the GAM Museum of Turin (https://www.gamtorino.it/), one of the project partners. Each artwork is presented in a different section of the form; for each artwork, the user is asked to answer 5 open questions:

    1. What do you see in this picture? Write what strikes you most in this image.

    2. What does this artwork make you think about? Write the thoughts and memories that the picture evokes.

    3. How does this painting make you feel? Write the feelings and emotions that the picture evokes in you.

    4. What title would you give to this artwork?

    5. Now choose one or more emoji to associate with your feelings looking at this artwork. You can also select "other" and insert other emojis by copying them from this link: https://emojipedia.org/


    For each of the artworks, the user can decide to skip to the next artwork if they do not like the one in front of them, or to go back to the previous artworks and modify their answers. It is not mandatory to answer all the questions for a given artwork.

    The question about emotions is left open so as not to force the person to choose emotions from a fixed list of tags tied to a particular model (e.g. Plutchik's), leaving them free to express the different shades of emotion that can be felt.

    Before getting to the heart of the experiment, with the artwork sections, the user is asked to leave some personal information (anonymously), to help us get an idea of the type of users who participated in the experiment.

    The questions are:

    1. Age (open)
    2. Gender (male, female, prefer not to say, other (open))
    3. How would you define your relationship with art?
    • My job is related to the art world
    • I am passionate about the art
    • I am a little interested in art
    • I am not interested in art

    4. Do you like going to museums or art exhibitions?

    • I like to visit museums frequently
    • I go occasionally to museums or art exhibitions
    • I rarely visit museums or art exhibitions

    ---------------------

    Dataset structure:

    • FI.csv: form data (personal data + open questions) in Finnish (UTF-8)
    • EN.csv: form data (personal data + open questions) in English (UTF-8)
    • ES.csv: form data (personal data + open questions) in Spanish (UTF-8)
    • HE.csv: form data (personal data + open questions) in Hebrew (UTF-8)
    • IT.csv: form data (personal data + open questions) in Italian (UTF-8)
    • artworks.csv: the list of artworks including title, author, picture name (the pictures can be found in pictures.zip) and the mapping between the columns in the form data and the questions about that artwork
    • pictures.zip: the jpeg of the artworks
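    The per-language files listed above share the same layout (personal data plus open questions), so a natural first step is merging them into one table tagged with language. The sketch below uses inline stand-ins for the CSV contents so it is self-contained; the column names are illustrative assumptions (the real mapping between form columns and artwork questions lives in artworks.csv):

    ```python
    # Combining the per-language form CSVs (FI.csv, EN.csv, ...) into one
    # list of rows tagged with their language code. Inline strings stand in
    # for the real files; column names here are assumptions.
    import csv
    import io

    files = {
        "en": "age,gender,art_relation\n34,female,passionate\n",
        "it": "age,gender,art_relation\n27,male,little_interested\n",
    }

    combined = []
    for lang, text in files.items():
        for row in csv.DictReader(io.StringIO(text)):
            row["lang"] = lang  # tag each answer row with its source language
            combined.append(row)

    print(len(combined), combined[0]["lang"])
    ```

    With real files you would replace the `files` dict with `open("EN.csv", encoding="utf-8")` handles; all five CSVs are UTF-8 per the listing above.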