https://choosealicense.com/licenses/other/
Dataset Card for "emotion"
Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
Supported Tasks and Leaderboards
More Information Needed
Languages
More Information Needed
Dataset Structure
Data Instances
An example looks as follows. { "text": "im feeling quite sad and sorry for myself but… See the full description on the dataset page: https://huggingface.co/datasets/dair-ai/emotion.
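For quick orientation (not part of the original card), the dataset can typically be loaded with the Hugging Face `datasets` library; the split layout noted below is an assumption based on the standard hub card.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library is installed
# (`pip install datasets`) and the repo keeps its usual train/validation/test splits.
from datasets import load_dataset

ds = load_dataset("dair-ai/emotion")   # default config with the six basic emotions

print(ds)              # DatasetDict with train / validation / test splits (assumed)
print(ds["train"][0])  # e.g. {"text": "im feeling quite sad ...", "label": 0}
```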
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Google AI GoEmotions dataset consists of comments from Reddit users with labels of their emotional coloring. GoEmotions is designed to train neural networks to perform deep analysis of the tonality of texts. Most of the existing emotion classification datasets cover certain areas (for example, news headlines and movie subtitles), are small in size and use a scale of only six basic emotions (anger, surprise, disgust, joy, fear, and sadness). The expansion of the emotional spectrum considered in datasets could make it possible to create more sensitive chatbots, models for detecting dangerous behavior on the Internet, as well as improve customer support services.
The categories of emotions were identified by Google together with psychologists and include 12 positive, 11 negative, and 4 ambiguous emotions, plus 1 neutral category, which makes the dataset suitable for solving tasks that require subtle differentiation between different emotions.
Source: https://arxiv.org/pdf/2005.00547.pdf; GitHub: https://github.com/google-research/google-research/tree/master/goemotions
Original Data Source: Go Emotions: Google Emotions Dataset
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Emotions Dataset: Infuse Your AI with Human Feelings!
Tap into the Soul of Human Emotions. The Emotions Dataset is your key to unlocking emotional intelligence in AI. With 131,306 text entries labeled across 13 vivid emotions, this dataset empowers you to build empathetic chatbots, mental health tools, social media analyzers, and more!
The Emotions Dataset is a carefully curated collection designed to elevate emotion classification, sentiment… See the full description on the dataset page: https://huggingface.co/datasets/boltuix/emotions-dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Mental Health Monitoring: The emotion recognition model could be used in a mental health tracking app to analyze users' facial expressions during video diaries or calls, providing insights into their emotional state over time.
Customer Service Improvement: Businesses could use this model to monitor customer interactions in stores, analyzing the facial expressions of customers to gauge their satisfaction level or immediate reaction to products or services.
Educational and Learning Enhancement: This model could be used in an interactive learning platform to observe students' emotional responses to different learning materials, enabling tailored educational experiences.
Online Content Testing: Marketing or content creation teams could utilize this model to test viewers' emotional reactions to different advertisements or content pieces, improving the impact of their messaging.
Social Robotics: The emotion recognition model could be incorporated in social robots or AI assistants to identify human emotions and respond accordingly, improving their overall user interaction and experience.
CARER is an emotion dataset collected through noisy labels, annotated via distant supervision as in Go et al. (2009).
The subset of data provided here corresponds to the six emotions variant described in the paper. The six emotions are anger, fear, joy, love, sadness, and surprise.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Cat Emotions is a dataset for classification tasks; it contains emotion annotations for 671 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Emotional Video Data, including multiple races, multiple indoor scenes, multiple age groups, multiple languages, and multiple emotions (11 types of facial emotions, 15 types of inner emotions). For each sentence in each video, the annotations cover emotion types (including facial emotions and inner emotions), start and end timestamps, and text transcription. This dataset can be used for tasks such as emotion recognition and sentiment analysis, enhancing model performance in real and complex tasks. Quality tested by various AI companies. We strictly adhere to data protection regulations and privacy standards, ensuring the maintenance of user privacy and legal rights throughout the data collection, storage, and usage processes; our datasets are all GDPR, CCPA, and PIPL compliant.
GoEmotions is a corpus of 58k carefully curated comments extracted from Reddit, with human annotations to 27 emotion categories or Neutral.
Number of examples: 58,009. Number of labels: 27 + Neutral. Maximum sequence length in training and evaluation datasets: 30.
On top of the raw data, the dataset also includes a version filtered based on rater agreement, which contains a train/test/validation split:
Size of training dataset: 43,410. Size of test dataset: 5,427. Size of validation dataset: 5,426.
The emotion categories are: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, surprise.
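For illustration, here is a minimal sketch of loading GoEmotions with the Hugging Face `datasets` library; the `"simplified"` config name is an assumption about how the hub version exposes the rater-agreement-filtered split described above.

```python
# Minimal sketch, assuming GoEmotions is published on the Hugging Face hub as
# "go_emotions" and that the "simplified" config holds the filtered split.
from datasets import load_dataset

go = load_dataset("go_emotions", "simplified")

print({split: len(go[split]) for split in go})               # expected 43,410 / 5,426 / 5,427
label_names = go["train"].features["labels"].feature.names   # multi-label ClassLabel names
print(len(label_names), "labels, e.g.:", label_names[:5])
```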
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of "Emotion Dataset for Emotion Recognition Tasks" provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://www.kaggle.com/parulpandey/emotion-dataset on 13 February 2022.
--- Dataset description provided by original source is as follows ---
A dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper below.
The authors constructed a set of hashtags to collect a separate dataset of English tweets from the Twitter API belonging to eight basic emotions, including anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. The data has already been preprocessed based on the approach described in their paper.
An example of 'train' looks as follows.
{
"label": 0,
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon"
}
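To make the integer label in this example human-readable, the dataset's `ClassLabel` metadata can be used instead of hard-coding the label order; a minimal sketch, assuming the dataset is loaded from the Hugging Face hub as "dair-ai/emotion":

```python
# Minimal sketch: map the integer label to its emotion name via ClassLabel metadata.
from datasets import load_dataset

train = load_dataset("dair-ai/emotion", split="train")
example = train[0]
label_feature = train.features["label"]

print(example["text"])
print(example["label"], "->", label_feature.int2str(example["label"]))  # e.g. 0 -> "sadness"
```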
Exploratory Data Analysis of the emotion dataset
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
--- Original source retains full ownership of the source dataset ---
https://github.com/MIT-LCP/license-and-dua/tree/master/drafts
We produced a kinematic dataset to assist in recognizing cues from all parts of the body that indicate human emotions (happy, sad, angry, fearful, disgust, surprise) and neutral. The present dataset was created using a portable wireless motion capture system. Twenty-two semi-professional actors (50% female) completed performances. A total of 1402 recordings at 125 Hz were collected, consisting of the position and rotation data of 72 anatomical nodes. We hope this dataset will contribute to multiple fields of research and practice, including social neuroscience, psychiatry, computer vision, and biometric and information forensics.
Gilman-Adhered FilmClip Emotion Dataset (GAFED): Tailored Clips for Emotional Elicitation
Description:
Introducing the Gilman-Adhered FilmClip Emotion Dataset (GAFED) - a cutting-edge compilation of video clips curated explicitly based on the guidelines set by Gilman et al. (2017). This dataset is meticulously structured, leveraging both the realms of film and psychological research. The objective is clear: to induce specific emotional responses with utmost precision and reproducibility. Perfectly tuned for researchers, therapists, and educators, GAFED facilitates an in-depth exploration into the human emotional spectrum using the medium of film.
Dataset Highlights:
Gilman's Guidelines: GAFED's foundation is built upon the rigorous criteria and insights provided by Gilman et al., ensuring methodological accuracy and relevance in emotional elicitation.
Film Titles: Each selected film's title provides an immersive backdrop to the emotions sought to be evoked.
Emotion Label: A focused emotional response is designated for each clip, reinforcing the consistency in elicitation.
Clip Duration: Standardized duration of every clip ensures a uniform exposure, leading to consistent response measurements.
Curated with Precision: Every film clip in GAFED has been reviewed and handpicked, echoing Gilman et al.'s principles, thereby cementing their efficacy in triggering the intended emotion.
Emotion-Eliciting Video Clips within Dataset:
| Film | Targeted Emotion | Duration (seconds) |
| --- | --- | --- |
| The Lover | Baseline | 43 |
| American History X | Anger | 106 |
| Cry Freedom | Sadness | 166 |
| Alive | Happiness | 310 |
| Scream | Fear | 395 |
The crowning feature of GAFED is its identification of "key moments". These crucial timestamps serve as a bridge between cinema and emotion, guiding researchers to intervals teeming with emotional potency.
Key Emotional Moments within Dataset:
| Film | Targeted Emotion | Key moment timestamps (seconds) |
| --- | --- | --- |
| American History X | Anger | 36, 57, 68 |
| Cry Freedom | Sadness | 112, 132, 154 |
| Alive | Happiness | 227, 270, 289 |
| Scream | Fear | 23, 42, 79, 226, 279, 299, 334 |
Based on: Gilman, T. L., et al. (2017). A film set for the elicitation of emotion in research. Behavior Research Methods, 49(6).
GAFED isn't merely a dataset; it's an amalgamation of cinema and psychology, encapsulating the vastness of human emotion. Tailored to perfection and adhering to Gilman et al.'s insights, it stands as a beacon for researchers exploring the depths of human emotion through film.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Emotion Detection Model for Facial Expressions
Project Description:
In this project, we developed an Emotion Detection Model using a curated dataset of 715 facial images, aiming to accurately recognize and categorize expressions into five distinct emotion classes. The emotion classes include Happy, Sad, Fearful, Angry, and Neutral.
Objectives: - Train a robust machine learning model capable of accurately detecting and classifying facial expressions in real-time. - Implement emotion detection to enhance user experience in applications such as human-computer interaction, virtual assistants, and emotion-aware systems.
Methodology: 1. Data Collection and Preprocessing: - Assembled a diverse dataset of 715 images featuring individuals expressing different emotions. - Employed Roboflow for efficient data preprocessing, handling image augmentation and normalization.
Model Architecture:
Training and Validation:
Model Evaluation:
Deployment and Integration:
Results: The developed Emotion Detection Model demonstrates high accuracy in recognizing and classifying facial expressions across the defined emotion classes. This project lays the foundation for integrating emotion-aware systems into various applications, fostering more intuitive and responsive interactions.
The One-Minute Gradual-Emotional Behavior (OMG-Emotion) dataset is composed of YouTube videos which are around a minute in length and are annotated with continuous emotional behavior in mind. The videos were selected using a crawler technique that uses specific keywords based on long-term emotional behaviors such as "monologues", "auditions", "dialogues" and "emotional scenes".
It contains 567 emotion videos with an average length of 1 minute, collected from a variety of YouTube channels. The videos were separated into clips based on utterances, and each utterance was annotated by at least five independent subjects using the Amazon Mechanical Turk tool.
https://choosealicense.com/licenses/unknown/
EmotionClassification: an MTEB (Massive Text Embedding Benchmark) dataset
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise.
Task category: t2c
Domains: Social, Written
Reference: https://www.aclweb.org/anthology/D18-1404
How to evaluate on this task
You can evaluate an embedding model on this dataset using the following code: import mteb
task = mteb.get_tasks(["EmotionClassification"])… See the full description on the dataset page: https://huggingface.co/datasets/mteb/emotion.
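The snippet above is truncated on the card; below is a minimal sketch of the usual MTEB evaluation flow (the SentenceTransformer model name is only a placeholder, not part of the original card).

```python
# Minimal sketch, assuming the `mteb` and `sentence-transformers` packages are installed.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")             # placeholder embedding model
tasks = mteb.get_tasks(tasks=["EmotionClassification"])     # select only this task
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")    # scores are also written to disk
```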
ABSTRACT: Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains with the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild as they were collected in constrained environments. Therefore, studying emotions in the context of social interactions requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at intervals of every 5 seconds while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset accommodating the multiperspective assessment of emotions during social interactions.
+---------------------------------------+
| Changelog (last updated: Jul 7, 2020) |
+---------------------------------------+
Version 1.0.0 (Jul 7, 2020):
Version 0.2.0 (May 11, 2020):
Version 0.1.0 (Apr 25, 2020):
https://creativecommons.org/publicdomain/zero/1.0/
The dataset includes sound files for music and animal basic emotions
Emotion
Kseniia Sapozhnikova
Image Source: 6 Emotions PowerPoint Template - PPT Slides
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was collected and annotated for the SMILE project (http://www.culturesmile.org). This collection of tweets mentioning 13 Twitter handles associated with British museums was gathered between May 2013 and June 2015. It was created for the purpose of classifying emotions expressed on Twitter towards arts and cultural experiences in museums. It contains 3,085 tweets, with 5 emotions, namely anger, disgust, happiness, surprise, and sadness. Please see our paper "SMILE: Twitter Emotion Classification using Domain Adaptation" for more details of the dataset. License: The annotations are provided under a CC-BY license, while Twitter retains the ownership and rights of the content of the tweets.
We present eSEE-d (emotional State Estimation based on Eye-tracking database). Eye movements of 48 participants were recorded as they watched 10 emotion-evoking videos, each of them followed by a neutral video. Participants rated five emotions (tenderness, anger, disgust, sadness, neutral) on a scale from 0 to 10, later translated in terms of emotional arousal and valence levels. Furthermore, each participant filled in 3 self-assessment questionnaires. An extensive analysis of the participants' answers to the questionnaires' self-assessment scores, as well as their ratings during the experiments, is presented. Moreover, eye and gaze features were extracted from the low-level recorded eye metrics, and their correlations with the participants' ratings are investigated. Finally, analysis and results are presented for machine learning approaches for the classification of various arousal and valence levels based solely on eye and gaze features. The dataset is made publicly available, and we encourage other researchers to use it for testing new methods and analytic pipelines for the estimation of an individual's affective state.
TO USE THIS DATASET PLEASE CITE: Skaramagkas, V.; Ktistakis, E.; Manousos, D.; Kazantzaki, E.; Tachos, N.S.; Tripoliti, E.; Fotiadis, D.I.; Tsiknakis, M. eSEE-d: Emotional State Estimation Based on Eye-Tracking Dataset. Brain Sci. 2023, 13, 589. https://doi.org/10.3390/brainsci13040589
This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 826429 (Project: SeeFar). This paper reflects only the authors' view and the Commission is not responsible for any use that may be made of the information it contains.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Music emotion recognition delineates and categorises the spectrum of emotions expressed within musical compositions by conducting a comprehensive analysis of fundamental attributes, including melody, rhythm, and timbre. This task is pivotal for the tailoring of music recommendations, the enhancement of music production, the facilitation of psychotherapeutic interventions, and the execution of market analyses, among other applications. The cornerstone is the establishment of a music emotion recognition dataset annotated with reliable emotional labels, furnishing machine learning algorithms with essential training and validation tools, thereby underpinning the precision and dependability of emotion detection. The Music Emotion Dataset with 2496 Songs (Memo2496), comprising 2496 instrumental musical pieces annotated with valence-arousal (VA) labels and acoustic features, is introduced to advance music emotion recognition and affective computing. The dataset is meticulously annotated by 30 music experts proficient in music theory and devoid of cognitive impairments, ensuring an unbiased perspective. The annotation methodology and experimental paradigm are grounded in previously validated studies, guaranteeing the integrity and high calibre of the data annotations.
Memo2496 R1, updated by Qilin Li @12Feb2025:
1. Removed some unannotated music raw data; the music contained in the MusicRawData.zip file is now all annotated music.
2. The "Music Raw Data.zip" file on FigShare has been updated to contain 2496 songs, consistent with the corpus described in the manuscript. The metadata fields "Title", "Contributing Artists", "Genre", and/or "Album" have been removed to ensure the songs remain anonymous.
3. Adjusted the file structure; the files on FigShare are now placed in folders named "Music Raw Data", "Annotations", "Features", and "Data Processing Utilities" to reflect the format of the Data Records section in the manuscript.
Memo2496 R2, updated by Qilin Li @14Feb2025:
The source of each song's download platform has been added in "songs_info_all.csv" to enable users to search within the platform itself if necessary. This approach aims to balance the privacy requirements of the data with the potential needs of the dataset's users.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Art & Emotion experiment description
The Art & Emotions dataset was collected in the scope of the EU-funded research project SPICE (https://cordis.europa.eu/project/id/870811) with the goal of investigating the relationship between art and emotions and collecting written data (User Generated Content) in the domain of arts in all the languages of the SPICE project (fi, en, es, he, it). The data was collected through a set of Google Forms (one for each language) and was used in the project (along with the other datasets collected by museums in the different project use cases) to train and test Emotion Detection Models within the project.
The experiment consists of 12 artworks, chosen from a group of artworks provided by the GAM Museum of Turin (https://www.gamtorino.it/), one of the project partners. Each artwork is presented in a different section of the form; for each of the artworks, the user is asked to answer 5 open questions:
1. What do you see in this picture? Write what strikes you most in this image.
2. What does this artwork make you think about? Write the thoughts and memories that the picture evokes.
3. How does this painting make you feel? Write the feelings and emotions that the picture evokes in you
4. What title would you give to this artwork?
5. Now choose one or more emoji to associate with your feelings looking at this artwork. You can also select "other" and insert other emojis by copying them from this link: https://emojipedia.org/
For each of the artworks, the user can decide to skip to the next artwork if they do not like the one in front of them, or to go back to the previous artworks and modify the answers. It is not mandatory to answer all the questions for a given artwork.
The question about emotions is left open so as not to force the person to choose emotions from a fixed list of tags belonging to a particular model (e.g., Plutchik), leaving them free to express the different shades of emotions that can be felt.
Before getting to the heart of the experiment, with the artworks sections, the user is asked to leave some personal information (anonymously), to help us get an idea of the type of users who participated in the experiment.
The questions are:
4. Do you like going to museums or art exhibitions?
---------------------
Dataset structure: