Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Happy Face is a dataset for object detection tasks - it contains Elderly Happy Face annotations for 456 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
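For programmatic download, a minimal sketch using Roboflow's Python SDK might look like the following; the API key, workspace and project slugs, version number, and export format are placeholders, not values taken from this page.

```python
from roboflow import Roboflow

# All identifiers below are placeholders; substitute your own key and the
# workspace/project slugs shown on the dataset's Roboflow page.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("happy-face")

# Export in a format suited to object detection, e.g. COCO JSON.
dataset = project.version(1).download("coco")
print(dataset.location)  # local folder containing images and annotations
```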
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Facial Emotion Detection Dataset is a collection of images of individuals with two different emotions - happy and sad. The dataset was captured using a mobile phone camera and contains photos taken from different angles and backgrounds.
The dataset contains a total of 637 photos, plus an additional 127 images from previous work. Out of the total, 402 images are of happy faces and 366 are of sad faces. Each individual contributed at least 10 images of each expression.
The project faced time constraints and limited participant availability, which restricted the number of individuals who took part. Despite these limitations, the dataset can be used for deep learning projects and real-time emotion detection models. Future work can expand the dataset by capturing more images to improve model accuracy. The dataset can also be used to build a custom object detection model to evaluate other types of emotional expressions.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by shantanugupta2004
Released under MIT
https://www.futurebeeai.com/data-license-agreement
Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.
This dataset comprises over 2000 facial expression images, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of individuals across East Asian countries, such as:
To ensure high utility and robustness, all images are captured under varying conditions:
Each facial expression image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:
We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.
Empirical findings predominantly support a happiness superiority effect in visual search and emotion categorization paradigms and reveal that social cues, like sex and race, moderate this advantage. A more recent study showed that the facial attribute of attractiveness also influences the accuracy and speed of emotion perception. In the current study, we investigated whether the influence of attractiveness on emotion perception translates into a more general evaluation of moods when more than one emotional target is presented. In two experiments, we used the mood-of-the-crowd (MoC) task to investigate whether attractive crowds are perceived more positively compared to less attractive crowds. The task was to decide whether an array of faces included more angry or more happy faces. Furthermore, we recorded gaze movements to test the assumption that fixations on happy expressions occur more often in attractive crowds. Thirty-four participants took part in experiment 1 as well as in experiment 2. In both experiments, crowds presenting attractive faces were judged as happy more frequently, whereas the reverse pattern was found for unattractive crowds of faces. Moreover, participants were faster and more accurate when evaluating attractive crowds containing more happy faces as well as when judging unattractive crowds composed of more angry expressions. Additionally, in experiment 1, there were more fixations on happy than angry expressions in attractive crowds. Overall, the present findings support the assumption that attractiveness moderates emotion perception.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Elderly people show a decline in the ability to decode facial expressions, but also experience age-related facial structure changes that may render their facial expressions harder to decode. However, to date there is no empirical evidence to support the latter mechanism. The objective of this study was to assess the effects of age on facial morphology at rest and during smiling, in younger (n = 100; age range, 18–32 years) and older (n = 30; age range, 55–65 years) Japanese women. Three-dimensional images of each subject's face at rest and during smiling were obtained and wire mesh fitting was performed on each image to quantify the facial surface morphology. The mean node coordinates in each facial posture were compared between the groups using t-tests. Further, the node coordinates of the fitted mesh were entered into a principal component analysis (PCA) and a multifactor analysis of variance (MANOVA) to examine the direct interactions of aging and facial postures on the 3D facial morphology. The results indicated that there were significant age-related 3D facial changes in facial expression generation and the transition from resting to smiling produced a smaller amount of soft tissue movement in the older group than in the younger group. Further, 185 surface configuration variables were extracted and the variables were used to create four discriminant functions: the age-group discrimination for each facial expression, and the facial expression discrimination for each age group. For facial expression discrimination, the older group showed 80% accuracy with 2 of 66 significant variables, whereas the younger group showed 99% accuracy with 15 of 144 significant variables. These results indicate that in both facial expressions, the facial morphology was distinctly different in the younger and older subjects, and that in the older group, the facial morphology during smiling could not be as easily discriminated from the morphology at rest as in the younger group. These results may help to explain one aspect of the communication dysfunction observed in older people.
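As a rough, hypothetical illustration of the pipeline this abstract describes (not the authors' code), fitting a common mesh topology to every scan yields one coordinate array per face, which can be flattened and passed to a PCA; the array shapes below are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one fitted wire mesh per 3D scan, sharing the same
# node topology, so each face is an (n_nodes, 3) array of x/y/z coordinates.
n_faces, n_nodes = 130, 1000                   # invented sizes
meshes = np.random.rand(n_faces, n_nodes, 3)   # stand-in for real scans

# Flatten each mesh into a single feature vector per face.
X = meshes.reshape(n_faces, n_nodes * 3)

# PCA summarizes surface-configuration variation across faces; the
# component scores could then enter a MANOVA or discriminant analysis.
pca = PCA(n_components=0.95)   # keep components explaining 95% of variance
scores = pca.fit_transform(X)
print(scores.shape, pca.explained_variance_ratio_[:5])
```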
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These files present research data and materials (stimuli and forms) for the study entitled "Sad, Angry and Fearful Facial Expressions Interfere with Perception of Causal Outcomes".
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: The Behavioral Emotion Recognition Dataset (BERD) was developed as part of the research study titled "Behavioral Research Methodologies for Bodily Emotion Recognition." This dataset comprises motion capture data collected from participants performing emotional body movements under various experimental conditions. It is designed to facilitate the development and evaluation of automatic emotion recognition (AER) systems using bodily movement data. The dataset offers insights into the effects of participant acting expertise, motion capture device types, and emotional stimuli on bodily emotion recognition accuracy.
Key Features:
1. (Expertise) Participant Data:
2. (Devices) Motion Capture Devices:
3. (Stimulus) Emotional Stimulus:
4. Emotions Recorded:
5. Data Format:
Potential Applications:
Citation: If you use this dataset in your research, please cite it as follows:
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Past geological and climatological processes shape extant biodiversity. In the Hawaiian Islands these processes have provided the physical environment for a number of extensive adaptive radiations. Yet single species that occur throughout the islands provide some of the best cases for understanding how species respond to the shifting dynamics of the islands in the context of colonization history and associated demographic and adaptive shifts. Here we focus on the Hawaiian happy-face spider, a single color-polymorphic species, and use mitochondrial and nuclear allozyme markers to examine 1) how the mosaic formation of the landscape has dictated population structure, and 2) how cycles of expansion and contraction of the habitat matrix have been associated with demographic shifts, including a 'quantum shift' in the genetic basis of the color polymorphism. The results show a marked structure among populations consistent with the age progression of the islands. The finding of low genetic diversity on the youngest site coupled with the very high diversity of haplotypes on the slightly older substrates that are highly dissected by recent volcanism suggest that the mosaic structure of the landscape may play an important role in allowing differentiation of the adaptive color-polymorphism.
This dataset was created by Duaa Zehra Alvi Malik Fakhar Abbas Alvi
https://choosealicense.com/licenses/unknown/
ReactionGIF
From https://github.com/bshmueli/ReactionGIF
Excerpt from the original repo README:
ReactionGIF is a unique, first-of-its-kind dataset of 30K sarcastic tweets and their GIF reactions. To find out more about ReactionGIF, check out our ACL 2021 paper:
Shmueli, Ray and Ku, Happy Dance, Slow Clap: Using Reaction GIFs to Predict Induced Affect on Twitter
Citation
If you use our dataset, kindly cite the paper using the following BibTeX… See the full description on the dataset page: https://huggingface.co/datasets/julien-c/reactiongif.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Blended facial expressions with a smiling mouth but non-happy eyes (neutral, sad, etc.) are often (incorrectly) judged as “happy”. We investigated the time course of this phenomenon, both forward and backward. To do this, we varied the order of presentation of a prime stimulus (upper half of a face) and a probe stimulus (lower half of a face), and their display durations. The forward and the backward influence of the smile was assessed when the mouth was seen before or after the eyes. Participants categorised the eye expression when the mouth and the eyes were congruent or incongruent. Results showed that, as a forward prime, a smiling mouth biased the recognition of incongruent (non-happy) eyes as if they were happy. The effect started as early as 100 ms and dissipated by 1000 ms. As a backward prime, the smile also biased recognition of non-happy eye expressions as happy for at least the first 300 ms. These results suggest, respectively, that the presence of a smiling mouth impairs the accurate encoding of, and memory for, non-happy eyes. Angry eyes are the least susceptible to this effect, probably due to their distinctiveness. An alternative explanation in terms of a response (rather than sensitivity) bias was partially ruled out.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Smile Detection is a dataset for classification tasks - it contains Smiling Face annotations for 1,203 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
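Once downloaded, a classification export typically arrives as one folder per class. A minimal PyTorch loading sketch, assuming a hypothetical smile-detection/train layout with one subfolder per class, could look like this:

```python
import torch
from torchvision import datasets, transforms

# Assumed layout (hypothetical): smile-detection/train/<class-name>/*.jpg
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("smile-detection/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, train_set.classes)  # e.g. torch.Size([32, 3, 224, 224])
```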
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset was created by Ghousethanedar
Human faces express emotions, informing others about their affective states. In order to measure expressions of emotion, facial Electromyography (EMG) has widely been used, requiring electrodes and technical equipment. More recently, emotion recognition software has been developed that detects emotions from video recordings of human faces. However, its validity and comparability to EMG measures is unclear. The aim of the current study was to compare the Affectiva Affdex emotion recognition software by iMotions with EMG measurements of the zygomaticus major and corrugator supercilii muscles, concerning its ability to identify happy, angry and neutral faces. Twenty participants imitated these facial expressions while videos and EMG were recorded. Happy and angry expressions were detected by both the software and by EMG above chance, while neutral expressions were more often falsely identified as negative by EMG compared to the software. Overall, EMG and software values correlated highly. In conclusion, Affectiva Affdex software can identify facial expressions and its results are comparable to EMG findings.
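The reported EMG-software agreement boils down to a correlation between paired measurements. A minimal sketch, with invented stand-in values rather than the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Invented per-trial values: zygomaticus major EMG amplitude and the
# software's joy score for the same trials (not the study's data).
emg_zygomaticus = np.array([0.8, 1.2, 0.3, 1.5, 0.2, 1.1])
software_joy = np.array([0.7, 0.9, 0.2, 1.0, 0.1, 0.8])

r, p = pearsonr(emg_zygomaticus, software_joy)
print(f"r = {r:.2f}, p = {p:.3f}")
```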
https://heidata.uni-heidelberg.de/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.11588/DATA/4X2FXC
Research focusing on the association between age and emotion perception has revealed inconsistent findings, with some support for an age-related positivity effect, as predicted by socioemotional selectivity theory. We used the mood-of-the-crowd (MoC) task to investigate whether older adults judge a crowd consisting of happy and angry expressions to be dominated by happy faces more frequently. The task was to decide whether an array of faces included more angry or more happy faces. Accuracy, reaction times, and gaze movements were analyzed to test the hypothesis, derived from socioemotional selectivity theory, that age would be associated with a bias toward judging crowds as happy, and with longer and more numerous fixations on happy expressions. Seventy-six participants took part in the study representing three different age groups (young, middle-aged, old). Contrary to the hypothesis, older participants more often judged the emotional crowd to be angry compared to younger participants. Furthermore, whereas fixations were longer for happy faces than for angry faces in younger adults, this difference was not present in older adults. A decline in inhibitory processing in older adults as well as higher cognitive demands of the task are discussed as possible explanations for these findings.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Individuals with anxiety disorders (ADs) often display hypervigilance to threat information, although this response may be less pronounced following psychotherapy. This study aims to investigate the unconscious recognition of facial expressions in patients with panic disorder (PD) post-treatment, shedding light on alterations in their emotional processing biases. Methods: Patients with PD (n = 34) after (exposure-based) cognitive behavior therapy and healthy controls (n = 43) performed a subliminal affective recognition task. Emotional facial expressions (fearful, happy, or mirrored) were displayed for 33 ms and backwardly masked by a neutral face. Participants completed a forced-choice task to discriminate the briefly presented facial stimulus, including an uncovered condition where only the neutral mask was shown. We conducted a secondary analysis to compare groups based on their four possible response types under the four stimulus conditions and examined the correlation of the false alarm rate for fear responses to non-fearful (happy, mirrored, and uncovered) stimuli with clinical anxiety symptoms. Results: The patient group showed a unique selection pattern in response to happy expressions, with significantly more correct “happy” responses compared to controls. Additionally, lower severity of anxiety symptoms after psychotherapy was associated with a decreased false fear response rate for non-threat presentations. Conclusion: These data suggest that patients with PD exhibited a “happy-face recognition advantage” after psychotherapy. Fewer symptoms after treatment were related to a reduced fear bias. Thus, a differential facial emotion detection task could be a suitable tool to monitor response patterns and biases in individuals with ADs in the context of psychotherapy.
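The false fear response rate described in the Methods is simply the share of "fear" responses among trials with non-fearful stimuli. A minimal sketch with invented response counts (the response labels are assumptions based on the abstract):

```python
# Invented counts for one participant: responses given on trials where the
# stimulus was non-fearful (happy, mirrored, or uncovered).
responses_on_nonfear_trials = {"fear": 9, "happy": 31, "mirrored": 12, "none": 28}

n_trials = sum(responses_on_nonfear_trials.values())
false_fear_rate = responses_on_nonfear_trials["fear"] / n_trials
print(f"false fear response rate = {false_fear_rate:.2f}")  # 9/80 = 0.11
```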
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages and sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and how this might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older male and female Taiwanese adults using the Facial Action Coding System (FACS). Combined with a prior database, the present database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in photographs. Taiwanese raters perceived happy faces as one category, sad, angry, and disgusted expressions as one category, and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar or accentuated relative to that of young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Affective states can propagate in a group of people and influence their ability to judge others’ affective states. In the present paper, we propose a simple mathematical model to describe this process in a three-dimensional affective space. We obtained data from 67 participants randomly assigned to two experimental groups. Participants watched either an upsetting or uplifting video previously calibrated for this goal. Immediately afterwards, participants reported their baseline subjective affect in three dimensions: (1) positivity, (2) negativity, and (3) arousal. In a second phase, participants rated the affect they subjectively judged from ten target angry faces and ten target happy faces on the same three-dimensional scales. These judgments were used as an index of each participant’s affective state after observing the faces. Participants’ affective responses were subsequently mapped onto a simple three-dimensional model of emotional contagion, in which the shortest distance between the baseline self-reported affect and the target judgment was calculated. The results display a double dissociation: negatively induced participants show more emotional contagion to angry than happy faces, while positively induced participants show more emotional contagion to happy than angry faces. In sum, emotional contagion exerted by the videos selectively affected judgments of the affective state of others’ faces. We discuss the directionality of emotional contagion to faces, considering whether negative emotions are more easily propagated than positive ones. Additionally, we comment on the lack of significant correlations between our model and standardized tests of empathy and emotional contagion.
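The contagion index the abstract describes reduces to the Euclidean distance between two points in the (positivity, negativity, arousal) space; a minimal numpy sketch with invented ratings:

```python
import numpy as np

# Invented ratings: each affective state is a point in a 3D space of
# (positivity, negativity, arousal).
baseline = np.array([6.0, 2.0, 4.0])   # self-reported affect after the video
judgment = np.array([3.0, 5.0, 6.0])   # affect judged from a target face

# Contagion index: straight-line (Euclidean) distance between the points.
distance = np.linalg.norm(judgment - baseline)
print(f"contagion distance = {distance:.2f}")  # sqrt(9 + 9 + 4) ≈ 4.69
```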