Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Emotion recognition Dataset
The dataset comprises 199,955 images featuring 28,565 individuals displaying a variety of facial expressions. It is designed for research in emotion recognition and facial expression analysis across diverse races, genders, and ages. By utilizing this dataset, researchers and developers can enhance their understanding of facial recognition technology and improve the accuracy of emotion classification systems.
This… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/facial-expression-recognition-dataset.
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the South Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.
The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:
To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:
Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:
This metadata helps in building expression recognition models that are both accurate and inclusive.
This dataset is ideal for a variety of AI and computer vision applications, including:
To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:
The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression analysis. This version contains around 15,000 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database show great variability in subjects' age, gender, and ethnicity, head poses, lighting conditions, occlusions (e.g. glasses, facial hair, or self-occlusion), post-processing operations (e.g. various filters and special effects), etc.
The RAF database is available for non-commercial research purposes only.
All images in the RAF database were obtained from the Internet and are not the property of PRIS, Beijing University of Posts and Telecommunications. PRIS is not responsible for the content or the meaning of these images.
You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion of the images and any portion of derived data.
You agree not to further copy, publish, or distribute any portion of the RAF database, except that copies may be made for internal use at a single site within the same organization.
The PRIS reserves the right to terminate your access to the RAF database at any time.
The Indian face database is designed to provide a standardized emotional face database of Indian models from the northern regions of India. It includes 1302 validated facial expressions of 186 Indian adults expressing anger, disgust, fear, happiness, sadness, surprise, and neutral expressions. A total of 180 participants rated the depicted emotion, clarity, genuineness, intensity, valence, and attractiveness of the faces in a randomized controlled lab experiment. Please email shrutitewari@iimidr.ac.in for more details and permission to use this face database.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset for this project consists of photos of individual human emotion expressions, taken with both a digital camera and a mobile phone camera from different angles, postures, backgrounds, light exposures, and distances. This task might look and sound very easy, but some challenges were encountered along the way, which are reviewed below.

1) People constraint. One of the major challenges faced during this project was getting people to participate in the image-capturing process: school was on vacation, and other individuals found around the environment were not willing to let their images be captured for personal and security reasons, even after the purpose of the project (mainly academic research) was explained. Due to this challenge, we resorted to capturing images of the researcher and just a few other willing individuals.

2) Time constraint. As with all deep learning projects, the more data available, the higher the accuracy and the lower the error of the results. At the initial stage of the project it was agreed to collect 10 emotional-expression photos each from at least 50 persons, with the option of increasing the number of photos for more accurate results; due to the time constraints of this project, it was later agreed to capture only the researcher and a few other people who were willing and available. For the same reason, photos were taken for just two types of human emotion expression: "happy" and "sad" faces. To extend this work further (as future work and recommendations), photos of other facial expressions such as anger, contempt, disgust, fright, and surprise can be included if time permits.

3) The approved facial emotion captures. It was agreed to capture as many angles and postures as possible of just two facial emotions, with at least 10 emotional-expression images per individual. Due to the time and people constraints, only a few persons were captured, with as many postures as possible, as stated below:
- Happy faces: 65 images
- Sad faces: 62 images
There are many other types of facial emotions; again, to expand the project in the future, the other types can be included if time permits and people are readily available.

4) Expanding further. This project can be improved in many ways; due to the limited time given to this project, these improvements can be implemented later as future work. In simple words, this project is to detect/predict real-time human emotion, which involves creating a model that reports the percentage confidence that any facial image is happy or sad. The higher the percentage confidence, the more accurate the classification of the facial image fed into the model.

5) Other questions. Can the model be reproduced? The answer should be YES, if and only if the model is fed with the proper data (images), such as images of other types of emotional expression.
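To make the idea in point 4 concrete, here is a minimal sketch, assuming PyTorch and an arbitrary small CNN (the report does not specify an architecture or framework), of a binary classifier whose sigmoid output is read as the percentage confidence that a face image is "happy", with "sad" as the complement.

```python
# Hedged sketch: small binary CNN for happy/sad confidence.
# Architecture, input size, and framework are assumptions, not the project's exact model.
import torch
import torch.nn as nn

class HappySadNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 24 * 24, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit -> sigmoid -> P(happy)
        )

    def forward(self, x):  # x: (batch, 3, 96, 96) normalised face crops
        return torch.sigmoid(self.classifier(self.features(x)))

# Usage: percentage confidence for one (placeholder) face tensor.
model = HappySadNet().eval()
face = torch.rand(1, 3, 96, 96)  # stand-in for a real preprocessed face image
with torch.no_grad():
    p_happy = model(face).item()
print(f"happy: {p_happy:.1%}, sad: {1 - p_happy:.1%}")
```

After training on the 65 happy and 62 sad images described above (likely with augmentation to compensate for the small sample), the same forward pass would yield the percentage confidence the report refers to.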
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages and sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and how this might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older, male and female Taiwanese using the Facial Action Coding System (FACS). Combined with a prior database, this present database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in photographs. Taiwanese raters perceived happy faces as one category, sad, angry, and disgusted expressions as one category, and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar or accentuated relative to young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the African Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.
The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:
To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:
Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:
This metadata helps in building expression recognition models that are both accurate and inclusive.
This dataset is ideal for a variety of AI and computer vision applications, including:
To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).
The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.
IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.
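For instance, a multi-class training pipeline on IFEED-style data might start with the hedged sketch below; the folder layout (one sub-directory per emotion class), image size, and use of torchvision are assumptions for illustration, not the official distribution format.

```python
# Hedged sketch: load seven-class facial expression images for classifier training.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((96, 96)),
    transforms.ToTensor(),
])

# Assumed layout: ifeed/train/{Angry,Sad,Happy,Fearful,Disgusted,Surprised,Neutral}/*.jpg
train_set = datasets.ImageFolder("ifeed/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

print(train_set.classes)  # class names inferred from the sub-directory names
for images, labels in train_loader:
    # images: (batch, 3, 96, 96) tensors; labels: integer class indices 0-6
    break
```

The same loader can feed a standard cross-entropy classifier, or be relabelled into a binary or anomaly-detection setup as suggested above.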
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Facial Expression Recognition dataset helps AI interpret human emotions for improved sentiment analysis and recognition.
The JAFFE images may be used only for non-commercial scientific research.
The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.
Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba. "Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)." arXiv:2009.05938 (2020). https://arxiv.org/pdf/2009.05938.pdf
Michael J. Lyons. "'Excavating AI' Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset." arXiv:2107.13998 (2021). https://arxiv.org/abs/2107.13998
Redistribution of the dataset is not permitted. A few sample images (not more than 10) may, however, be displayed in scientific publications.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Facial emotion recognition is an integral part of everyday life for multiple reasons: we use it to identify how others are feeling and to see if the ones we care about are in distress or need help. A recent meta-analysis shows stark cross-sectional adult age differences in identifying negative emotions (i.e., anger, fear, and sadness; Hayes et al., 2020). However, most of this research uses static, young faces as stimuli. This research design poses a problem because using young faces may give an advantage to younger adults due to own-age bias, whereby people are better at identifying the emotional expressions of members of their own age group. The bias shows up as a trend in which younger individuals often achieve higher emotion recognition accuracy (correctly identifying the emotion of a face) when looking at the faces of younger individuals compared to older individuals.
To address this problem, Ebner and colleagues (2010) created and validated a set of facial stimuli known as the FACES Lifespan database. The FACES Lifespan database is novel because it is comprised of evenly distributed samples of genders, age groups (Younger, Middle-aged, and Older adults), and emotional expressions (neutral, sad, disgust, anger, fear, and happy faces) (Ebner et al., 2010).
Holland and colleagues (2019) furthered the work of Ebner and colleagues (2010). They used computer software to morph neutral facial expressions into emotional expressions (e.g., angry, sad, happy), creating short videos that mimic the emergence of these emotional expressions. Results of a validation study showed that individuals correctly identified emotional facial expressions and rated the Dynamic FACES stimuli as natural. Older adults were as accurate as younger adults for all canonical emotions except anger. This contrasts with previous studies and suggests that dynamic facial expression videos may mitigate older adults' emotion recognition deficits. However, one limitation of these databases is that all the models are White (Holland et al., 2019). Individuals may have an easier time recognizing faces from their own race than other races, creating a need for diverse representation in these stimuli (Meissner et al., 2005; Blais et al., 2008).
Racial/ethnic diversity is becoming increasingly common in facial expression stimuli (Chen et al., 2021; Ma et al., 2020; Conley et al., 2018; Ma et al., 2015; Strohminger et al., 2015). However, some notable limitations exist. Specifically, Chen, Norman, and Nam (2021) found that out of 526 unique multiracial facial stimuli databases, 74% were white-black individuals, and 63% were male. This shows that amongst the databases that include racially diverse individuals, Latiné individuals and women are underrepresented. These two groups comprise significant percentages of the USA: 18.9% Latiné or Hispanic and 51.1% female (Census, 2020). In addition to these gender and racial/ethnic limitations, there has not been an effort to represent emotional expressions in diverse populations across the adult lifespan. Previous work on face perception has shown that viewing an individual outside of one's race can decrease emotion recognition accuracy because people encode more qualitative information about faces of their own race (Meissner et al., 2005). Individuals also have more motivation and experience with same-race faces (Hugenberg et al., 2010). Cultural backgrounds (such as race) affect how a person perceives a face. For example, people from Eastern cultures tend to avoid looking into the eyes when viewing a face, whereas in Western cultures it is more typical to engage in eye contact. Researchers argue that cultural differences such as these may cause a bias in which participants are more accurate at identifying emotions when viewing a face from their own culture (Blais et al., 2008).
The current study aims to replicate and expand on the research done by Holland and colleagues by creating videos showing dynamic emotional expressions in a racially/ethnically diverse database, the Diverse FACES stimulus set. First, in a separate study, experimenters will replicate Ebner and colleagues (2010) by creating the Diverse FACES database and address these shortcomings by taking pictures of Black and Latiné models from three age groups (Younger, Middle-aged, and Older adults) displaying six canonical emotional expressions (refer to the pre-registration for DiverseFACES for details). Following Holland and colleagues (2019), the angry, happy, and neutral images will be morphed with neutral images to create short video clips that mimic naturalistic emotional expressions, forming a Dynamic Diverse FACES database. Validation of the stimuli will replicate prior approaches, with additional consideration of the race/ethnicity of the models. Online raters will be recruited and asked to perform a Face-Rating Task wherein they answer questions about each facial expression's age, race, and other characteristics. Raters will include equal numbers of White, Latiné, and Black individuals to accurately validate the racial identification of each stimulus face.
Quality data recorded in varied, realistic environments is vital for effective human face-related research. Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new dynamic 2D facial expression database based on movies capturing diverse scenarios. A new XML-schema-based approach has been developed for the database collection and distribution tools. Realistic face data plays a vital role in advancing research on facial expression analysis systems.
We have named our database Acted Facial Expressions in the Wild, in the spirit of the Labeled Faces in the Wild (LFW) database. It contains 957 videos in AVI format labelled with the six basic expressions (Angry, Happy, Disgust, Fear, Sad, Surprise) and the Neutral expression. We also wanted to capture how facial expressions evolve in subjects with age. Therefore, we chose sets of movies featuring the same actors; for example, the Harry Potter series forms a good platform to analyse how the facial expressions of subjects evolve with age. We used thirty-seven movies from a diverse range of genres so as to cover expressions and natural environments that are as varied as possible.
Much progress has been made in the fields of face recognition and human activity recognition in the past years due to the availability of realistic databases as well as robust representation and classification techniques. Inspired by them, we present a labelled temporal facial expression database from movies. Until now, human facial expression databases have been captured in controlled 'lab' environments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493–502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL: https://www.dh.aist.go.jp/database/face2017/.
https://creativecommons.org/publicdomain/zero/1.0/
Hello everyone, this is a dataset I am sharing. It contains labelled images of Happy and Non-Happy facial expressions for practising binary classification. I found this dataset while learning on Coursera, and I'd like to acknowledge them as the primary owner of the dataset.
The Expression in-the-Wild (ExpW) Dataset is a comprehensive and diverse collection of facial images carefully curated to capture spontaneous and unscripted facial expressions exhibited by individuals in real-world scenarios. This extensively annotated dataset serves as a valuable resource for advancing research in the fields of computer vision, facial expression analysis, affective computing, and human behavior understanding.
Real-world Expressions: The ExpW dataset stands apart from traditional lab-controlled datasets as it focuses on capturing facial expressions in real-life environments. This authenticity ensures that the dataset reflects the natural diversity of emotions experienced by individuals in everyday situations, making it highly relevant for real-world applications.
Large and Diverse: Comprising a vast number of images, the ExpW dataset encompasses an extensive range of subjects, ethnicities, ages, and genders. This diversity allows researchers and developers to build more robust and inclusive models for facial expression recognition and emotion analysis.
Annotated Emotions: Each facial image in the dataset is meticulously annotated with corresponding emotion labels, including but not limited to happiness, sadness, anger, surprise, fear, disgust, and neutral expressions. The emotion annotations provide ground truth data for training and validating machine learning algorithms.
Various Pose and Illumination: To account for the varying challenges posed by real-life scenarios, the ExpW dataset includes images captured under different lighting conditions and poses. This variability helps researchers create algorithms that are robust to changes in illumination and head orientation.
Privacy and Ethics: ExpW has been compiled adhering to strict privacy and ethical guidelines, ensuring the subjects' consent and data protection. The dataset maintains a high level of anonymity by excluding any personal information or sensitive details.
This dataset was downloaded from the following public directory: https://drive.google.com/drive/folders/1SDcI273EPKzzZCPSfYQs4alqjL01Kybq
The dataset contains 91,793 faces manually labeled with expressions. Each face image is annotated with one of the seven basic expression categories: "angry (0)", "disgust (1)", "fear (2)", "happy (3)", "sad (4)", "surprise (5)", or "neutral (6)".
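Since the seven categories and their integer codes are listed above, a small label map (plain Python, order taken directly from that list) is usually the first helper written when converting annotation rows into readable labels; the annotation file layout itself varies by distribution and is not shown here.

```python
# Category index -> name mapping, as stated in the description above.
EXPW_LABELS = {
    0: "angry",
    1: "disgust",
    2: "fear",
    3: "happy",
    4: "sad",
    5: "surprise",
    6: "neutral",
}

def label_name(index: int) -> str:
    """Return the human-readable expression name for an integer label."""
    return EXPW_LABELS[index]

print(label_name(3))  # -> "happy"
```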
Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also consider a "None" class if the image's facial expression cannot be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems.
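As a rough illustration of this dyadic labelling scheme, the hedged sketch below simply enumerates all unordered pairs of the seven basic emotions plus the "None" class; exactly how FePh distinguishes primary, secondary, and tertiary dyads is not specified here, so treat the enumeration as an assumption rather than the official label list.

```python
# Hedged sketch: enumerate single emotions, emotion dyads, and the "None" class.
from itertools import combinations

BASIC_EMOTIONS = ["sad", "surprise", "fear", "angry", "neutral", "disgust", "happy"]

single_labels = list(BASIC_EMOTIONS)                 # one dominant emotion per face
dyad_labels = list(combinations(BASIC_EMOTIONS, 2))  # blended/compound expressions
all_labels = single_labels + dyad_labels + ["None"]  # "None": not describable by the above

print(len(single_labels), len(dyad_labels), len(all_labels))  # 7, 21, 29
```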
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset, AFFECTNET YOLO Format, is intended to be used for facial expression detection in a YOLO project.
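A minimal sketch of reading such YOLO-format annotations follows; the `class x_center y_center width height` line layout (normalised to 0-1) is the standard YOLO convention, while the file path and the mapping of class IDs to emotions are assumptions that should be checked against the dataset's own class list.

```python
# Hedged sketch: parse one YOLO-format label file into a list of box dictionaries.
from pathlib import Path

def read_yolo_labels(label_path: str):
    boxes = []
    for line in Path(label_path).read_text().splitlines():
        if not line.strip():
            continue  # skip blank lines
        class_id, x_c, y_c, w, h = line.split()
        boxes.append({
            "class_id": int(class_id),   # emotion class index (dataset-specific)
            "x_center": float(x_c),      # all coordinates normalised to [0, 1]
            "y_center": float(y_c),
            "width": float(w),
            "height": float(h),
        })
    return boxes

# Usage (hypothetical path): print(read_yolo_labels("labels/train/face_0001.txt"))
```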