100+ datasets found
  1. facial-expression-recognition-dataset

    • huggingface.co
    Updated Mar 31, 2025
    Cite
    Unidata (2025). facial-expression-recognition-dataset [Dataset]. https://huggingface.co/datasets/UniDataPro/facial-expression-recognition-dataset
    Explore at:
    362 scholarly articles cite this dataset (View in Google Scholar)
    Dataset updated
    Mar 31, 2025
    Authors
    Unidata
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Emotion recognition Dataset

    Dataset comprises 199,955 images featuring 28,565 individuals displaying a variety of facial expressions. It is designed for research in emotion recognition and facial expression analysis across diverse races, genders, and ages. By utilizing this dataset, researchers and developers can enhance their understanding of facial recognition technology and improve the accuracy of emotion classification systems.


    This… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/facial-expression-recognition-dataset.
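    As a quick-start sketch (not an official loader), the dataset can presumably be pulled with the Hugging Face `datasets` library using the repo ID cited above; the split name below is an assumption, and the CC BY-NC-ND license may require authenticating and accepting terms on the hub first.

    ```python
    # Minimal sketch: load the dataset from the Hugging Face Hub.
    # Assumptions: a "train" split exists and examples expose image/label
    # fields; gated access may first require `huggingface-cli login`.
    from datasets import load_dataset

    ds = load_dataset("UniDataPro/facial-expression-recognition-dataset")
    print(ds)                    # lists the available splits and features
    sample = ds["train"][0]      # "train" is an assumed split name
    print(sample.keys())         # inspect the actual field names
    ```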

  2. South Asian Facial Expression Image Dataset

    • futurebeeai.com
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). South Asian Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-south-asian
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Area covered
    South Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the South Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from South Asian countries including India, Pakistan, Bangladesh, Nepal, Sri Lanka, Bhutan, Maldives, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.
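    For instance, here is a hedged sketch of how such metadata might be used for filtering, assuming it ships as a CSV file (the file name and exact column headers below are hypothetical):

    ```python
    # Sketch: select a demographic slice from the participant metadata
    # before assembling a training set. "metadata.csv" and the column
    # names are assumptions based on the field list above.
    import pandas as pd

    meta = pd.read_csv("metadata.csv")

    subset = meta[
        (meta["Gender"] == "Female")
        & (meta["Age"].between(18, 40))
        & (meta["Facial Expression Label"] == "Happy")
    ]
    print(f"{len(subset)} images match the filter")
    print(subset["File Name"].head())
    ```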

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements.

  3. RAF-DB DATASET

    • kaggle.com
    Updated Sep 20, 2023
    Cite
    Dev-ShuvoAlok (2023). RAF-DB DATASET [Dataset]. https://www.kaggle.com/datasets/shuvoalok/raf-db-dataset
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Dev-ShuvoAlok
    Description

    The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression research. This version contains around 15,000 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database show great variability in subjects' age, gender and ethnicity, head poses, lighting conditions, occlusions (e.g., glasses, facial hair or self-occlusion), post-processing operations (e.g., various filters and special effects), etc.


    Terms & Conditions

    The RAF database is available for non-commercial research purposes only.

    All images of the RAF database are obtained from the Internet which are not property of PRIS, Beijing University of Posts and Telecommunications. The PRIS is not responsible for the content nor the meaning of these images.

    You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purposes, any portion of the images and any portion of derived data.

    You agree not to further copy, publish, or distribute any portion of the RAF database, except that copies may be made for internal use at a single site within the same organization.

    The PRIS reserves the right to terminate your access to the RAF database at any time.

  4. IIMI Emotional Face Database

    • osf.io
    Updated May 9, 2023
    Cite
    SHRUTI TEWARI; Samyak Mehta; Narayanan Srinivasan (2023). IIMI Emotional Face Database [Dataset]. https://osf.io/f7zbv
    Explore at:
    Dataset updated
    May 9, 2023
    Dataset provided by
    Center For Open Science
    Authors
    SHRUTI TEWARI; Samyak Mehta; Narayanan Srinivasan
    Description

    The Indian face database is designed to provide a standardized emotional face database of Indian models from the northern regions of India. It includes 1302 validated facial expressions of 186 Indian adults expressing anger, disgust, fear, happiness, sadness, surprise, and neutral expressions. A total of 180 participants rated the depicted emotion, clarity, genuineness, intensity, valence, and attractiveness of the faces in a randomized controlled lab experiment. Please email shrutitewari@iimidr.ac.in for more details and permission to use this face database.

  5. Data from: Facial Expression Image Dataset for Computer Vision Algorithms

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer; Odunmolorun Osonuga (2025). Facial Expression Image Dataset for Computer Vision Algorithms [Dataset]. http://doi.org/10.17866/rd.salford.21220835.v2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer; Odunmolorun Osonuga
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset for this project consists of photos of individual human emotion expressions, taken with both a digital camera and a mobile phone camera from different angles, postures, backgrounds, light exposures, and distances. The task might look and sound easy, but several challenges were encountered along the way:

    1) People constraint. One of the major challenges was getting people to participate in the image-capturing process: school was on vacation, and other individuals around the environment were not willing to have their images captured for personal and security reasons, even after the purpose of the project (academic research) was explained. Due to this challenge, we resorted to capturing images of the researcher and just a few other willing individuals.

    2) Time constraint. As with all deep learning projects, the more data available, the higher the accuracy and the lower the error of the result. At the initial stage it was agreed to collect 10 emotional-expression photos each from at least 50 persons, with the option of increasing the number of photos for more accurate results, but given the time constraints of this project it was later agreed to capture only the researcher and the few other people who were willing and available. For the same reason, photos were taken for just two types of human emotion expression: "happy" and "sad" faces. To expand this work further (as future work), photos of other facial expressions such as anger, contempt, disgust, fright, and surprise can be included if time permits.

    3) The approved facial emotion captures. It was agreed to capture as many angles and postures as possible of the two facial emotions, with at least 10 expression images per individual; due to time and people constraints, a few persons were captured with as many postures as possible, giving:

    Happy faces: 65 images
    Sad faces: 62 images

    There are many other types of facial emotion, and again, to expand the project in the future, the other types can be included if time permits and people are readily available.

    4) Further expansion. This project can be improved in many ways; due to the time limitation, these improvements can be implemented later as future work. In simple words, this project detects/predicts real-time human emotion: it involves creating a model that outputs the percentage confidence that a facial image is happy or sad. The higher the percentage confidence, the more accurate the prediction for the face fed into the model.

    5) Other questions. Can the model be reproduced? The answer should be YES, if and only if the model is fed with proper data (images), such as images of other types of emotional expression.

  6. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    Updated Jun 1, 2023
    Cite
    Kathrin Kaulard; Douglas W. Cunningham; Heinrich H. Bülthoff; Christian Wallraven (2023). The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions [Dataset]. http://doi.org/10.1371/journal.pone.0032321
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Kathrin Kaulard; Douglas W. Cunningham; Heinrich H. Bülthoff; Christian Wallraven
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  7. Table_3_East Asian Young and Older Adult Perceptions of Emotional Faces From an Age- and Sex-Fair East Asian Facial Expression Database

    • frontiersin.figshare.com
    Updated Jun 4, 2023
    Cite
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh (2023). Table_3_East Asian Young and Older Adult Perceptions of Emotional Faces From an Age- and Sex-Fair East Asian Facial Expression Database.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2018.02358.s005
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    Frontiers
    Authors
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    East Asia
    Description

    There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages and sex. However, scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and how this might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older, male and female, Taiwanese using the Facial Action Coding System (FACS). Combined with a prior database, this present database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised and neutral expressions amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in photographs. Taiwanese raters perceived happy faces as one category, sad, angry, and disgusted expressions as one category, and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar to or accentuated relative to that of young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.

  8. 135-class Emotional Facial Expression Dataset

    • ieee-dataport.org
    Updated Feb 27, 2023
    Cite
    YU DING (2023). 135-class Emotional Facial Expression Dataset [Dataset]. https://ieee-dataport.org/documents/135-class-emotional-facial-expression-dataset
    Explore at:
    Dataset updated
    Feb 27, 2023
    Authors
    YU DING
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description


  9. African Facial Expression Image Dataset

    • futurebeeai.com
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). African Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-african
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the African Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from African countries including Kenya, Malawi, Nigeria, Ethiopia, Benin, Somalia, Uganda, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements.

  10. IFEED: Interactive Facial Expression and Emotion Detection Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 26, 2023
    Cite
    Dias, Tiago (2023). IFEED: Interactive Facial Expression and Emotion Detection Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7963451
    Explore at:
    Dataset updated
    May 26, 2023
    Dataset provided by
    Dias, Tiago
    Maia, Eva
    Oliveira, Nuno
    Praça, Isabel
    Oliveira, Jorge
    Vitorino, João
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).

    The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.

    IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.

  11. Facial Expression Dataset (Sri Lankan)

    • ieee-dataport.org
    Updated Sep 26, 2024
    Cite
    Amod Pathirana (2024). Facial Expression Dataset (Sri Lankan) [Dataset]. https://ieee-dataport.org/documents/facial-expression-dataset-sri-lankan
    Explore at:
    Dataset updated
    Sep 26, 2024
    Authors
    Amod Pathirana
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sri Lanka
    Description

    often based on foreign samples

  12. Facial Expression Recognition Dataset

    • unidata.pro
    Cite
    Unidata L.L.C-FZ, Facial Expression Recognition Dataset [Dataset]. https://unidata.pro/datasets/facial-expression-recognition-dataset/
    Explore at:
    Available download formats: jpg/jpeg, png
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    The Facial Expression Recognition dataset helps AI interpret human emotions for improved sentiment analysis and recognition.

  13. JAFFE (Deprecated, use v.2 instead)

    • zenodo.org
    • explore.openaire.eu
    Updated Mar 20, 2025
    Cite
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba (2025). JAFFE (Deprecated, use v.2 instead) [Dataset]. http://doi.org/10.5281/zenodo.3451524
    Explore at:
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba
    Description

    V.1 is deprecated, use V.2 instead.

    The images are the same: only the README file has been updated.

    https://doi.org/10.5281/zenodo.14974867

    The JAFFE images may be used only for non-commercial scientific research.

    The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.

    Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
    Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
    arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf

    Michael J. Lyons
    "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
    arXiv: 2107.13998 (2021) https://arxiv.org/abs/2107.13998

    The following is not allowed:

    • Redistribution of the JAFFE dataset (incl. via Github, Kaggle, Colaboratory, GitCafe, CSDN etc.)
    • Posting JAFFE images on the web and social media
    • Public exhibition of JAFFE images in museums/galleries etc.
    • Broadcast in the mass media (tv shows, films, etc.)

    A few sample images (not more than 10) may be displayed in scientific publications.

  14. Emotion Recognition Across Adulthood Using the Dynamic Diverse FACES Database

    • osf.io
    Updated Apr 24, 2024
    Cite
    Jared Cortez; Kendra Seaman (2024). Emotion Recognition Across Adulthood Using the Dynamic Diverse FACES database. [Dataset]. http://doi.org/10.17605/OSF.IO/38DVQ
    Explore at:
    Available download formats: url
    Dataset updated
    Apr 24, 2024
    Dataset provided by
    Center For Open Science
    Authors
    Jared Cortez; Kendra Seaman
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Facial emotion recognition is an integral part of everyday life for multiple reasons: we use it to identify how others are feeling and to see if the ones we care about are in distress or need help. A recent meta-analysis shows stark cross-sectional adult age differences in identifying negative emotions (i.e., anger, fear, and sadness; Hayes et al., 2020). However, most of this research uses static, young faces as stimuli. This research design poses a problem because using young faces may give an advantage to younger adults due to own-age bias, whereby people are better at identifying the emotional expressions of members of their own age group. Consistent with this, younger individuals often achieve higher emotion recognition accuracy (correctly identifying the emotion of a face) when looking at the faces of younger individuals compared to older individuals.

    To address this problem, Ebner and colleagues (2010) created and validated a set of facial stimuli known as the FACES Lifespan database. The FACES Lifespan database is novel because it is comprised of evenly distributed samples of genders, age groups (Younger, Middle-aged, and Older adults), and emotional expressions (neutral, sad, disgust, anger, fear, and happy faces) (Ebner et al., 2010).

    Holland and colleagues (2019) furthered the work of Ebner and colleagues (2010). They used computer software to morph neutral facial expressions into emotional expressions (e.g., angry, sad, happy), creating short videos that mimic the emergence of these emotional expressions. Results of a validation study showed that individuals correctly identified emotional facial expressions and rated the Dynamic FACES stimuli as natural. Older adults were as accurate as younger adults for all canonical emotions except anger. This contrasts previous studies, showing that dynamic facial expression videos may mitigate older adults' emotion recognition deficits. However, one limitation of these databases is that all the models are White (Holland et al., 2019). Individuals may have an easier time recognizing faces from their own race than other races, creating a need for diverse representation in these stimuli (Meissner et al. 2005; Blais et al. 2008).

    Racial/ethnic diversity is becoming increasingly common in facial expression stimuli (Chen et al., 2021; Ma et al., 2020; Conley et al., 2018; Ma et al., 2015; Strohminger et al., 2015). However, some notable limitations exist. Specifically, Chen, Norman, and Nam (2021) found that out of 526 unique multiracial facial stimuli databases, 74% were white-black individuals, and 63% were male. This shows that amongst the databases that include racially diverse individuals, Latiné individuals and women are underrepresented. These two groups comprise significant percentages of the USA: 18.9% Latiné or Hispanic and 51.1% female (Census, 2020). In addition to these gender and racial/ethnic limitations, there has not been an effort to represent emotional expressions in diverse populations across the adult lifespan. Previous work on face perception has shown that viewing an individual outside of one's race can decrease emotion recognition accuracy due to an individual encoding more qualitative information about their own race’s face (Meissner et al., 2005). Individuals of the same race also have more motivation and experience with same-race faces (Hugenberg et al., 2010). Cultural backgrounds (such as race) affect how a person perceives a face. For example, people from Eastern cultures avoid looking into the eyes when viewing a face versus those from Western cultures, where it is more typical to engage in eye contact. Researchers argue that cultural differences such as these may cause a bias where participants are more accurate at identifying emotions when viewing a face from their own culture (Blais et al., 2008).

    This current study aims to replicate and expand on the research done by Holland and colleagues by creating videos showing dynamic emotional expressions in a racial/ethnic diverse database, the diverse FACES stimuli set. First, in a separate study, experimenters will replicate Ebner and colleagues (2010) by creating the Diverse FACES database and address these shortcomings by taking pictures of Black and Latiné models from three age groups (Younger, Middle-aged, and Older adults) displaying six canonical emotional expressions (refer to pre-registration for DiverseFACES for details). Following Holland and colleagues (2019), the angry, happy, and neutral images will be morphed with neutral images to create short video clips that mimic naturalistic emotional expressions to create a Dynamic Diverse FACES database. Validation of the stimuli will replicate prior approaches, with additional considerations of the race/ethnicity of the models. Online raters will be recruited and asked to perform a Face-Rating Task wherein they answer questions about each facial expression's age, race, and other characteristics. Raters will include equal numbers of White, Latiné, and Black individuals to accurately validate the racial identification of each stimulus’s faces.

  15. Data from: Acted Facial Expressions In The Wild

    • researchdata.edu.au
    Updated Apr 19, 2012
    Cite
    Dr Roland Goecke (2012). Acted Facial Expressions In The Wild [Dataset]. https://researchdata.edu.au/acted-facial-expressions-wild/2734
    Explore at:
    Dataset updated
    Apr 19, 2012
    Dataset provided by
    University of Canberra
    Authors
    Dr Roland Goecke
    Description

    Quality data recorded in varied realistic environments is vital for effective human face related research. Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new dynamic 2D facial expressions database based on movies capturing diverse scenarios. A new XML schema based approach has been developed for the database collection and distribution tools. Realistic face data plays a vital role in the research advancement of facial expression analysis systems.

    We have named our database Acted Facial Expressions in the Wild, in the spirit of the Labeled Faces in the Wild (LFW) database. It contains 957 videos in AVI format, labelled with the six basic expressions (Angry, Happy, Disgust, Fear, Sad, Surprise) plus the Neutral expression. We also wanted to capture how facial expressions evolve in subjects with age, so we chose sets of movies featuring the same actors. For example, the Harry Potter series forms a good platform to analyse how the facial expressions of subjects evolve with age. We used thirty-seven movies from a diverse range of genres so as to cover as much variation in expressions and natural environments as possible.

    Much progress has been made in the fields of face recognition and human activity recognition in the past years due to the availability of realistic databases as well as robust representation and classification techniques. Inspired by them, we present a labelled temporal facial expression database from movies. Human facial expression databases till now have been captured in controlled ‘lab’ environments.

  16. Data from: Development and validation of a facial expression database based on the dimensional and categorical model of emotions

    • tandf.figshare.com
    Updated May 30, 2023
    Cite
    Tomomi Fujimura; Hiroyuki Umemura (2023). Development and validation of a facial expression database based on the dimensional and categorical model of emotions [Dataset]. http://doi.org/10.6084/m9.figshare.5788863.v1
    Explore at:
    Available download formats: tiff
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Tomomi Fujimura; Hiroyuki Umemura
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The present study describes the development and validation of a facial expression database comprising five different horizontal face angles in dynamic and static presentations. The database includes twelve expression types portrayed by eight Japanese models. This database was inspired by the dimensional and categorical model of emotions: surprise, fear, sadness, anger with open mouth, anger with closed mouth, disgust with open mouth, disgust with closed mouth, excitement, happiness, relaxation, sleepiness, and neutral (static only). The expressions were validated using emotion classification and Affect Grid rating tasks [Russell, Weiss, & Mendelsohn, 1989. Affect Grid: A single-item scale of pleasure and arousal. Journal of Personality and Social Psychology, 57(3), 493–502]. The results indicate that most of the expressions were recognised as the intended emotions and could systematically represent affective valence and arousal. Furthermore, face angle and facial motion information influenced emotion classification and valence and arousal ratings. Our database will be available online at the following URL. https://www.dh.aist.go.jp/database/face2017/.

  17. Happy Face Dataset

    • kaggle.com
    Updated Aug 26, 2022
    Cite
    Ashish Motwani (2022). Happy Face Dataset [Dataset]. https://www.kaggle.com/datasets/ashishmotwani/happyface
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 26, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ashish Motwani
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Hello everyone, this is a dataset I am sharing. It contains labelled images of happy and non-happy facial expressions for practicing binary classification. I found this dataset while learning on Coursera, and I'd like to acknowledge them as the primary owner of the dataset.
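    Since the dataset is organized for binary classification, a minimal training-loader sketch might look like the following, assuming the Kaggle download unpacks into class-named subfolders (the directory names here are hypothetical):

    ```python
    # Sketch: binary happy / non-happy image loader with torchvision.
    # Assumes a layout like data/train/happy and data/train/not_happy;
    # ImageFolder assigns integer labels from the folder names.
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("data/train", transform=tfm)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    print(train_set.classes)   # e.g. ['happy', 'not_happy'] -> labels 0, 1
    ```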

  18. Expression in-the-Wild (ExpW) Dataset

    • kaggle.com
    Updated Jul 27, 2023
    Cite
    Shahzad Abbas (2023). Expression in-the-Wild (ExpW) Dataset [Dataset]. https://www.kaggle.com/datasets/shahzadabbas/expression-in-the-wild-expw-dataset/discussion
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 27, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Shahzad Abbas
    Description

    Data Description

    The Expression in-the-Wild (ExpW) Dataset is a comprehensive and diverse collection of facial images carefully curated to capture spontaneous and unscripted facial expressions exhibited by individuals in real-world scenarios. This extensively annotated dataset serves as a valuable resource for advancing research in the fields of computer vision, facial expression analysis, affective computing, and human behavior understanding.

    Key Features:

    1. Real-world Expressions: The ExpW dataset stands apart from traditional lab-controlled datasets as it focuses on capturing facial expressions in real-life environments. This authenticity ensures that the dataset reflects the natural diversity of emotions experienced by individuals in everyday situations, making it highly relevant for real-world applications.

    2. Large and Diverse: Comprising a vast number of images, the ExpW dataset encompasses an extensive range of subjects, ethnicities, ages, and genders. This diversity allows researchers and developers to build more robust and inclusive models for facial expression recognition and emotion analysis.

    3. Annotated Emotions: Each facial image in the dataset is meticulously annotated with corresponding emotion labels, including but not limited to happiness, sadness, anger, surprise, fear, disgust, and neutral expressions. The emotion annotations provide ground truth data for training and validating machine learning algorithms.

    4. Various Pose and Illumination: To account for the varying challenges posed by real-life scenarios, the ExpW dataset includes images captured under different lighting conditions and poses. This variability helps researchers create algorithms that are robust to changes in illumination and head orientation.

    5. Privacy and Ethics: ExpW has been compiled adhering to strict privacy and ethical guidelines, ensuring the subjects' consent and data protection. The dataset maintains a high level of anonymity by excluding any personal information or sensitive details.

    This dataset has been downloaded from the following Public Directory... https://drive.google.com/drive/folders/1SDcI273EPKzzZCPSfYQs4alqjL01Kybq

    The dataset contains 91,793 faces manually labeled with expressions. Each face image is annotated as one of the seven basic expression categories: “angry (0)”, “disgust (1)”, “fear (2)”, “happy (3)”, “sad (4)”, “surprise (5)”, or “neutral (6)”.
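    The integer coding above maps directly to a lookup table, as in this short sketch:

    ```python
    # The seven ExpW expression categories, exactly as coded in the
    # annotations described above.
    EXPW_LABELS = {
        0: "angry",
        1: "disgust",
        2: "fear",
        3: "happy",
        4: "sad",
        5: "surprise",
        6: "neutral",
    }

    def label_name(code: int) -> str:
        """Return the expression name for an ExpW annotation code."""
        return EXPW_LABELS[code]

    assert label_name(3) == "happy"
    ```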

  19. Data from: Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Alaghband, Marie; Yousefi, Niloofar; Garibay, Ivan (2023). Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language [Dataset]. http://doi.org/10.7910/DVN/358QMQ
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Alaghband, Marie; Yousefi, Niloofar; Garibay, Ivan
    Description

    Facial expressions are important parts of both gesture and sign language recognition systems. Despite the recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public tv-station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions of "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered the "None" class if the image's facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and Human Computer Interaction (HCI) systems.

  20. Facial Expression Image Data AFFECTNET YOLO Format

    • gts.ai
    Updated Mar 20, 2024
    Cite
    GTS (2024). Facial Expression Image Data AFFECTNET YOLO Format [Dataset]. https://gts.ai/dataset-download/facial-expression-image-data-affectnet-yolo-format/
    Explore at:
    Available download formats: json
    Dataset updated
    Mar 20, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset, AFFECTNET in YOLO format, is intended for use in facial expression detection as part of a YOLO project...
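    As a hedged sketch of what "YOLO format" usually implies: each image has a companion .txt file whose lines hold a class index and a normalized bounding box. Whether this download follows the standard layout exactly is an assumption, and the example annotation line below is made up:

    ```python
    # Sketch: parse one standard YOLO annotation line of the form
    # "<class> <x_center> <y_center> <width> <height>", with coordinates
    # normalized to [0, 1] relative to the image size.
    from dataclasses import dataclass

    @dataclass
    class YoloBox:
        cls: int
        x_center: float
        y_center: float
        width: float
        height: float

    def parse_yolo_line(line: str) -> YoloBox:
        cls, xc, yc, w, h = line.split()
        return YoloBox(int(cls), float(xc), float(yc), float(w), float(h))

    print(parse_yolo_line("3 0.512 0.430 0.220 0.310"))  # hypothetical line
    ```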
