100+ datasets found
  1. Facial Expression Recognition Dataset

    • kaggle.com
    Updated Jul 7, 2025
    Cite
    Unidata (2025). Facial Expression Recognition Dataset [Dataset]. https://www.kaggle.com/datasets/unidpro/facial-expression-recognition-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 7, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Unidata
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Emotion recognition Dataset

    Dataset comprises 199,955 images featuring 28,565 individuals displaying a variety of facial expressions. It is designed for research in emotion recognition and facial expression analysis across diverse races, genders, and ages.

    By utilizing this dataset, researchers and developers can enhance their understanding of facial recognition technology and improve the accuracy of emotion classification systems.

    Examples of data

    [image preview omitted]

    This dataset includes images that capture different emotions, such as happiness, sadness, surprise, anger, disgust, and fear, allowing researchers to develop and evaluate recognition algorithms and detection methods.

    💵 Buy the Dataset: This is a limited preview of the data. To access the full dataset, please contact us at https://unidata.pro to discuss your requirements and pricing options.

    Metadata for the dataset

    [image preview omitted]

    Researchers can leverage this dataset to explore various learning methods and algorithms aimed at improving emotion detection and facial expression recognition.

    🌐 UniData provides high-quality datasets, content moderation, data collection and annotation for your AI/ML projects

  2. Data from: Facial Emotion Recognition Datasets for YOLOv8 Annotation

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer (2025). Facial Emotion Recognition Datasets for YOLOv8 Annotation [Dataset]. http://doi.org/10.17866/rd.salford.24192219.v2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Computer Vision Scientist-Collected Dataset: This facial emotion recognition dataset has been meticulously curated by computer vision scientists using mobile phone cameras to capture candid moments of individuals expressing a spectrum of emotions, including Happy, Sad, Fear, and Humor. The dataset comprises a rich collection of images with diverse angles and backgrounds, providing a realistic portrayal of human emotional expression.

    Marketing Expert-Collected Dataset: The ethnicity-focused dataset for facial recognition has been meticulously assembled by marketing experts, aiming to shed light on the vital aspect of ethnicity variations in computer vision. With a dedicated focus on ethnicity, this dataset provides a unique perspective for training and testing facial recognition models in an ethnically diverse context. It comprises a rich collection of images capturing individuals from various ethnic backgrounds, and it includes a wide range of facial features, expressions, and poses, thereby enriching the dataset's diversity.

    By offering insights into the critical area of ethnicity in computer vision, this dataset is a valuable addition to the toolkit of researchers and practitioners, facilitating the development of more inclusive and accurate facial recognition models. Researchers and experts in the fields of computer vision and marketing are encouraged to explore these datasets for their research, model development, and the advancement of understanding in these respective domains.

  3. East Asian Facial Expression Image Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). East Asian Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Area covered
    East Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the East Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from East Asian countries including China, Japan, Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.
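    As a sketch of how such metadata enables filtering, the snippet below selects participant records by expression label and age cap. The field names mirror the metadata list above; the sample records and the `filter_records` helper are hypothetical illustrations, not part of the dataset.

```python
# Hypothetical metadata records mirroring the fields listed above
# (Participant ID, File Name, Age, Gender, Country, Expression Label).
records = [
    {"participant_id": "P001", "file_name": "P001_happy.jpg", "age": 34,
     "gender": "male", "country": "Japan", "expression": "Happy"},
    {"participant_id": "P002", "file_name": "P002_sad.heic", "age": 61,
     "gender": "female", "country": "Thailand", "expression": "Sad"},
]

def filter_records(records, *, expression=None, max_age=None):
    """Return records matching an optional expression label and age cap."""
    out = []
    for r in records:
        if expression is not None and r["expression"] != expression:
            continue
        if max_age is not None and r["age"] > max_age:
            continue
        out.append(r)
    return out

happy_under_50 = filter_records(records, expression="Happy", max_age=50)
```

    The same pattern extends to any of the listed fields (gender, country, file format) when assembling balanced training splits.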

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements.

  4. Facial Expression Recognition Dataset

    • unidata.pro
    jpg/jpeg, png
    Cite
    Unidata L.L.C-FZ, Facial Expression Recognition Dataset [Dataset]. https://unidata.pro/datasets/facial-expression-recognition-dataset/
    Explore at:
    Available download formats: jpg/jpeg, png
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    The Facial Expression Recognition dataset helps AI interpret human emotions for improved sentiment analysis and recognition.

  5. IFEED: Interactive Facial Expression and Emotion Detection Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 26, 2023
    Cite
    Vitorino, João (2023). IFEED: Interactive Facial Expression and Emotion Detection Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7963451
    Explore at:
    Dataset updated
    May 26, 2023
    Dataset provided by
    Praça, Isabel
    Oliveira, Nuno
    Vitorino, João
    Oliveira, Jorge
    Maia, Eva
    Dias, Tiago
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).

    The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.
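    The frame-decomposition step described above involves deciding which video frames to keep. A minimal sketch, assuming a fixed per-second sampling rate (the fps and rate values are illustrative, not documented for MELD):

```python
def frame_indices(duration_s: float, fps: float, samples_per_s: float):
    """Indices of frames to keep when sampling a video at samples_per_s.

    Keeps every (fps / samples_per_s)-th frame, clipped to the video length.
    """
    step = fps / samples_per_s
    total = int(duration_s * fps)
    return [int(i * step)
            for i in range(int(duration_s * samples_per_s))
            if int(i * step) < total]

# A hypothetical 2-second clip at 24 fps, sampled 4 times per second:
idx = frame_indices(duration_s=2.0, fps=24.0, samples_per_s=4.0)
```

    Each kept frame would then go through face detection and manual annotation, as the authors describe.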

    IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.

  6. African Facial Expression Image Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). African Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-african
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the African Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from African countries including Kenya, Malawi, Nigeria, Ethiopia, Benin, Somalia, Uganda, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements.

  7. Facial Emotion Detection Dataset

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer (2025). Facial Emotion Detection Dataset [Dataset]. http://doi.org/10.17866/rd.salford.22495669.v2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Facial Emotion Detection Dataset is a collection of images of individuals with two different emotions - happy and sad. The dataset was captured using a mobile phone camera and contains photos taken from different angles and backgrounds.

    The dataset contains a total of 637 photos, plus an additional 127 from previous work. Of these, 402 images show happy faces and 366 show sad faces. Each individual contributed at least 10 images of each expression.

    The project faced time and participant-availability constraints, which limited the number of individuals who took part. Despite these limitations, the dataset can be used for deep learning projects and real-time emotion detection models. Future work can expand the dataset by capturing more images to improve model accuracy, and the dataset can also be used to build a custom object detection model for other types of emotional expression.

  8. 135-class Emotional Facial Expression Dataset

    • ieee-dataport.org
    Updated Feb 27, 2023
    Cite
    YU DING (2023). 135-class Emotional Facial Expression Dataset [Dataset]. https://ieee-dataport.org/documents/135-class-emotional-facial-expression-dataset
    Explore at:
    Dataset updated
    Feb 27, 2023
    Authors
    YU DING
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    etc.)

  9. Emotion Detection Dataset

    • universe.roboflow.com
    zip
    Updated Mar 26, 2025
    Cite
    Computer Vision Projects (2025). Emotion Detection Dataset [Dataset]. https://universe.roboflow.com/computer-vision-projects-zhogq/emotion-detection-y0svj
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 26, 2025
    Dataset authored and provided by
    Computer Vision Projects
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions Bounding Boxes
    Description

    Emotion Detection Model for Facial Expressions

    Project Description:

    In this project, we developed an Emotion Detection Model using a curated dataset of 715 facial images, aiming to accurately recognize and categorize expressions into five distinct emotion classes. The emotion classes include Happy, Sad, Fearful, Angry, and Neutral.

    Objectives:

    • Train a robust machine learning model capable of accurately detecting and classifying facial expressions in real-time.
    • Implement emotion detection to enhance user experience in applications such as human-computer interaction, virtual assistants, and emotion-aware systems.

    Methodology:

    1. Data Collection and Preprocessing:

      • Assembled a diverse dataset of 715 images featuring individuals expressing different emotions.
      • Employed Roboflow for efficient data preprocessing, handling image augmentation and normalization.
    2. Model Architecture:

      • Utilized a convolutional neural network (CNN) architecture to capture spatial hierarchies in facial features.
      • Implemented a multi-class classification approach to categorize images into the predefined emotion classes.
    3. Training and Validation:

      • Split the dataset into training and validation sets for model training and evaluation.
      • Fine-tuned the model parameters to optimize accuracy and generalization.
    4. Model Evaluation:

      • Evaluated the model's performance on an independent test set to assess its ability to generalize to unseen data.
      • Analyzed confusion matrices and classification reports to understand the model's strengths and areas for improvement.
    5. Deployment and Integration:

      • Deployed the trained emotion detection model for real-time inference.
      • Integrated the model into applications, allowing users to interact with systems based on detected emotions.

    Results: The developed Emotion Detection Model demonstrates high accuracy in recognizing and classifying facial expressions across the defined emotion classes. This project lays the foundation for integrating emotion-aware systems into various applications, fostering more intuitive and responsive interactions.
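    The confusion-matrix analysis mentioned in the evaluation step can be sketched in plain Python. The five class names come from the project description; the sample labels and helper functions are hypothetical illustrations, not the project's actual code.

```python
CLASSES = ["Happy", "Sad", "Fearful", "Angry", "Neutral"]

def confusion_matrix(y_true, y_pred, classes=CLASSES):
    """Count (true, predicted) label pairs into a classes x classes matrix."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def per_class_accuracy(m):
    """Diagonal over row sum per class (0.0 when a class has no samples)."""
    return [row[i] / sum(row) if sum(row) else 0.0 for i, row in enumerate(m)]

# Hypothetical labels, for illustration only.
y_true = ["Happy", "Happy", "Sad", "Angry", "Neutral"]
y_pred = ["Happy", "Sad", "Sad", "Angry", "Neutral"]
m = confusion_matrix(y_true, y_pred)
acc = per_class_accuracy(m)
```

    Off-diagonal cells (here, a Happy face predicted as Sad) point at the class pairs where such a model most needs improvement.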

  10. Facial Expression Recognition Dataset

    • universe.roboflow.com
    zip
    Updated Jul 3, 2025
    Cite
    TA (2025). Facial Expression Recognition Dataset [Dataset]. https://universe.roboflow.com/ta-3akad/facial-expression-recognition-7ixew/model/1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 3, 2025
    Dataset authored and provided by
    TA
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Emotions
    Description

    Facial Expression Recognition

    ## Overview
    
    Facial Expression Recognition is a dataset for classification tasks - it contains Emotions annotations for 7,939 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  11. Facial_expression_data

    • springernature.figshare.com
    • datasetcatalog.nlm.nih.gov
    zip
    Updated Aug 5, 2022
    Cite
    Wenbo Li; Gang Guo; Ruichen Tan; Yang Xing; Guofa Li; Shen Li; Guanzhong Zeng; Peizhi Wang; Bingbing Zhang; Xinyu Su; Dawei Pi; Dongpu Cao (2022). Facial_expression_data [Dataset]. http://doi.org/10.6084/m9.figshare.17304137.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 5, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Wenbo Li; Gang Guo; Ruichen Tan; Yang Xing; Guofa Li; Shen Li; Guanzhong Zeng; Peizhi Wang; Bingbing Zhang; Xinyu Su; Dawei Pi; Dongpu Cao
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This directory contains 40 sub-folders, one per participant, each holding all of that participant's facial expression data. The sub-folders are named after the participant ID and contain four sub-sub-folders: central RGB (CRGB), left RGB (LRGB), right RGB (RRGB), and central infrared (CIR) facial expression data. Each of these folders holds multiple MP4 files, with each MP4 file corresponding to one valid emotional driving session.
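    The folder layout described above can be enumerated programmatically. Only the four view codes come from the description; the participant-ID naming scheme (P01 through P40) and the root folder name are assumptions for illustration.

```python
from pathlib import PurePosixPath

# The four camera-view sub-folders named in the dataset description.
VIEWS = ["CRGB", "LRGB", "RRGB", "CIR"]

def expected_layout(root="Facial_expression_data", n_participants=40):
    """Enumerate the participant/view folder paths the description implies.

    Participant folder names (P01..P40) are a guessed convention; the
    actual dataset names folders after its own participant IDs.
    """
    return [PurePosixPath(root) / f"P{i:02d}" / view
            for i in range(1, n_participants + 1)
            for view in VIEWS]

paths = expected_layout()  # 40 participants x 4 views = 160 folders
```

    Against a downloaded copy, one would swap `PurePosixPath` for `Path` and check each folder for its MP4 files.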

  12. JAFFE (Deprecated, use v.2 instead)

    • zenodo.org
    • explore.openaire.eu
    Updated Mar 20, 2025
    Cite
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba (2025). JAFFE (Deprecated, use v.2 instead) [Dataset]. http://doi.org/10.5281/zenodo.3451524
    Explore at:
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba
    Description

    V.1 is deprecated, use V.2 instead.

    The images are the same: only the README file has been updated.

    https://doi.org/10.5281/zenodo.14974867

    The JAFFE images may be used only for non-commercial scientific research.

    The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.

    Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
    Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
    arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf

    Michael J. Lyons
    "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
    arXiv: 2107.13998 (2021) https://arxiv.org/abs/2107.13998

    The following is not allowed:

    • Redistribution of the JAFFE dataset (incl. via Github, Kaggle, Colaboratory, GitCafe, CSDN etc.)
    • Posting JAFFE images on the web and social media
    • Public exhibition of JAFFE images in museums/galleries etc.
    • Broadcast in the mass media (tv shows, films, etc.)

    A few sample images (not more than 10) may be displayed in scientific publications.

  13. Dataset

    • figshare.com
    Updated Jan 20, 2025
    Cite
    Muhammad Tahir Naseem (2025). Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.28236635.v1
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    figshare
    Authors
    Muhammad Tahir Naseem
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Facial Expression Recognition Dataset

    This dataset supports research on facial expression recognition using visible and infrared modalities. It includes data for various facial expressions from two publicly available datasets: VIRI (five expressions: angry, happy, neutral, sad, and surprised) and NVIE (three expressions: fear, disgust, and happy). The dataset has been processed and prepared for training and evaluation of machine learning models.

    The dataset is designed for use with deep learning frameworks like PyTorch and supports experiments in feature extraction, model evaluation, and early fusion approaches for visible and infrared modalities. For details on the methodology, preprocessing steps, and evaluation metrics, please refer to the linked GitHub repository: https://github.com/naseemmuhammadtahir/raw-data. This dataset facilitates reproducibility and exploration of advanced models for facial expression recognition tasks in diverse modalities.
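    As a minimal illustration of the early-fusion idea mentioned above (not the authors' implementation), features extracted from the visible and infrared modalities are simply concatenated into one vector before classification. The feature values and dimensions are made up.

```python
def early_fusion(visible_feats, infrared_feats):
    """Concatenate per-modality feature vectors into one fused input vector.

    Early fusion combines modalities *before* the classifier, as opposed
    to late fusion, which merges per-modality predictions afterwards.
    """
    if not visible_feats or not infrared_feats:
        raise ValueError("early fusion needs features from both modalities")
    return list(visible_feats) + list(infrared_feats)

# Hypothetical 3-D visible and 2-D infrared feature vectors.
fused = early_fusion([0.1, 0.5, 0.3], [0.7, 0.2])
```

    A downstream classifier then sees a single 5-dimensional input rather than two separate ones.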

  14. LCK+: A landmark-based facial emotion recognition dataset

    • ieee-dataport.org
    Updated Jun 4, 2025
    Cite
    Mahir Shadid (2025). LCK+: A landmark-based facial emotion recognition dataset [Dataset]. https://ieee-dataport.org/documents/lck-landmark-based-facial-emotion-recognition-dataset
    Explore at:
    Dataset updated
    Jun 4, 2025
    Authors
    Mahir Shadid
    Description

    disgust

  15. Cry, Laugh, or Angry? A Benchmark Dataset for Computer Vision-Based Approach to Infant Facial Emotion Recognition

    • data.mendeley.com
    Updated Mar 10, 2025
    Cite
    Md. Mehedi Hasan (2025). Cry, Laugh, or Angry? A Benchmark Dataset for Computer Vision-Based Approach to Infant Facial Emotion Recognition [Dataset]. http://doi.org/10.17632/hy969mrx9p.1
    Explore at:
    Dataset updated
    Mar 10, 2025
    Authors
    Md. Mehedi Hasan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is a meticulously curated dataset designed for infant facial emotion recognition, featuring four primary emotional expressions: Angry, Cry, Laugh, and Normal. The dataset aims to facilitate research in machine learning, deep learning, affective computing, and human-computer interaction by providing a large collection of labeled infant facial images.

    Primary Data (1600 Images):

    • Angry: 400
    • Cry: 400
    • Laugh: 400
    • Normal: 400

    Data Augmentation & Expanded Dataset (26,143 Images): To enhance the dataset's robustness and expand the dataset, 20 augmentation techniques (including HorizontalFlip, VerticalFlip, Rotate, ShiftScaleRotate, BrightnessContrast, GaussNoise, GaussianBlur, Sharpen, HueSaturationValue, CLAHE, GridDistortion, ElasticTransform, GammaCorrection, MotionBlur, ColorJitter, Emboss, Equalize, Posterize, FogEffect, and RainEffect) were applied randomly. This resulted in a significantly larger dataset with:

    • Angry: 5,781
    • Cry: 6,930
    • Laugh: 6,870
    • Normal: 6,562
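    The random application of the 20 augmentation techniques can be sketched as drawing a subset per source image. Only the technique names come from the description above; the per-image count and sampling policy are assumptions, and a real pipeline would apply the transforms (e.g. via an image-augmentation library) rather than just name them.

```python
import random

# The 20 augmentation techniques listed in the dataset description.
AUGMENTATIONS = [
    "HorizontalFlip", "VerticalFlip", "Rotate", "ShiftScaleRotate",
    "BrightnessContrast", "GaussNoise", "GaussianBlur", "Sharpen",
    "HueSaturationValue", "CLAHE", "GridDistortion", "ElasticTransform",
    "GammaCorrection", "MotionBlur", "ColorJitter", "Emboss",
    "Equalize", "Posterize", "FogEffect", "RainEffect",
]

def sample_augmentations(k, rng=random):
    """Pick k distinct augmentations to apply to one image (hypothetical policy)."""
    return rng.sample(AUGMENTATIONS, k)

# Seeded for reproducibility; k=3 is an illustrative choice.
chosen = sample_augmentations(3, random.Random(0))
```

    Applying a few such draws to each of the 1600 primary images is how a roughly 16x expansion (to 26,143 images) could arise.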

    Data Collection & Ethical Considerations: The dataset was collected under strict ethical guidelines to ensure compliance with privacy and data protection laws. Key ethical considerations include:

    1. Ethical Approval: The study was reviewed and approved by the Institutional Review Board (IRB) of Daffodil International University under Reference No: REC-FSIT-2024-11-10.
    2. Informed Parental Consent: Written consent was obtained from parents before capturing and utilizing infant facial images for research purposes.
    3. Privacy Protection: No personally identifiable information (PII) is included in the dataset, and images are strictly used for research in AI-driven emotion recognition.

    Data Collection Locations & Geographical Diversity: To ensure diversity in infant facial expressions, data collection was conducted across multiple locations in Bangladesh, covering healthcare centers and educational institutions:

    1. 250-bed District Sadar Hospital, Sherpur (Latitude: 25.019405 & Longitude: 90.013733)
    2. Upazila Health Complex, Baraigram, Natore (Latitude: 24.3083 & Longitude: 89.1700)
    3. Char Bhabna Community Clinic, Sherpur (Latitude: 25.0188 & Longitude: 90.0175)
    4. Jamiatul Amin Mohammad Al-Islamia Cadet Madrasa, Khagan, Dhaka (Latitude: 23.872856 & Longitude: 90.318947)

    Face Detection Methodology: To extract the facial regions efficiently, RetinaNet—a deep learning-based object detection model—was employed. The use of RetinaNet ensures precise facial cropping while minimizing background noise and occlusions.

    Potential Applications:

    1. Affective Computing: Understanding infant emotions for smart healthcare and early childhood development.
    2. Computer Vision: Training deep learning models for automated infant facial expression recognition.
    3. Pediatric & Mental Health Research: Assisting in early autism screening and emotion-aware AI for child psychology.
    4. Human-Computer Interaction (HCI): Designing AI-powered assistive technologies for infants.

  16. Indoor Facial 75 Expressions Dataset

    • shaip.com
    • co.shaip.com
    json
    Updated Nov 26, 2024
    Cite
    Shaip (2024). Indoor Facial 75 Expressions Dataset [Dataset]. https://www.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Nov 26, 2024
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Indoor Facial 75 Expressions Dataset enriches the internet, media, entertainment, and mobile sectors with an in-depth exploration of human emotions. It features 60 individuals in indoor settings, showcasing a balanced gender representation and varied postures, with 75 distinct facial expressions per person. This dataset is tagged with facial expression categories, making it an invaluable tool for emotion recognition and interactive applications.

  17. Data_Sheet_1_Effect of facial emotion recognition learning transfers across emotions.docx

    • datasetcatalog.nlm.nih.gov
    • frontiersin.figshare.com
    Updated Jan 19, 2024
    Cite
    Kou, Hui; Tan, Qingli; Wu, Jia; Luo, Wei; Shao, Boyao; Bi, Taiyong (2024). Data_Sheet_1_Effect of facial emotion recognition learning transfers across emotions.docx [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001390189
    Explore at:
    Dataset updated
    Jan 19, 2024
    Authors
    Kou, Hui; Tan, Qingli; Wu, Jia; Luo, Wei; Shao, Boyao; Bi, Taiyong
    Description

    Introduction: Perceptual learning of facial expressions has been shown to be specific to the trained expression, indicating separate encoding of the emotional content of different expressions. However, little is known about the specificity of emotion recognition training with the visual search paradigm, or about the sensitivity of learning to near-threshold stimuli.

    Methods: In the present study, we adopted a visual search paradigm to measure the recognition of facial expressions. In Experiment 1 (Exp1), Experiment 2 (Exp2), and Experiment 3 (Exp3), subjects were trained for 8 days to search for a target expression in an array of faces presented for 950 ms, 350 ms, and 50 ms, respectively. In Experiment 4 (Exp4), we trained subjects to search for a triangle target and then tested them on a facial expression search task. Before and after training, subjects were tested on the trained and untrained facial expressions, presented for 950 ms, 650 ms, 350 ms, or 50 ms.

    Results: Training led to large improvements in the recognition of facial emotions only when the faces were presented long enough (Exp1: 85.89%; Exp2: 46.05%), and the training effect transferred to the untrained expression. When the faces were presented briefly (Exp3), the training effect was small (6.38%). In Exp4, the results indicated that the training effect did not transfer across categories.

    Discussion: Our findings reveal cross-emotion transfer for facial expression recognition training in a visual search task. In addition, learning hardly affects the recognition of near-threshold expressions.

  18. Recognition rate of the proposed FER system using IMFDB dataset of facial...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated May 31, 2023
    + more versions
    Cite
    Muhammad Hameed Siddiqi; Md. Golam Rabiul Alam; Choong Seon Hong; Adil Mehmood Khan; Hyunseung Choo (2023). Recognition rate of the proposed FER system using IMFDB dataset of facial expressions (Unit: %). [Dataset]. http://doi.org/10.1371/journal.pone.0162702.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Muhammad Hameed Siddiqi; Md. Golam Rabiul Alam; Choong Seon Hong; Adil Mehmood Khan; Hyunseung Choo
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recognition rate of the proposed FER system using IMFDB dataset of facial expressions (Unit: %).

  19. GAN Generated Images for Facial Expression Recognition systems

    • ieee-dataport.org
    Updated Jul 8, 2025
    Cite
    Alessandro Floris (2025). GAN Generated Images for Facial Expression Recognition systems [Dataset]. https://ieee-dataport.org/documents/gan-generated-images-facial-expression-recognition-systems
    Explore at:
    Dataset updated
    Jul 8, 2025
    Authors
    Alessandro Floris
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A good solution is to augment the DBs with appropriate techniques.
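
The augmentation idea can be illustrated with a minimal sketch. Classic transforms such as mirroring and pixel noise are used here as lightweight stand-ins; the dataset itself supplies GAN-generated images, which require a trained generator:

```python
import random

def augment(image, seed=0):
    """Return simple augmented variants of a face image given as a list
    of pixel rows. Horizontal flip and additive Gaussian noise stand in
    for heavier techniques such as GAN-based synthesis."""
    rng = random.Random(seed)
    flipped = [row[::-1] for row in image]                      # mirror the face
    noisy = [[p + rng.gauss(0, 5) for p in row] for row in image]  # mild pixel noise
    return [flipped, noisy]

img = [[0, 10], [20, 30]]  # tiny 2x2 stand-in for a face crop
flipped, noisy = augment(img)
print(flipped)  # [[10, 0], [30, 20]]
```

Each augmented variant keeps the original label, so a small expression DB can be grown severalfold before training a classifier.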

  20. Video Emotion Recognition Dataset

    • unidata.pro
    json, mp4
    Cite
    Unidata L.L.C-FZ, Video Emotion Recognition Dataset [Dataset]. https://unidata.pro/datasets/video-emotion-recognition-dataset/
    Explore at:
    Available download formats: json, mp4
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Video dataset capturing diverse facial expressions and emotions from 1000+ people, suitable for emotion recognition AI training.
