Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Emotion recognition Dataset
Dataset comprises 199,955 images featuring 28,565 individuals displaying a variety of facial expressions. It is designed for research in emotion recognition and facial expression analysis across diverse races, genders, and ages. By utilizing this dataset, researchers and developers can enhance their understanding of facial recognition technology and improve the accuracy of emotion classification systems.
This… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/facial-expression-recognition-dataset.
Facial Expression Recognition dataset helps AI interpret human emotions for improved sentiment analysis and recognition
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset consists of images capturing people displaying 7 distinct emotions (anger, contempt, disgust, fear, happiness, sadness, and surprise). Each image in the dataset represents one of these specific emotions, enabling researchers and machine learning practitioners to study and develop models for emotion recognition and analysis. The images encompass a diverse range of individuals, including different genders, ethnicities, and age groups. The dataset aims to provide a comprehensive representation of human emotions, allowing for a wide range of use cases.
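For training, the seven classes listed above are typically mapped to integer labels. A minimal sketch follows; the alphabetical ordering is an illustrative assumption, not part of the dataset:

```python
# The seven emotion classes named in the dataset description; the
# alphabetical ordering here is an illustrative assumption.
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "sadness", "surprise"]

# Map class names to integer labels for model training, and back.
LABEL_TO_ID = {name: i for i, name in enumerate(EMOTIONS)}
ID_TO_LABEL = {i: name for name, i in LABEL_TO_ID.items()}
```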
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).
The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.
IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the East Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.
The dataset includes over 2,000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:
To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:
Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:
This metadata helps in building expression recognition models that are both accurate and inclusive.
This dataset is ideal for a variety of AI and computer vision applications, including:
To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:
This dataset contains facial expression recognition data from 1,142 people in online conference scenes. Participants include Asian, Caucasian, Black, and Brown individuals, mainly young and middle-aged adults. Data was collected across a variety of indoor scenes, covering meeting rooms, coffee shops, libraries, bedrooms, etc. Each participant performed seven key expressions: normal, happy, surprised, sad, angry, disgusted, and fearful. The dataset is suitable for tasks such as facial expression recognition, emotion recognition, human-computer interaction, and video conferencing AI applications.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Facial Expression Recognition is a dataset for classification tasks - it contains Emotions annotations for 7,939 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The 68,405 Videos – Micro-Expression Dataset provides 57 types of subtle facial expressions across diverse populations and environments. Participants include Asian, Black, Caucasian, and Brown individuals; ages span under 18, 18-45, 46-60, and over 60; collection environments include both indoor and outdoor scenes. The dataset can be used in scenarios such as facial recognition, emotion recognition, and nonverbal behavior analysis.
https://www.futurebeeai.com/policies/ai-data-license-agreement
Welcome to the African Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.
The dataset includes over 2,000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:
To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:
Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:
This metadata helps in building expression recognition models that are both accurate and inclusive.
This dataset is ideal for a variety of AI and computer vision applications, including:
To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Facial Expressions Recognition is a dataset for classification tasks - it contains Facial Expressions annotations for 9,784 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is a meticulously curated dataset designed for infant facial emotion recognition, featuring four primary emotional expressions: Angry, Cry, Laugh, and Normal. The dataset aims to facilitate research in machine learning, deep learning, affective computing, and human-computer interaction by providing a large collection of labeled infant facial images.
Primary Data (1,600 Images):
- Angry: 400
- Cry: 400
- Laugh: 400
- Normal: 400
Data Augmentation & Expanded Dataset (26,143 Images): To enhance robustness and expand the dataset, 20 augmentation techniques (HorizontalFlip, VerticalFlip, Rotate, ShiftScaleRotate, BrightnessContrast, GaussNoise, GaussianBlur, Sharpen, HueSaturationValue, CLAHE, GridDistortion, ElasticTransform, GammaCorrection, MotionBlur, ColorJitter, Emboss, Equalize, Posterize, FogEffect, and RainEffect) were applied randomly. This resulted in a significantly larger dataset with:
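Two of the listed techniques can be sketched with plain NumPy to illustrate the random-application strategy. This is a simplified sketch, not the authors' pipeline; the 50% application probability and the gamma value are assumptions:

```python
import numpy as np

def horizontal_flip(img):
    # Mirror the image left-to-right (one of the 20 listed techniques).
    return img[:, ::-1, :]

def gamma_correction(img, gamma=1.5):
    # Nonlinear brightness adjustment on a uint8 image (another listed
    # technique); gamma=1.5 is an illustrative choice.
    norm = img.astype(np.float32) / 255.0
    return np.clip((norm ** gamma) * 255.0, 0, 255).astype(np.uint8)

def augment(img, rng):
    # Apply each transform with 50% probability, mimicking the
    # "applied randomly" strategy (only two of the 20 shown here).
    ops = [horizontal_flip, gamma_correction]
    out = img
    for op in ops:
        if rng.random() < 0.5:
            out = op(out)
    return out

rng = np.random.default_rng(0)
sample = np.zeros((64, 64, 3), dtype=np.uint8)
augmented = augment(sample, rng)
```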
Data Collection & Ethical Considerations: The dataset was collected under strict ethical guidelines to ensure compliance with privacy and data protection laws. Key ethical considerations include: 1. Ethical Approval: The study was reviewed and approved by the Institutional Review Board (IRB) of Daffodil International University under Reference No: REC-FSIT-2024-11-10. 2. Informed Parental Consent: Written consent was obtained from parents before capturing and utilizing infant facial images for research purposes. 3. Privacy Protection: No personally identifiable information (PII) is included in the dataset, and images are strictly used for research in AI-driven emotion recognition.
Data Collection Locations & Geographical Diversity: To ensure diversity in infant facial expressions, data collection was conducted across multiple locations in Bangladesh, covering healthcare centers and educational institutions:
Face Detection Methodology: To extract the facial regions efficiently, RetinaNet—a deep learning-based object detection model—was employed. The use of RetinaNet ensures precise facial cropping while minimizing background noise and occlusions.
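The cropping step that follows detection can be sketched as below. This is a minimal sketch: `boxes` stands in for pixel-coordinate detections as a face-trained RetinaNet would return them, and the margin value is an assumption:

```python
import numpy as np

def crop_faces(image, boxes, margin=0.1):
    """Crop detected face regions from an (H, W, C) image.

    `boxes` are (x1, y1, x2, y2) detections in pixel coordinates;
    `margin` pads each box proportionally before cropping, clamped
    to the image bounds.
    """
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        mx = int((x2 - x1) * margin)
        my = int((y2 - y1) * margin)
        x1, y1 = max(0, x1 - mx), max(0, y1 - my)
        x2, y2 = min(w, x2 + mx), min(h, y2 + my)
        crops.append(image[y1:y2, x1:x2])
    return crops
```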
Potential Applications: 1. Affective Computing: Understanding infant emotions for smart healthcare and early childhood development. 2. Computer Vision: Training deep learning models for automated infant facial expression recognition. 3. Pediatric & Mental Health Research: Assisting in early autism screening and emotion-aware AI for child psychology. 4. Human-Computer Interaction (HCI): Designing AI-powered assistive technologies for infants.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Most facial expression recognition (FER) systems rely on machine learning approaches that require large databases (DBs) for effective training. As these are not easily available, a good solution is to augment the DBs with appropriate techniques, which are typically based on either geometric transformations or deep-learning-based technologies (e.g., Generative Adversarial Networks (GANs)). Whereas the first category of techniques has been widely adopted in the past, studies that use GAN-based techniques are limited for FER systems. To advance in this respect, we evaluate the impact of GAN techniques by creating a new DB containing the generated synthetic images.
The face images contained in the KDEF DB serve as the basis for creating novel synthetic images by combining the facial features of two images (i.e., Candie Kung and Cristina Saralegui) selected from the YouTube-Faces DB. The novel images differ from each other, in particular concerning the eyes, the nose, and the mouth, whose characteristics are taken from the Candie and Cristina images.
The total number of novel synthetic images generated with the GAN is 980 (70 individuals from KDEF DB x 7 emotions x 2 subjects from YouTube-Faces DB).
The zip file "GAN_KDEF_Candie" contains the 490 images generated by combining the KDEF images with the Candie Kung image; the zip file "GAN_KDEF_Cristina" contains the 490 images generated by combining the KDEF images with the Cristina Saralegui image. The image IDs match those used in the KDEF DB. The generated synthetic images have a resolution of 562x762 pixels.
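The counts above follow directly from the combination of sources; a quick check:

```python
kdef_individuals = 70   # subjects taken from the KDEF DB
emotions = 7            # expressions per subject
reference_faces = 2     # Candie Kung and Cristina Saralegui

# 70 x 7 x 2 = 980 synthetic images in total,
# split into one zip archive per reference face.
total_images = kdef_individuals * emotions * reference_faces
images_per_zip = total_images // reference_faces
```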
If you make use of this dataset, please consider citing the following publication:
Porcu, S., Floris, A., & Atzori, L. (2020). Evaluation of Data Augmentation Techniques for Facial Expression Recognition Systems. Electronics, 9, 1892, doi: 10.3390/electronics9111892, url: https://www.mdpi.com/2079-9292/9/11/1892.
BibTex format:
@article{porcu2020evaluation,
  title={Evaluation of Data Augmentation Techniques for Facial Expression Recognition Systems},
  author={Porcu, Simone and Floris, Alessandro and Atzori, Luigi},
  journal={Electronics},
  volume={9},
  number={11},
  article-number={1892},
  pages={1892},
  year={2020},
  publisher={MDPI},
  doi={10.3390/electronics9111892}
}
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This directory is divided into 40 sub-folders, each containing all the facial expression data for one participant. The sub-folders are named after the participant ID and include four sub-sub-folders: central RGB (CRGB) facial expression data, left RGB (LRGB) facial expression data, right RGB (RRGB) facial expression data, and central infrared (CIR) facial expression data. Each folder contains multiple MP4 files, and each MP4 file corresponds to one valid emotional driving recording.
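A sketch of walking this layout with the standard library; the folder names follow the description above, and `root` is a placeholder path:

```python
from pathlib import Path

# Camera views described above: central/left/right RGB plus central infrared.
MODALITIES = ["CRGB", "LRGB", "RRGB", "CIR"]

def index_dataset(root):
    """Map participant ID -> modality -> sorted list of MP4 clip paths."""
    index = {}
    for participant in sorted(Path(root).iterdir()):
        if not participant.is_dir():
            continue
        index[participant.name] = {
            m: sorted((participant / m).glob("*.mp4")) for m in MODALITIES
        }
    return index
```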
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Facial Expression Detection is a dataset for object detection tasks - it contains Expression annotations for 1,697 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Facial Expression Recognition Dataset
This dataset supports research on facial expression recognition using visible and infrared modalities. It includes data for various facial expressions from two publicly available datasets: VIRI (five expressions: angry, happy, neutral, sad, and surprised) and NVIE (three expressions: fear, disgust, and happy). The dataset has been processed and prepared for training and evaluation of machine learning models. It is designed for use with deep learning frameworks like PyTorch and supports experiments in feature extraction, model evaluation, and early fusion approaches for visible and infrared modalities. For details on the methodology, preprocessing steps, and evaluation metrics, please refer to the linked GitHub repository: https://github.com/naseemmuhammadtahir/raw-data. This dataset facilitates reproducibility and exploration of advanced models for facial expression recognition tasks in diverse modalities.
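The early-fusion approach mentioned above is commonly implemented as channel-wise concatenation of the two modalities before the network input. A minimal NumPy sketch, where the array shapes are assumptions:

```python
import numpy as np

def early_fuse(visible, infrared):
    """Early fusion: stack modalities along the channel axis.

    `visible` is an (H, W, 3) RGB array and `infrared` an (H, W) or
    (H, W, 1) IR array; the result is an (H, W, 4) input that a network
    with a 4-channel first convolution can consume.
    """
    if infrared.ndim == 2:
        infrared = infrared[..., None]
    if visible.shape[:2] != infrared.shape[:2]:
        raise ValueError("modalities must be spatially aligned")
    return np.concatenate([visible, infrared], axis=-1)
```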
The JAFFE images may be used only for non-commercial scientific research.
The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.
Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf
Michael J. Lyons
"Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
arXiv: 2107.13998 (2021) https://arxiv.org/abs/2107.13998
The following is not allowed:
A few sample images (not more than 10) may be displayed in scientific publications.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Video Dataset of Various Emotions for Recognition Tasks
Dataset comprises 1,000+ videos featuring 11 facial emotions and 15 inner emotions expressed by individuals from diverse backgrounds, including various races, genders, and ages. It is designed for emotion recognition research, focusing on emotion detection and emotion classification tasks. By utilizing this dataset, researchers can explore advanced emotion analysis techniques and develop robust recognition models that can… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/video-emotion-recognition-dataset.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
FER2013 Enhanced: Advanced Facial Expression Recognition Dataset
The most comprehensive and quality-enhanced version of the famous FER2013 dataset for state-of-the-art emotion recognition research and applications.
🎯 Dataset Overview
FER2013 Enhanced is a significantly improved version of the landmark FER2013 facial expression recognition dataset. This enhanced version provides AI-powered quality assessment, balanced data splits, comprehensive metadata, and multi-format… See the full description on the dataset page: https://huggingface.co/datasets/abhilash88/fer2013-enhanced.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Recognition rate of the proposed FER system using IMFDB dataset of facial expressions (Unit: %).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Indoor Facial 75 Expressions Dataset enriches the internet, media, entertainment, and mobile sectors with an in-depth exploration of human emotions. It features 60 individuals in indoor settings, showcasing a balanced gender representation and varied postures, with 75 distinct facial expressions per person. This dataset is tagged with facial expression categories, making it an invaluable tool for emotion recognition and interactive applications.