100+ datasets found
  1. East Asian Facial Expression Images Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). East Asian Facial Expression Images Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/data-license-agreement

    Area covered
    East Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

    Facial Expression Data

    This dataset comprises over 2000 facial expression images, divided into participant-wise sets with each set including:

    Expression Images: 5 different high-quality images per individual, each capturing a distinct facial emotion: Happy, Sad, Angry, Shocked, or Neutral.

    Diversity and Representation

    The dataset includes contributions from a diverse network of individuals across East Asian countries:

    Geographical Representation: Participants from East Asian countries, including China, Japan, Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more.
    Participant Profile: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
    File Format: The dataset contains images in JPEG and HEIC file formats.

    Quality and Conditions

    To ensure high utility and robustness, all images are captured under varying conditions:

    Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
    Backgrounds: A variety of backgrounds are available to enhance model generalization.
    Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

    Metadata

    Each facial expression image set is accompanied by detailed metadata for each participant, including:

    Participant Identifier
    File Name
    Age
    Gender
    Country
    Expression
    Demographic Information
    File Format

    This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
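As a minimal sketch of how such per-participant metadata could be consumed (the CSV column names, file names, and values below are assumptions for illustration, not the dataset's documented schema):

```python
import csv
from io import StringIO

# Hypothetical CSV layout based on the metadata fields listed above;
# the dataset's actual file names and schema are assumptions.
sample = StringIO(
    "participant_id,file_name,age,gender,country,expression,file_format\n"
    "p001,p001_happy.jpg,34,female,Japan,Happy,JPEG\n"
    "p002,p002_angry.heic,52,male,China,Angry,HEIC\n"
)

rows = list(csv.DictReader(sample))

# Filter to one expression class, e.g. to build a per-emotion training split.
happy = [r for r in rows if r["expression"] == "Happy"]
print([r["participant_id"] for r in happy])  # ['p001']
```

The same pattern extends to filtering by age range, gender, or country when balancing a training set across demographics.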

    Usage and Applications

    This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

    Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
    KYC Models: Streamlining the identity verification processes for financial and other services.
    Biometric Identity Systems: Developing robust biometric identification solutions.
    Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

    Secure and Ethical Collection

    Data Security: Data was securely stored and processed within our platform, ensuring confidentiality.
    Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
    Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

    Updates and Customization

    We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

    Customization & Custom

  2. Oulu-CASIA Dataset

    • paperswithcode.com
    Cite
    Guoying Zhao; Xiaohua Huang; Matti Taini; Stan Z. Li; Matti Pietikäinen, Oulu-CASIA Dataset [Dataset]. https://paperswithcode.com/dataset/oulu-casia
    Explore at:
    Authors
    Guoying Zhao; Xiaohua Huang; Matti Taini; Stan Z. Li; Matti Pietikäinen
    Area covered
    Oulu
    Description

    The Oulu-CASIA NIR&VIS facial expression database consists of six expressions (surprise, happiness, sadness, anger, fear, and disgust) from 80 people between 23 and 58 years old; 73.8% of the subjects are male. Subjects sat on a chair in the observation room facing the camera, at a camera-to-face distance of about 60 cm, and were asked to make a facial expression matching an example shown in a picture sequence. The imaging hardware runs at 25 frames per second with an image resolution of 320 × 240 pixels.

  3. MMI Dataset

    • paperswithcode.com
    Updated Jun 18, 2021
    Cite
    Maja Pantic; Michel François Valstar; Ron Rademaker; Ludo Maat (2021). MMI Dataset [Dataset]. https://paperswithcode.com/dataset/mmi
    Explore at:
    Dataset updated
    Jun 18, 2021
    Authors
    Maja Pantic; Michel François Valstar; Ron Rademaker; Ludo Maat
    Description

    The MMI Facial Expression Database consists of over 2,900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of action units (AUs) in videos (event coding), and partially coded at frame level, indicating for each frame whether an AU is in the neutral, onset, apex, or offset phase. A small part was annotated for audio-visual laughter.

  4. IFEED: Interactive Facial Expression and Emotion Detection Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 25, 2023
    Cite
    Vitorino, João (2023). IFEED: Interactive Facial Expression and Emotion Detection Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7963451
    Explore at:
    Dataset updated
    May 25, 2023
    Dataset provided by
    Praça, Isabel
    Vitorino, João
    Oliveira, Jorge
    Oliveira, Nuno
    Maia, Eva
    Dias, Tiago
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).

    The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.

    IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.
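As an illustration of the multi-class and binary uses mentioned above (the class names come from the annotation list; the integer ordering and helper function are assumptions for illustration), a label mapping could look like:

```python
# The seven IFEED annotation classes, mapped to integer labels for
# multi-class training; the ordering here is an arbitrary assumption.
CLASSES = ["Angry", "Sad", "Happy", "Fearful", "Disgusted", "Surprised", "Neutral"]
LABEL = {name: i for i, name in enumerate(CLASSES)}

def to_binary(label_name: str) -> int:
    """Collapse the seven classes into neutral (0) vs. emotional (1),
    matching the binary-classification use mentioned above."""
    return 0 if label_name == "Neutral" else 1

print(LABEL["Happy"], to_binary("Happy"), to_binary("Neutral"))  # 2 1 0
```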

  5. JAFFE (Deprecated, use v.2 instead)

    • zenodo.org
    Updated Mar 20, 2025
    Cite
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba (2025). JAFFE (Deprecated, use v.2 instead) [Dataset]. http://doi.org/10.5281/zenodo.3451524
    Explore at:
    Dataset updated
    Mar 20, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba
    Description

    V.1 is deprecated, use V.2 instead.

    The images are the same: only the README file has been updated.

    https://doi.org/10.5281/zenodo.14974867

    The JAFFE images may be used only for non-commercial scientific research.

    The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.

    Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
    Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
    arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf

    Michael J. Lyons
    "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
    arXiv: 2107.13998 (2021) https://arxiv.org/abs/2107.13998

    The following is not allowed:

    • Redistribution of the JAFFE dataset (incl. via Github, Kaggle, Colaboratory, GitCafe, CSDN etc.)
    • Posting JAFFE images on the web and social media
    • Public exhibition of JAFFE images in museums/galleries etc.
    • Broadcast in the mass media (tv shows, films, etc.)

    A few sample images (not more than 10) may be displayed in scientific publications.

  6. MMA FACIAL EXPRESSION

    • kaggle.com
    zip
    Updated Jun 6, 2020
    + more versions
    Cite
    MahmoudiMA (2020). MMA FACIAL EXPRESSION [Dataset]. https://www.kaggle.com/mahmoudima/mma-facial-expression
    Explore at:
    Available download formats: zip (173545522 bytes)
    Dataset updated
    Jun 6, 2020
    Authors
    MahmoudiMA
    Description

    Dataset

    This dataset was created by MahmoudiMA

    Contents

    It contains the following files:

  7. Thermal Face Database Dataset

    • paperswithcode.com
    Updated Sep 20, 2022
    Cite
    (2022). Thermal Face Database Dataset [Dataset]. https://paperswithcode.com/dataset/thermal-face-database
    Explore at:
    Dataset updated
    Sep 20, 2022
    Description

    High-resolution thermal infrared face database with extensive manual annotations, introduced by Kopaczka et al., 2018. It is useful for training algorithms for image processing tasks as well as facial expression recognition. The full database, all annotations, and the complete source code are freely available from the authors for research purposes at https://github.com/marcinkopaczka/thermalfaceproject.

    Please cite the following papers for the dataset:
    [1] M. Kopaczka, R. Kolk and D. Merhof, "A fully annotated thermal face database and its application for thermal facial expression recognition," 2018 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), 2018, pp. 1-6, doi: 10.1109/I2MTC.2018.8409768.
    [2] Kopaczka, M., Kolk, R., Schock, J., Burkhard, F., & Merhof, D. (2018). A thermal infrared face database with facial landmarks and emotion labels. IEEE Transactions on Instrumentation and Measurement, 68(5), 1389-1401.

  8. Table_2_East Asian Young and Older Adult Perceptions of Emotional Faces From...

    • figshare.com
    xlsx
    Updated May 30, 2023
    + more versions
    Cite
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh (2023). Table_2_East Asian Young and Older Adult Perceptions of Emotional Faces From an Age- and Sex-Fair East Asian Facial Expression Database.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2018.02358.s004
    Explore at:
    Available download formats: xlsx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Frontiers
    Authors
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    East Asia
    Description

    There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages and sex. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and how this might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older, male and female Taiwanese using the Facial Action Coding System (FACS). Combined with a prior database, this present database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in the photographs. Taiwanese raters perceived happy faces as one category; sad, angry, and disgusted expressions as one category; and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar or accentuated relative to young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.

  9. Human Face Expression Dataset

    • universe.roboflow.com
    zip
    Updated Dec 17, 2023
    + more versions
    Cite
    Human Face Expression Recognition (2023). Human Face Expression Dataset [Dataset]. https://universe.roboflow.com/human-face-expression-recognition/human-face-expression
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 17, 2023
    Dataset authored and provided by
    Human Face Expression Recognition
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Human Face Bounding Boxes
    Description

    Human Face Expression

    ## Overview
    
    Human Face Expression is a dataset for object detection tasks - it contains Human Face annotations for 1,228 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
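Since this is an object-detection dataset with face bounding boxes, and object-detection labels are often exported in normalized YOLO format (an assumption here; the page does not state the export format), a small conversion helper can turn those labels into pixel coordinates:

```python
def yolo_to_xyxy(cx, cy, w, h, img_w, img_h):
    """Convert a normalized YOLO box (center x, center y, width, height,
    all in [0, 1]) to pixel corner coordinates (x1, y1, x2, y2)."""
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2

# A face box centered in a 640x480 frame, a quarter of the width wide
# and half the height tall.
print(yolo_to_xyxy(0.5, 0.5, 0.25, 0.5, 640, 480))  # (240.0, 120.0, 400.0, 360.0)
```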
    
  10. A fMRI dataset in response to large-scale short natural dynamic facial...

    • zenodo.org
    zip
    Updated Oct 11, 2024
    + more versions
    Cite
    Panpan Chen (2024). A fMRI dataset in response to large-scale short natural dynamic facial expression videos [Dataset]. http://doi.org/10.5281/zenodo.13919442
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Panpan Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Facial expression is among the most natural ways for human beings to convey emotional information in daily life. Although the neural mechanism of facial expression has been extensively studied using lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural facial expressions still needs to be investigated; to our knowledge, data on large-scale natural facial expression videos is currently missing. We describe here the Natural Facial Expressions Dataset (NFED), an fMRI dataset of responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimulus features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.

  11. Domestic Cats Facial Expressions Dataset

    • universe.roboflow.com
    zip
    Updated Aug 5, 2024
    Cite
    Mubbarryz (2024). Domestic Cats Facial Expressions Dataset [Dataset]. https://universe.roboflow.com/mubbarryz/domestic-cats-facial-expressions
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 5, 2024
    Dataset authored and provided by
    Mubbarryz
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cats Face Bounding Boxes
    Description

    Domestic Cats Facial Expressions

    ## Overview
    
    Domestic Cats Facial Expressions is a dataset for object detection tasks - it contains Cats Face annotations for 548 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. Face expression recognition dataset

    • kaggle.com
    zip
    Updated Jan 3, 2019
    Cite
    Jonathan Oheix (2019). Face expression recognition dataset [Dataset]. https://www.kaggle.com/datasets/jonathanoheix/face-expression-recognition-dataset/data
    Explore at:
    Available download formats: zip (126358582 bytes)
    Dataset updated
    Jan 3, 2019
    Authors
    Jonathan Oheix
    Description

    Dataset

    This dataset was created by Jonathan Oheix

    Contents

  13. 4,458 People - 3D Facial Expressions Recognition Data

    • nexdata.ai
    • m.nexdata.ai
    Updated Nov 17, 2023
    + more versions
    Cite
    Nexdata (2023). 4,458 People - 3D Facial Expressions Recognition Data [Dataset]. https://www.nexdata.ai/datasets/1097?source=Github
    Explore at:
    Dataset updated
    Nov 17, 2023
    Dataset provided by
    nexdata technology inc
    Authors
    Nexdata
    Variables measured
    Device, Accuracy, Data size, Data format, Data diversity, Annotation content, Collecting environment, Population distribution
    Description

    4,458 People - 3D Facial Expressions Recognition Data. Collection scenes include indoor and outdoor scenes. The dataset includes males and females, with an age distribution ranging from juveniles to the elderly; young and middle-aged people are the majority. Devices used include the iPhone X and iPhone XR. Data diversity covers different expressions, ages, races, and collecting scenes. This data can be used for tasks such as 3D facial expression recognition.

  14. Data from: Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Sep 11, 2020
    Cite
    Marie Alaghband; Niloofar Yousefi; Ivan Garibay (2020). Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language [Dataset]. http://doi.org/10.7910/DVN/358QMQ
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 11, 2020
    Dataset provided by
    Harvard Dataverse
    Authors
    Marie Alaghband; Niloofar Yousefi; Ivan Garibay
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/2.2/customlicense?persistentId=doi:10.7910/DVN/358QMQ

    Description

    Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3,000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered a "None" class for images whose facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider application in gesture recognition and Human-Computer Interaction (HCI) systems.

  15. UIBVFED: Virtual facial expression dataset

    • data.niaid.nih.gov
    • zenodo.org
    • +1more
    Updated Jul 8, 2024
    Cite
    Mascaró Oliver, Miquel (2024). UIBVFED: Virtual facial expression dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10376478
    Explore at:
    Dataset updated
    Jul 8, 2024
    Dataset authored and provided by
    Mascaró Oliver, Miquel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset is composed of 660 facial images (1080x1920) from 20 virtual characters, each performing 32 facial expressions. The avatars represent 10 men and 10 women, aged between 20 and 80, from different ethnicities. Expressions are classified into the six universal expressions according to Gary Faigin's classification.

  16. Native American Facial Expression Images Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). Native American Facial Expression Images Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-native-american
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the Native American Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

    Facial Expression Data

    This dataset comprises over 1000 facial expression images, divided into participant-wise sets with each set including:

    Expression Images: 5 different high-quality images per individual, each capturing a distinct facial emotion: Happy, Sad, Angry, Shocked, or Neutral.

    Diversity and Representation

    The dataset includes contributions from a diverse network of Native American individuals:

    Geographical Representation: Participants from Native American countries, including USA, Canada, Mexico and more.
    Participant Profile: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
    File Format: The dataset contains images in JPEG and HEIC file formats.

    Quality and Conditions

    To ensure high utility and robustness, all images are captured under varying conditions:

    Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
    Backgrounds: A variety of backgrounds are available to enhance model generalization.
    Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

    Metadata

    Each facial expression image set is accompanied by detailed metadata for each participant, including:

    Participant Identifier
    File Name
    Age
    Gender
    Country
    Expression
    Demographic Information
    File Format

    This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.

    Usage and Applications

    This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

    Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
    KYC Models: Streamlining the identity verification processes for financial and other services.
    Biometric Identity Systems: Developing robust biometric identification solutions.
    Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

    Secure and Ethical Collection

    Data Security: Data was securely stored and processed within our platform, ensuring confidentiality.
    Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
    Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

    Updates and Customization

    We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

    Customization & Custom Collection Options:

  17. Research Data: Facial Expression Recognition under Visual Field Restriction

    • zenodo.org
    • explore.openaire.eu
    bin, txt
    Updated Apr 25, 2024
    Cite
    Melina Boratto Urtado; Rafael Delalibera Rodrigues (2024). Research Data: Facial Expression Recognition under Visual Field Restriction [Dataset]. http://doi.org/10.5281/zenodo.10703513
    Explore at:
    Available download formats: txt, bin
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Melina Boratto Urtado; Rafael Delalibera Rodrigues
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the following files:

    - view_trial.xlsx: Excel spreadsheet containing data from individual trials.
    - view_participant.xlsx: Excel spreadsheet containing data aggregated at the participant level.
    - consensus.xlsx: Excel spreadsheet containing consensus data analysis.
    - image_id_list.txt: Text file listing the IDs of the images used in the study from The Karolinska Directed Emotional Faces (KDEF); https://kdef.se/.

    These files provide comprehensive data used in the research project titled "Exploring the Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study" conducted by M. B. Urtado, R. D. Rodrigues, and S. S. Fukusima. The dataset is intended for analysis and replication of the study's findings.

    When using these data, please cite the following article:
    Urtado, M.B.; Rodrigues, R.D.; Fukusima, S.S. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behavioral Sciences 2024, 14, 355. https://doi.org/10.3390/bs14050355

    The study was approved by the Research Ethics Committee (CEP) of the University of São Paulo (protocol code 41844720.5.0000.5407).

  18. NPU-FER: A web image dataset for facial expression recognition

    • scidb.cn
    Updated Oct 17, 2021
    Cite
    Xianlin Peng; Zhaoqiang Xia; Lei Li; Xiaoyi Feng (2021). NPU-FER: A web image dataset for facial expression recognition [Dataset]. http://doi.org/10.11922/sciencedb.01199
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 17, 2021
    Dataset provided by
    Science Data Bank
    Authors
    Xianlin Peng; Zhaoqiang Xia; Lei Li; Xiaoyi Feng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset for natural facial expression recognition was constructed by leveraging web images. A set of labeled web images was obtained from image search engines using specific keywords. Junk-image cleansing algorithms were then used to remove mislabeled images, and the set was further cleaned by manual labeling. In total, 1,648 images covering six categories of facial expressions were collected, with an unbalanced number of images and varying resolutions per category.

  19. Dataset of Mouse Facial Expressions

    • figshare.com
    zip
    Updated May 31, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Daisuke Ino (2023). Dataset of Mouse Facial Expressions [Dataset]. http://doi.org/10.6084/m9.figshare.22083137.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 31, 2023
    Dataset provided by
    figshare
    Authors
    Daisuke Ino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Mouse facial images during three emotional states (neutral, painful, and tickling).

  20. DISFA (Denver Intensity of Spontaneous Facial Action)

    • opendatalab.com
    zip
    Updated Apr 22, 2023
    + more versions
    Cite
    University of Pittsburgh (2023). DISFA(Denver Intensity of Spontaneous Facial Action) [Dataset]. https://opendatalab.com/OpenDataLab/DISFA_Denver_Intensity_of_Spontaneous_etc
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 22, 2023
    Dataset provided by
    University of Pittsburgh
    Carnegie Mellon University
    University of Denver
    License

    http://mohammadmahoor.com/disfa-contact-form/

    Area covered
    Denver
    Description

    DISFA (Denver Intensity of Spontaneous Facial Action) is a facial expression action dataset. It consists of 27 videos of 4,844 frames each, for a total of 130,788 images, annotated with facial action units at different intensity levels. DISFA is widely used in the field of facial expression recognition and contains a large amount of smile data (i.e., action unit 12): 30,792 images in the dataset contain the smile action unit. Overall, 82,176 images contain at least one active action unit, while the remaining 48,612 images contain none.
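    Frame counts like those above can be derived from frame-level action-unit annotations. A minimal sketch over toy data (the real DISFA files require a signed agreement to obtain, so the annotations below are invented):

    ```python
    def au_statistics(frames, smile_au=12):
        """Given per-frame dicts mapping AU number -> intensity (0-5),
        count smile frames (AU12 active), frames with any active AU,
        and frames with no active AU."""
        smile = sum(1 for f in frames if f.get(smile_au, 0) > 0)
        active = sum(1 for f in frames if any(v > 0 for v in f.values()))
        return {"smile": smile, "any_au": active, "no_au": len(frames) - active}

    # Toy annotations: three frames with intensities for a few AUs.
    frames = [
        {12: 3, 6: 2},  # smile frame (AU12 and AU6 active)
        {4: 1},         # brow lowerer only
        {},             # neutral frame, no active AUs
    ]
    print(au_statistics(frames))  # {'smile': 1, 'any_au': 2, 'no_au': 1}
    ```

    Applied to the full dataset, the same counting logic would reproduce the 30,792 / 82,176 / 48,612 split described above.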


Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
Backgrounds: A variety of backgrounds are available to enhance model generalization.
Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

Metadata

Each facial expression image set is accompanied by detailed metadata for each participant, including:

Participant Identifier
File Name
Age
Gender
Country
Expression
Demographic Information
File Format

This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
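As an illustration of how the per-participant metadata might be consumed, here is a minimal sketch filtering records by expression and age range (field names follow the list above; the record values and the helper itself are invented for illustration):

```python
def filter_records(records, expression=None, min_age=None, max_age=None):
    """Select metadata records matching an expression label and age range."""
    out = []
    for r in records:
        if expression is not None and r["Expression"] != expression:
            continue
        if min_age is not None and r["Age"] < min_age:
            continue
        if max_age is not None and r["Age"] > max_age:
            continue
        out.append(r)
    return out

records = [
    {"Participant Identifier": "P001", "Age": 25, "Gender": "Female",
     "Country": "Japan", "Expression": "Happy", "File Format": "JPEG"},
    {"Participant Identifier": "P002", "Age": 64, "Gender": "Male",
     "Country": "China", "Expression": "Sad", "File Format": "HEIC"},
]
young_happy = filter_records(records, expression="Happy", max_age=30)
print([r["Participant Identifier"] for r in young_happy])  # ['P001']
```

Slicing the metadata this way is how one would build demographically balanced train/test splits from the dataset.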

Usage and Applications

This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
KYC Models: Streamlining identity verification processes for financial and other services.
Biometric Identity Systems: Developing robust biometric identification solutions.
Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

Secure and Ethical Collection

Data Security: Data was securely stored and processed within our platform, ensuring data security and confidentiality.
Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

Updates and Customization

We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

