100+ datasets found
  1. BioID-PTS-V1.2

    • bioid.com
    Updated Mar 2, 2011
    Cite
    BioID (2011). BioID-PTS-V1.2 [Dataset]. https://www.bioid.com/face-database/
    Dataset authored and provided by
    BioID
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    FGnet Markup Scheme of the BioID Face Database. The BioID Face Database is used within the FGnet project of the European Working Group on face and gesture recognition. David Cristinacce and Kola Babalola, PhD students in the Department of Imaging Science and Biomedical Engineering at the University of Manchester, marked up the images from the BioID Face Database, selecting several additional feature points that are very useful for facial analysis and gesture recognition.

  2. BioID Face Database

    • bioid.com
    Updated Mar 2, 2011
    Cite
    BioID (2011). BioID Face Database [Dataset]. https://www.bioid.com/face-database/
    Available download formats: text/csv+zip, text/x-portable-graymap+zip
    Dataset authored and provided by
    BioID
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Pixel
    Description

    The BioID Face Database was recorded and published to give all researchers working on face detection the possibility to compare the quality of their face detection algorithms with others. Special emphasis was placed on real-world conditions during recording, so the test set features a large variety of illumination, backgrounds, and face sizes. The dataset consists of 1521 gray-level images with a resolution of 384×286 pixels, each showing the frontal view of the face of one of 23 different test persons. For comparison purposes the set also contains manually set eye positions. The images are labeled BioID_xxxx.pgm, where the characters xxxx are replaced by the index of the current image (with leading zeros). Similarly, the files BioID_xxxx.eye contain the eye positions for the corresponding images.
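    As a minimal sketch (not part of the dataset distribution), the naming convention above can be used to enumerate image/annotation pairs; `bioid_pairs` and the `data_dir` path are hypothetical names, assuming the archive has been extracted locally:

```python
from pathlib import Path

def bioid_pairs(data_dir, n_images=1521):
    """Yield (image, eye-annotation) path pairs following the BioID naming
    convention BioID_xxxx.pgm / BioID_xxxx.eye with zero-padded indices.
    Hypothetical helper; data_dir is wherever the archive was extracted."""
    data_dir = Path(data_dir)
    for i in range(n_images):
        stem = f"BioID_{i:04d}"
        yield data_dir / f"{stem}.pgm", data_dir / f"{stem}.eye"
```

    For example, the first pair is BioID_0000.pgm / BioID_0000.eye and the last is BioID_1520.pgm / BioID_1520.eye.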

  3. East Asian Facial Expression Images Dataset

    • futurebeeai.com
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). East Asian Facial Expression Images Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia
    Available download formats: wav
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/data-license-agreement

    Area covered
    East Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

    Facial Expression Data

    This dataset comprises over 2000 facial expression images, divided into participant-wise sets with each set including:

    Expression Images: 5 high-quality images per individual, each capturing a distinct facial emotion: Happy, Sad, Angry, Shocked, or Neutral.

    Diversity and Representation

    The dataset includes contributions from a diverse network of individuals across East Asian countries, such as:

    Geographical Representation: Participants from East Asian countries, including China, Japan, Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more.
    Participant Profile: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
    File Format: The dataset contains images in JPEG and HEIC file format.

    Quality and Conditions

    To ensure high utility and robustness, all images are captured under varying conditions:

    Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
    Backgrounds: A variety of backgrounds are available to enhance model generalization.
    Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

    Metadata

    Each facial expression image set is accompanied by detailed metadata for each participant, including:

    Participant Identifier
    File Name
    Age
    Gender
    Country
    Expression
    Demographic Information
    File Format

    This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.

    Usage and Applications

    This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

    Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
    KYC Models: Streamlining the identity verification processes for financial and other services.
    Biometric Identity Systems: Developing robust biometric identification solutions.
    Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

    Secure and Ethical Collection

    Data Security: Data was securely stored and processed within our platform, ensuring data security and confidentiality.
    Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
    Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

    Updates and Customization

    We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

    Customization & Custom

  4. 3D Facial Norms Database

    • rrid.site
    • scicrunch.org
    • +2 more
    Updated Feb 2, 2025
    + more versions
    Cite
    (2025). 3D Facial Norms Database [Dataset]. http://identifiers.org/RRID:SCR_005991
    Description

    Database of high-quality craniofacial anthropometric normative data for the research and clinical community, based on digital stereophotogrammetry. Unlike traditional craniofacial normative datasets, which are limited to measures obtained with handheld calipers and tape measures, the anthropometric data provided here are based on digital stereophotogrammetry, a method of 3D surface imaging ideally suited for capturing human facial surface morphology. Also unlike more traditional normative craniofacial resources, the 3D Facial Norms Database allows users to interact with data via an intuitive graphical interface and, given proper credentials, gain access to individual-level data, allowing users to perform their own analyses.

  5. Oulu-CASIA Dataset

    • paperswithcode.com
    Cite
    Guoying Zhao; Xiaohua Huang; Matti Taini; Stan Z. Li; Matti Pietikäinen, Oulu-CASIA Dataset [Dataset]. https://paperswithcode.com/dataset/oulu-casia
    Authors
    Guoying Zhao; Xiaohua Huang; Matti Taini; Stan Z. Li; Matti Pietikäinen
    Area covered
    Oulu
    Description

    The Oulu-CASIA NIR&VIS facial expression database consists of six expressions (surprise, happiness, sadness, anger, fear, and disgust) from 80 people between 23 and 58 years old; 73.8% of the subjects are male. Each subject sat on a chair in the observation room facing the camera, at a camera-to-face distance of about 60 cm, and was asked to make a facial expression according to an example shown in a picture sequence. The imaging hardware runs at 25 frames per second and the image resolution is 320 × 240 pixels.

  6. Native American Facial Timeline Dataset | Facial Images from Past

    • futurebeeai.com
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). Native American Facial Timeline Dataset | Facial Images from Past [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-historical-native-american
    Available download formats: wav
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the Native American Facial Images from Past Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

    Facial Image Data

    This dataset comprises over 5,000 images, divided into participant-wise sets, with each set including:

    Historical Images: 22 high-quality historical images per individual, spanning a 10-year timeline.
    Enrollment Image: One modern high-quality image for reference.

    Diversity and Representation

    The dataset includes contributions from a diverse network of Native American individuals across several countries:

    Geographical Representation: Participants from countries including USA, Canada, Mexico and more.
    Demographics: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
    File Format: The dataset contains images in JPEG and HEIC file format.

    Quality and Conditions

    To ensure high utility and robustness, all images are captured under varying conditions:

    Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
    Backgrounds: A variety of backgrounds are available to enhance model generalization.
    Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

    Metadata

    Each image set is accompanied by detailed metadata for each participant, including:

    Participant Identifier
    File Name
    Age at the time of capture
    Gender
    Country
    Demographic Information
    File Format

    This metadata is essential for training models that can accurately recognize and identify Native American faces across different demographics and conditions.

    Usage and Applications

    This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:

    Facial Recognition Models: Improving the accuracy and reliability of facial recognition systems.
    KYC Models: Streamlining the identity verification processes for financial and other services.
    Biometric Identity Systems: Developing robust biometric identification solutions.
    Age Prediction Models: Training models to accurately predict the age of individuals based on facial features.
    Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

    Secure and Ethical Collection

    Data Security: Data was securely stored and processed within our platform, ensuring data security and confidentiality.
    Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
    Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

  7. MMI Dataset

    • paperswithcode.com
    Updated Jun 18, 2021
    Cite
    Maja Pantic; Michel François Valstar; Ron Rademaker; Ludo Maat (2021). MMI Dataset [Dataset]. https://paperswithcode.com/dataset/mmi
    Authors
    Maja Pantic; Michel François Valstar; Ron Rademaker; Ludo Maat
    Description

    The MMI Facial Expression Database consists of over 2900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of AUs in videos (event coding) and partially coded at the frame level, indicating for each frame whether an AU is in the neutral, onset, apex, or offset phase. A small part was annotated for audiovisual laughter.

  8. Facial Recognition Market will grow at a CAGR of 17.0% from 2024 to 2031!

    • cognitivemarketresearch.com
    Updated Apr 6, 2024
    Cite
    Cognitive Market Research (2024). Facial Recognition Market will grow at a CAGR of 17.0% from 2024 to 2031! [Dataset]. https://www.cognitivemarketresearch.com/facial-recognition-market-report
    Available download formats: pdf, excel, csv, ppt
    Dataset provided by
    Decipher Market Research
    Authors
    Cognitive Market Research
    License

    https://www.cognitivemarketresearch.com/privacy-policy

    Time period covered
    2021 - 2033
    Area covered
    Global
    Description

    According to Cognitive Market Research, the global Facial Recognition market will be USD 6515.2 million in 2024 and expand at a compound annual growth rate (CAGR) of 17.0% from 2024 to 2031.

    North America held the largest share, more than 40% of global revenue, with a market size of USD 2606.08 million in 2024, and will grow at a CAGR of 15.2% from 2024 to 2031.
    Europe accounted for over 30% of the global market, with a market size of USD 1954.56 million.
    Asia Pacific held around 23% of global revenue, with a market size of USD 1498.50 million in 2024, and will grow at a CAGR of 19.0% from 2024 to 2031.
    Latin America held more than 5% of global revenue, with a market size of USD 325.76 million in 2024, and will grow at a CAGR of 16.4% from 2024 to 2031.
    The Middle East and Africa held around 2% of global revenue, with a market size of USD 130.30 million in 2024, and will grow at a CAGR of 16.7% from 2024 to 2031.
    The government and defense segment held the highest facial recognition market revenue share in 2024.
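    The regional figures above are consistent with simple percentage shares of the USD 6515.2 million 2024 total; a quick sketch using only numbers quoted in this summary:

```python
# Consistency check: each region's 2024 market size is its quoted share
# of the USD 6515.2 million global total.
total_2024 = 6515.2  # USD million
shares = {
    "North America": 0.40,
    "Europe": 0.30,
    "Asia Pacific": 0.23,
    "Latin America": 0.05,
    "Middle East and Africa": 0.02,
}
sizes = {region: round(total_2024 * share, 2) for region, share in shares.items()}
# North America: 6515.2 * 0.40 = 2606.08 USD million, matching the quoted figure.
```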
    

    Market Dynamics of Facial Recognition Market

    Key Drivers of Facial Recognition Market

    Advancements in Technology to Increase the Demand Globally
    

    More advancements in 3D facial recognition and enhanced algorithms make identity recognition more accurate. This increases the technology's dependability for other uses, such as security. The availability of facial recognition software is growing as a cloud-based service. This lowers the barrier to technology adoption for enterprises by removing the need for costly hardware and infrastructure purchases. Artificial intelligence (AI) developments enable facial recognition systems to perform functions beyond simple identification. They can now assess demographics and facial expressions, opening up new possibilities for customer service, marketing, and other fields. The market is expanding because of the increased range of applications for facial recognition that these developments are enabling.

    Furthermore, the precision offered by 3D facial recognition systems motivates using these systems for public safety applications, including surveillance and border protection. 3D recognition systems better serve high-security areas such as airports than 2D ones. All of these factors will strengthen the worldwide market.

    Increasing Security Concerns to Propel Market Growth
    

    As security concerns grow, facial recognition technology is increasingly employed, a key factor driving market growth. In busy places like train stations, airports, and city centers, facial recognition can identify and track individuals, helping to prevent terrorist acts and criminal activity. Travelers' identities, as well as those of people on watchlists, can be confirmed via facial recognition, deterring illegal immigration and strengthening border security. Facial recognition can also confirm a person's identity at an ATM or other financial facility, reducing fraud and identity theft, and it can control access to buildings and other secure areas, preventing unauthorized access and protecting sensitive information.

    Restraint Factors of the Facial Recognition Market

    Privacy Concerns and Technical Limitations to Limit the Sales
    

    One major obstacle to the widespread application of facial recognition technology is privacy concerns, including the possibility of governments or law enforcement abusing face recognition data. Hacking of facial recognition data could lead to identity theft or unauthorized access to personal data. There is a possibility for widespread monitoring and tracking of individuals without their knowledge or agreement through mass surveillance. The use of facial recognition technology is now subject to certain laws and limitations as a result of privacy concerns. For instance, the General Data Protection Regulation (GDPR) in Europe imposes stringent restrictions on the collection and use of face recognition data, and several American towns have outlawed the use of facial recognition technology by law enforcement. The future of the facial recognition market is unclear. ...

  9. Data from: Color FERET Database

    • catalog.data.gov
    • data.nist.gov
    • +2 more
    Updated Jun 27, 2023
    + more versions
    Cite
    National Institute of Standards and Technology (2023). Color FERET Database [Dataset]. https://catalog.data.gov/dataset/color-feret-database-de79c
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    The DOD Counterdrug Technology Program sponsored the Facial Recognition Technology (FERET) program and development of the FERET database. The National Institute of Standards and Technology (NIST) is serving as Technical Agent for distribution of the FERET database. The goal of the FERET program is to develop new techniques, technology, and algorithms for the automatic recognition of human faces. As part of the FERET program, a database of facial imagery was collected between December 1993 and August 1996. The database is used to develop, test, and evaluate face recognition algorithms.

  10. Data from: An fMRI dataset in response to large-scale short natural dynamic...

    • openneuro.org
    Updated Oct 15, 2024
    + more versions
    Cite
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan (2024). An fMRI dataset in response to large-scale short natural dynamic facial expression videos [Dataset]. http://doi.org/10.18112/openneuro.ds005047.v1.0.6
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary

    Facial expression is among the most natural ways for human beings to convey emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied using lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, fMRI data collected specifically for large-scale natural facial expression videos are currently missing. We describe here the Natural Facial Expressions Dataset (NFED), an fMRI dataset including responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimulus features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.

    Data Records

    The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-

    Stimulus. Distinct folders store the stimuli for the distinct fMRI experiments: "stimuli/face-video", "stimuli/floc", and "stimuli/prf" (Fig. 2b). The category labels and metadata corresponding to the video stimuli are stored in "videos-stimuli_category_metadata.tsv". The "videos-stimuli_description.json" file describes the category and metadata information of the video stimuli (Fig. 2b).

    Raw MRI data. Each participant's folder is comprised of 11 session folders: “sub-

    Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-

    Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-

    FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-

    Surface-based GLM analysis data. We have conducted GLMsingle on the data of the main experiment. There is a file named “sub--

    Validation. The code of technical validation was saved in the “derivatives/validation/code” folder. The results of technical validation were saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” describes the detailed information of code and results.

  11. BioID-FD-EYEPOS-V1.2

    • bioid.com
    Updated Mar 2, 2011
    Cite
    BioID (2011). BioID-FD-EYEPOS-V1.2 [Dataset]. https://www.bioid.com/face-database/
    Dataset authored and provided by
    BioID
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Eye Position File Format - The eye position files are text files containing a single comment line followed by the x and y coordinates of the left eye and the x and y coordinates of the right eye, separated by spaces. Note that we refer to the left eye as the person's left eye. Therefore, when captured by a camera, the left eye appears on the image's right side, and vice versa.
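    The format described above is simple enough to parse directly; a minimal sketch (`parse_eye_line` and `read_eye_positions` are hypothetical helper names, and the sample coordinates below are made up for illustration):

```python
def parse_eye_line(line):
    """Parse 'LX LY RX RY' (person's left eye first, then right eye)."""
    lx, ly, rx, ry = map(int, line.split())
    return (lx, ly), (rx, ry)

def read_eye_positions(path):
    """Read a BioID .eye file: a single comment line, then the coordinates."""
    with open(path) as f:
        f.readline()  # skip the comment line
        return parse_eye_line(f.readline())
```

    For example, a line "159 108 93 111" would parse to left eye (159, 108) and right eye (93, 111); remember that the left eye is person-relative and so appears on the image's right.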

  12. AR Face Database (128x128)

    • kaggle.com
    Updated Dec 28, 2020
    Cite
    Felipe Menino (2020). AR Face Database (128x128) [Dataset]. https://www.kaggle.com/datasets/phelpsmemo/ar-face-database-128x128
    Available download formats: Croissant, a format for machine-learning datasets (see mlcommons.org/croissant)
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Felipe Menino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Felipe Menino

    Released under Attribution 4.0 International (CC BY 4.0)


  13. Table_2_East Asian Young and Older Adult Perceptions of Emotional Faces From...

    • figshare.com
    Updated May 30, 2023
    + more versions
    Cite
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh (2023). Table_2_East Asian Young and Older Adult Perceptions of Emotional Faces From an Age- and Sex-Fair East Asian Facial Expression Database.XLSX [Dataset]. http://doi.org/10.3389/fpsyg.2018.02358.s004
    Available download formats: xlsx
    Dataset provided by
    Frontiers
    Authors
    Yu-Zhen Tu; Dong-Wei Lin; Atsunobu Suzuki; Joshua Oon Soo Goh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    East Asia
    Description

    There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, of different ages, and of different sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and of how these might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older, male and female Taiwanese using the Facial Action Coding System (FACS). Combined with a prior database, this database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for the intensities of multiple dimensions of emotions and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in the photographs. Taiwanese raters perceived happy faces as one category; sad, angry, and disgusted expressions as one category; and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older female sensitivity was similar or accentuated relative to young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.

  14. JAFFE (Deprecated, use v.2 instead)

    • zenodo.org
    Updated Mar 20, 2025
    Cite
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba (2025). JAFFE (Deprecated, use v.2 instead) [Dataset]. http://doi.org/10.5281/zenodo.3451524
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Michael Lyons; Miyuki Kamachi; Jiro Gyoba
    Description

    V.1 is deprecated, use V.2 instead.

    The images are the same: only the README file has been updated.

    https://doi.org/10.5281/zenodo.14974867

    The JAFFE images may be used only for non-commercial scientific research.

    The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.

    Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
    Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
    arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf

    Michael J. Lyons
    "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
    arXiv: 2107.13998 (2021) https://arxiv.org/abs/2107.13998

    The following is not allowed:

    • Redistribution of the JAFFE dataset (incl. via Github, Kaggle, Colaboratory, GitCafe, CSDN etc.)
    • Posting JAFFE images on the web and social media
    • Public exhibition of JAFFE images in museums/galleries etc.
    • Broadcast in the mass media (tv shows, films, etc.)

    A few sample images (not more than 10) may be displayed in scientific publications.

  15. Facial Recognition Dataset FULL (part 2 of 4)

    • data.mendeley.com
    Updated Dec 19, 2018
    + more versions
    Cite
    Collin Gros (2018). Facial Recognition Dataset FULL (part 2 of 4) [Dataset]. http://doi.org/10.17632/ycjd7mdsbs.1
    Authors
    Collin Gros
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Includes face images of 11 subjects with 3 sets of images each: one with no occlusion, one wearing a hat, and one wearing glasses. Each set consists of 5 subject positions (two profile positions, one central position, and two positions angled between profile and central), with 7 lighting angles for each position (completing a 180-degree arc around the subject) and 5 light settings for each angle (warm, cold, low, medium, and bright). Images are 5184 pixels tall by 3456 pixels wide and are saved in JPG format.
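    Assuming every combination of conditions was captured, the protocol above implies a fixed image count; a quick sketch (the 525/5775 totals are derived here, not stated in the listing):

```python
# Image count implied by the capture protocol described above,
# assuming every combination of conditions is present.
subjects = 11
occlusion_sets = 3    # no occlusion, hat, glasses
positions = 5         # two profiles, one central, two intermediate angles
lighting_angles = 7   # spanning a 180-degree arc around the subject
light_settings = 5    # warm, cold, low, medium, bright
images_per_subject = occlusion_sets * positions * lighting_angles * light_settings  # 525
total_images = subjects * images_per_subject  # 5775
```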

  16. The ablation study result on AFEW.

    • plos.figshare.com
    Updated Aug 23, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Zhuan Li; Jin Liu; Hengyang Wang; Xiliang Zhang; Zhongdai Wu; Bing Han (2024). The ablation study result on AFEW. [Dataset]. http://doi.org/10.1371/journal.pone.0307446.t006
    Available download formats: xls
    Dataset provided by
    PLOS ONE
    Authors
    Zhuan Li; Jin Liu; Hengyang Wang; Xiliang Zhang; Zhongdai Wu; Bing Han
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Facial expression recognition (FER) is a hot topic in computer vision, especially as deep learning based methods gain traction in this field. However, traditional convolutional neural networks (CNNs) ignore the relative position relationships of key facial features (mouth, eyebrows, eyes, etc.) under the changes facial expressions undergo in real-world environments, such as rotation, displacement, or partial occlusion. In addition, most works in the literature do not take visual tempos into account when recognizing facial expressions that possess higher similarities. To address these issues, we propose a visual-tempos 3D-CapsNet framework (VT-3DCapsNet). First, we propose a 3D-CapsNet model for emotion recognition, in which we introduce an improved 3D-ResNet architecture integrated with an AU-perceived attention module to enhance the feature-representation ability of the capsule network, expressing deeper hierarchical spatiotemporal features and extracting latent information (position, size, orientation) in key facial areas. Furthermore, we propose a temporal pyramid network (TPN)-based expression recognition module (TPN-ERM), which can learn high-level facial motion features from video frames to model differences in visual tempos, further improving the recognition accuracy of 3D-CapsNet. Extensive experiments are conducted on the Extended Cohn-Kanade (CK+) database and the Acted Facial Expressions in the Wild (AFEW) database. The results demonstrate competitive performance of our approach compared with other state-of-the-art methods.

  17. Research Data: Facial Expression Recognition under Visual Field Restriction

    • zenodo.org
    • explore.openaire.eu
    bin, txt
    Updated Apr 25, 2024
    Cite
    Melina Boratto Urtado; Melina Boratto Urtado; Rafael Delalibera Rodrigues; Rafael Delalibera Rodrigues (2024). Research Data: Facial Expression Recognition under Visual Field Restriction [Dataset]. http://doi.org/10.5281/zenodo.10703513
    Explore at:
    txt, bin
    Available download formats
    Dataset updated
    Apr 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Melina Boratto Urtado; Melina Boratto Urtado; Rafael Delalibera Rodrigues; Rafael Delalibera Rodrigues
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the following files:

    - view_trial.xlsx: Excel spreadsheet containing data from individual trials.
    - view_participant.xlsx: Excel spreadsheet containing data aggregated at the participant level.
    - consensus.xlsx: Excel spreadsheet containing consensus data analysis.
    - image_id_list.txt: Text file listing the IDs of the images used in the study from The Karolinska Directed Emotional Faces (KDEF); https://kdef.se/.

    These files provide comprehensive data used in the research project titled "Exploring the Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study" conducted by M. B. Urtado, R. D. Rodrigues, and S. S. Fukusima. The dataset is intended for analysis and replication of the study's findings.

    When using these data, please cite the following article:
    Urtado, M.B.; Rodrigues, R.D.; Fukusima, S.S. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behavioral Sciences 2024, 14, 355. https://doi.org/10.3390/bs14050355

    The study was approved by the Research Ethics Committee (CEP) of the University of São Paulo (protocol code 41844720.5.0000.5407).
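The image IDs in image_id_list.txt refer to KDEF stimuli, which use a compact file-naming convention. As an illustrative sketch (assuming the standard KDEF naming scheme, not any tooling shipped with this dataset), an ID such as "AF01ANS" can be decoded like this:

```python
# Decode a KDEF image ID into its components, assuming the standard KDEF
# naming convention: session letter, gender letter, two-digit model number,
# two-letter emotion code, then a one- or two-letter camera angle code.

EMOTIONS = {
    "AF": "afraid", "AN": "angry", "DI": "disgusted",
    "HA": "happy", "NE": "neutral", "SA": "sad", "SU": "surprised",
}
ANGLES = {
    "FL": "full left", "HL": "half left", "S": "straight",
    "HR": "half right", "FR": "full right",
}

def parse_kdef_id(image_id: str) -> dict:
    """Split an ID like 'AF01ANS' into session, gender, model, emotion, angle."""
    return {
        "session": image_id[0],            # photo session A or B
        "gender": image_id[1],             # F or M
        "model": image_id[2:4],            # model number, e.g. '01'
        "emotion": EMOTIONS[image_id[4:6]],
        "angle": ANGLES[image_id[6:]],     # trailing one or two letters
    }

print(parse_kdef_id("AF01ANS"))
```

This makes it straightforward to group the listed image IDs by emotion or camera angle when replicating the analyses.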

  18. UIBVFED: Virtual facial expression dataset

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    bin, png
    Updated Jul 8, 2024
    Cite
    Miquel Mascaró Oliver; Miquel Mascaró Oliver (2024). UIBVFED: Virtual facial expression dataset [Dataset]. http://doi.org/10.1371/journal.pone.0231266
    Explore at:
    png, bin
    Available download formats
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Miquel Mascaró Oliver; Miquel Mascaró Oliver
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Apr 6, 2020
    Description

    The dataset is composed of 660 facial images (1080x1920) of 20 virtual characters, each performing 32 facial expressions. The avatars represent 10 men and 10 women, aged between 20 and 80, from different ethnicities. Expressions are classified into the six universal expressions according to Gary Faigin's classification.

  19. faces_dataset

    • kaggle.com
    zip
    Updated Mar 26, 2024
    Cite
    GIACOMO CAPITANI (2024). faces_dataset [Dataset]. https://www.kaggle.com/datasets/giacomocapitani/faces-dataset
    Explore at:
    zip (0 bytes)
    Available download formats
    Dataset updated
    Mar 26, 2024
    Authors
    GIACOMO CAPITANI
    Description

    Dataset

    This dataset was created by GIACOMO CAPITANI

    Contents

  20. Data from: DISFA Dataset

    • paperswithcode.com
    Updated Apr 19, 2020
    + more versions
    Cite
    DISFA Dataset [Dataset]. https://paperswithcode.com/dataset/disfa
    Explore at:
    Dataset updated
    Apr 19, 2020
    Authors
    Seyed Mohammad Mavadati; Mohammad H. Mahoor; Kevin Bartlett; Philip Trinh; Jeffrey F. Cohn
    Description

    The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4,844 frames each, 130,788 images in total. Action unit annotations are provided at different levels of intensity; when these levels are ignored, each action unit is simply either set or unset. DISFA was selected from the many databases popular in facial expression recognition because of its high number of smiles, i.e. action unit 12. In detail, 30,792 images have this action unit set, 82,176 images have at least one action unit set, and 48,612 images have no action units set at all.
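The set/unset counts above follow from thresholding the intensity annotations at zero. A minimal sketch of that binarization, using a made-up toy array rather than the actual DISFA label files:

```python
import numpy as np

# Illustrative sketch (not the dataset's own tooling): binarize DISFA-style
# action-unit intensity annotations. Intensities range 0-5 per action unit;
# any nonzero intensity counts as "set". The toy array below is invented.

# rows = frames, columns = action units; one column stands in for AU12 (smile)
intensities = np.array([
    [0, 0, 3],
    [0, 2, 0],
    [0, 0, 0],
])

au_set = intensities > 0                 # boolean: action unit set / unset
frames_with_any_au = au_set.any(axis=1).sum()
frames_with_au12 = au_set[:, 2].sum()    # pretend column 2 is AU12

print(frames_with_any_au, frames_with_au12)
```

On the real annotations, the same two reductions yield the 82,176 "some action unit set" and 30,792 "AU12 set" figures quoted above.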
