Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
FGnet Markup Scheme of the BioID Face Database - The BioID Face Database is used within the FGnet project of the European Working Group on face and gesture recognition. David Cristinacce and Kola Babalola, PhD students from the Department of Imaging Science and Biomedical Engineering at the University of Manchester, marked up the images from the BioID Face Database. They selected several additional feature points, which are very useful for facial analysis and gesture recognition.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The BioID Face Database was recorded and published so that researchers working on face detection can compare the quality of their algorithms against others. Special emphasis was placed on real-world conditions during recording, so the test set features a large variety of illumination, background, and face size. The dataset consists of 1521 grey-level images with a resolution of 384×286 pixels, each showing the frontal view of the face of one of 23 different test persons. For comparison purposes the set also contains manually set eye positions. The images are named BioID_xxxx.pgm, where xxxx is the index of the image (with leading zeros); similarly, the files BioID_xxxx.eye contain the eye positions for the corresponding images.
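A minimal sketch of how the image and eye-file naming could be paired in practice is shown below; the local extraction directory name and the use of Pillow are assumptions of the illustration, not part of the dataset.

```python
# Minimal sketch: pair each BioID image with its eye-position file.
# "BioID-FaceDatabase" is a hypothetical local extraction directory.
from pathlib import Path
from PIL import Image  # Pillow; an assumption of this sketch

data_dir = Path("BioID-FaceDatabase")

for index in range(1521):
    stem = f"BioID_{index:04d}"                    # zero-padded index, e.g. BioID_0000
    image = Image.open(data_dir / f"{stem}.pgm")   # 384x286 grey-level image
    eye_file = data_dir / f"{stem}.eye"            # manually set eye positions
    print(stem, image.size, eye_file.exists())
```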
https://www.futurebeeai.com/data-license-agreement
Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.
This dataset comprises over 2000 facial expression images, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of individuals across East Asian countries, such as:
To ensure high utility and robustness, all images are captured under varying conditions:
Each facial expression image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:
We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.
Database of high-quality craniofacial anthropometric normative data for the research and clinical community based on digital stereophotogrammetry. Unlike traditional craniofacial normative datasets that are limited to measures obtained with handheld calipers and tape measures, the anthropometric data provided here are based on digital stereophotogrammetry, a method of 3D surface imaging ideally suited for capturing human facial surface morphology. Also unlike more traditional normative craniofacial resources, the 3D Facial Norms Database allows users to interact with data via an intuitive graphical interface and - given proper credentials - gain access to individual-level data, allowing users to perform their own analyses.
The Oulu-CASIA NIR&VIS facial expression database consists of six expressions (surprise, happiness, sadness, anger, fear, and disgust) from 80 people between 23 and 58 years old; 73.8% of the subjects are male. Each subject sat on a chair in the observation room facing the camera, at a camera-to-face distance of about 60 cm, and was asked to make a facial expression matching an example shown in picture sequences. The imaging hardware runs at 25 frames per second and the image resolution is 320 × 240 pixels.
https://www.futurebeeai.com/data-license-agreement
Welcome to the Native American Facial Images from Past Dataset, meticulously curated to enhance face recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.
This dataset comprises over 5,000 images, divided into participant-wise sets with each set including:
The dataset includes contributions from a diverse network of individuals across Native American communities:
To ensure high utility and robustness, all images are captured under varying conditions:
Each image set is accompanied by detailed metadata for each participant, including:
This metadata is essential for training models that can accurately recognize and identify Native American faces across different demographics and conditions.
This facial image dataset is ideal for various applications in the field of computer vision, including but not limited to:
The MMI Facial Expression Database consists of over 2900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of AUs in videos (event coding), and partially coded at the frame level, indicating for each frame whether an AU is in the neutral, onset, apex, or offset phase. A small part was annotated for audio-visual laughter.
https://www.cognitivemarketresearch.com/privacy-policy
According to Cognitive Market Research, the global Facial Recognition market will reach USD 6515.2 million in 2024 and expand at a compound annual growth rate (CAGR) of 17.0% from 2024 to 2031.
North America held the largest share, more than 40% of global revenue, with a market size of USD 2606.08 million in 2024, and will grow at a CAGR of 15.2% from 2024 to 2031.
Europe accounted for over 30% of the global market, with a market size of USD 1954.56 million in 2024.
Asia Pacific held around 23% of global revenue, with a market size of USD 1498.50 million in 2024, and will grow at a CAGR of 19.0% from 2024 to 2031.
Latin America accounted for more than 5% of global revenue, with a market size of USD 325.76 million in 2024, and will grow at a CAGR of 16.4% from 2024 to 2031.
The Middle East and Africa held around 2% of global revenue, with a market size of USD 130.30 million in 2024, and will grow at a CAGR of 16.7% from 2024 to 2031.
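For readers who want to see what these CAGR figures imply, a small compound-growth sketch follows; the seven-year horizon from 2024 to 2031 is an assumption of the illustration.

```python
# Compound-growth sketch: project a 2024 market size forward at a given CAGR.
# The seven-year horizon (2024 -> 2031) is an assumption of this illustration.
def project(value_2024_musd, cagr, years=7):
    return value_2024_musd * (1 + cagr) ** years

print(f"Global:        {project(6515.2, 0.170):,.1f} USD million by 2031")
print(f"North America: {project(2606.08, 0.152):,.1f} USD million by 2031")
```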
The government and defense segment held the highest facial recognition market revenue share in 2024.
Market Dynamics of Facial Recognition Market
Key Drivers of Facial Recognition Market
Advancements in Technology to Increase the Demand Globally
Advances in 3D facial recognition and enhanced algorithms make identity recognition more accurate, which increases the technology's dependability for uses such as security. Facial recognition software is increasingly available as a cloud-based service, which lowers the barrier to adoption for enterprises by removing the need for costly hardware and infrastructure purchases. Developments in artificial intelligence (AI) enable facial recognition systems to perform functions beyond simple identification: they can now assess demographics and facial expressions, opening up new possibilities for customer service, marketing, and other fields. The market is expanding because these developments broaden the range of applications for facial recognition.
Furthermore, the precision offered by 3D facial recognition systems motivates their use in public safety applications, including surveillance and border protection; 3D recognition systems serve high-security areas such as airports better than 2D ones. All of these factors will strengthen the worldwide market.
Increasing Security Concerns to Propel Market Growth
As security concerns grow, facial recognition technology is increasingly employed, and this is a key factor driving the market's growth. Facial recognition can identify and track people in busy places such as train stations, airports, and city centers, helping to prevent terrorist acts and criminal activity. It can confirm travelers' identities, as well as the identities of those on watchlists, which helps curb illegal immigration and strengthen border security. It can also confirm a person's identity at ATMs and other financial facilities, reducing fraud and identity theft. Finally, facial recognition can control access to buildings and other secure areas, helping to prevent unauthorized access and protect sensitive information.
Restraint Factors of the Facial Recognition Market
Privacy Concerns and Technical Limitations to Limit the Sales
One major obstacle to the widespread application of facial recognition technology is privacy concern: governments or law enforcement could abuse face recognition data, and hacking of such data could lead to identity theft or unauthorized access to personal information. There is also the possibility of widespread monitoring and tracking of individuals without their knowledge or consent through mass surveillance. As a result of these concerns, the use of facial recognition technology is now subject to certain laws and limitations. For instance, the General Data Protection Regulation (GDPR) in Europe imposes stringent restrictions on the collection and use of face recognition data, and several American towns have outlawed the use of facial recognition technology by law enforcement. The future of the facial recognition market is unclear. ...
The DOD Counterdrug Technology Program sponsored the Facial Recognition Technology (FERET) program and development of the FERET database. The National Institute of Standards and Technology (NIST) is serving as Technical Agent for distribution of the FERET database. The goal of the FERET program is to develop new techniques, technology, and algorithms for the automatic recognition of human faces. As part of the FERET program, a database of facial imagery was collected between December 1993 and August 1996. The database is used to develop, test, and evaluate face recognition algorithms.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Summary
Facial expression is among the most natural methods for human beings to convey emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied employing lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, this type of data, specifically on large-scale natural facial expression videos, is currently missing. We describe here the natural Facial Expressions Dataset (NFED), an fMRI dataset including responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimulus features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.
Data Records
The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-
Stimulus. Distinct folders store the stimuli for the distinct fMRI experiments: “stimuli/face-video”, “stimuli/floc”, and “stimuli/prf” (Fig. 2b). The category labels and metadata corresponding to the video stimuli are stored in “videos-stimuli_category_metadata.tsv”. The “videos-stimuli_description.json” file describes the category and metadata information of the video stimuli (Fig. 2b).
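A minimal sketch of loading the stimulus annotation files named above follows; it assumes the files have been downloaded into the working directory and that pandas is available, neither of which is stated in the dataset description.

```python
# Minimal sketch: inspect the video-stimulus labels and metadata named above.
# Local file paths and the use of pandas are assumptions of this illustration.
import json
import pandas as pd

labels = pd.read_csv("videos-stimuli_category_metadata.tsv", sep="\t")
with open("videos-stimuli_description.json") as f:
    description = json.load(f)

print(labels.columns.tolist())    # per the text: emotion, gender, ethnicity labels
print(list(description.keys()))
```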
Raw MRI data. Each participant's folder comprises 11 session folders: “sub-
Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-
Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-
FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-
Surface-based GLM analysis data. We applied GLMsingle to the data of the main experiment. There is a file named “sub--
Validation. The code for the technical validation is saved in the “derivatives/validation/code” folder, and the results are saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” file describes the code and results in detail.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Eye Position File Format - The eye position files are text files containing a single comment line followed by the x and the y coordinate of the left eye and the x and the y coordinate of the right eye separated by spaces. Note that we refer to the left eye as the person's left eye. Therefore, when captured by a camera, the position of the left eye is on the image's right and vice versa.
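A minimal parsing sketch for this format follows; the function name is illustrative, and the coordinates are parsed as floats for generality even though they are typically integers.

```python
# Minimal sketch: parse one BioID .eye file (one comment line, then
# "LX LY RX RY" separated by spaces). The function name is illustrative.
def read_eye_positions(path):
    with open(path) as f:
        f.readline()                                  # skip the single comment line
        lx, ly, rx, ry = map(float, f.readline().split())
    # "Left eye" means the person's left eye, i.e. on the right-hand side of the image.
    return (lx, ly), (rx, ry)

left_eye, right_eye = read_eye_positions("BioID_0000.eye")
print(left_eye, right_eye)
```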
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset was created by Felipe Menino
Released under Attribution 4.0 International (CC BY 4.0)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
There is increasing interest in clarifying how different face emotion expressions are perceived by people from different cultures, and of different ages and sexes. However, the scant availability of well-controlled emotional face stimuli from non-Western populations limits the evaluation of cultural differences in face emotion perception and how this might be modulated by age and sex differences. We present a database of East Asian face expression stimuli, enacted by young and older, male and female, Taiwanese individuals using the Facial Action Coding System (FACS). Combined with a prior database, the present database consists of 90 identities with happy, sad, angry, fearful, disgusted, surprised, and neutral expressions, amounting to 628 photographs. Twenty young and 24 older East Asian raters scored the photographs for intensities along multiple dimensions of emotion and induced affect. Multivariate analyses characterized the dimensionality of perceived emotions and quantified effects of age and sex. We also applied commercial software to extract computer-based metrics of emotions in the photographs. Taiwanese raters perceived happy faces as one category, sad, angry, and disgusted expressions as one category, and fearful and surprised expressions as one category. Younger females were more sensitive to face emotions than younger males. Whereas older males showed reduced face emotion sensitivity, older females' sensitivity was similar to, or accentuated relative to, that of young females. Commercial software dissociated six emotions according to the FACS, demonstrating that defining visual features were present. Our findings show that East Asians perceive a different dimensionality of emotions than Western-based definitions in face recognition software, regardless of age and sex. Critically, stimuli with detailed cultural norms are indispensable in interpreting neural and behavioral responses involving human facial expression processing. To this end, we add to the tools, which are available upon request, for conducting such research.
The JAFFE images may be used only for non-commercial scientific research.
The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.
Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba.
Coding Facial Expressions with Gabor Wavelets (IVC Special Issue)
arXiv:2009.05938 (2020) https://arxiv.org/pdf/2009.05938.pdf
Michael J. Lyons
"Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset
arXiv:2107.13998 (2021) https://arxiv.org/abs/2107.13998
The following is not allowed:
A few sample images (not more than 10) may be displayed in scientific publications.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Includes face images of 11 subjects with 3 sets of images: one with no occlusion, one with the subject wearing a hat, and one with the subject wearing glasses. Each set consists of 5 subject positions (the subject's two profile positions, one central position, and two positions angled between the profile and central positions), with 7 lighting angles for each position (completing a 180 degree arc around the subject) and 5 light settings for each angle (warm, cold, low, medium, and bright). Images are 5184 pixels tall by 3456 pixels wide and are saved in .JPG format.
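A small sketch of the capture grid described above follows; the condition labels are illustrative and do not reflect the dataset's actual file naming.

```python
# Sketch of the capture grid: 11 subjects x 3 sets x 5 positions x 7 lighting
# angles x 5 light settings. Labels are illustrative, not the dataset's naming.
from itertools import product

subjects  = range(1, 12)                                          # 11 subjects
sets_     = ["no_occlusion", "hat", "glasses"]                    # 3 sets
positions = ["profile_left", "angled_left", "central", "angled_right", "profile_right"]
angles    = range(7)                                              # 7 angles over a 180 degree arc
settings  = ["warm", "cold", "low", "medium", "bright"]           # 5 light settings

conditions = list(product(subjects, sets_, positions, angles, settings))
print(len(conditions))   # 5775 condition combinations
```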
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Facial expression recognition (FER) is a hot topic in computer vision, especially as deep learning based methods gain traction in this field. However, traditional convolutional neural networks (CNNs) ignore the relative position relationships of key facial features (mouth, eyebrows, eyes, etc.) under the changes facial expressions undergo in real-world environments, such as rotation, displacement, or partial occlusion. In addition, most works in the literature do not take visual tempo into account when recognizing facial expressions that are highly similar. To address these issues, we propose a visual-tempo 3D-CapsNet framework (VT-3DCapsNet). First, we propose a 3D-CapsNet model for emotion recognition, in which an improved 3D-ResNet architecture integrated with an AU-perceived attention module enhances the feature representation ability of the capsule network by expressing deeper hierarchical spatiotemporal features and extracting latent information (position, size, orientation) in key facial areas. Furthermore, we propose a temporal pyramid network (TPN)-based expression recognition module (TPN-ERM), which learns high-level facial motion features from video frames to model differences in visual tempo, further improving the recognition accuracy of 3D-CapsNet. Extensive experiments are conducted on the extended Cohn-Kanade (CK+) database and the Acted Facial Expressions in the Wild (AFEW) database. The results demonstrate competitive performance of our approach compared with other state-of-the-art methods.
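The visual-tempo idea can be illustrated with a toy sketch that pools per-frame features at several temporal strides, roughly in the spirit of a temporal pyramid; this is not the authors' TPN-ERM implementation, only an illustration of the general idea, and all names and shapes are made up.

```python
# Toy illustration of multi-tempo pooling, roughly in the spirit of a temporal
# pyramid. This is NOT the authors' TPN-ERM; names and shapes are made up.
import numpy as np

def multi_tempo_features(frame_feats, strides=(1, 2, 4)):
    """frame_feats: (T, D) array of per-frame features; one pooled vector per tempo."""
    pooled = [frame_feats[::s].mean(axis=0) for s in strides]
    return np.concatenate(pooled)            # shape (len(strides) * D,)

feats = np.random.rand(32, 128)              # 32 frames, 128-D features (illustrative)
print(multi_tempo_features(feats).shape)     # (384,)
```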
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the following files:
- view_trial.xlsx: Excel spreadsheet containing data from individual trials.
- view_participant.xlsx: Excel spreadsheet containing data aggregated at the participant level.
- consensus.xlsx: Excel spreadsheet containing consensus data analysis.
- image_id_list.txt: Text file listing the IDs of the images used in the study from The Karolinska Directed Emotional Faces (KDEF); https://kdef.se/.
These files provide comprehensive data used in the research project titled "Exploring the Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study" conducted by M. B. Urtado, R. D. Rodrigues, and S. S. Fukusima. The dataset is intended for analysis and replication of the study's findings.
When using these data, please cite the following article:
Urtado, M.B.; Rodrigues, R.D.; Fukusima, S.S. Visual Field Restriction in the Recognition of Basic Facial Expressions: A Combined Eye Tracking and Gaze Contingency Study. Behavioral Sciences 2024, 14, 355. https://doi.org/10.3390/bs14050355
The study was approved by the Research Ethics Committee (CEP) of the University of São Paulo (protocol code 41844720.5.0000.5407).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset is composed of 660 facial images (1080x1920) from 20 virtual characters, each creating 32 facial expressions. The avatars represent 10 men and 10 women, aged between 20 and 80, from different ethnicities. Expressions are classified into the six universal expressions according to Gary Faigin's classification.
This dataset was created by GIACOMO CAPITANI
The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4844 frames each, with 130,788 images in total. Action unit annotations come at different levels of intensity, which are ignored in the following experiments; action units are treated as either set or unset. DISFA was selected from a wider range of databases popular in the field of facial expression recognition because of its high number of smiles, i.e. action unit 12. In detail, 30,792 images have this action unit set, 82,176 images have at least one action unit set, and 48,612 images have no action units set at all.
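A minimal sketch of the binarization step described above follows, assuming the AU intensities are available as a frames-by-AUs integer matrix; the matrix shape, values, and column order are illustrative.

```python
# Sketch of binarizing DISFA action-unit intensities: any non-zero intensity
# counts as "set". The matrix shape and column order are illustrative.
import numpy as np

intensities = np.array([[0, 2, 0],      # hypothetical (frames x AUs) intensity matrix
                        [3, 0, 1],
                        [0, 0, 0]])

au_set = intensities > 0                # True wherever an AU is present
print(au_set.sum(axis=0))               # per-AU frame counts
print(au_set.any(axis=1).sum())         # frames with at least one AU set
```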