This dataset was created by GIACOMO CAPITANI
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
FGnet Markup Scheme of the BioID Face Database - The BioID Face Database is used within the FGnet project of the European Working Group on face and gesture recognition. David Cristinacce and Kola Babalola, PhD students in the Department of Imaging Science and Biomedical Engineering at the University of Manchester, marked up the images from the BioID Face Database, selecting several additional feature points that are very useful for facial analysis and gesture recognition.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The BioID Face Database has been recorded and published to give all researchers working in the area of face detection the possibility to compare the quality of their face detection algorithms with others. During recording, special emphasis was placed on real-world conditions, so the test set features a large variety of illumination, background and face size. The dataset consists of 1,521 grayscale images with a resolution of 384×286 pixels. Each one shows the frontal view of the face of one of 23 different test persons. For comparison purposes, the set also contains manually set eye positions. The images are labeled BioID_xxxx.pgm, where the characters xxxx are replaced by the index of the current image (with leading zeros). Similarly, the files BioID_xxxx.eye contain the eye positions for the corresponding images.
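A minimal sketch of that naming convention in Python (the bioid/ root directory and the helper name are our assumptions, not part of the dataset documentation):

```python
import os

# Build the paired image/annotation paths for the BioID naming scheme:
# BioID_xxxx.pgm and BioID_xxxx.eye, with a zero-padded 4-digit index.
def bioid_pair(index, root="bioid"):  # hypothetical local root directory
    stem = f"BioID_{index:04d}"
    return os.path.join(root, stem + ".pgm"), os.path.join(root, stem + ".eye")

image_path, eye_path = bioid_pair(0)
# -> bioid/BioID_0000.pgm, bioid/BioID_0000.eye
```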
The Extended Yale B database contains 2,414 frontal-face images of size 192×168 pixels from 38 subjects, with about 64 images per subject. The images were captured under different lighting conditions and various facial expressions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Human Face Expression is a dataset for object detection tasks - it contains Human Face annotations for 1,228 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We examined whether reading and writing habits known to drive agency perception also shape the attribution of other agency-related traits, particularly for faces oriented congruently with script direction (i.e., left-to-right). Participants rated front-oriented, left-oriented and right-oriented faces on 14 dimensions. These ratings were first reduced to two dimensions, power and social-warmth, which were then confirmed with a new sample. Both dimensions were systematically affected by head orientation. Right-oriented faces generated a stronger endorsement of the power dimension (e.g., agency, dominance) and, to a lesser extent, of the social-warmth dimension, relative to left- and front-oriented faces. A further interaction between the head orientation of the faces and their gender revealed that front-facing females, relative to front-facing males, were attributed higher social-warmth scores, or communal traits (e.g., valence, warmth). These results carry implications for the representation of people in space, particularly in marketing and political contexts. Face stimuli and respective norming data are available at www.osf.io/v5jpd.
The WIDER FACE dataset is a face detection benchmark whose images are selected from the publicly available WIDER dataset. We chose 32,203 images and labeled 393,703 faces with a high degree of variability in scale, pose and occlusion, as depicted in the sample images. The WIDER FACE dataset is organized based on 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as training, validation and testing sets. We adopt the same evaluation metric employed in the PASCAL VOC dataset. As with the MALF and Caltech datasets, we do not release bounding box ground truth for the test images. Users are required to submit final prediction files, which we then evaluate.
To use this dataset:
```python
import tensorflow_datasets as tfds

ds = tfds.load('wider_face', split='train')
for ex in ds.take(4):
    print(ex)
```
See the guide for more information on tensorflow_datasets.
![Visualization](https://storage.googleapis.com/tfds-data/visualization/fig/wider_face-0.1.0.png)
https://choosealicense.com/licenses/cc0-1.0/
Dataset Card for anime-faces
Dataset Summary
This is a dataset consisting of 21,551 anime faces scraped from www.getchu.com, which were then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64×64 for the sake of convenience. Please also cite the two sources when using this dataset. Some outliers are still present in the dataset, such as bad cropping results and some non-human faces. Feel free to… See the full description on the dataset page: https://huggingface.co/datasets/huggan/anime-faces.
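A minimal loading sketch using the Hugging Face datasets library (the "train" split and "image" column names are assumptions based on typical single-split image datasets; check the dataset page for the actual schema):

```python
from datasets import load_dataset

# Download the dataset from the Hugging Face Hub and inspect one example.
# Split and column names here are assumptions, not confirmed by this card.
ds = load_dataset("huggan/anime-faces", split="train")
print(ds[0]["image"].size)  # expected to be a 64x64 PIL image
```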
https://academictorrents.com/nolicensespecified
Welcome to Labeled Faces in the Wild, a database of face photographs designed for studying the problem of unconstrained face recognition. The data set contains more than 13,000 images of faces collected from the web. Each face has been labeled with the name of the person pictured; 1,680 of the people pictured have two or more distinct photos in the data set. The only constraint on these faces is that they were detected by the Viola-Jones face detector. More details can be found in the technical report below.
Information:
- 13,233 images
- 5,749 people
- 1,680 people with two or more images
Citation: Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. University of Massachusetts, Amherst, 2007.
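One common way to load LFW programmatically is scikit-learn's built-in fetcher (a sketch, assuming scikit-learn is installed; it downloads a preprocessed copy rather than the original archive):

```python
from sklearn.datasets import fetch_lfw_people

# Download on first call, then keep only people with at least 70 images,
# a commonly used multi-class face recognition benchmark subset.
lfw = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
print(lfw.images.shape)   # (n_samples, height, width)
print(lfw.target_names)   # names of the selected people
```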
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MAPIR Faces Dataset (publication still pending) is a collection of face images from 16 different individuals. Each individual has approximately 49 images uniformly distributed over a 7x7 grid covering the head pose space.
This dataset is designed as a benchmark to analyze the effect of detrimental factors due to pose variance in face recognition algorithms.
The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusion due to differences in pose and expression, the use of accessories such as sunglasses and hats, and interactions with objects (e.g., food, hands, microphones). All images were hand-annotated using the same 29 landmarks as in LFPW; both the landmark positions and their occluded/unoccluded state were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23%.
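As an illustration of how per-landmark occlusion flags can be used (a hypothetical sketch; the array layout is our assumption, not COFW's actual file format):

```python
import numpy as np

# Hypothetical layout for one face: 29 landmarks as (x, y, occluded),
# where occluded is 1 if the landmark is covered. COFW's real storage
# format differs; this only illustrates using the occlusion state.
landmarks = np.zeros((29, 3))
landmarks[:7, 2] = 1  # pretend the first seven landmarks are occluded

occlusion_fraction = landmarks[:, 2].mean()
print(f"occluded landmarks: {occlusion_fraction:.0%}")  # prints 24%
```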
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Facial Recognition (Celeb Faces) is a dataset for object detection tasks - it contains Celebrites Otherpplprolly annotations for 810 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Face evaluation and first impression generation can be affected by multiple face elements, such as invariant facial features, gaze direction and environmental context; however, the combined modulation of eye gaze and illumination on faces of different genders and ages has not been previously investigated. We aimed to test how these different facial and contextual features affect ratings of social attributes. Thus, we created and validated the Bi-AGI Database, a freely available new set of male and female face stimuli varying in age across the lifespan from 18 to 87 years, gaze direction and illumination conditions. Judgments on attractiveness, femininity-masculinity, dominance and trustworthiness were collected for each stimulus. The results show that the different variables interact in modulating social trait attribution; in particular, illumination affects ratings differently across age, gaze and gender, with less impact on older adults and a greater effect on young faces.
https://www.icpsr.umich.edu/web/ICPSR/studies/38026/terms
The Head Start Family and Child Experiences Survey (FACES) has been a source of information on the Head Start program and the children and families it serves. The 2019 Head Start Family and Child Experiences Survey, or FACES 2019, is the seventh in a series of national studies of Head Start, with earlier studies conducted in 1997, 2000, 2003, 2006, 2009, and 2014. It includes nationally representative samples of Head Start programs and centers, classrooms, and children and their families during the 2019-2020 program year. Data from surveys of Head Start program and center directors and classroom teachers provide descriptive information about program policies and practices, classroom activities, and the background of Head Start staff. These data comprise the Classroom Study. A sample of these programs also provides data from parent surveys, teacher child reports, and direct child assessments as part of the Classroom + Child Outcomes Study.
FACES 2019 is designed to help policymakers address current policy questions and to support programs and practitioners working with Head Start families. According to the study design, FACES would have assessed children's readiness for school, surveyed parents, and asked teachers to provide information on children in both fall 2019 and spring 2020. In response to the COVID-19 (coronavirus disease 2019) pandemic, however, FACES 2019 cancelled the first piece: the in-person data collection of child assessments in spring 2020. In-person classroom observations as part of the Classroom Study were also cancelled in spring 2020.
FACES is designed so that researchers can answer a wide range of research questions that are crucial for aiding program directors and policymakers. FACES 2019 data may be used to describe (1) the quality and characteristics of Head Start programs, teachers, and classrooms; (2) the changes or trends in the quality and characteristics of the classrooms, programs, and staff over time; (3) the school readiness skills and family characteristics of the children who participate in Head Start; (4) the factors or characteristics that predict differences in classroom quality; (5) the changes or trends in the children's outcomes and family characteristics over time; and (6) the factors or characteristics at multiple levels that predict differences in the children's outcomes. The study also supports research questions related to subgroups of interest, such as children with identified disabilities and children who are dual-language learners (DLLs), as well as policy issues that emerge during the study. The study addresses changes in children's outcomes and experiences as well as changes in the characteristics of Head Start classrooms over time and across the rounds of FACES.
Some of the questions that are central to FACES include:
- What are the characteristics of Head Start programs, including structural characteristics and program policies and practices?
- What are the characteristics and observed quality of Head Start classrooms?
- What are the characteristics and qualifications of Head Start teachers and management staff?
- Are the characteristics of programs, classrooms, and staff changing over time?
- What are the demographic characteristics and home environments of children and families who participate in Head Start?
- Are family demographic characteristics and aspects of home environments changing over time?
- How do families make early care and education decisions?
- What are the experiences of families and children in Head Start?
- What are the average school readiness skills and developmental outcomes of the population of Head Start children in fall and spring of the Head Start year?
- What gains do children make during a year of Head Start?
- Are children's school readiness skills (average skills or average gains in skills) improving over time?
- Does classroom quality vary by characteristics of classrooms, teachers, or programs?
- What characteristics of programs, teachers, or classrooms are associated with aspects of classroom quality?
- Do the school readiness skills of children in fall and spring and their gains in skills vary by child, family, program, and classroom characteristics?
- What is the association between observed classroom quality and children's school readiness skills? Between child and family characteristics and children's school readiness skills?
The User Guide provides d
Vision-based cognitive services (CogS) have become crucial in a wide range of applications, from real-time security and social networks to smartphone applications. Many services focus on analyzing people images. When it comes to facial analysis, these services can be misleading or even inaccurate, raising ethical concerns such as the amplification of social stereotypes. We analyzed popular Image Tagging CogS that infer emotion from a person's face, considering whether they perpetuate racial and gender stereotypes concerning emotion. By comparing both CogS and human-generated descriptions on a set of controlled images, we highlight the need for transparency and fairness in CogS. In particular, we document evidence that CogS may actually be more likely than crowdworkers to perpetuate the stereotype of the "angry black man" and often attribute "emotions of hostility" to black individuals. This dataset consists of the raw data collected for this work, both from Emotion Analysis Services (EAS) and from crowdworkers on the Appen (formerly Figure Eight) platform, targeting US and India participants. We used the Chicago Face Database (CFD) as our primary dataset for testing the behavior of the target EAS.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Release
We are pleased to announce the release of our "Facing Asymmetry Dataset," a comprehensive collection of simulated asymmetric faces relevant to understanding how neural networks react to facial asymmetry during the six base emotions. This dataset has been developed through causal interventions and contains 200 individuals. The data has been carefully curated and processed to ensure quality and consistency. Each person has an optimized facial expression for 17 independent FER classifiers. Additionally, we provide the logit activations of the classifiers. All resemblance to existing people is unintended and could only result from the underlying FLAME geometry model and the texture from the Basel Face Model. This dataset accompanies the upcoming ACCV 2024 publication: Facing Asymmetry - Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions.

Dataset Details
Name: Facing Asymmetry
Description: Simulated facial asymmetry during the six base emotions
Number of Examples: 200 individuals, with 17 expression classifiers, each with six emotions
Data Type: images, CSV tables with logit activations, expression vectors
Field: Computer Vision, Facial Expression Recognition, Facial Palsy

Use and Citation
This dataset is intended for use in research. We encourage researchers and developers to utilize this resource and contribute to its further development. To cite this dataset, please refer to the following paper: Facing Asymmetry - Uncovering the Causal Link between Facial Symmetry and Expression Classifiers using Synthetic Interventions.

License and Permissions
This dataset is released under the CC BY 4.0 license. By downloading or using this dataset, you agree to the terms of this license.

Contact Information
If you have any questions or comments regarding this dataset, please do not hesitate to contact us at tim.buechner@uni-jena.de
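A minimal sketch of working with the released logit tables (the file name and column names below are hypothetical; the actual CSV schema is documented with the dataset):

```python
import pandas as pd

# Hypothetical file and columns: one row per (individual, classifier),
# with one logit column per base emotion. The real schema may differ.
logits = pd.read_csv("logit_activations.csv")
emotions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# For each row, which emotion does the classifier respond to most strongly?
logits["predicted"] = logits[emotions].idxmax(axis=1)
print(logits["predicted"].value_counts())
```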
This dataset was created by Tanavya Dimri
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Example Faces is a dataset for object detection tasks - it contains Face annotations for 888 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mariska E. Kret, Angela T. Maitner, & Agneta H. Fischer (2021) Frontiers in Psychology
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Eye Position File Format - The eye position files are text files containing a single comment line followed by the x and y coordinates of the left eye and the x and y coordinates of the right eye, separated by spaces. Note that we refer to the left eye as the person's left eye; therefore, when captured by a camera, the left eye appears on the right side of the image and vice versa.
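A minimal parsing sketch under those rules (assuming integer pixel coordinates and a local file path; the helper name is ours, not part of the dataset):

```python
# Parse a BioID .eye file: one comment line, then "LX LY RX RY"
# (left-eye x/y, then right-eye x/y, in the person's frame of reference).
def read_eye_positions(path):
    with open(path) as f:
        f.readline()  # skip the single comment line
        lx, ly, rx, ry = map(int, f.readline().split())
    return (lx, ly), (rx, ry)

left_eye, right_eye = read_eye_positions("BioID_0000.eye")
```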
This dataset was created by GIACOMO CAPITANI