The Celebrities in Frontal-Profile (CFP) dataset contains 500 celebrities, each of which has ten frontal and four profile face images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open-set face recognition on a small dataset
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Collecting personal photos. The guidelines for personal photo collection were as follows. Participants were required to provide a total of 10 personal photos that met specific criteria to ensure diversity and comprehensiveness. These photos were required to vary in type, capturing different times and angles, such as front and side views. They also had to reflect different facial expressions, including smiling and neutral, and could be taken with and without makeup. Additionally, the photos had to feature various backgrounds, encompassing both indoor and outdoor settings, and different lighting conditions. Participants were instructed to wear different attire in these photos, including both casual clothes and uniforms. The photos needed to be named according to the participant's age at the time the photo was taken. Along with the photos, participants were required to fill out a basic demographic information form. Note that makeup and accessories were allowed in our dataset. Photos that displayed issues, such as obstructions covering facial features, excessive similarity affecting individual distinctiveness, blurriness, or unrecognizable facial features, prompted experimenters to request additional images. The photos with issues were included in the dataset.

Obtaining laboratory photos. In the laboratory photo collection, model participants visited the lab to have standardized photos taken. They were instructed not to wear heavy makeup during the photo sessions. The photos were captured using a Canon EOS 6D Mark II digital camera with a 50 mm lens, set to a resolution of 72 dpi and a size of 4160×6240 pixels. The camera was positioned at an adjustable height to align with the model's face, ensuring that individuals of varying heights were fully visible in the photos. A green curtain served as a consistent background, while two softboxes, placed one meter on either side of the model, provided bright and evenly distributed illumination. During the laboratory photo session, model participants were photographed under nine specific conditions. These conditions were categorized across four variables: photo types (whole body and headshots), pose types (spontaneous and standard pose), expression types (neutral and smiling), and viewpoints (frontal, 45-degree left profile, 45-degree right profile, and back). This categorization resulted in nine distinct image types (only selected combinations of the four variables were used), such as a whole-body standardized pose with a neutral expression from a frontal viewpoint.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This project reannotated the Celebrities in Frontal-Profile (CFP) [1] dataset. All 2,000 profile face images were selected from the CFP dataset, their facial directions were aligned to the same orientation, and each was annotated with a target box (side face) and five keypoints (tragus, eye corner, nose tip, upper lip, and mouth corner). The annotation format used is the JSON format of Labelme. Please note that this project involved solely the reannotation of an existing dataset. [1] Sengupta, S., Chen, J. C., Castillo, C., Patel, V. M., Chellappa, R., & Jacobs, D. W. (2016, March). Frontal to profile face verification in the wild. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1-9). IEEE.
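As a sketch of how these annotations could be consumed, the snippet below reads one Labelme-style JSON file and pulls out the side-face box and the five keypoints. The file name is hypothetical, and the exact label strings and shape types are assumptions based on the description above, not on the release itself.

```python
import json

# Hypothetical annotation file; Labelme stores shapes as a list of dicts.
with open("profile_0001.json", "r", encoding="utf-8") as f:
    ann = json.load(f)

box = None
keypoints = {}
for shape in ann["shapes"]:
    if shape["shape_type"] == "rectangle":      # the side-face target box
        (x1, y1), (x2, y2) = shape["points"]    # two corner points
        box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
    elif shape["shape_type"] == "point":        # one of the five keypoints
        keypoints[shape["label"]] = tuple(shape["points"][0])

print("target box:", box)
print("keypoints:", keypoints)
```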
Current profile data from Upward-looking ADCP #01 on FPF Front Runner, 27.63°N, 90.44°W, 2009/01/20 00:14 through 2009/02/03 08:21
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
ABSTRACT Objective: The objective of this study was to evaluate facial attractiveness in 30 black individuals, according to the Subjective Facial Analysis criteria. Methods: Frontal and profile view photographs of 30 black individuals were evaluated for facial attractiveness and classified as esthetically unpleasant, acceptable, or pleasant by 50 evaluators: the 30 individuals from the sample, 10 orthodontists, and 10 laymen. Besides assessing facial attractiveness, the evaluators had to identify the structures responsible for the classification as unpleasant or pleasant. Intraexaminer agreement was assessed using Spearman's correlation, agreement within each category using the Kendall concordance coefficient, and agreement between the 3 categories using the chi-square test and proportions. Results: Most of the frontal (53.5%) and profile view (54.9%) photographs were classified as esthetically acceptable. The structures most often identified as esthetically unpleasant were the mouth, lips, and face in the frontal view, and the nose and chin in the profile view. The structures most often identified as esthetically pleasant were harmony, face, and mouth in the frontal view, and harmony and nose in the profile view. The ratings by the examiners in the sample and laymen groups showed statistically significant correlation in both views. The orthodontists agreed with the laymen on the evaluation of the frontal view and disagreed on the profile view, especially regarding whether the images were esthetically unpleasant or acceptable. Conclusions: Based on these results, the evaluation of facial attractiveness according to the Subjective Facial Analysis criteria proved to be applicable and subject to subjective influence; it is therefore suggested that the patient's opinion regarding facial esthetics be considered in orthodontic treatment planning.
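For readers unfamiliar with the agreement statistics named in this abstract, here is a toy illustration in Python; all ratings are invented, and the Kendall's W computation uses the standard rank-sum formula without tie correction.

```python
import numpy as np
from scipy import stats

# Made-up ratings on the 3-point scale (1 = unpleasant, 2 = acceptable, 3 = pleasant).
first_pass  = [1, 2, 2, 3, 1, 2, 3, 3, 2, 1]   # one evaluator, first rating
second_pass = [1, 2, 3, 3, 1, 2, 3, 2, 2, 1]   # same evaluator, repeat rating
rho, p = stats.spearmanr(first_pass, second_pass)   # intraexaminer agreement
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

# Kendall's coefficient of concordance W across m raters of n photos,
# computed from rank sums:
ratings = np.array([[1, 2, 2, 3],
                    [1, 3, 2, 3],
                    [2, 2, 1, 3]])              # m raters x n photos
m, n = ratings.shape
rank_sums = stats.rankdata(ratings, axis=1).sum(axis=0)
W = 12 * ((rank_sums - rank_sums.mean()) ** 2).sum() / (m**2 * (n**3 - n))
print(f"Kendall's W = {W:.2f}")
```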
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Draw-A-Person Intellectual Ability Test for Children, Adolescents, and Adults (DAP: IQ; Reynolds & Hickman, 2004) was used. The DAP: IQ is a quick screening version of a test typically used to estimate IQ (Goodenough, 1926). It was designed to be used with children through adults ranging in age from 4 to 89 years and can be individually or group administered (Williams, Fall, Eaves, & Woods-Groves, 2006). A number of studies have suggested that it may also be a useful screening tool for fine motor movements (Rehrig & Stromswold, 2018). The present study utilized it in this way: as an assessment of motor drawing skills, specifically drawing of self.
Using the official DAP drawing sheet and a pencil, participants were instructed to draw a full-body, frontal view of themselves, while avoiding cartoonish or stick-figure representations. However, they were told that their work would not be evaluated on drawing skill, so as to alleviate performance anxiety. Participants were given 10 min to complete the task, with a 1-2 min extension if necessary. Because EEG was simultaneously recorded, participants were instructed to relax and minimize head and body movements.
The QSS (quantitative scoring system) was used to evaluate the DAP drawings. This system analyzes fourteen different aspects of the drawings (such as specific body parts and clothing) against various criteria, including presence or absence, detail, and proportion. Goodenough's original scale had 46 scoring items for each drawing, with 5 bonus items for drawings in profile (Goodenough, 1926). A more recent version, which uses 64 scoring items for each drawing, was used here (Reynolds & Hickman, 2004). A separate standard score is recorded for each drawing.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Studies in social perception have traditionally used frontal portrait photographs as stimuli. However, a 2D frontal depiction may not fully capture the morphological diversity of facial features. Recently, 3D images have become increasingly popular, but whether their perception differs from that of 2D images has not yet been systematically studied. Here we investigated congruence in the perception of portrait, left-profile, and 360° rotation photographs. The photographs were obtained from 45 male athletes under standardized conditions. In two separate studies, each set of images was rated for formidability (portraits by 62, profiles by 60, and 360° rotations by 94 raters) and attractiveness (portraits by 195, profiles by 176, and 360° rotations by 150 raters) on a 7-point scale. The ratings of the stimulus types were highly intercorrelated (for formidability all rs > 0.8, for attractiveness all rs > 0.7). Moreover, we found no differences in mean ratings between the three types of stimuli, in either formidability or attractiveness. Overall, our results clearly suggest that different facial views convey highly overlapping information about the structural facial elements of an individual. They lead to congruent assessments of formidability and attractiveness, and a single-angle view seems sufficient for face perception research.
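The congruence analysis described here boils down to correlating per-target mean ratings across stimulus types and checking for mean differences. The sketch below reproduces that logic on synthetic data; the correlation structure and all values are made up, not taken from the study.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for the real data: per-target mean ratings (45 athletes)
# on a 7-point scale, one vector per stimulus type.
rng = np.random.default_rng(0)
portrait = rng.uniform(1, 7, 45)
profile  = portrait + rng.normal(0, 0.5, 45)   # correlated by construction
rotation = portrait + rng.normal(0, 0.5, 45)

for name, other in [("portrait vs profile", profile),
                    ("portrait vs 360 rotation", rotation)]:
    r, _ = stats.pearsonr(portrait, other)
    print(f"{name}: r = {r:.2f}")

# "No differences in mean ratings" is a repeated-measures question; a paired
# t-test per pair of stimulus types is one simple check.
t, p = stats.ttest_rel(portrait, profile)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```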
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Emotion recognition confusion matrix using frontal and profile views of the DVFs.
This database was distributed for use in the development and testing of automated mugshot identification systems. It consists of one zipped file containing a total of 3,248 images of variable size, stored in PNG format with corresponding metadata in TXT files. There are images of 1,573 individuals (cases): 1,495 male and 78 female. The database contains both front and side (profile) views when available. Separating front views and profiles: 131 cases have two or more front views and 1,418 have only one front view; 89 cases have two or more profiles and 1,268 have only one profile. Among cases with both fronts and profiles, 89 have two or more of each, 27 have two or more fronts and one profile, and 1,217 have only one front and one profile.
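Since each PNG image has a corresponding TXT metadata file, a first pass over the unzipped database might look like the following; the directory name is hypothetical, and the same-name pairing convention is an assumption based on the description, not a documented fact.

```python
from pathlib import Path

# Hypothetical location of the unzipped database.
root = Path("mugshot_db")

# Pair each PNG with a same-named TXT metadata file, if present.
pairs = []
for img in sorted(root.rglob("*.png")):
    meta = img.with_suffix(".txt")
    if meta.exists():
        pairs.append((img, meta))

print(f"found {len(pairs)} image/metadata pairs")
```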
https://dataverse.nl/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.34894/K0WSN6
Mistaken eyewitness identifications continue to be a major contributor to miscarriages of justice. Previous experiments suggested that implicit identification procedures such as the Concealed Information Test (CIT) might be a promising alternative to classic lineups when encoding conditions during the crime were favorable. We tested this idea by manipulating view congruency (frontal vs. profile view) between encoding and test. Participants witnessed a videotaped mock theft that showed the thief and victim almost exclusively from frontal or profile view. At test, viewing angle was either congruent or incongruent with the view during encoding. We tested eyewitness identification with the RT-CIT (N = 74), and with a traditional simultaneous photo lineup (N = 97). The CIT showed strong capacity to diagnose face recognition (d = 0.91 [0.64; 1.18]) but unexpectedly, view congruency did not moderate this effect. View congruency moderated lineup performance for one of the two lineups. Following these unexpected findings, we conducted a replication with a stronger congruency manipulation and larger sample size. CIT (N = 156) showed moderate capacity to diagnose face recognition (d = 0.63 [0.46; 0.80]) and now view congruency did moderate the CIT effect. For lineups (N = 156), view congruency again moderated performance for one of the two lineups. Capacity for diagnosing face recognition was similar for lineups and RT-CIT in our first comparison but much stronger for lineups in our second comparison. Future experiments might investigate more conditions that affect performance in lineups vs. the RT-CIT differentially.
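The bracketed values after each d above are 95% confidence intervals. As a rough illustration of how such a within-subject effect size can be computed, the sketch below uses synthetic per-participant RTs; whether the authors used this exact d variant and CI method is an assumption, and the normal-approximation standard error is a simplification.

```python
import numpy as np

# Synthetic per-participant mean reaction times (ms); values are made up.
rng = np.random.default_rng(1)
probe      = rng.normal(650, 80, 74)   # hypothetical probe-item RTs
irrelevant = rng.normal(580, 80, 74)   # hypothetical irrelevant-item RTs

diff = probe - irrelevant
d = diff.mean() / diff.std(ddof=1)     # Cohen's d_z for paired data
se = np.sqrt(1 / len(diff) + d**2 / (2 * len(diff)))  # large-sample approximation
print(f"d = {d:.2f} [{d - 1.96*se:.2f}; {d + 1.96*se:.2f}]")
```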
Current profile data from Downward-looking ADCP #02 on FPF Front Runner, 27.63°N, 90.44°W, 2009/01/02 00:17 through 2009/04/14 23:48
Current profile data from Downward-looking ADCP #02 on FPF Front Runner, 27.63°N, 90.44°W, 2005/07/28 05:19 through 2005/12/29 23:52. Selected metadata from the netCDF global attributes: institution GCOOS; platform Front Runner (FPF drilling platform, WMO code 42890), Gulf of Mexico, water depth 1,015 m; instrument Acoustic Doppler Current Profiler (urn:ioos:sensor:WMO:42890.01:ADCP.02); contributor Murphy Exploration & Production Company; program BSEE Notice to Lessees 2018-G01 and predecessors; data QA'd by a WHG oceanographer; conventions CF-1.6, ACDD-1.3, IOOS Metadata Profile 1.2, COARDS; feature type TimeSeriesProfile; time coverage 2005-07-28T05:19:00Z through 2005-12-29T23:52:00Z; vertical coverage 196.2-700.3 m (positive down, 24 m resolution); source file Frnt Run MUR-2763-9044-050728-051229-42890-02-02.mat.
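These files are distributed as netCDF with the global attributes listed above, so a quick inspection with xarray might look like this; the .nc file name below is a guess modeled on the source attribute, not a confirmed distribution name.

```python
import xarray as xr

# Hypothetical file name modeled on the `source` attribute above.
ds = xr.open_dataset("Frnt_Run_MUR-2763-9044-050728-051229-42890-02-02.nc")

# Global attributes carry the coverage metadata shown in the catalog entry.
print(ds.attrs["time_coverage_start"], "to", ds.attrs["time_coverage_end"])
print("depth range (m):",
      ds.attrs["geospatial_vertical_min"], "-",
      ds.attrs["geospatial_vertical_max"])
print("variables:", list(ds.data_vars))
```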
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual total number of students from 1996 to 2023 at Front Street Elementary School
https://academictorrents.com/nolicensespecified
The Interactions Dataset consists of 300 video clips collected from over 20 different TV shows, covering four interactions: hand shakes, high fives, hugs, and kisses, as well as clips that don't contain any of these interactions. Each frame of every video is annotated with the upper body of each person (a bounding box), the discrete head orientation (profile-left, profile-right, frontal-left, frontal-right, and backwards), and the interaction label of each person.
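One convenient way to represent a single per-person, per-frame annotation is a small record type, as sketched below; the field names and bounding-box convention are illustrative, since the release's exact file schema isn't shown here.

```python
from dataclasses import dataclass

# Value sets taken from the description above; "none" marks negative clips.
HEAD_ORIENTATIONS = {"profile-left", "profile-right",
                     "frontal-left", "frontal-right", "backwards"}
INTERACTIONS = {"hand shake", "high five", "hug", "kiss", "none"}

@dataclass
class PersonAnnotation:
    frame: int
    bbox: tuple[float, float, float, float]  # upper-body bounding box (x, y, w, h)
    head_orientation: str                    # one of HEAD_ORIENTATIONS
    interaction: str                         # one of INTERACTIONS

# Example record (values are made up):
ann = PersonAnnotation(frame=0, bbox=(120.0, 40.0, 80.0, 110.0),
                       head_orientation="frontal-left", interaction="hug")
print(ann)
```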
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual distribution of students across grade levels at Front Street Elementary School
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual total number of classroom teachers from 1992 to 2023 at Front Street Elementary School
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual overall school rank from 2014 to 2022 for Front Street Elementary School
High resolution bathymetric, sea-floor backscatter, and seismic-reflection data were collected offshore of southeastern Louisiana aboard the research vessel Point Sur on May 19-26, 2017, in an effort to characterize mudflow hazards on the Mississippi River Delta front. As the initial field program of a research cooperative between the U.S. Geological Survey, the Bureau of Ocean Energy Management, and other Federal and academic partners, the primary objective of this cruise was to assess the suitability of sea-floor mapping and shallow subsurface imaging tools in the challenging environmental conditions found across delta fronts (for example, variably distributed water column stratification and widespread biogenic gas in the shallow subsurface). Approximately 675 kilometers (km) of multibeam bathymetry and backscatter data, 420 km of towed chirp data, and 550 km of multichannel seismic data were collected. Varied mudflow (gully, lobe), prodelta morphologies, and structural features were imaged in selected survey areas from Pass a Loutre to Southwest Pass.