9 datasets found
  1. Data from: DISFA Dataset

    • paperswithcode.com
    Updated Mar 18, 2021
    Cite
    Seyed Mohammad Mavadati; Mohammad H. Mahoor; Kevin Bartlett; Philip Trinh; Jeffrey F. Cohn (2021). DISFA Dataset [Dataset]. https://paperswithcode.com/dataset/disfa
    Explore at:
    Dataset updated
    Mar 18, 2021
    Authors
    Seyed Mohammad Mavadati; Mohammad H. Mahoor; Kevin Bartlett; Philip Trinh; Jeffrey F. Cohn
    Description

    The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4,844 frames each, for 130,788 images in total. Action unit annotations are given at different intensity levels; in the experiments described here these intensities are ignored, and each action unit is treated as either set or unset. DISFA was selected from the wider range of databases popular in facial expression recognition because of its high number of smiles, i.e. of action unit 12. In detail, 30,792 images have this action unit set, 82,176 images have at least one action unit set, and 48,612 images have no action units set at all.
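    As a minimal sketch of the binarization described above (assuming DISFA's per-AU label files follow the commonly documented one-line-per-frame "frame,intensity" layout; the file path below is hypothetical):

    ```python
    import csv

    def load_binary_au(label_path):
        """Read a DISFA-style AU label file (one 'frame,intensity' row per
        frame, intensity coded 0-5) and binarize it: any nonzero intensity
        counts as the AU being set."""
        frames = {}
        with open(label_path, newline="") as f:
            for row in csv.reader(f):
                frame, intensity = int(row[0]), int(row[1])
                frames[frame] = intensity > 0
        return frames

    # Hypothetical path; DISFA ships one label file per subject per AU.
    au12 = load_binary_au("SN001/SN001_au12.txt")
    print(sum(au12.values()), "frames with AU12 (smile) set")
    ```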

  2. Description of the 12 AUs coded in DISFA.

    • figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). Description of the 12 AUs coded in DISFA. [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Neuropsychological research aims to unravel how diverse individuals’ brains exhibit similar functionality when exposed to the same stimuli. The evocation of consistent responses when different subjects watch the same emotionally evocative stimulus has been observed through modalities like fMRI, EEG, physiological signals, and facial expressions. We refer to the quantification of these shared, consistent signals across subjects at each time instant along the temporal dimension as Consistent Response Measurement (CRM). CRM is widely explored through fMRI, and occasionally with EEG, physiological signals, and facial expressions, using metrics like Inter-Subject Correlation (ISC). However, fMRI tools are expensive and constrained, while EEG and physiological signals are prone to facial artifacts and environmental conditions (such as temperature, humidity, and the health condition of subjects). In this research, facial expression videos are used as a cost-effective and flexible alternative for CRM that is minimally affected by external conditions. By employing computer vision-based automated facial keypoint tracking, a new metric similar to ISC, called the Average t-statistic, is introduced. Unlike existing facial expression-based methodologies that measure CRM through secondary indicators like inferred emotions or keypoint- and ICA-based features, the Average t-statistic is closely associated with the direct measurement of consistent facial muscle movement using the Facial Action Coding System (FACS). This is evidenced in the DISFA dataset, where the time series of the Average t-statistic has a high correlation (R² = 0.78) with a metric called AU consistency, which directly measures facial muscle movement through FACS coding of video frames. The simplicity of recording facial expressions with the automated Average t-statistic expands the applications of CRM, such as measuring engagement in online learning and customer interactions, and diagnosing outliers in healthcare conditions like stroke, autism, and depression. To promote further research, we have made the code repository publicly available.
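    The paper's exact formula for the Average t-statistic is not reproduced in this listing, so the following is only a minimal sketch of the general idea under stated assumptions: a per-frame one-sample t-statistic of frame-to-frame keypoint displacement across subjects, averaged over keypoints. The array layout and the choice of displacement as the signal are assumptions, not the authors' definition.

    ```python
    import numpy as np

    def average_t_statistic(keypoints):
        """keypoints: array of shape (subjects, frames, points, 2) holding
        tracked facial keypoint coordinates for every subject.

        For each frame transition and keypoint, compute a one-sample
        t-statistic of displacement magnitude across subjects, then average
        over keypoints. Movement that is consistent across subjects at the
        same instant yields large values."""
        # Frame-to-frame displacement magnitude per subject and keypoint.
        disp = np.linalg.norm(np.diff(keypoints, axis=1), axis=-1)  # (S, F-1, P)
        n = disp.shape[0]
        mean = disp.mean(axis=0)                        # (F-1, P)
        sem = disp.std(axis=0, ddof=1) / np.sqrt(n)     # standard error
        t = mean / np.maximum(sem, 1e-12)               # guard against zero SEM
        return t.mean(axis=-1)                          # average over keypoints

    # Toy example: 10 subjects, 100 frames, 68 keypoints.
    ts = average_t_statistic(np.random.rand(10, 100, 68, 2))
    print(ts.shape)  # (99,) -- one value per frame transition
    ```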

  3. FEAFA+ Dataset

    • paperswithcode.com
    Cite
    Wei Gan; Jian Xue; Ke Lu; Yanfu Yan; Pengcheng Gao; Jiayi Lyu, FEAFA+ Dataset [Dataset]. https://paperswithcode.com/dataset/feafa
    Explore at:
    Authors
    Wei Gan; Jian Xue; Ke Lu; Yanfu Yan; Pengcheng Gao; Jiayi Lyu
    Description

    FEAFA+ is a dataset for facial expression analysis and 3D facial animation. It includes 150 video sequences from FEAFA and DISFA, with a total of 230,184 frames manually annotated with floating-point intensity values for 24 redefined AUs using the Expression Quantitative Tool.
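    A minimal sketch of consuming such annotations, assuming one whitespace-separated row of 24 floating-point intensities per frame in a plain-text file; the file name, layout, and value range are assumptions, and the actual FEAFA+ format may differ:

    ```python
    import numpy as np

    # Hypothetical layout: one row of 24 floats (one intensity per
    # redefined AU) for each annotated frame of a sequence.
    intensities = np.loadtxt("sequence_0001_aus.txt")   # shape: (frames, 24)
    assert intensities.shape[1] == 24

    # Example query: frames where the AU at column index 11 exceeds 0.5.
    active = np.where(intensities[:, 11] > 0.5)[0]
    print(f"{len(active)} frames with AU column 11 above 0.5")
    ```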

  4. KL-divergence (row-wise averaged) between κAU distribution table and each of the five keypoint-based metrics.

    • plos.figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). KL-divergence (row-wise averaged) between κAU distribution table and each of the five keypoint-based metrics (). [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    KL-divergence (row-wise averaged) between κAU distribution table and each of the five keypoint-based metrics ().
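    The table itself is not reproduced in this listing, but the metric's name implies a straightforward computation: take the KL divergence between corresponding rows of the two distribution tables, then average over rows. A hedged numpy sketch, assuming each row is a discrete distribution over the same bins:

    ```python
    import numpy as np

    def row_wise_avg_kl(p_table, q_table, eps=1e-12):
        """Average, over rows, of KL(p_row || q_row) for two tables whose
        rows are discrete probability distributions over the same bins."""
        p = np.asarray(p_table, dtype=float) + eps
        q = np.asarray(q_table, dtype=float) + eps
        p /= p.sum(axis=1, keepdims=True)   # renormalize after smoothing
        q /= q.sum(axis=1, keepdims=True)
        kl_rows = np.sum(p * np.log(p / q), axis=1)
        return kl_rows.mean()

    # Toy example: two 3-row count tables over 4 bins.
    a = np.array([[4, 3, 2, 1], [1, 1, 1, 1], [0, 2, 2, 0]])
    b = np.array([[3, 3, 2, 2], [2, 1, 1, 0], [1, 2, 1, 0]])
    print(row_wise_avg_kl(a, b))
    ```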

  5. Distribution of the four consistency classes present in different emotion segments.

    • plos.figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). Distribution of the four consistency classes present in different emotion segments. [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Distribution of the four consistency classes present in different emotion segments.

  6. Distribution of the CRM metrics in the four consistency classes per emotion.

    • plos.figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). Distribution of the CRM metrics in the four consistency classes per emotion. Each entry contains values in the order (). [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Distribution of the CRM metrics in the four consistency classes per emotion. Each entry contains values in the order ().

  7. Start and end frame number of different target emotion segments.

    • plos.figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). Start and end frame number of different target emotion segments. [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Start and end frame number of different target emotion segments.
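    As an illustration of how such a segment table is typically used, here is a hedged sketch that slices a per-frame signal (e.g., a CRM time series) into emotion segments given start and end frame numbers; the labels and bounds below are invented placeholders, not values from the published table:

    ```python
    import numpy as np

    # Invented (label, start_frame, end_frame) rows standing in for the
    # table; the real values live in the published .xls file.
    segments = [("happy", 120, 480), ("sad", 900, 1350)]

    def slice_segments(signal, segments):
        """Return {label: slice of a 1-D per-frame signal covering
        frames start..end inclusive} for each segment."""
        return {label: signal[start:end + 1] for label, start, end in segments}

    crm = np.random.rand(4844)           # e.g., one value per DISFA frame
    per_emotion = slice_segments(crm, segments)
    print({label: len(values) for label, values in per_emotion.items()})
    ```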

  8. Consistency classes based on AU consistency.

    • plos.figshare.com
    xls
    Updated May 17, 2024
    Cite
    Shivansh Chandra Tripathi; Rahul Garg (2024). Consistency classes based on AU consistency. [Dataset]. http://doi.org/10.1371/journal.pone.0302705.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 17, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Shivansh Chandra Tripathi; Rahul Garg
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Consistency classes based on AU consistency.

  9. BP4D Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Mar 27, 2021
    Cite
    Xing Zhang; Lijun Yin; Jeffrey F. Cohn; Shaun J. Canavan; Michael Reale; Andy Horowitz; Peng Liu; Jeffrey M. Girard (2021). BP4D Dataset [Dataset]. https://paperswithcode.com/dataset/bp4d
    Explore at:
    Dataset updated
    Mar 27, 2021
    Authors
    Xing Zhang; Lijun Yin; Jeffrey F. Cohn; Shaun J. Canavan; Michael Reale; Andy Horowitz; Peng Liu; Jeffrey M. Girard
    Description

    The BP4D-Spontaneous dataset is a 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains using both person-specific and generic approaches. The database includes forty-one participants (23 women, 18 men) aged 18 to 29; 11 were Asian, 6 were African-American, 4 were Hispanic, and 20 were Euro-American. An emotion elicitation protocol of eight tasks, combining an interview and a series of activities, was designed to elicit eight emotions effectively. The database is structured by participant, with each participant associated with 8 tasks; for each task there are both 3D and 2D videos. The metadata include manually annotated action units (FACS AUs), automatically tracked head pose, and 2D/3D facial landmarks. The database is about 2.6 TB in size (uncompressed).
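    A hedged sketch of walking a participant/task layout like the one described; the directory and file naming below are assumptions for illustration, not BP4D's actual distribution format:

    ```python
    from pathlib import Path

    # Hypothetical layout: <root>/<participant>/T<task>/ holding the 2D
    # video, 3D sequence, and metadata (AU codes, head pose, landmarks).
    root = Path("/data/BP4D")

    for participant in sorted(root.iterdir()):
        if not participant.is_dir():
            continue
        for task_id in range(1, 9):      # eight tasks per participant
            task_dir = participant / f"T{task_id}"
            if task_dir.exists():
                print(participant.name, task_dir.name,
                      sorted(p.name for p in task_dir.iterdir()))
    ```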
