100+ datasets found
  1. East Asian Facial Expression Image Dataset

    • futurebeeai.com
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). East Asian Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia
    Explore at:
    Available download formats
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Area covered
    East Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the East Asian Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from East Asian countries including China, Japan, Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.
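    As a sketch of how such metadata enables filtering, the snippet below selects participant records by expression label and age range. The field names, file names, and values are illustrative assumptions, not the dataset's actual schema.

```python
# Hypothetical records mirroring the metadata fields listed above
# (participant ID, file name, age, gender, country, expression, format);
# the exact field names and file naming are assumptions.
records = [
    {"participant_id": "P001", "file_name": "P001_happy.jpg", "age": 24,
     "gender": "male", "country": "Japan", "expression": "Happy", "format": "JPEG"},
    {"participant_id": "P001", "file_name": "P001_sad.heic", "age": 24,
     "gender": "male", "country": "Japan", "expression": "Sad", "format": "HEIC"},
    {"participant_id": "P002", "file_name": "P002_happy.jpg", "age": 51,
     "gender": "female", "country": "Vietnam", "expression": "Happy", "format": "JPEG"},
]

def select(records, *, expression=None, min_age=None, max_age=None):
    """Return the records matching an expression label and an age range."""
    out = []
    for r in records:
        if expression is not None and r["expression"] != expression:
            continue
        if min_age is not None and r["age"] < min_age:
            continue
        if max_age is not None and r["age"] > max_age:
            continue
        out.append(r)
    return out

# Happy participants aged 18-40: only P001 qualifies in this toy table.
happy_adults = select(records, expression="Happy", min_age=18, max_age=40)
```

    The same pattern scales to the real metadata file, whatever its concrete format (CSV, JSON, or spreadsheet).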

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:


  2. African Facial Expression Image Dataset

    • futurebeeai.com
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). African Facial Expression Image Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-african
    Explore at:
    Available download formats
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the African Facial Expression Image Dataset, curated to support the development of advanced facial expression recognition systems, biometric identification models, KYC verification processes, and a wide range of facial analysis applications. This dataset is ideal for training robust emotion-aware AI solutions.

    Facial Expression Data

    The dataset includes over 2000 high-quality facial expression images, grouped into participant-wise sets. Each participant contributes:

    Expression Images: 5 distinct facial images capturing common human emotions: Happy, Sad, Angry, Shocked, and Neutral

    Diversity & Representation

    Geographical Coverage: Individuals from African countries including Kenya, Malawi, Nigeria, Ethiopia, Benin, Somalia, Uganda, and more
    Demographics: Participants aged 18 to 70 years, with a gender distribution of 60% male and 40% female
    File Formats: All images are available in JPEG and HEIC formats

    Image Quality & Capture Conditions

    To ensure generalizability and robustness in model training, images were captured under varied real-world conditions:

    Lighting Conditions: Natural and artificial lighting to represent diverse scenarios
    Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
    Device Quality: Captured using modern smartphones to ensure clarity and consistency

    Metadata

    Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

    Unique Participant ID
    File Name
    Age
    Gender
    Country
    Facial Expression Label
    Demographic Information
    File Format

    This metadata helps in building expression recognition models that are both accurate and inclusive.

    Use Cases & Applications

    This dataset is ideal for a variety of AI and computer vision applications, including:

    Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
    Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
    KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
    Generative AI Training: Support expression generation and animation in AI-generated facial images
    Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

    Secure & Ethical Collection

    Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
    Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
    Informed Consent: All participants were made aware of the data use and provided written consent

    Dataset Updates & Customization

    To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements. Custom options include:

  3. Data from: Plant Expression Database

    • catalog.data.gov
    • datasetcatalog.nlm.nih.gov
    • +3more
    Updated Apr 21, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). Plant Expression Database [Dataset]. https://catalog.data.gov/dataset/plant-expression-database-8ddd3
    Explore at:
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service
    Description

    [NOTE: PLEXdb is no longer available online. Oct 2019.] PLEXdb (Plant Expression Database) is a unified gene expression resource for plants and plant pathogens. PLEXdb is a genotype-to-phenotype, hypothesis-building information warehouse, leveraging highly parallel expression data with seamless portals to related genetic, physical, and pathway data. PLEXdb (http://www.plexdb.org), in partnership with community databases, supports comparisons of gene expression across multiple plant and pathogen species, enabling individuals and/or consortia to upload genome-scale data sets and contrast them with previously archived data. These analyses facilitate the interpretation of the structure, function, and regulation of genes in economically important plants. A list of Gene Atlas experiments highlights data sets that give responses across different developmental stages, conditions, and tissues. Tools at PLEXdb allow users to perform complex analyses quickly and easily. The Model Genome Interrogator (MGI) tool supports mapping gene lists onto corresponding genes from model plant organisms, including rice and Arabidopsis. MGI predicts homologies and displays gene structures and supporting information for annotated genes and full-length cDNAs. The gene-list-processing wizard guides users through PLEXdb functions for creating, analyzing, annotating, and managing gene lists. Users can upload their own lists or create them from the output of PLEXdb tools, and then apply diverse higher-level analyses, such as ANOVA and clustering. PLEXdb also provides methods for users to track how gene expression changes across many different experiments using the Gene OscilloScope. This tool can identify interesting expression patterns, such as up-regulation under diverse conditions, or check any gene's suitability as a steady-state control.
    Resources in this dataset:
    Resource Title: Website Pointer for Plant Expression Database, Iowa State University.
    File Name: Web Page, url: https://www.bcb.iastate.edu/plant-expression-database
    Project description for the Plant Expression Database (PLEXdb) and integrated tools.

  4. Data from: Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 22, 2023
    Cite
    Alaghband, Marie; Yousefi, Niloofar; Garibay, Ivan (2023). Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language [Dataset]. http://doi.org/10.7910/DVN/358QMQ
    Explore at:
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Alaghband, Marie; Yousefi, Niloofar; Garibay, Ivan
    Description

    Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset we consider primary, secondary, and tertiary dyads of seven basic emotions: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also considered the "None" class for images whose facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider application in gesture recognition and Human Computer Interaction (HCI) systems.

  5. Data from: Facial Expression Image Dataset for Computer Vision Algorithms

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer; Odunmolorun Osonuga (2025). Facial Expression Image Dataset for Computer Vision Algorithms [Dataset]. http://doi.org/10.17866/rd.salford.21220835.v2
    Explore at:
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer; Odunmolorun Osonuga
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset for this project consists of photos of individual human emotion expressions, taken with both a digital camera and a mobile phone camera from different angles, postures, backgrounds, light exposures, and distances. The task might sound easy, but several challenges were encountered along the way:
    1) People constraint. One of the major challenges was getting people to participate in the image-capturing process: school was on vacation, and other individuals approached were unwilling to have their images captured for personal and security reasons, even after the academic purpose of the project was explained. As a result, we resorted to capturing images of the researcher and a few other willing individuals.
    2) Time constraint. As with all deep learning projects, more data yields higher accuracy and lower error. At the initial stage it was agreed to collect 10 emotional-expression photos each from at least 50 persons, but due to the time constraint it was later agreed to capture just the researcher and the few people who were willing and available. For the same reason, photos were taken for only two emotion expressions, "happy" and "sad" faces. As future work, photos of other facial expressions such as anger, contempt, disgust, fright, and surprise can be included if time permits.
    3) The approved facial emotion captures. It was agreed to capture as many angles and postures of the two facial emotions as possible, with at least 10 images per individual; due to the time and people constraints above, a few persons were captured in as many postures as possible, yielding:
    - Happy faces: 65 images
    - Sad faces: 62 images
    There are many other types of facial emotions, and to expand the project in the future these can be included if time permits and people are readily available.
    4) Further expansion. This project can be improved in many ways; because of the time limit, these improvements are left as future work. In simple terms, the project detects/predicts real-time human emotion by building a model that reports the percentage confidence that any facial image is happy or sad. The higher the percentage confidence, the more reliable the prediction for the image fed into the model.
    5) Other questions. Can the model be reproduced? Yes, if and only if the model is fed the proper data (images), such as images of other types of emotional expression.
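    The "percentage confidence" described above can be illustrated with a small sketch (not the authors' code): a softmax over the classifier's two raw class scores, happy and sad, yields confidences that sum to 100%.

```python
import math

def confidence(happy_score: float, sad_score: float) -> dict:
    """Softmax over two raw classifier scores, expressed as percentages."""
    e_h, e_s = math.exp(happy_score), math.exp(sad_score)
    total = e_h + e_s
    return {"happy": 100 * e_h / total, "sad": 100 * e_s / total}

# A score pair leaning towards "happy"; the two percentages sum to 100.
scores = confidence(2.0, 0.5)
```

    Any binary face classifier producing raw scores can be wrapped this way to report the confidence figure the description refers to.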

  6. Data from: Sharing and reusing gene expression profiling data in neuroscience

    • portal.conp.ca
    • borealisdata.ca
    • +2more
    Updated Mar 6, 2020
    + more versions
    Cite
    Xian Wan; Tomi Pastinen; Paul Pavlidis (2020). Sharing and reusing gene expression profiling data in neuroscience [Dataset]. https://portal.conp.ca/dataset?id=projects/Reusing-Neuro-Data
    Explore at:
    Dataset updated
    Mar 6, 2020
    Dataset provided by
    Department of Psychiatry and Bioinformatics Centre, UBC
    McGill University
    Authors
    Xian Wan; Tomi Pastinen; Paul Pavlidis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    As public availability of gene expression profiling data increases, it is natural to ask how these data can be used by neuroscientists. Here we review the public availability of high-throughput expression data in neuroscience and how it has been reused, and tools that have been developed to facilitate reuse. There is increasing interest in making expression data reuse a routine part of the neuroscience tool-kit, but there are a number of challenges. Data must become more readily available in public databases; efforts to encourage investigators to make data available are important, as is education on the benefits of public data release. Once released, data must be better-annotated. Techniques and tools for data reuse are also in need of improvement. Integration of expression profiling data with neuroscience-specific resources such as anatomical atlases will further increase the value of expression data. (2018-02)

  7. Expression in-the-Wild (ExpW) Dataset

    • kaggle.com
    Updated Jul 27, 2023
    Cite
    Shahzad Abbas (2023). Expression in-the-Wild (ExpW) Dataset [Dataset]. https://www.kaggle.com/datasets/shahzadabbas/expression-in-the-wild-expw-dataset/discussion
    Explore at:
    Croissant, a format for machine-learning datasets. Learn more at mlcommons.org/croissant.
    Dataset updated
    Jul 27, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Shahzad Abbas
    Description

    Data Description

    The Expression in-the-Wild (ExpW) Dataset is a comprehensive and diverse collection of facial images carefully curated to capture spontaneous and unscripted facial expressions exhibited by individuals in real-world scenarios. This extensively annotated dataset serves as a valuable resource for advancing research in the fields of computer vision, facial expression analysis, affective computing, and human behavior understanding.

    Key Features:

    1. Real-world Expressions: The ExpW dataset stands apart from traditional lab-controlled datasets as it focuses on capturing facial expressions in real-life environments. This authenticity ensures that the dataset reflects the natural diversity of emotions experienced by individuals in everyday situations, making it highly relevant for real-world applications.

    2. Large and Diverse: Comprising a vast number of images, the ExpW dataset encompasses an extensive range of subjects, ethnicities, ages, and genders. This diversity allows researchers and developers to build more robust and inclusive models for facial expression recognition and emotion analysis.

    3. Annotated Emotions: Each facial image in the dataset is meticulously annotated with corresponding emotion labels, including but not limited to happiness, sadness, anger, surprise, fear, disgust, and neutral expressions. The emotion annotations provide ground truth data for training and validating machine learning algorithms.

    4. Various Pose and Illumination: To account for the varying challenges posed by real-life scenarios, the ExpW dataset includes images captured under different lighting conditions and poses. This variability helps researchers create algorithms that are robust to changes in illumination and head orientation.

    5. Privacy and Ethics: ExpW has been compiled adhering to strict privacy and ethical guidelines, ensuring the subjects' consent and data protection. The dataset maintains a high level of anonymity by excluding any personal information or sensitive details.

    This dataset has been downloaded from the following Public Directory... https://drive.google.com/drive/folders/1SDcI273EPKzzZCPSfYQs4alqjL01Kybq

    Dataset contains 91,793 faces manually labeled with expressions (Figure 1). Each of the face images is annotated as one of the seven basic expression categories: “angry (0)”, “disgust (1)”, “fear (2)”, “happy (3)”, “sad (4)”, “surprise (5)”, or “neutral (6)”.
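    The integer coding quoted above maps directly to a lookup table; a minimal sketch:

```python
# ExpW's seven expression categories, as quoted in the description above.
EXPW_LABELS = {
    0: "angry", 1: "disgust", 2: "fear", 3: "happy",
    4: "sad", 5: "surprise", 6: "neutral",
}

def decode(label: int) -> str:
    """Translate an ExpW integer label into its expression name."""
    if label not in EXPW_LABELS:
        raise ValueError(f"ExpW labels are 0-6, got {label}")
    return EXPW_LABELS[label]

# decode(3) -> "happy"; decode(6) -> "neutral"
```

    Keeping the mapping in one place avoids off-by-one mistakes when converting the dataset's annotation files into training targets.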

  8. Data from: Gene Expression Omnibus (GEO)

    • catalog.data.gov
    • data.virginia.gov
    • +2more
    Updated Jul 26, 2023
    + more versions
    Cite
    National Institutes of Health (NIH) (2023). Gene Expression Omnibus (GEO) [Dataset]. https://catalog.data.gov/dataset/gene-expression-omnibus-geo
    Explore at:
    Dataset updated
    Jul 26, 2023
    Dataset provided by
    National Institutes of Health (NIH)
    Description

    Gene Expression Omnibus is a public functional genomics data repository supporting MIAME-compliant submissions of array- and sequence-based data. Tools are provided to help users query and download experiments and curated gene expression profiles.

  9. IFEED: Interactive Facial Expression and Emotion Detection Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 26, 2023
    Cite
    Dias, Tiago (2023). IFEED: Interactive Facial Expression and Emotion Detection Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7963451
    Explore at:
    Dataset updated
    May 26, 2023
    Dataset provided by
    Oliveira, Nuno
    Vitorino, João
    Praça, Isabel
    Oliveira, Jorge
    Maia, Eva
    Dias, Tiago
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).

    The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.

    IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.

  10. Expression Data from International C.elegans Experiment 1st

    • catalog.data.gov
    • omicsdi.org
    • +2more
    Updated Apr 24, 2025
    + more versions
    Cite
    National Aeronautics and Space Administration (2025). Expression Data from International C.elegans Experiment 1st [Dataset]. https://catalog.data.gov/dataset/expression-data-from-international-c-elegans-experiment-1st-578fb
    Explore at:
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The effect of microgravity on gene expression in C. elegans was comprehensively analysed by DNA microarray. This is the first DNA microarray analysis of C. elegans grown under microgravity. Hypergravity and clinorotation experiments were performed as references against the flight experiment.

  11. Facial Expression Image Data AFFECTNET YOLO Format

    • gts.ai
    json
    Updated Mar 20, 2024
    Cite
    GTS (2024). Facial Expression Image Data AFFECTNET YOLO Format [Dataset]. https://gts.ai/dataset-download/facial-expression-image-data-affectnet-yolo-format/
    Explore at:
    Available download formats: json
    Dataset updated
    Mar 20, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset, in AFFECTNET YOLO Format, is intended for facial expression detection in YOLO-based projects...

  12. Data from: UIBVFEDPlus-Light: Virtual facial expression dataset with lighting

    • data.niaid.nih.gov
    Updated Jul 8, 2024
    Cite
    Mascaró Oliver, Miquel (2024). UIBVFEDPlus-Light: Virtual facial expression dataset with lighting [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10377462
    Explore at:
    Dataset updated
    Jul 8, 2024
    Dataset authored and provided by
    Mascaró Oliver, Miquel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This database, named UIBVFEDPlus-Light, is an extension of the previously published UIBVFED virtual facial expression dataset. It includes 100 characters, four lighting configurations, and 13,200 images. Images are in PNG format at a resolution of 1080x1920 RGB, without alpha channel, with an average size of 2.0 MB. The images represent virtual characters reproducing FACS-based facial expressions. Expressions are classified according to the six universal emotions (Anger, Disgust, Fear, Joy, Sadness, and Surprise) and labeled following Faigin's classification. The dataset aims to give researchers access to data they may use to support their research and generate new knowledge, in particular to study the effect of lighting conditions in the fields of facial expression and emotion recognition.

  13. Data from: Face Expression Dataset

    • universe.roboflow.com
    zip
    Updated May 21, 2025
    + more versions
    Cite
    Ed (2025). Face Expression Dataset [Dataset]. https://universe.roboflow.com/ed-xo6db/face-expression-ueimg/model/4
    Explore at:
    Available download formats: zip
    Dataset updated
    May 21, 2025
    Dataset authored and provided by
    Ed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Face Expression

    ## Overview
    
    Face Expression is a dataset for object detection tasks - it contains Objects annotations for 997 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  14. Data from: An fMRI dataset in response to large-scale short natural dynamic facial expression videos

    • openneuro.org
    Updated Oct 15, 2024
    + more versions
    Cite
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan (2024). An fMRI dataset in response to large-scale short natural dynamic facial expression videos [Dataset]. http://doi.org/10.18112/openneuro.ds005047.v1.0.6
    Explore at:
    Dataset updated
    Oct 15, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary

    Facial expression is among the most natural ways for human beings to convey emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied using lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, data specifically on large-scale natural facial expression videos is currently missing. We describe here the Natural Facial Expressions Dataset (NFED), an fMRI dataset of responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimulus features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.

    Data Records

    The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-

    Stimulus. Distinct folders store the stimuli for distinct fMRI experiments: "stimuli/face-video", "stimuli/floc", and "stimuli/prf" (Fig. 2b). The category labels and metadata corresponding to the video stimuli are stored in "videos-stimuli_category_metadata.tsv". The "videos-stimuli_description.json" file describes the category and metadata information of the video stimuli (Fig. 2b).
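    A hedged sketch of reading such a BIDS-style tab-separated stimulus table with the standard library; the column names used here (emotion, gender, ethnicity) follow the three label types mentioned in the summary but are assumptions about the actual header of "videos-stimuli_category_metadata.tsv".

```python
import csv
import io

# Stand-in for the contents of a file like
# "videos-stimuli_category_metadata.tsv"; header and rows are hypothetical.
tsv_text = (
    "stimulus\temotion\tgender\tethnicity\n"
    "clip_0001.mp4\thappy\tfemale\tasian\n"
    "clip_0002.mp4\tsad\tmale\tasian\n"
)

# DictReader with a tab delimiter maps each row to its column names.
rows = list(csv.DictReader(io.StringIO(tsv_text), delimiter="\t"))
emotions = [r["emotion"] for r in rows]  # one label per video clip
```

    Replacing the in-memory string with `open(path, newline="")` reads the real file the same way.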

    Raw MRI data. Each participant's folder is comprised of 11 session folders: “sub-

    Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-

    Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-

    FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-

    Surface-based GLM analysis data. We have conducted GLMsingle on the data of the main experiment. There is a file named “sub--

    Validation. The code of technical validation was saved in the “derivatives/validation/code” folder. The results of technical validation were saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” describes the detailed information of code and results.

  15. Facial Expression Dataset (Sri Lankan)

    • ieee-dataport.org
    Updated Sep 26, 2024
    Cite
    Amod Pathirana (2024). Facial Expression Dataset (Sri Lankan) [Dataset]. https://ieee-dataport.org/documents/facial-expression-dataset-sri-lankan
    Dataset updated
    Sep 26, 2024
    Authors
    Amod Pathirana
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sri Lanka
    Description

    often based on foreign samples

  16. The Singapore Nanopore Expression Data Set

    • registry.opendata.aws
    Updated Aug 14, 2022
    The Genome Institute of Singapore (https://www.a-star.edu.sg/gis) (2022). The Singapore Nanopore Expression Data Set [Dataset]. https://registry.opendata.aws/sgnex/
    Dataset updated
    Aug 14, 2022
    Dataset provided by
    The Genome Institute of Singapore (https://www.a-star.edu.sg/gis)
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Singapore
    Description

    The Singapore Nanopore Expression (SG-NEx) project is an international collaboration to generate reference transcriptomes and a comprehensive benchmark data set for long-read Nanopore RNA-Seq. Transcriptome profiling is done using PCR-cDNA sequencing (PCR-cDNA), amplification-free cDNA sequencing (direct cDNA), direct sequencing of native RNA (direct RNA), and short-read RNA-Seq. The SG-NEx core data includes five of the most commonly used cell lines and is extended with additional cell lines and samples that cover a broad range of human tissues. All core samples are sequenced with at least three high-quality replicates. For a subset of samples, spike-in RNAs are used and matched m6A profiling data is available.

  17. Gene expression data

    • figshare.com
    txt
    Updated Jan 5, 2021
    + more versions
    Mark Izraelson (2021). Gene expression data [Dataset]. http://doi.org/10.6084/m9.figshare.13522748.v1
    Available download formats: txt
    Dataset updated
    Jan 5, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Mark Izraelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Gene expression data

  18. Data from: Face Expression Dataset

    • kaggle.com
    Updated Jan 24, 2024
    ritesh1420 (2024). Face Expression Dataset [Dataset]. https://www.kaggle.com/ritesh1420/face-expression-data/discussion
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jan 24, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ritesh1420
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset

    This dataset was created by ritesh1420

    Released under Apache 2.0


  19. Expression data from Drosophila melanogaster

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Apr 11, 2025
    + more versions
    Open Science Data Repository (2025). Expression data from drosophila melanogaster [Dataset]. https://catalog.data.gov/dataset/expression-data-from-drosophila-melanogaster-f8490
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Open Science Data Repository
    Description

    Space travel presents unlimited opportunities for exploration and discovery, but requires a more complete understanding of the immunological consequences of long-term exposure to the conditions of spaceflight. To better understand these consequences and to contribute to the design of effective countermeasures, we used the Drosophila model to compare innate immune responses to bacteria and fungi in flies that were raised either on Earth or in outer space aboard the NASA Space Shuttle Discovery (STS-121). Microarrays were used to characterize changes in gene expression that occur in response to infection by bacteria and fungi in Drosophila that were either hatched and raised in outer space (microgravity) or on Earth (normal gravity). Whole Oregon R strain Drosophila melanogaster fruit flies raised either on Earth or in space that were (1) uninfected, (2) infected with bacteria (Escherichia coli), or (3) infected with fungus (Beauveria bassiana) were used for RNA extraction and hybridization on Affymetrix microarrays.

  20. Expression Data: GEUVADIS gene expression values (First 1000 entries and...

    • figshare.com
    txt
    Updated Feb 2, 2022
    Daniel Fischer (2022). Expression Data: GEUVADIS gene expression values (First 1000 entries and names adjusted) [Dataset]. http://doi.org/10.6084/m9.figshare.7618538.v4
    Available download formats: txt
    Dataset updated
    Feb 2, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Daniel Fischer
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Corresponding gene expression dataset from the GEUVADIS project. This dataset was not produced by me; it is used only as an example dataset in the corresponding manuscript. The original dataset can be found here: https://www.ebi.ac.uk/arrayexpress/files/E-GEUV-1/

Background Variability: Indoor and outdoor backgrounds to enhance model adaptability
Device Quality: Captured using modern smartphones to ensure clarity and consistency

Metadata

Each participant's image set is accompanied by detailed metadata, enabling precise filtering and training:

Unique Participant ID
File Name
Age
Gender
Country
Facial Expression Label
Demographic Information
File Format

This metadata helps in building expression recognition models that are both accurate and inclusive.
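As a sketch of the kind of precise filtering this metadata enables: the field names below mirror the list above, but the rows are hypothetical and not drawn from the actual dataset.

```python
import pandas as pd

# Hypothetical metadata rows mirroring the documented fields
# (participant ID, age, gender, country, expression label).
rows = [
    {"participant_id": "P001", "age": 25, "gender": "male",
     "country": "Japan", "expression": "Happy"},
    {"participant_id": "P001", "age": 25, "gender": "male",
     "country": "Japan", "expression": "Sad"},
    {"participant_id": "P002", "age": 41, "gender": "female",
     "country": "Vietnam", "expression": "Happy"},
]
meta = pd.DataFrame(rows)

# Select all "Happy" images from participants under 40
happy_under_40 = meta[(meta["expression"] == "Happy") & (meta["age"] < 40)]
print(len(happy_under_40))  # 1 row in this toy table
```

With the real metadata loaded into the same shape, the identical boolean-mask expression would carve out training subsets by expression, age band, gender, or country.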

Use Cases & Applications

This dataset is ideal for a variety of AI and computer vision applications, including:

Facial Expression Recognition: Improve accuracy in detecting emotions like happiness, anger, or surprise
Biometric & Identity Systems: Enhance facial biometric authentication with expression variation handling
KYC & Identity Verification: Validate facial consistency in ID documents and selfies despite varied expressions
Generative AI Training: Support expression generation and animation in AI-generated facial images
Emotion-Aware Systems: Power human-computer interaction, mental health assessment, and adaptive learning apps

Secure & Ethical Collection

Data Security: All data is securely processed and stored on FutureBeeAI’s proprietary platform
Ethical Standards: Collection followed strict ethical guidelines ensuring participant privacy and informed consent
Informed Consent: All participants were made aware of the data use and provided written consent

Dataset Updates & Customization

To support evolving AI development needs, this dataset is regularly updated and can be tailored to project-specific requirements.
