100+ datasets found
  1. East Asian Facial Expression Images Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). East Asian Facial Expression Images Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Area covered
    East Asia
    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

    Facial Expression Data

    This dataset comprises over 2,000 facial expression images, organized into participant-wise sets, each including:

    Expression Images: five high-quality images per individual, each capturing a distinct facial emotion: Happy, Sad, Angry, Shocked, or Neutral.

    Diversity and Representation

    The dataset includes contributions from a diverse network of individuals across East Asian countries, such as:

    Geographical Representation: Participants from East Asian countries, including China, Japan, Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more.
    Participant Profile: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
    File Format: The dataset contains images in JPEG and HEIC formats.

    Quality and Conditions

    To ensure high utility and robustness, all images are captured under varying conditions:

    Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
    Backgrounds: A variety of backgrounds are available to enhance model generalization.
    Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

    Metadata

    Each facial expression image set is accompanied by detailed metadata for each participant, including:

    Participant Identifier
    File Name
    Age
    Gender
    Country
    Expression
    Demographic Information
    File Format

    This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
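    As a sketch of how such per-participant metadata might be consumed, the snippet below filters image records by expression. The CSV column names, file names, and sample rows are hypothetical illustrations of the fields listed above, not the dataset's actual layout:

    ```python
    import csv
    import io

    # Hypothetical sample rows mirroring the metadata fields listed above
    # (Participant Identifier, File Name, Age, Gender, Country, Expression).
    # The real FutureBeeAI file layout may differ.
    sample = """participant_id,file_name,age,gender,country,expression
P001,P001_happy.jpg,25,Female,Japan,Happy
P001,P001_sad.jpg,25,Female,Japan,Sad
P002,P002_angry.heic,42,Male,China,Angry
"""

    rows = list(csv.DictReader(io.StringIO(sample)))

    def by_expression(records, expression):
        """Return all image records annotated with the given expression."""
        return [r for r in records if r["expression"] == expression]

    happy = by_expression(rows, "Happy")
    print([r["file_name"] for r in happy])  # ['P001_happy.jpg']
    ```

    The same pattern extends to filtering by age range, gender, or country when assembling balanced training splits.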

    Usage and Applications

    This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

    Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
    KYC Models: Streamlining the identity verification processes for financial and other services.
    Biometric Identity Systems: Developing robust biometric identification solutions.
    Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

    Secure and Ethical Collection

    Data Security: Data was securely stored and processed within our platform, ensuring data security and confidentiality.
    Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
    Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

    Updates and Customization

    We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

    Customization & Custom

  2. Data from: Facial Expression Image Dataset for Computer Vision Algorithms

    • salford.figshare.com
    Updated Apr 29, 2025
    Cite
    Ali Alameer; Odunmolorun Osonuga (2025). Facial Expression Image Dataset for Computer Vision Algorithms [Dataset]. http://doi.org/10.17866/rd.salford.21220835.v2
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    University of Salford
    Authors
    Ali Alameer; Odunmolorun Osonuga
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset for this project consists of photos of individual human emotion expressions, taken with both a digital camera and a mobile phone camera from different angles, postures, backgrounds, light exposures, and distances. This task might look and sound very easy, but some challenges were encountered along the way, reviewed below:

    1) People constraint. One of the major challenges faced during this project was getting people to participate in the image capturing process, as school was on vacation, and other individuals around the environment were not willing to let their images be captured for personal and security reasons, even after the notion behind the project (mainly academic research) was explained. Due to this challenge, we resorted to capturing images of the researcher and just a few other willing individuals.

    2) Time constraint. As with all deep learning projects, the more data available, the more accurate and less error-prone the result will be. At the initial stage of the project, it was agreed to take 10 emotional expression photos each of at least 50 persons, with the option of increasing the number of photos for more accurate results; due to the time constraints of this project, it was later agreed to capture just the researcher and the few other people who were willing and available. For the same reason, photos were taken for just two types of human emotion expression: "happy" and "sad" faces. To expand this work further (as future work and recommendations), photos of other facial expressions such as anger, contempt, disgust, fright, and surprise can be included if time permits.

    3) The approved facial emotion captures. It was agreed to capture as many angles and postures as possible of just two facial emotions, with at least 10 emotional expression images per individual; due to the time and people constraints, only a few persons were captured, with as many postures as possible:

    Happy faces: 65 images
    Sad faces: 62 images

    There are many other types of facial emotions, and again, to expand the project in the future, the other types can be included if time permits and people are readily available.

    4) Expand further. This project can be improved in many ways; again, due to the time limitation, these improvements can be implemented later as future work. In simple words, this project is to detect/predict real-time human emotion, which involves creating a model that reports a percentage confidence for any happy or sad facial image. The higher the percentage confidence, the more accurate the classification of the face fed into the model.

    5) Other questions. Can the model be reproduced? The answer should be YES, if and only if the model is fed with the proper data (images), such as images of other types of emotional expression.

  3. Facial Expression Image Data AFFECTNET YOLO Format

    • gts.ai
    json
    Updated Mar 20, 2024
    Cite
    GTS (2024). Facial Expression Image Data AFFECTNET YOLO Format [Dataset]. https://gts.ai/dataset-download/facial-expression-image-data-affectnet-yolo-format/
    Available download formats: json
    Dataset updated
    Mar 20, 2024
    Dataset provided by
    GLOBOSE TECHNOLOGY SOLUTIONS PRIVATE LIMITED
    Authors
    GTS
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This AFFECTNET YOLO Format dataset is intended for use in facial expression detection as part of a YOLO project...
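    The listing does not spell out the label layout, but YOLO-format annotations conventionally store one `class x_center y_center width height` line per object, with coordinates normalized to the image size. A minimal sketch of decoding such a line; the class ID and image dimensions here are illustrative, not taken from this dataset:

    ```python
    def parse_yolo_label(line, img_w, img_h):
        """Convert one YOLO label line to (class_id, pixel-space box).

        Expects the conventional YOLO text format:
        "<class_id> <x_center> <y_center> <width> <height>",
        with the four coordinates normalized to [0, 1].
        """
        class_id, cx, cy, w, h = line.split()
        cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
        x1 = (cx - w / 2) * img_w
        y1 = (cy - h / 2) * img_h
        return int(class_id), (x1, y1, x1 + w * img_w, y1 + h * img_h)

    # Illustrative label: class 3, box centered in a 640x480 image.
    cls, box = parse_yolo_label("3 0.5 0.5 0.25 0.5", img_w=640, img_h=480)
    print(cls, box)  # 3 (240.0, 120.0, 400.0, 360.0)
    ```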

  4. Gene Expression Database

    • neuinfo.org
    • dknet.org
    Updated Sep 17, 2024
    Cite
    (2024). Gene Expression Database [Dataset]. http://identifiers.org/RRID:SCR_006539
    Dataset updated
    Sep 17, 2024
    Description

    Community database that collects and integrates gene expression information in MGI, with a primary emphasis on endogenous gene expression during mouse development. The data in GXD are obtained from the literature, from individual laboratories, and from large-scale data providers. All data are annotated and reviewed by GXD curators. GXD stores and integrates different types of expression data (RNA in situ hybridization; immunohistochemistry; in situ reporter (knock-in); RT-PCR; Northern and Western blots; and RNase and Nuclease S1 protection assays) and makes these data freely available in formats appropriate for comprehensive analysis. GXD also maintains an index of the literature examining gene expression in the embryonic mouse; it is comprehensive and up-to-date, containing all pertinent journal articles from 1993 to the present and articles from major developmental journals from 1990 to the present. GXD stores primary data from different types of expression assays, and by integrating these data as they accumulate, provides increasingly complete information about the expression profiles of transcripts and proteins in different mouse strains and mutants. GXD describes expression patterns using an extensive, hierarchically structured dictionary of anatomical terms, so that expression results from assays with differing spatial resolution are recorded in a standardized and integrated manner and expression patterns can be queried at different levels of detail. The records are complemented with digitized images of the original expression data. The Anatomical Dictionary for Mouse Development was developed by our Edinburgh colleagues as part of the joint Mouse Gene Expression Information Resource project. GXD places the gene expression data in a larger biological context by establishing and maintaining interconnections with many other resources: integration with MGD enables combined analysis of genotype, sequence, expression, and phenotype data, and links to PubMed, Online Mendelian Inheritance in Man (OMIM), sequence databases, and databases from other species further enhance GXD's utility. GXD accepts both published and unpublished data.

  5. Facial Recognition dataset (Human)

    • kaggle.com
    Updated Nov 24, 2023
    Cite
    Zawar Khan (2023). Facial Recognition dataset (Human) [Dataset]. https://www.kaggle.com/datasets/zawarkhan69/human-facial-expression-dataset
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Nov 24, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Zawar Khan
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset comprises images featuring diverse human faces annotated with emotions such as happiness, sadness, anger, neutral, and surprise. With a standard resolution, it is suitable for training and evaluating facial expression recognition models, and is publicly accessible.

  6. Data from: Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset...

    • dataverse.harvard.edu
    tsv, zip
    Updated Sep 11, 2020
    Cite
    Harvard Dataverse (2020). Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language [Dataset]. http://doi.org/10.7910/DVN/358QMQ
    Available download formats: zip (358564620), tsv (287767)
    Dataset updated
    Sep 11, 2020
    Dataset provided by
    Harvard Dataverse
    Description

    Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset, we consider primary, secondary, and tertiary dyads of seven basic emotions: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also consider a "None" class for images whose facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider application in gesture recognition and Human Computer Interaction (HCI) systems.

  7. IFEED: Interactive Facial Expression and Emotion Detection Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated May 26, 2023
    Cite
    Oliveira, Jorge (2023). IFEED: Interactive Facial Expression and Emotion Detection Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7963451
    Dataset updated
    May 26, 2023
    Dataset provided by
    Oliveira, Nuno
    Praça, Isabel
    Dias, Tiago
    Maia, Eva
    Oliveira, Jorge
    Vitorino, João
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Interactive Facial Expression and Emotion Detection (IFEED) is an annotated dataset that can be used to train, validate, and test Deep Learning models for facial expression and emotion recognition. It contains pre-filtered and analysed images of the interactions between the six main characters of the Friends television series, obtained from the video recordings of the Multimodal EmotionLines Dataset (MELD).

    The images were obtained by decomposing the videos into multiple frames and extracting the facial expression of the correctly identified characters. A team composed of 14 researchers manually verified and annotated the processed data into several classes: Angry, Sad, Happy, Fearful, Disgusted, Surprised and Neutral.

    IFEED can be valuable for the development of intelligent facial expression recognition solutions and emotion detection software, enabling binary or multi-class classification, or even anomaly detection or clustering tasks. The images with ambiguous or very subtle facial expressions can be repurposed for adversarial learning. The dataset can be combined with additional data recordings to create more complete and extensive datasets and improve the generalization of robust deep learning models.

  8. Expression in-the-Wild (ExpW) Dataset

    • kaggle.com
    Updated Jul 27, 2023
    Cite
    Shahzad Abbas (2023). Expression in-the-Wild (ExpW) Dataset [Dataset]. https://www.kaggle.com/datasets/shahzadabbas/expression-in-the-wild-expw-dataset/code
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Jul 27, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Shahzad Abbas
    Description

    Data Description

    The Expression in-the-Wild (ExpW) Dataset is a comprehensive and diverse collection of facial images carefully curated to capture spontaneous and unscripted facial expressions exhibited by individuals in real-world scenarios. This extensively annotated dataset serves as a valuable resource for advancing research in the fields of computer vision, facial expression analysis, affective computing, and human behavior understanding.

    Key Features:

    1. Real-world Expressions: The ExpW dataset stands apart from traditional lab-controlled datasets as it focuses on capturing facial expressions in real-life environments. This authenticity ensures that the dataset reflects the natural diversity of emotions experienced by individuals in everyday situations, making it highly relevant for real-world applications.

    2. Large and Diverse: Comprising a vast number of images, the ExpW dataset encompasses an extensive range of subjects, ethnicities, ages, and genders. This diversity allows researchers and developers to build more robust and inclusive models for facial expression recognition and emotion analysis.

    3. Annotated Emotions: Each facial image in the dataset is meticulously annotated with corresponding emotion labels, including but not limited to happiness, sadness, anger, surprise, fear, disgust, and neutral expressions. The emotion annotations provide ground truth data for training and validating machine learning algorithms.

    4. Various Pose and Illumination: To account for the varying challenges posed by real-life scenarios, the ExpW dataset includes images captured under different lighting conditions and poses. This variability helps researchers create algorithms that are robust to changes in illumination and head orientation.

    5. Privacy and Ethics: ExpW has been compiled adhering to strict privacy and ethical guidelines, ensuring the subjects' consent and data protection. The dataset maintains a high level of anonymity by excluding any personal information or sensitive details.

    This dataset has been downloaded from the following Public Directory... https://drive.google.com/drive/folders/1SDcI273EPKzzZCPSfYQs4alqjL01Kybq

    The dataset contains 91,793 faces manually labeled with expressions (Figure 1). Each face image is annotated as one of seven basic expression categories: "angry (0)", "disgust (1)", "fear (2)", "happy (3)", "sad (4)", "surprise (5)", or "neutral (6)".
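    The index-to-expression mapping quoted above can be captured directly when loading ExpW annotations. Only the mapping itself comes from the description; the helper function is illustrative:

    ```python
    # ExpW expression categories as stated in the dataset description:
    # numeric annotation -> expression name.
    EXPW_LABELS = {
        0: "angry",
        1: "disgust",
        2: "fear",
        3: "happy",
        4: "sad",
        5: "surprise",
        6: "neutral",
    }

    def decode_label(idx):
        """Map a numeric ExpW annotation to its expression name."""
        return EXPW_LABELS[idx]

    print(decode_label(3))  # happy
    ```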

  9. The Japanese Female Facial Expression (JAFFE) Dataset

    • data.niaid.nih.gov
    Updated Mar 5, 2025
    Cite
    Kamachi, Miyuki (2025). The Japanese Female Facial Expression (JAFFE) Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3451523
    Dataset updated
    Mar 5, 2025
    Dataset provided by
    Gyoba, Jiro
    Kamachi, Miyuki
    Lyons, Michael
    Description

    The JAFFE images may be used only for non-commercial scientific research.

    The source and background of the dataset must be acknowledged by citing the following two articles. Users should read both carefully.

    Michael J. Lyons, Miyuki Kamachi, Jiro Gyoba. Coding Facial Expressions with Gabor Wavelets (IVC Special Issue). arXiv:2009.05938 (2020). https://arxiv.org/pdf/2009.05938.pdf

    Michael J. Lyons. "Excavating AI" Re-excavated: Debunking a Fallacious Account of the JAFFE Dataset. arXiv:2107.13998 (2021). https://arxiv.org/abs/2107.13998

    The following is not allowed:

    Redistribution of the JAFFE dataset (incl. via Github, Kaggle, Colaboratory, GitCafe, CSDN etc.)

    Posting JAFFE images on the web and social media

    Public exhibition of JAFFE images in museums/galleries etc.

    Broadcast in the mass media (tv shows, films, etc.)

    A few sample images (not more than 10) may be displayed in scientific publications.

  10. Human Gene Expression Database Data Package

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). Human Gene Expression Database Data Package [Dataset]. https://www.johnsnowlabs.com/marketplace/human-gene-expression-database-data-package/
    Available download formats: csv
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Description

    This data package contains expression profiles for proteins in normal and cancer tissues, as well as data on sequence-based RNA levels in human tissues and cell lines.

  11. Data from: Sharing and reusing gene expression profiling data in...

    • portal.conp.ca
    • borealisdata.ca
    Updated Mar 6, 2020
    Cite
    Xian Wan; Tomi Pastinen; Paul Pavlidis (2020). Sharing and reusing gene expression profiling data in neuroscience [Dataset]. https://portal.conp.ca/dataset?id=projects/Reusing-Neuro-Data
    Dataset updated
    Mar 6, 2020
    Dataset provided by
    Department of Psychiatry and Bioinformatics Centre, UBC
    McGill University
    Authors
    Xian Wan; Tomi Pastinen; Paul Pavlidis
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    As public availability of gene expression profiling data increases, it is natural to ask how these data can be used by neuroscientists. Here we review the public availability of high-throughput expression data in neuroscience and how it has been reused, and tools that have been developed to facilitate reuse. There is increasing interest in making expression data reuse a routine part of the neuroscience tool-kit, but there are a number of challenges. Data must become more readily available in public databases; efforts to encourage investigators to make data available are important, as is education on the benefits of public data release. Once released, data must be better-annotated. Techniques and tools for data reuse are also in need of improvement. Integration of expression profiling data with neuroscience-specific resources such as anatomical atlases will further increase the value of expression data. (2018-02)

  12. Facial Expression Dataset (Sri Lankan)

    • ieee-dataport.org
    Updated Sep 26, 2024
    Cite
    Amod Pathirana (2024). Facial Expression Dataset (Sri Lankan) [Dataset]. https://ieee-dataport.org/documents/facial-expression-dataset-sri-lankan
    Dataset updated
    Sep 26, 2024
    Authors
    Amod Pathirana
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sri Lanka
    Description

    often based on foreign samples

  13. Data from: Gene Expression Omnibus (GEO)

    • catalog.data.gov
    • data.virginia.gov
    Updated Jul 26, 2023
    Cite
    National Institutes of Health (NIH) (2023). Gene Expression Omnibus (GEO) [Dataset]. https://catalog.data.gov/dataset/gene-expression-omnibus-geo
    Dataset updated
    Jul 26, 2023
    Dataset provided by
    National Institutes of Health (NIH)
    Description

    Gene Expression Omnibus is a public functional genomics data repository supporting MIAME-compliant submissions of array- and sequence-based data. Tools are provided to help users query and download experiments and curated gene expression profiles.

  14. expression data.csv

    • figshare.com
    txt
    Updated Jan 30, 2022
    Cite
    Jihan Wang (2022). expression data.csv [Dataset]. http://doi.org/10.6084/m9.figshare.19093307.v1
    Available download formats: txt
    Dataset updated
    Jan 30, 2022
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jihan Wang
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In this research, we proposed the SNR-PPFS feature selection algorithm to identify key gene signatures for distinguishing COAD tumor samples from normal colon tissues. Using machine learning-based feature selection to select key gene signatures from high-dimensional datasets can be an effective way to study cancer genomic characteristics.

  15. GTEx (Genotype-Tissue Expression) data normalized

    • data.4tu.nl
    • figshare.com
    zip
    Updated Oct 26, 2015
    Cite
    Erdogan Taskesen, GTEx (Genotype-Tissue Expression) data normalized [Dataset]. http://doi.org/10.4121/uuid:ec5bfa66-5531-482a-904f-b693aa999e8b
    Available download formats: zip
    Dataset updated
    Oct 26, 2015
    Dataset provided by
    TU Delft
    Authors
    Erdogan Taskesen
    License

    https://doi.org/10.4121/resource:terms_of_use

    Description

    This is a normalized dataset derived from the original RNA-seq dataset downloaded from the Genotype-Tissue Expression (GTEx) project (www.gtexportal.org): RNA-SeQCv1.1.8 gene rpkm Pilot V3 patch1. The data was used to analyze how tissue samples relate to each other in terms of gene expression, and can be used to gain insight into how gene expression levels behave across different human tissues.

  16. Gene expression data

    • figshare.com
    txt
    Updated Jan 5, 2021
    Cite
    Mark Izraelson (2021). Gene expression data [Dataset]. http://doi.org/10.6084/m9.figshare.13522748.v1
    Available download formats: txt
    Dataset updated
    Jan 5, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Mark Izraelson
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Gene expression data

  17. 1,323 Drivers - 7 Expression Recognition Data

    • m.nexdata.ai
    Updated Oct 14, 2023
    Cite
    Nexdata (2023). 1,323 Drivers - 7 Expression Recognition Data [Dataset]. https://m.nexdata.ai/datasets/computervision/1293
    Dataset updated
    Oct 14, 2023
    Dataset authored and provided by
    Nexdata
    Variables measured
    Device, Accuracy, Data size, Data Format, Vehicle Type, Data diversity, Collecting time, Shooting position, Collecting environment, Population distribution
    Description

    Seven-expression facial recognition data from 1,323 drivers, covering multiple ages, time periods, and expressions. For acquisition equipment, visible-light and infrared binocular cameras were used. This set of driver expression recognition data can be used for driver expression recognition analysis and related tasks.

  18. Multimodel Pathway Enrichment Methods for Functional Evaluation of...

    • acs.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Ufuk Kirik; Paolo Cifani; Ann-Sofie Albrekt; Malin Lindstedt; Anders Heyden; Fredrik Levander (2023). Multimodel Pathway Enrichment Methods for Functional Evaluation of Expression Regulation [Dataset]. http://doi.org/10.1021/pr300038b.s007
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    ACS Publications
    Authors
    Ufuk Kirik; Paolo Cifani; Ann-Sofie Albrekt; Malin Lindstedt; Anders Heyden; Fredrik Levander
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Functional analysis of quantitative expression data is becoming common practice within the proteomics and transcriptomics fields; however, a gold standard for this type of analysis has not yet emerged. To grasp the systemic changes in biological systems, efficient and robust methods are needed for data analysis following expression regulation experiments. We discuss several conceptual and practical challenges potentially hindering the emergence of such methods and present a novel method, called FEvER, that utilizes two enrichment models in parallel. We also present analyses of three disparate differential expression data sets using our method and compare our results to other established methods. With many useful features, such as a pathway hierarchy overview, we believe the FEvER method and its software implementation will provide a useful tool for peers in the field of proteomics. Furthermore, we show that the method is also applicable to other types of expression data.

  19. Expression Data from International C.elegans Experiment 1st

    • catalog.data.gov
    • omicsdi.org
    Updated Apr 24, 2025
    Cite
    National Aeronautics and Space Administration (2025). Expression Data from International C.elegans Experiment 1st [Dataset]. https://catalog.data.gov/dataset/expression-data-from-international-c-elegans-experiment-1st-578fb
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    The effect of microgravity on gene expression in C. elegans was comprehensively analysed by DNA microarray. This is the first DNA microarray analysis of C. elegans grown under microgravity. Hypergravity and clinorotation experiments were performed as references against the flight experiment.

  20. The Singapore Nanopore Expression Data Set

    • registry.opendata.aws
    Updated Aug 14, 2022
    Cite
    The Genome Institute of Singapore (https://www.a-star.edu.sg/gis) (2022). The Singapore Nanopore Expression Data Set [Dataset]. https://registry.opendata.aws/sgnex/
    Dataset updated
    Aug 14, 2022
    Dataset provided by
    The Genome Institute of Singapore (https://www.a-star.edu.sg/gis)
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Area covered
    Singapore
    Description

    The Singapore Nanopore Expression (SG-NEx) project is an international collaboration to generate reference transcriptomes and a comprehensive benchmark data set for long-read Nanopore RNA-Seq. Transcriptome profiling is done using PCR-cDNA sequencing (PCR-cDNA), amplification-free cDNA sequencing (direct cDNA), direct sequencing of native RNA (direct RNA), and short-read RNA-Seq. The SG-NEx core data includes five of the most commonly used cell lines and is extended with additional cell lines and samples that cover a broad range of human tissues. All core samples are sequenced with at least three high-quality replicates. For a subset of samples, spike-in RNAs are used and matched m6A profiling data is available.
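The replicate guarantee above ("at least three high-quality replicates" per core sample) is the kind of property worth checking programmatically when assembling a benchmark. A minimal sketch over a hypothetical sample manifest (the cell lines and protocol labels follow the description above, but the rows are illustrative; real file listings come from the project's data registry entry):

```python
from collections import defaultdict

# Hypothetical (cell line, sequencing protocol, replicate id) rows.
samples = [
    ("A549", "directRNA", 1), ("A549", "directRNA", 2), ("A549", "directRNA", 3),
    ("HepG2", "PCR-cDNA", 1), ("HepG2", "PCR-cDNA", 2), ("HepG2", "PCR-cDNA", 3),
    ("K562", "directcDNA", 1), ("K562", "directcDNA", 2),
]

def replicate_counts(rows):
    """Count replicates per (cell line, protocol) group."""
    counts = defaultdict(int)
    for cell_line, protocol, _rep in rows:
        counts[(cell_line, protocol)] += 1
    return dict(counts)

counts = replicate_counts(samples)
under = {k: v for k, v in counts.items() if v < 3}
print("all groups have >= 3 replicates" if not under
      else f"under-replicated groups: {under}")
```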

FutureBee AI (2022). East Asian Facial Expression Images Dataset [Dataset]. https://www.futurebeeai.com/dataset/image-dataset/facial-images-expression-east-asia

East Asian Facial Expression Images Dataset

Emotion Detection Image Dataset

Explore at:
Available download formats: wav
Dataset updated
Aug 1, 2022
Dataset provided by
FutureBeeAI
Authors
FutureBee AI
License

https://www.futurebeeai.com/policies/ai-data-license-agreement

Area covered
East Asia
Dataset funded by
FutureBeeAI
Description

Introduction

Welcome to the East Asian Facial Expression Image Dataset, meticulously curated to enhance expression recognition models and support the development of advanced biometric identification systems, KYC models, and other facial recognition technologies.

Facial Expression Data

This dataset comprises over 2000 facial expression images, divided into participant-wise sets with each set including:

Expression Images: 5 different high-quality images per individual, each capturing a distinct facial expression: Happy, Sad, Angry, Shocked, or Neutral.
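Because each participant set is expected to contain exactly these five expressions, a completeness check is a natural first validation step after download. A minimal sketch, assuming a hypothetical `<participant>_<expression>.<ext>` file-naming pattern (the actual naming scheme is defined by the dataset's metadata files):

```python
EXPRESSIONS = {"happy", "sad", "angry", "shocked", "neutral"}

# Hypothetical file names; P001 is complete, P002 is not.
files = [
    "P001_happy.jpg", "P001_sad.jpg", "P001_angry.jpg",
    "P001_shocked.jpg", "P001_neutral.jpg",
    "P002_happy.jpg", "P002_neutral.jpg",
]

def incomplete_sets(names):
    """Return participants missing any of the five expressions."""
    seen = {}
    for name in names:
        pid, expr = name.rsplit(".", 1)[0].split("_", 1)
        seen.setdefault(pid, set()).add(expr)
    return {p: sorted(EXPRESSIONS - e)
            for p, e in seen.items() if e != EXPRESSIONS}

print(incomplete_sets(files))
```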

Diversity and Representation

The dataset includes contributions from a diverse network of individuals across East Asian countries:

Geographical Representation: Participants from East Asian countries, including China, Japan, the Philippines, Malaysia, Singapore, Thailand, Vietnam, Indonesia, and more.
Participant Profile: Participants range from 18 to 70 years old, with a 60:40 male-to-female ratio.
File Format: The dataset contains images in JPEG and HEIC file formats.

Quality and Conditions

To ensure high utility and robustness, all images are captured under varying conditions:

Lighting Conditions: Images are taken in different lighting environments to ensure variability and realism.
Backgrounds: A variety of backgrounds are available to enhance model generalization.
Device Quality: Photos are taken using the latest mobile devices to ensure high resolution and clarity.

Metadata

Each facial expression image set is accompanied by detailed metadata for each participant, including:

Participant Identifier
File Name
Age
Gender
Country
Expression
Demographic Information
File Format

This metadata is essential for training models that can accurately recognize and identify expressions across different demographics and conditions.
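The per-participant metadata fields listed above lend themselves to simple record types for filtering training subsets by demographics or conditions. A minimal sketch; the field names mirror the list above but are assumptions about the delivered schema, and the records are fabricated examples:

```python
from dataclasses import dataclass

@dataclass
class ImageMeta:
    """One image's metadata, mirroring the fields listed above."""
    participant_id: str
    file_name: str
    age: int
    gender: str
    country: str
    expression: str
    file_format: str

# Illustrative records, not actual dataset entries.
records = [
    ImageMeta("P001", "P001_happy.jpg", 34, "female", "Japan", "happy", "JPEG"),
    ImageMeta("P002", "P002_sad.heic", 52, "male", "China", "sad", "HEIC"),
]

def subset(rows, **criteria):
    """Filter records by exact match on any metadata field."""
    return [r for r in rows
            if all(getattr(r, k) == v for k, v in criteria.items())]

print(len(subset(records, gender="male")))
```

Such filters make it straightforward to audit demographic balance (for instance, the stated 60:40 gender ratio) before training.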

Usage and Applications

This facial emotion dataset is ideal for various applications in the field of computer vision, including but not limited to:

Expression Recognition Models: Improving the accuracy and reliability of facial expression recognition systems.
KYC Models: Streamlining the identity verification processes for financial and other services.
Biometric Identity Systems: Developing robust biometric identification solutions.
Generative AI Models: Training generative AI models to create realistic and diverse synthetic facial images.

Secure and Ethical Collection

Data Security: Data was securely stored and processed within our platform, ensuring data security and confidentiality.
Ethical Guidelines: The biometric data collection process adhered to strict ethical guidelines, ensuring the privacy and consent of all participants.
Participant Consent: All participants were informed of the purpose of collection and potential use of the data, as agreed through written consent.

Updates and Customization

We understand the evolving nature of AI and machine learning requirements. Therefore, we continuously add more assets with diverse conditions to this off-the-shelf facial expression dataset.

Customization & Custom
