33 datasets found
1. face-re-identification-image-dataset

    • huggingface.co
    Updated Mar 30, 2025
    Cite
    UniData (2025). face-re-identification-image-dataset [Dataset]. https://huggingface.co/datasets/UniDataPro/face-re-identification-image-dataset
    Explore at:
    Dataset updated
    Mar 30, 2025
    Authors
    UniData
    License

Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Dataset of face images with different angles and head positions

Dataset contains 23,110 individuals, each contributing 28 images featuring various angles and head positions, diverse backgrounds, and attributes, along with 1 ID photo. In total, the dataset comprises over 670,000 images in formats such as JPG and PNG. It is designed to advance face recognition research, with a focus on person re-identification and recognition systems. By utilizing this dataset… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/face-re-identification-image-dataset.
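A quick arithmetic check of the stated totals (assuming each individual contributes exactly 28 pose/angle images plus the 1 ID photo):

```python
# Consistency check of the dataset's stated size.
individuals = 23_110
images_per_person = 28 + 1          # 28 pose/angle images plus 1 ID photo
total_images = individuals * images_per_person
print(total_images)                 # 670,190 -> consistent with "over 670,000 images"
```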

2. Face Key Point Detection Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated May 15, 2025
    Cite
    Data Insights Market (2025). Face Key Point Detection Report [Dataset]. https://www.datainsightsmarket.com/reports/face-key-point-detection-531368
    Explore at:
Available download formats: pdf, doc, ppt
    Dataset updated
    May 15, 2025
    Dataset authored and provided by
    Data Insights Market
    License

https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

The Face Key Point Detection market is experiencing robust growth, driven by the increasing adoption of facial recognition technologies across diverse sectors. This surge is fueled by advancements in deep learning algorithms, which have improved the accuracy and efficiency of detecting key facial features. Applications span a wide range, from security and surveillance systems that use face recognition for authentication and identification, to the burgeoning field of emotion AI, which uses expression recognition for personalized user experiences. Head pose recognition further extends these capabilities, enabling more natural and intuitive human-computer interaction. The market is segmented by application (Face Recognition, Expression Recognition, Head Pose Recognition, Others) and by method (Holistic Approach, Constrained Local Model (CLM) Method, Regression-Based Methods). The holistic approach, which analyzes the entire face, is currently dominant, although CLM and regression-based methods are gaining traction due to their computational efficiency. Major players such as ULUCU, Roboflow, Oosto, and MathWorks are driving innovation and market penetration, while platforms like GitHub and Kaggle facilitate community development and resource sharing. Geographically, North America and Europe currently lead the market due to higher technological adoption and infrastructure, while the Asia-Pacific region is poised for significant expansion, fueled by rapid technological advancements and growing demand in sectors like security and consumer electronics.

The market's continued growth trajectory is projected to be influenced by several factors. The increasing availability of large, high-quality facial datasets for training advanced algorithms will enhance accuracy and reliability. Furthermore, the integration of face key point detection with technologies such as augmented reality (AR) and virtual reality (VR) will unlock new applications in entertainment, healthcare, and retail. Challenges remain, however, including concerns about data privacy and the ethical considerations surrounding facial recognition technology; addressing them through robust regulatory frameworks and responsible development practices will be crucial for sustainable and ethical growth. Assuming a conservative CAGR of 15% (a reasonable estimate given the rapid technological advancements in this space), the market, currently estimated at around $1.5 billion in 2025, is likely to exceed $4 billion by 2033.
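As a quick check of the projection quoted above, a short sketch of the compound-growth arithmetic (the $1.5 billion 2025 base and the 15% CAGR are the report's own estimates, not additional data):

```python
# Compound-growth check of the market projection quoted above.
base_2025 = 1.5               # estimated 2025 market size, billions of USD
cagr = 0.15                   # assumed compound annual growth rate
years = 2033 - 2025           # forecast horizon

projected_2033 = base_2025 * (1 + cagr) ** years
print(f"Projected 2033 market size: ${projected_2033:.2f}B")  # ~= $4.59B, i.e. above $4B
```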

3. HFRD: Human Face Recognition Datasets on COVID-2019 pandemic stage (v2020)

    • data.mendeley.com
    • narcis.nl
    Updated Sep 14, 2020
    + more versions
    Cite
    Ra'ed M. Al-Khatib (2020). HFRD: Human Face Recognition Datasets on COVID-2019 pandemic stage (v2020) [Dataset]. http://doi.org/10.17632/v992cb6bw7.1
    Explore at:
    Dataset updated
    Sep 14, 2020
    Authors
    Ra'ed M. Al-Khatib
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The HFRD (v2020) dataset includes 4,835 images of masked human faces. The faces were originally taken from three publicly available datasets: MUCT, FASSEG, and AT&T. The collected subsets contain standard images previously used in facial recognition research, to which masks were added covering the nose, mouth, and chin, leaving only the eyes, forehead, and hair visible. In this way, new standard images are introduced for the partial facial recognition (PFR) domain. The HFRD store inherits the structure of each original dataset, so there are three main subsets: MaskedMUCT contains five folders, MaskedFASSEG contains four folders, and MaskedAT&T is organized into 40 folders. The HFRD masked face datasets thus inherit not only the structure of the original datasets but also their variety. All folders consist of facial images surrounded and covered by masks of various shapes. The enhanced HFRD datasets were developed in response to the COVID-2019 pandemic and can be used to test face recognition (FR) algorithms for human identification. The results obtained from the enhanced HFRD data can also provide additional knowledge for Artificial Intelligence (AI) tools and decision support systems for predicting the spread of COVID-2019.
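For readers loading the data, a minimal sketch of walking the HFRD store as organized above (the root directory name is hypothetical; the subset names are those given in the description):

```python
from pathlib import Path

# Enumerate the three HFRD subsets described above.
# "HFRD_v2020" is a placeholder for the local extraction directory.
root = Path("HFRD_v2020")
for subset in ["MaskedMUCT", "MaskedFASSEG", "MaskedAT&T"]:
    folders = sorted(p for p in (root / subset).iterdir() if p.is_dir())
    images = [f for folder in folders for f in folder.iterdir() if f.is_file()]
    print(f"{subset}: {len(folders)} folders, {len(images)} masked face images")
```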

  4. RAVDESS Facial Landmark Tracking

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Apr 24, 2025
    Cite
Riley Swanson; Steven R. Livingstone; Frank A. Russo (2025). RAVDESS Facial Landmark Tracking [Dataset]. http://doi.org/10.5281/zenodo.3255102
    Explore at:
Available download formats: zip
    Dataset updated
    Apr 24, 2025
    Dataset provided by
Zenodo: http://zenodo.org/
    Authors
Riley Swanson; Steven R. Livingstone; Frank A. Russo
    License

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Contact Information

    If you would like further information about the RAVDESS Facial Landmark Tracking data set, or if you experience any issues downloading files, please contact us at ravdess@gmail.com.

    Tracking Examples

    Watch a sample of the facial tracking results.

    Description

The RAVDESS Facial Landmark Tracking data set contains tracked facial landmark movements from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [RAVDESS Zenodo page]. Motion tracking of actors' faces was produced by OpenFace 2.1.0 (Baltrusaitis, T., Zadeh, A., Lim, Y. C., & Morency, L. P., 2018). Tracked information includes: facial landmark detection, head pose estimation, facial action unit recognition, and eye-gaze estimation.

    The Facial Landmark Tracking dataset was created in the Affective Data Science Lab.

This data set contains tracking for all 2452 RAVDESS trials. All tracking movement data are provided in "FacialTracking_Actors_01-24.zip", which contains 2452 .CSV files. Each actor has 104 tracked trials (60 speech, 44 song); note that there are no song files for Actor 18.

    Total Tracked Files = (24 Actors x 60 Speech trials) + (23 Actors x 44 Song trials) = 2452 files.

    Tracking results for each trial are provided as individual comma separated value files (CSV format). File naming convention of tracked files is identical to that of the RAVDESS. For example, tracked file "01-01-01-01-01-01-01.csv" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4". For a complete description of the RAVDESS file naming convention and experimental manipulations, please see the RAVDESS Zenodo page.

    Tracking overlay videos for all trials are also provided (720p Xvid, .avi), one zip file per Actor. As the RAVDESS does not contain "ground truth" facial landmark locations, the overlay videos provide a visual 'sanity check' for researchers to confirm the general accuracy of the tracking results. The file naming convention of tracking overlay videos also matches that of the RAVDESS. For example, tracking video "01-01-01-01-01-01-01.avi" corresponds to RAVDESS audio-video file "01-01-01-01-01-01-01.mp4".

    Tracking File Output Format

    This data set retained OpenFace's data output format, described here in detail. The resolution of all input videos was 1280x720. When tracking output units are in pixels, their range of values is (0,0) (top left corner) to (1280,720) (bottom right corner).

    Columns 1-3 = Timing and Detection Confidence

    • 1. Frame - The number of the frame (source videos 30 fps), range = 1 to n
    • 2. Timestamp - Time of frame, range = 0 to m
    • 3. Confidence - Tracker confidence level in current landmark detection estimate, range = 0 to 1

    Columns 4-291 = Eye Gaze Detection

    • 4-6. gaze_0_x, gaze_0_y, gaze_0_z - Eye gaze direction vector in world coordinates for eye 0 (normalized), eye 0 is the leftmost eye in the image (think of it as a ray going from the left eye in the image in the direction of the eye gaze).
    • 7-9. gaze_1_x, gaze_1_y, gaze_1_z - Eye gaze direction vector in world coordinates for eye 1 (normalized), eye 1 is the rightmost eye in the image (think of it as a ray going from the right eye in the image in the direction of the eye gaze).
• 10-11. gaze_angle_x, gaze_angle_y - Eye gaze direction in radians in world coordinates, averaged for both eyes. Looking left to right changes gaze_angle_x (from positive to negative); looking up and down changes gaze_angle_y (from negative to positive); when a person looks straight ahead, both angles are close to 0 (within measurement error).
• 12-123. eye_lmk_x_0, ..., eye_lmk_x_55, eye_lmk_y_0, ..., eye_lmk_y_55 - Location of 2D eye region landmarks in pixels. A figure describing the landmark index can be found here.
• 124-291. eye_lmk_X_0, ..., eye_lmk_X_55, eye_lmk_Y_0, ..., eye_lmk_Y_55, eye_lmk_Z_0, ..., eye_lmk_Z_55 - Location of 3D eye region landmarks in millimeters. A figure describing the landmark index can be found here.

    Columns 292-297 = Head pose

    • 292-294. pose_Tx, pose_Ty, pose_Tz - Location of the head with respect to camera in millimeters (positive Z is away from the camera).
    • 295-297. pose_Rx, pose_Ry, pose_Rz - Rotation of the head in radians around X,Y,Z axes with the convention R = Rx * Ry * Rz, left-handed positive sign. This can be seen as pitch (Rx), yaw (Ry), and roll (Rz). The rotation is in world coordinates with the camera being located at the origin.

    Columns 298-433 = Facial Landmarks locations in 2D

    • 298-433. x_0, ..., x_67, y_0,...y_67 - Location of 2D landmarks in pixels. A figure describing the landmark index can be found here.

    Columns 434-637 = Facial Landmarks locations in 3D

    • 434-637. X_0, ..., X_67, Y_0,..., Y_67, Z_0,..., Z_67 - Location of 3D landmarks in millimetres. A figure describing the landmark index can be found here. For these values to be accurate, OpenFace needs to have good estimates for fx,fy,cx,cy.

    Columns 638-677 = Rigid and non-rigid shape parameters

Parameters of a point distribution model (PDM) that describe the rigid face shape (location, scale and rotation) and non-rigid face shape (deformation due to expression and identity). For more details, please refer to chapter 4.2 of Tadas Baltrusaitis's PhD thesis [download link].

    • 638-643. p_scale, p_rx, p_ry, p_rz, p_tx, p_ty - Scale, rotation, and translation terms of the PDM.
    • 644-677. p_0, ..., p_33 - Non-rigid shape parameters.

Columns 678-712 = Facial Action Units

    Facial Action Units (AUs) are a way to describe human facial movements (Ekman, Friesen, and Hager, 2002) [wiki link]. More information on OpenFace's implementation of AUs can be found here.

    • 678-694. AU01_r, AU02_r, AU04_r, AU05_r, AU06_r, AU07_r, AU09_r, AU10_r, AU12_r, AU14_r, AU15_r, AU17_r, AU20_r, AU23_r, AU25_r, AU26_r, AU45_r - Intensity of AU movement, range from 0 (no muscle contraction) to 5 (maximal muscle contraction).
    • 695-712. AU01_c, AU02_c, AU04_c, AU05_c, AU06_c, AU07_c, AU09_c, AU10_c, AU12_c, AU14_c, AU15_c, AU17_c, AU20_c, AU23_c, AU25_c, AU26_c, AU28_c, AU45_c - Presence or absence of 18 AUs, range 0 (absent, not detected) to 1 (present, detected).

Note that OpenFace's columns 2 and 5 (face_id and success, respectively) were not included in this data set; these values were redundant, as a single face was detected in every frame of all 2452 trials.
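For readers working with these files, a minimal sketch of loading one tracking CSV with pandas and selecting a few of the column groups listed above (the file path is hypothetical, and the column names follow the OpenFace convention quoted in this description):

```python
import pandas as pd

# Load one tracked trial (hypothetical path; file naming follows the RAVDESS convention).
df = pd.read_csv("FacialTracking_Actors_01-24/01-01-01-01-01-01-01.csv")
df.columns = df.columns.str.strip()  # strip any header padding, just in case

# Columns 10-11: averaged eye-gaze direction in radians (world coordinates).
gaze = df[["gaze_angle_x", "gaze_angle_y"]]

# Columns 292-297: head pose -- translation in millimetres, rotation in radians.
head_pose = df[["pose_Tx", "pose_Ty", "pose_Tz", "pose_Rx", "pose_Ry", "pose_Rz"]]

# Columns 678-694: action-unit intensities (0 = no contraction, 5 = maximal).
au_intensity = df[[c for c in df.columns if c.startswith("AU") and c.endswith("_r")]]

print(gaze.describe())
print(head_pose.head())
print(au_intensity.mean())
```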

    Tracking Overlay Videos

    Tracking overlay videos visualize most aspects of the tracking output described above.

    • Frame - Column 1, Top left corner of video
    • Eye Gaze - Columns 4-11. Indicated by green ray emanating from left and right eyes.
    • Eye region landmarks 2D - Columns 12-123. Red landmarks around left and right eyes, and black circles surrounding left and right irises.
    • Head pose - Columns 292-297. Blue bounding box surrounding the actor's head.
    • Facial landmarks 2D - Columns 298-433. Red landmarks on the participant's left and right eyebrows, nose, lips, and jaw.
• Facial Action Unit Intensity - Columns 678-694. All 17 AUs are listed on the left side of the video in black text. The intensity level (0-5) of each AU is indicated by the numeric value and a blue bar.
    • Facial Action Unit Presence - Columns 695-712. All 18 AUs are listed on the right side of the video in black & green text. Absence of an AU (0) is in black text with the numeric value 0.0. Presence of an AU (1) is in green text with the numeric value 1.0.

    Camera Parameters and 3D Calibration Procedure

    This data set contains accurate estimates of actors' 3D head poses. To produce these, camera parameters at the time of recording were required (distance from camera to actor, and camera field of view). These values were used with OpenCV's camera calibration procedure, described here, to produce estimates of the camera's focal length and optical center at the time of actor recordings. The four values produced by the calibration procedure (fx,fy,cx,cy) were input to OpenFace as command line arguments during facial tracking, described here, to produce accurate estimates of 3D head pose.
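A minimal sketch of how such intrinsics can be approximated under a simple pinhole-camera model (the field-of-view value below is a placeholder, not the actual recording parameter; the dataset authors used OpenCV's calibration procedure instead):

```python
import math

# Approximate camera intrinsics from resolution and horizontal field of view,
# assuming a pinhole model with square pixels and a centred optical axis.
width, height = 1280, 720        # resolution of the source videos
horizontal_fov_deg = 60.0        # hypothetical field of view; not the real recording value

fx = (width / 2) / math.tan(math.radians(horizontal_fov_deg) / 2)
fy = fx                          # square pixels assumed
cx, cy = width / 2, height / 2   # optical centre assumed at the image centre

print(f"fx={fx:.1f} fy={fy:.1f} cx={cx:.1f} cy={cy:.1f}")
# These values would then be passed to OpenFace on the command line, e.g.:
#   FeatureExtraction -f 01-01-01-01-01-01-01.mp4 -fx <fx> -fy <fy> -cx <cx> -cy <cy>
```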

    Camera

5. Isi na olu Semantic Nkebi Dataset

    • ig.shaip.com
    json
    Updated Dec 1, 2024
    Cite
    Shaip (2024). Isi na olu Semantic Nkebi Dataset [Dataset]. https://ig.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 1, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

6. Head and Neck Semantic Segmentation Dataset

    • tr.shaip.com
    • maadaa.ai
    json
    Updated Aug 22, 2024
    Cite
    Shaip (2024). Head and Neck Semantic Segmentation Dataset [Dataset]. https://tr.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Aug 22, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

7. Dataset Segmentation Semantic Loha sy Tenda

    • mg.shaip.com
    json
    Updated Jan 4, 2025
    Cite
    Shaip (2025). Dataset Segmentation Semantic Loha sy Tenda [Dataset]. https://mg.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Jan 4, 2025
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

8. Distribution of data in the FERPlus dataset.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Dandan Song; Chao Liu (2025). Distribution of data in the FERPlus dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0312359.t002
    Explore at:
Available download formats: xls
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Dandan Song; Chao Liu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

Facial expression recognition faces great challenges due to factors such as face similarity, image quality, and age variation. Although existing end-to-end Convolutional Neural Network (CNN) architectures achieve good classification results on facial expression recognition tasks, they share a common drawback: a convolutional kernel can only compute correlations between elements of a localized region when extracting expression features from an image, which makes it difficult for the network to explore the relationships among all the elements that make up a complete expression. In response, this article proposes a facial expression recognition network called HFE-Net. To capture both the subtle changes in expression features and the overall facial expression information, HFE-Net introduces a Hybrid Feature Extraction Block, which consists of a Feature Fusion Device and Multi-head Self-attention operating in parallel. The Feature Fusion Device not only extracts local information from expression features but also measures correlations between distant elements, helping the network focus on the target region while enabling information interaction between distant features. The Multi-head Self-attention computes correlations among all elements in the feature map, helping the network extract the overall information of the expression features. Extensive experiments on four publicly available facial expression datasets verify that the Hybrid Feature Extraction Block improves the network's ability to recognize facial expressions.
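To make the parallel design concrete, here is a minimal, illustrative sketch of such a hybrid block (a reconstruction from the description above, not the authors' HFE-Net code; the channel count, the convolutional form assumed for the Feature Fusion Device, and the residual combination are assumptions):

```python
import torch
import torch.nn as nn

class HybridFeatureExtractionBlock(nn.Module):
    """Illustrative sketch: parallel local-fusion and global-attention branches."""
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Assumed convolutional "Feature Fusion Device" branch: extracts local detail,
        # with a dilated convolution mixing in information from more distant elements.
        self.fusion = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=3, padding=3, dilation=3),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Multi-head self-attention branch: relates all spatial positions to one another.
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.fusion(x)
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        global_, _ = self.attn(tokens, tokens, tokens)   # attention over all positions
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_                       # combine branches residually

# Usage: features = HybridFeatureExtractionBlock(64)(torch.randn(2, 64, 28, 28))
```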

9. Isethi Yedatha Yengxenye Yekhanda Nentamo

    • zu.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Isethi Yedatha Yengxenye Yekhanda Nentamo [Dataset]. https://zu.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

10. د سر او غاړې سیمنټیک سیګمینټیشن ډیټاسیټ

    • ps.shaip.com
    json
    Updated Dec 8, 2024
    Cite
    Shaip (2024). د سر او غاړې سیمنټیک سیګمینټیشن ډیټاسیټ [Dataset]. https://ps.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 8, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

11. Comparison of experimental results on the FERPlus dataset.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Dandan Song; Chao Liu (2025). Comparison of experimental results on the FERPlus dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0312359.t007
    Explore at:
Available download formats: xls
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Dandan Song; Chao Liu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of experimental results on the FERPlus dataset.

12. Comparison of experimental results on the RAF-DB dataset.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Dandan Song; Chao Liu (2025). Comparison of experimental results on the RAF-DB dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0312359.t008
    Explore at:
Available download formats: xls
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Dandan Song; Chao Liu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of experimental results on the RAF-DB dataset.

13. هيڊ ۽ نيڪ سيمينٽڪ سيگمينٽيشن ڊيٽا سيٽ

    • sd.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). هيڊ ۽ نيڪ سيمينٽڪ سيگمينٽيشن ڊيٽا سيٽ [Dataset]. https://sd.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

14. Sirah jeung beuheung Segmentasi Semantis Dataset

    • su.shaip.com
    json
    Updated Dec 2, 2024
    Cite
    Shaip (2024). Sirah jeung beuheung Segmentasi Semantis Dataset [Dataset]. https://su.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 2, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

15. Multilayer perceptron, Multi-head Self-attention and Feature Fusion Device ablation experiments on four FER datasets.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Dandan Song; Chao Liu (2025). Multilayer perceptron, Multi-head Self-attention and Feature Fusion Device ablation experiments on four FER datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0312359.t005
    Explore at:
Available download formats: xls
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Dandan Song; Chao Liu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Multilayer perceptron, Multi-head Self-attention and Feature Fusion Device ablation experiments on four FER datasets.

16. Fej és nyak szemantikai szegmentációs adatkészlete

    • hu.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Fej és nyak szemantikai szegmentációs adatkészlete [Dataset]. https://hu.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

17. Comparison of experimental results on the Affectnet dataset.

    • plos.figshare.com
    xls
    Updated Jan 16, 2025
    Cite
    Dandan Song; Chao Liu (2025). Comparison of experimental results on the Affectnet dataset. [Dataset]. http://doi.org/10.1371/journal.pone.0312359.t009
    Explore at:
Available download formats: xls
    Dataset updated
    Jan 16, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Dandan Song; Chao Liu
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparison of experimental results on the Affectnet dataset.

18. Bosh va bo'yinning semantik segmentatsiyasi ma'lumotlar to'plami

    • uz.shaip.com
    json
    Updated Dec 7, 2024
    Cite
    Shaip (2024). Bosh va bo'yinning semantik segmentatsiyasi ma'lumotlar to'plami [Dataset]. https://uz.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 7, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

19. Caput et Collum Semantic Segmentation Dataset

    • la.shaip.com
    json
    Updated Dec 28, 2024
    Cite
    Shaip (2024). Caput et Collum Semantic Segmentation Dataset [Dataset]. https://la.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 28, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.

20. Conjunto de datos de segmentación semántica de cabeza e pescozo

    • gl.shaip.com
    json
    Updated Dec 1, 2024
    Cite
    Shaip (2024). Conjunto de datos de segmentación semántica de cabeza e pescozo [Dataset]. https://gl.shaip.com/offerings/facial-body-part-segmentation-and-recognition-datasets/
    Explore at:
Available download formats: json
    Dataset updated
    Dec 1, 2024
    Dataset authored and provided by
    Shaip
    License

CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

The Head and Neck Semantic Segmentation Dataset is designed for the e-commerce & retail and media & entertainment sectors, featuring a collection of AI-generated cartoon images with resolutions above 1024 x 1024 pixels. This dataset focuses on semantic segmentation, specifically targeting the main character's head, including face, hair, and any accessories, as well as the neck area up to the collarbone, with an allowance for small, unsegmented parts on the edges.
