14 datasets found
  1. Custom Silicone Mask Attack Dataset (CSMAD)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Mar 8, 2023
    Cite
    Marcel, Sébastien (2023). Custom Silicone Mask Attack Dataset (CSMAD) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4084200
    Dataset provided by
    Mohammadi, Amir
    Bhattacharjee, Sushil
    Marcel, Sébastien
    Description

    The Custom Silicone Mask Attack Dataset (CSMAD) contains presentation attacks made using six custom-made silicone masks, each costing about USD 4,000. The dataset is designed for face presentation attack detection experiments.

    The Custom Silicone Mask Attack Dataset (CSMAD) has been collected at the Idiap Research Institute. It is intended for face presentation attack detection experiments, where the presentation attacks have been mounted using a custom-made silicone mask of the person (or identity) being attacked.

    The dataset contains videos of face presentations, along with a set of files specifying the experimental protocol corresponding to the experiments presented in the reference publication.

    Reference

    If you publish results using this dataset, please cite the following publication.

    Sushil Bhattacharjee, Amir Mohammadi and Sebastien Marcel: "Spoofing Deep Face Recognition With Custom Silicone Masks." in Proceedings of International Conference on Biometrics: Theory, Applications, and Systems (BTAS), 2018. 10.1109/BTAS.2018.8698550 http://publications.idiap.ch/index.php/publications/show/3887

    Data Collection

    Face-biometric data has been collected from 14 subjects to create this dataset. Subjects participating in this data-collection have played three roles: targets, attackers, and bona-fide clients. The subjects represented in the dataset are referred to here with letter-codes: A .. N. The subjects A..F have also been targets. That is, face-data for these six subjects has been used to construct their corresponding flexible masks (made of silicone). These masks have been made by Nimba Creations Ltd., a special effects company.

    Bona fide presentations have been recorded for all subjects A..N. Attack presentations (presentations where the subject wears one of 6 masks) have been recorded for all six targets, made by different subjects. That is, each target has been attacked several times, each time by a different attacker wearing the mask in question. This is one way of increasing the variability in the dataset. Another way we have augmented the variability of the dataset is by capturing presentations under different illumination conditions. Presentations have been captured in four different lighting conditions:

    fluorescent ceiling light only

    halogen lamp illuminating from the left of the subject only

    halogen lamp illuminating from the right only

    both halogen lamps illuminating from both sides simultaneously

    All presentations have been captured with a green uniform background. See the paper mentioned above for more details of the data-collection process.

    Dataset Structure

    The dataset is organized in three subdirectories: ‘attack’, ‘bonafide’, ‘protocols’. The two directories: ‘attack’ and ‘bonafide’ contain presentation-videos and still images for attacks and bona fide presentations, respectively. The folder ‘protocols’ contains text files specifying the experimental protocol for vulnerability analysis of face-recognition (FR) systems.

    The number of data files per category is as follows:

    ‘bonafide’: 87 videos, and 17 still images (in .JPG format). The still images are frontal face images captured using a Nikon Coolpix digital camera.

    ‘attack’: 159 videos, organized in two sub-folders: ‘WEAR’ (108 videos) and ‘STAND’ (51 videos)

    The folder ‘attack/WEAR’ contains videos where the attack has been made by a person (attacker) wearing the mask of the target being attacked. The ‘attack/STAND’ folder contains videos where the attack has been made using the target’s mask mounted on an appropriate stand.
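
    For orientation, here is a minimal Python sketch that counts files per category in a local copy of the dataset; the root path is a placeholder, and the directory and extension names are taken from the structure described above.

    # Count CSMAD files per category in a local copy of the dataset.
    # "CSMAD" is a placeholder root path; directory names ('bonafide',
    # 'attack/WEAR', 'attack/STAND') and extensions follow the description above.
    from pathlib import Path

    def count_files(csmad_root="CSMAD"):
        root = Path(csmad_root)
        counts = {
            "bonafide videos": len(list((root / "bonafide").rglob("*.h5"))),
            "bonafide stills": len(list((root / "bonafide").rglob("*.JPG"))),
            "attack/WEAR videos": len(list((root / "attack" / "WEAR").rglob("*.h5"))),
            "attack/STAND videos": len(list((root / "attack" / "STAND").rglob("*.h5"))),
        }
        for category, n in counts.items():
            print(f"{category}: {n}")

    count_files()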

    Video File Format

    The video files for the face-presentations are in ‘hdf5’ format (with file-extension ‘.h5’). The folder structure of the hdf5 file is shown in Figure 1. Each file contains data collected using two cameras:

    RealSense SR300 (from Intel): collects images/videos in visible light (RGB color), near infrared (NIR) at 860 nm wavelength, and depth maps

    Compact Pro (from Seek Thermal): collects thermal (long-wave infrared, LWIR) images.

    As shown in Figure 1, frames from the different channels (color, infrared, depth, thermal) from the two cameras are stored in separate directory-hierarchies in the hdf5 file. Each file represents a video of approximately 10 seconds, or roughly 300 frames.

    In the hdf5 file, the directory for the SR300 also contains a subdirectory named ‘aligned_color_to_depth’. This folder contains post-processed data, where the frames of the depth channel have been aligned with those of the color channel based on the time-stamps of the frames.
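
    As a rough illustration, the sketch below uses h5py to list the group hierarchy of one such file and to count the frames in one channel; the example file name and the 'SR300/color' group path are assumptions, since the authoritative layout is given in Figure 1.

    # Inspect one CSMAD .h5 file with h5py. The group path "SR300/color" and the
    # example file name are assumptions for illustration only; the real hierarchy
    # is shown in Figure 1 of the dataset documentation.
    import h5py

    def list_channels(path):
        """Print every group/dataset name stored in the HDF5 file."""
        with h5py.File(path, "r") as f:
            f.visit(print)

    def count_frames(path, group="SR300/color"):
        """Return the number of entries under one (assumed) channel group."""
        with h5py.File(path, "r") as f:
            return len(f[group]) if group in f else 0

    # Example usage (placeholder path):
    #   list_channels("attack/WEAR/example_presentation.h5")
    #   print(count_frames("attack/WEAR/example_presentation.h5"))  # ~300 frames expected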

    Experimental Protocol

    The ‘protocols’ folder contains text files that specify the protocols for vulnerability analysis experiments reported in the paper mentioned above. Please see the README file in the protocols folder for details.

  2. silicone-mask-attack

    • huggingface.co
    Updated Oct 31, 2024
    Cite
    UniData (2024). silicone-mask-attack [Dataset]. https://huggingface.co/datasets/UniDataPro/silicone-mask-attack
    Authors
    UniData
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Silicone Mask Attack dataset

    The dataset contains 6,500+ videos of attacks from 50 different people, filmed using 5 devices, providing a valuable resource for researching presentation attacks in facial recognition technologies. By focusing on this area, the dataset facilitates experiments designed to improve biometric security and anti-spoofing measures, ultimately aiding in the creation of more robust and reliable authentication systems. By utilizing this dataset, researchers can… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/silicone-mask-attack.

  3. Mobile Custom Silicone Mask Attack Dataset (CSMAD-Mobile)

    • explore.openaire.eu
    • zenodo.org
    Updated Sep 5, 2019
    Cite
    Raghavendra Ramachandra; Sushma Venkatesh; Kiran B. Raja; Sushil Bhattacharjee; Pankaj Wasnik; Sébastien Marcel; Christoph Busch (2019). Mobile Custom Silicone Mask Attack Dataset (CSMAD-Mobile) [Dataset]. http://doi.org/10.34777/q7xf-0216
    Authors
    Raghavendra Ramachandra; Sushma Venkatesh; Kiran B. Raja; Sushil Bhattacharjee; Pankaj Wasnik; Sébastien Marcel; Christoph Busch
    Description

    CSMAD-Mobile is a dataset for mobile face recognition and presentation attack detection (anti-spoofing). The dataset contains face and silicone-mask images captured with different smartphones. It consists of images captured from 8 different bona fide subjects using three different smartphones (iPhone X, Samsung S7, and Samsung S8). For each subject within the database, a varying number of samples has been collected using all three phones. Similarly, presentations of each subject's silicone mask have been collected using the three phones.

  4. eXtended Custom Silicone Mask Attack Dataset (XCSMAD)

    • data.niaid.nih.gov
    • zenodo.org
    Updated Mar 6, 2023
    Cite
    Marcel, Sébastien (2023). eXtended Custom Silicone Mask Attack Dataset (XCSMAD) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3822047
    Dataset provided by
    Bhattacharjee, Sushil
    Kotwal, Ketan
    Marcel, Sébastien
    Description

    The eXtended Custom Silicone Mask Attack Dataset (XCSMAD) consists of 535 short video recordings of both bona fide and presentation attacks (PA) from 72 subjects. The attacks have been created from custom silicone masks. Videos have been recorded in RGB (visible spectrum), near infrared (NIR), and thermal (LWIR) channels.

    Complete preprocessed data for the aforementioned videos, along with bona fide images used in the vulnerability-assessment experiments, has been provided to facilitate reproducing the experiments from the reference publication, as well as conducting new ones. The details of preprocessing can be found in the reference publication.

    The implementation of all experiments described in the reference publication is available at https://gitlab.idiap.ch/bob/bob.paper.xcsmad_facepad

    Experimental protocols

    The reference publication considers two experimental protocols: grandtest and cross-validation (cv). For frame-level evaluation, 50 frames from each video have been used in both protocols. For the grandtest protocol, videos were divided into train, dev, and eval groups. Each group consists of a unique subset of clients (all videos of a given subject belong to a single group).

    For the cross-validation (cv) experiments, a 5-fold protocol has been devised. Videos from XCSMAD have been split into 5 folds with non-overlapping clients. Using these five partitions, five test protocols (cv0, ..., cv4) have been created such that in each protocol four of the partitions are used for training and the remaining one is used for evaluation.
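
    As an illustration of the client-disjoint constraint (not the official folds, which are defined by the dataset's protocol files), a minimal sketch using scikit-learn's GroupKFold on toy stand-in data:

    # Client-disjoint 5-fold split in the spirit of the XCSMAD "cv" protocols.
    # The video list and client identifiers below are toy stand-ins; the official
    # partitions are defined by the protocol files shipped with the dataset.
    from sklearn.model_selection import GroupKFold

    video_paths = [f"video_{i:03d}.h5" for i in range(50)]
    client_ids = [i // 5 for i in range(50)]  # 10 toy clients, 5 videos each

    gkf = GroupKFold(n_splits=5)
    for fold, (train_idx, eval_idx) in enumerate(gkf.split(video_paths, groups=client_ids)):
        train_clients = {client_ids[i] for i in train_idx}
        eval_clients = {client_ids[i] for i in eval_idx}
        assert train_clients.isdisjoint(eval_clients)  # no client appears in both partitions
        print(f"cv{fold}: {len(train_idx)} train videos, {len(eval_idx)} eval videos")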

    Reference

    If you use this dataset, please cite the following publication:

    @article{Kotwal_TBIOM_2019,
      author    = {Kotwal, Ketan and Bhattacharjee, Sushil and Marcel, S\'{e}bastien},
      title     = {Multispectral Deep Embeddings As a Countermeasure To Custom Silicone Mask Presentation Attacks},
      journal   = {IEEE Transactions on Biometrics, Behavior, and Identity Science},
      publisher = {{IEEE}},
      year      = {2019},
    }

  5. latex-mask-attack

    • huggingface.co
    Updated Nov 23, 2024
    Cite
    UniData (2024). latex-mask-attack [Dataset]. https://huggingface.co/datasets/UniDataPro/latex-mask-attack
    Authors
    UniData
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Face mask dataset for facial recognition

    This dataset contains 11,100+ video recordings of people wearing latex masks, captured using 5 different devices. It is designed for liveness detection algorithms, specifically aimed at enhancing anti-spoofing capabilities in biometric security systems. By utilizing this dataset, researchers can develop more accurate facial recognition technologies, which is crucial for achieving the iBeta Level 2 certification, a benchmark for robust and… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/latex-mask-attack.

  6. Wrapped 3D Paper Mask Spoofing Dataset

    • kaggle.com
    Updated Mar 27, 2025
    Cite
    Axon Labs (2025). Wrapped 3D Paper Mask Spoofing Dataset [Dataset]. https://www.kaggle.com/datasets/axondata/wrapped-3d-attacks
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Axon Labs
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The Wrapped 3D Attacks Dataset is designed to enhance Liveness Detection models by simulating Wrapped 3D Attacks, a more advanced version of 3D Print Attacks in which facial prints include 3D elements and additional attributes. It is particularly useful for iBeta Level 2 certification and anti-spoofing model training, and it is a more advanced version of the 3D paper mask attack dataset.

    Dataset summary

    • Dataset Size: ~2k videos shot with 20 IDs demonstrating various spoofing attacks
    • Active Liveness Features: includes zoom-in and zoom-out to enhance training scenarios
    • Attributes: different hairstyles, glasses, wigs, and beards to enhance diversity
    • Variability: 3 indoor locations with different types of lighting, from warm to cold
    • Main Applications: preparation for iBeta Level 2 certification; active and passive liveness for anti-spoofing systems


    Source and collection methodology

    The videos capture realistic spoofing conditions using different recording devices and a variety of environments. Additionally, each attack video employs a zoom-in effect, adding to its effectiveness in active liveness detection. The videos were shot using a back-facing camera.

    To create wrapped 3D attacks, we:

    • Constructed 3D facial structures by cutting out A4-sized face prints, shaping volume for the nose, forehead, and chin, and mounting them on mannequin heads or cylindrical objects
    • Added attributes, including wigs, beards, mustaches, glasses, hats, and hoods, to increase spoofing complexity
    • Simulated a human torso using clothing on mannequins, chairs, or surfaces


    The full version of the dataset is available for commercial usage; leave a request on our website Axonlabs to purchase the dataset 💰

    Potential Use Cases:

    • iBeta Level 2 Certification Compliance: helps train models for iBeta Level 2 certification tests; allows pre-certification testing to assess system performance before submission
    • In-house Liveness Detection Models: used for training and validation of anti-spoofing models; enables testing of existing algorithms and identification of their vulnerabilities to spoofing attacks

    This is a more advanced version of the 3D paper mask attack dataset for liveness detection.

    Wrapped 3D vs. 3D Paper:

    • Volume: in Wrapped 3D, masks are truly three-dimensional (the nose, chin, and forehead protrude), while in 3D Paper the image is flat and the volume is created only by the angle and surroundings

    • Body: in Wrapped 3D the mask is put on a mannequin, so part of a clothed body is visible. In 3D Paper, only the printed head and shoulders are visible

    • Detail: Wrapped 3D uses wigs, glasses and other realistic details. 3D Paper is just a paper photo, with no extras

    Based on our experience, these are similar datasets:

    Silicone Mask Attack Dataset

    Latex Mask Attack Dataset

  7. iBeta_level_2_Silicone_masks

    • huggingface.co
    Updated May 20, 2024
    Cite
    AxonLabs (2024). iBeta_level_2_Silicone_masks [Dataset]. https://huggingface.co/datasets/AxonData/iBeta_level_2_Silicone_masks
    Authors
    AxonLabs
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Silicone Mask Biometric Attack Dataset for Liveness Detection

    10,000+ videos of attacks with silicone 3D masks for iBeta 2. The dataset is designed to address security challenges in liveness detection systems through 3D silicone mask attacks. These presentation attacks are key for testing Passive Liveness Detection systems crucial for iBeta Level 2 certification. This dataset significantly enhances the capabilities of liveness detection models.

      Full version of dataset is… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/iBeta_level_2_Silicone_masks.
    
  8. MLFP Dataset

    • paperswithcode.com
    Updated Feb 7, 2021
    Cite
    Akshay Agarwal; Daksha Yadav; Naman Kohli; Richa Singh; Mayank Vatsa; Afzel Noore (2021). MLFP Dataset [Dataset]. https://paperswithcode.com/dataset/mlfp
    Authors
    Akshay Agarwal; Daksha Yadav; Naman Kohli; Richa Singh; Mayank Vatsa; Afzel Noore
    Description

    The MLFP dataset consists of face presentation attacks captured with seven 3D latex masks and three 2D print attacks. The dataset contains videos captured in color, thermal, and infrared channels.

  9. web-camera-face-liveness-detection

    • huggingface.co
    Updated Dec 27, 2023
    Cite
    Training Data (2023). web-camera-face-liveness-detection [Dataset]. https://huggingface.co/datasets/TrainingDataPro/web-camera-face-liveness-detection
    Authors
    Training Data
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Web Camera Face Liveness Detection

    The dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (glasses, masks, hats, hoods, wigs, and mustaches for men). In the dataset, there are 7 types of videos filmed on a web camera:

    Silicone Mask - demonstration of a silicone mask attack (silicone)
    2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask… See the full description on the dataset page: https://huggingface.co/datasets/TrainingDataPro/web-camera-face-liveness-detection.

  10. 3d_cloth_face_mask_spoofing_dataset

    • huggingface.co
    Updated Jun 12, 2025
    Cite
    AxonLabs (2025). 3d_cloth_face_mask_spoofing_dataset [Dataset]. https://huggingface.co/datasets/AxonData/3d_cloth_face_mask_spoofing_dataset
    Authors
    AxonLabs
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Textile 3D Face Mask Attack Dataset

    This Dataset is specifically designed to enhance Face Anti-Spoofing and Liveness Detection models by simulating Nylon Mask Attacks — an accessible alternative to expensive silicone and latex mask datasets. These attacks utilize thin elastic fabric masks worn like a balaclava, featuring printed facial images that conform to the wearer's head shape through textile elasticity. The dataset is particularly valuable for PAD model training and iBeta… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/3d_cloth_face_mask_spoofing_dataset.

  11. ERPA

    • data.niaid.nih.gov
    • zenodo.org
    Updated Mar 8, 2023
    Cite
    Marcel, Sébastien (2023). ERPA [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4105693
    Dataset provided by
    Bhattacharjee, Sushil
    Marcel, Sébastien
    Description

    This dataset contains images of 5 subjects. The images have been captured using the Intel Realsense SR300 camera, and the Xenics Gobi thermal camera. The SR300 returns 3 kinds of data: color (RGB) images, near-infrared (NIR) images, and depth information.

    For four subjects (subject1 – subject4), images have been captured with both cameras under two conditions:

    with the face visible, and

    with the subject wearing a rigid (resin-coated) mask

    Each subject has used 3 sets of rigid masks, corresponding to three identities (‘id0’, ‘id1’, ‘id2’, not necessarily corresponding to the subjects in this dataset), with two masks (‘mask0’, ‘mask1’) per identity.

    For subject5, data has been captured using both cameras under two conditions:

    with the face visible

    with the subject wearing a flexible (silicone) mask resembling subject5.

    For each combination (subject, camera, condition), several seconds of video have been captured, and the video frames have been stored in uncompressed form (in .png format).

    All images in .png format have been captured at a resolution of 640x480 pixels.
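
    A minimal sketch for sanity-checking a local copy of these frames with Pillow; the root directory name is a placeholder, and only the .png format and 640x480 resolution come from the description above:

    # Verify that every stored ERPA frame is a 640x480 .png image.
    # "erpa" is a placeholder root directory; only the format and resolution are
    # taken from the dataset description.
    from pathlib import Path
    from PIL import Image

    def check_frames(root="erpa"):
        for png in Path(root).rglob("*.png"):
            with Image.open(png) as img:
                assert img.size == (640, 480), f"unexpected resolution for {png}: {img.size}"

    check_frames()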

    Reference

    If you use this database, please cite the following publication:

    Sushil Bhattacharjee and Sébastien Marcel, "What you can't see can help you -- extended-range imaging for 3D mask presentation attack detection", BIOSIG 2017. 10.23919/BIOSIG.2017.8053524 https://publications.idiap.ch/index.php/publications/show/3710

  12. Wrapped_3D_Attacks

    • huggingface.co
    Updated Mar 31, 2025
    Cite
    AxonLabs (2025). Wrapped_3D_Attacks [Dataset]. https://huggingface.co/datasets/AxonData/Wrapped_3D_Attacks
    Authors
    AxonLabs
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Silicone Mask Biometric Attack Dataset

    Wrapped 3D Attacks Dataset

    The full version of the dataset is available for commercial usage; leave a request on our website Axon Labs to purchase the dataset 💰

    Introduction

    This dataset is designed to enhance Liveness Detection models by simulating Wrapped 3D Attacks, a more advanced version of 3D Print Attacks, where facial prints include 3D elements and additional attributes. It is particularly useful for iBeta Level 2… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/Wrapped_3D_Attacks.

  13. Latex_Mask_dataset

    • huggingface.co
    Updated Apr 7, 2025
    Cite
    AxonLabs (2025). Latex_Mask_dataset [Dataset]. https://huggingface.co/datasets/AxonData/Latex_Mask_dataset
    Authors
    AxonLabs
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Latex Mask Dataset for Face Anti-Spoofing and Liveness Detection

    Anti-spoofing dataset with latex 3D mask attacks (4,000 videos) for iBeta 2. The Biometric Attack Dataset offers a robust solution for enhancing security in liveness detection systems by simulating 3D latex mask attacks. This dataset is invaluable for assessing and fine-tuning Passive Liveness Detection models, an essential step toward achieving iBeta Level 2 certification. By integrating diverse realistic presentation… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/Latex_Mask_dataset.

  14. in-Vehicle Face Presentation Attack Detection (VFPAD)

    • zenodo.org
    Updated Mar 6, 2023
    Cite
    Sushil Bhattacharjee; Ketan Kotwal; Sébastien Marcel (2023). in-Vehicle Face Presentation Attack Detection (VFPAD) [Dataset]. http://doi.org/10.34777/m4kd-5h87
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sushil Bhattacharjee; Ketan Kotwal; Sébastien Marcel
    Description

    The in-Vehicle Face Presentation Attack Detection (VFPAD) dataset consists of 4046 bona-fide recordings from 40 subjects, and 1790 attack presentation videos from a total of 89 PAIs (presentation attack instruments). These presentations have been captured using an NIR camera (940 nm) placed on the steering wheel of the car, while NIR illuminators have been fixed on both front pillars (adjacent to the windshield) of the car. The bona-fide videos represent 24 male and 16 female subjects of various ethnicities. The PAI species used to construct this dataset include photo-prints, digital displays (for replay attacks), rigid 3D masks, and flexible 3D masks made of silicone.

    Data Collection

    The videos comprising this dataset represent bona-fide and attack presentations under a range of variations:

    • Environmental variations: presentations have been recorded in four sessions, each under different environmental conditions (outdoor sunny; outdoor cloudy; indoor dimly-lit; and indoor brightly-lit)
    • Different scenarios: bona-fide presentations for each subject have been captured with a variety of appearances: with/without glasses, with/without hat, etc.
    • Illumination variations: two illumination conditions have been used: ‘uniform’ (both NIR illuminators switched on), and ‘non-uniform’ (only the left NIR-illuminator switched on), and
    • Pose variations: two poses (‘angles’) have been used: ‘front’: the subject looks ahead at the road; and ‘below’: subject looks straight into the camera.

    Citation

    If you use the dataset, please cite the following publication:

    @article{IEEE_TBIOM_2021,
    author = {Kotwal, Ketan and Bhattacharjee, Sushil and Abbet, Philip and Mostaani, Zohreh and Wei, Huang and Wenkang, Xu and Yaxi, Zhao and Marcel, S\'{e}bastien},
    title = {Domain-Specific Adaptation of CNN for Detecting Face Presentation Attacks in NIR},
    journal = {IEEE Transactions on Biometrics, Behavior, and Identity Science},
    publisher = {{IEEE}},
    year={2022},
    volume={4},
    number={1},
    pages={135--147},
    doi={10.1109/TBIOM.2022.3143569}
    }

