Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
3D Mask Attack for detection methods
Dataset comprises 3,500+ videos captured using 5 different devices, featuring individuals holding a photo fixed on a cylinder, designed to simulate potential presentation attacks against facial recognition systems. It supports research in attack detection and improves spoofing detection techniques, specifically for fraud prevention and compliance with iBeta Level 1 certification standards. By utilizing this dataset, researchers can enhance liveness… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/3d-mask-attack.
The Custom Silicone Mask Attack Dataset (CSMAD) contains presentation attacks made using six custom-made silicone masks, each costing about USD 4000. The dataset is designed for face presentation attack detection experiments.
The Custom Silicone Mask Attack Dataset (CSMAD) has been collected at the Idiap Research Institute. It is intended for face presentation attack detection experiments, where the presentation attacks have been mounted using a custom-made silicone mask of the person (or identity) being attacked.
The dataset contains videos of face presentations, as well as a set of files specifying the experimental protocol corresponding to the experiments presented in the reference publication.
Reference
If you publish results using this dataset, please cite the following publication.
Sushil Bhattacharjee, Amir Mohammadi and Sébastien Marcel: "Spoofing Deep Face Recognition With Custom Silicone Masks", in Proceedings of the IEEE International Conference on Biometrics: Theory, Applications, and Systems (BTAS), 2018. DOI: 10.1109/BTAS.2018.8698550. http://publications.idiap.ch/index.php/publications/show/3887
Data Collection
Face-biometric data has been collected from 14 subjects to create this dataset. Subjects participating in this data collection have played three roles: targets, attackers, and bona fide clients. The subjects represented in the dataset are referred to here with the letter codes A..N. Subjects A..F have also been targets; that is, face data for these six subjects has been used to construct the corresponding flexible silicone masks. These masks have been made by Nimba Creations Ltd., a special-effects company.
Bona fide presentations have been recorded for all subjects A..N. Attack presentations (presentations in which a subject wears one of the six masks) have been recorded for all six targets, performed by different attackers. That is, each target has been attacked several times, each time by a different attacker wearing the corresponding mask. This is one way of increasing the variability in the dataset. The variability has also been augmented by capturing presentations under different illumination conditions. Presentations have been captured in four different lighting conditions:
fluorescent ceiling light only
halogen lamp illuminating from the left of the subject only
halogen lamp illuminating from the right only
both halogen lamps illuminating from both sides simultaneously
All presentations have been captured with a green uniform background. See the paper mentioned above for more details of the data-collection process.
Dataset Structure
The dataset is organized in three subdirectories: ‘attack’, ‘bonafide’, ‘protocols’. The two directories: ‘attack’ and ‘bonafide’ contain presentation-videos and still images for attacks and bona fide presentations, respectively. The folder ‘protocols’ contains text files specifying the experimental protocol for vulnerability analysis of face-recognition (FR) systems.
The number of data files per category is as follows:
‘bonafide’: 87 videos, and 17 still images (in .JPG format). The still images are frontal face images captured using a Nikon Coolpix digital camera.
‘attack’: 159 videos, organized in two sub-folders – ‘WEAR’ (108 videos) and ‘STAND’ (51 videos)
The folder ‘attack/WEAR’ contains videos where the attack has been made by a person (attacker) wearing the mask of the target being attacked. The ‘attack/STAND’ folder contains videos where the attack has been made using the target’s mask mounted on an appropriate stand.
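As a quick sanity check on this layout, the following minimal sketch (assuming a local copy of the dataset under a placeholder root path, and the ‘.h5’ video files described in the next section) tallies the videos per category:

```python
from pathlib import Path

# Placeholder location of the extracted CSMAD dataset; adjust as needed.
root = Path("CSMAD")

# Tally the presentation videos in each category described above.
for subdir in ("bonafide", "attack/WEAR", "attack/STAND"):
    n_videos = len(list((root / subdir).rglob("*.h5")))
    print(f"{subdir}: {n_videos} videos")  # expected: 87, 108 and 51 respectively
```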
Video File Format
The video files for the face presentations are in HDF5 format (with the file extension ‘.h5’). The folder structure of the HDF5 file is shown in Figure 1. Each file contains data collected using two cameras:
RealSense SR300 (from Intel): collects images/videos in visible light (RGB color), near-infrared (NIR) at 860 nm wavelength, and depth maps
Compact Pro (from Seek Thermal): collects thermal (long-wave infrared (LWIR)) images.
As shown in Figure 1, frames from the different channels (color, infrared, depth, thermal) from the two cameras are stored in separate directory hierarchies in the HDF5 file. Each file represents a video of approximately 10 seconds, or roughly 300 frames.
In the HDF5 file, the directory for the SR300 also contains a subdirectory named ‘aligned_color_to_depth’. This folder contains post-processed data, where the frames of the depth channel have been aligned with those of the color channel based on the time-stamps of the frames.
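A minimal sketch of inspecting such a file with h5py follows. The exact group names inside the ‘.h5’ hierarchy are assumptions based on the description above; the authoritative layout is the one shown in Figure 1 of the dataset documentation.

```python
import h5py

def print_hierarchy(path):
    """Print every group/dataset name inside one presentation file."""
    with h5py.File(path, "r") as f:
        f.visit(print)

def count_items(path, group_name):
    """Count the items stored under one channel group, e.g. a hypothetical
    'SR300/aligned_color_to_depth'; expect roughly 300 frames per 10 s capture."""
    with h5py.File(path, "r") as f:
        return len(f[group_name])

if __name__ == "__main__":
    print_hierarchy("presentation.h5")  # placeholder filename
```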
Experimental Protocol
The ‘protocols’ folder contains text files that specify the protocols for vulnerability analysis experiments reported in the paper mentioned above. Please see the README file in the protocols folder for details.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Silicone Mask Attack dataset
The dataset contains 6,500+ videos of attacks from 50 different people, filmed using 5 devices, providing a valuable resource for researching presentation attacks in facial recognition technologies. By focusing on this area, the dataset facilitates experiments designed to improve biometric security and anti-spoofing measures, ultimately aiding in the creation of more robust and reliable authentication systems. By utilizing this dataset, researchers can… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/silicone-mask-attack.
The 3D Mask Attack Database (3DMAD) is a biometric (face) spoofing database. It contains 76,500 frames of 17 persons, recorded using Kinect for both real-access and spoofing attacks. Each frame consists of:
a depth image (640x480 pixels – 1x11 bits)
the corresponding RGB image (640x480 pixels – 3x8 bits)
manually annotated eye positions (with respect to the RGB image).
The data is collected in 3 different sessions for all subjects and for each session 5 videos of 300 frames are captured. The recordings are done under controlled conditions, with frontal-view and neutral expression. The first two sessions are dedicated to the real access samples, in which subjects are recorded with a time delay of ~2 weeks between the acquisitions. In the third session, 3D mask attacks are captured by a single operator (attacker).
In each video, the eye positions are manually labelled at every 1st, 61st, 121st, 181st, 241st and 300th frame, and they are linearly interpolated for the rest.
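A minimal sketch of that interpolation scheme, using NumPy: the labelled frame indices are the ones listed above, while the coordinate values are hypothetical placeholders rather than data from 3DMAD.

```python
import numpy as np

# Frames at which the eye positions are manually labelled (1-based).
labelled_frames = np.array([1, 61, 121, 181, 241, 300])

# Hypothetical (x, y) coordinates of one eye at those frames.
labelled_x = np.array([310.0, 312.0, 311.5, 313.0, 312.0, 314.0])
labelled_y = np.array([242.0, 241.0, 243.0, 242.5, 244.0, 243.0])

# Linearly interpolate the position for every frame of the 300-frame video.
all_frames = np.arange(1, 301)
interp_x = np.interp(all_frames, labelled_frames, labelled_x)
interp_y = np.interp(all_frames, labelled_frames, labelled_y)

# The interpolation reproduces the manual labels at the labelled frames.
assert interp_x[60] == labelled_x[1]  # frame 61
```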
The real-size masks are obtained using "ThatsMyFace.com". The database additionally contains the face images used to generate these masks (1 frontal and 2 profiles) and paper-cut masks that are also produced by the same service and using the same images.
The satellite package, which contains the Bob accessor methods for using this database directly from Python with the certified protocols, is available in two distribution formats:
You can download it from PyPI, or
You can download it in its source form from its git repository.
Acknowledgments
If you use this database, please cite the following publication:
Nesli Erdogmus and Sébastien Marcel, "Spoofing in 2D Face Recognition with 3D Masks and Anti-spoofing with Kinect", in Proceedings of the IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2013. DOI: 10.1109/BTAS.2013.6712688. https://publications.idiap.ch/index.php/publications/show/2657
This dataset consists of face and silicone mask images from 8 different subjects, captured with 3 different smartphones.
This dataset consists of images captured from 8 different bona fide subjects using three different smartphones (iPhone X, Samsung S7 and Samsung S8). For each subject in the database, a varying number of samples has been collected using all three phones. Similarly, images of the silicone mask of each subject have been collected using the three phones. The masks, each costing about USD 4000, have been manufactured by a professional special-effects company.
For the bona fide presentations of the same eight subjects, each data subject is asked to pose in a manner compliant with standard portrait capture. The data is captured indoors, with adequate artificial lighting. Silicone mask presentations have been captured under similar conditions, by placing the masks on the bespoke support provided by the manufacturer, with prosthetic eyes and silicone eye sockets.
The database is organized in three folders corresponding to the three smartphones; within each folder, the data is further organized into sub-folders per subject.
The files are named using the convention "PHONE/CLASS/SUBJECTNUMBER/PHONEIDENTIFIER-PRESENTATION-SUBJECTNUMBER-SAMPLENUMBER.jpg".
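A minimal sketch of parsing that convention follows; the example path and the interpretation of each field (e.g. which CLASS values denote bona fide versus mask presentations) are hypothetical illustrations rather than documented values.

```python
from pathlib import Path

def parse_sample(path_str):
    """Split a sample path of the form
    PHONE/CLASS/SUBJECTNUMBER/PHONEIDENTIFIER-PRESENTATION-SUBJECTNUMBER-SAMPLENUMBER.jpg
    into its named fields."""
    path = Path(path_str)
    phone, cls, subject = path.parts[-4], path.parts[-3], path.parts[-2]
    phone_id, presentation, subject_no, sample_no = path.stem.split("-")
    return {
        "phone": phone,
        "class": cls,
        "subject": subject,
        "phone_identifier": phone_id,
        "presentation": presentation,
        "subject_number": subject_no,
        "sample_number": sample_no,
    }

# Hypothetical example path, only meant to illustrate the pattern above.
print(parse_sample("iPhoneX/bonafide/01/IP-B-01-003.jpg"))
```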
Reference
If you publish results using this dataset, please cite the following publication.
“Custom Silicone Face Masks - Vulnerability of Commercial Face Recognition Systems & Presentation Attack Detection”, R. Raghavendra, S. Venkatesh, K. B. Raja, S. Bhattacharjee, P. Wasnik, S. Marcel, and C. Busch. IAPR/IEEE International Workshop on Biometrics and Forensics (IWBF), 2019.
10.1109/IWBF.2019.8739236
https://publications.idiap.ch/index.php/publications/show/4065
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Face mask dataset for facial recognition
This dataset contains 11,100+ video recordings of people wearing latex masks, captured using 5 different devices. It is designed for liveness detection algorithms, specifically aimed at enhancing anti-spoofing capabilities in biometric security systems. By utilizing this dataset, researchers can develop more accurate facial recognition technologies, which is crucial for achieving the iBeta Level 2 certification, a benchmark for robust and… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/latex-mask-attack.
Explore the 3D Volume-Based Paper Attack Dataset, featuring advanced liveness detection data with 40+ diverse participants.
This dataset was used to perform the experiments reported in the IJCB2023 paper "Can personalised hygienic masks be used to attack face recognition systems?".
The dataset consists of face videos captured using the ‘selfie’ cameras of five different smartphones: Apple iPhone 12, Apple iPhone 6s, Xiaomi Redmi 6 Pro, Xiaomi Redmi 9A and Samsung Galaxy S9. The dataset contains:
Bona-fide face videos: 1400 videos of bona-fide (real, non-attack) faces. In total, there are 70 identities (data subjects). Each video is 10 seconds long, where for the first 5 seconds the data subject was required to stay still and look at the camera, then for the last 5 seconds the subject was asked to turn their head from one side to the other (such that profile views could be captured). The videos were acquired indoors, under normal office lighting conditions. The data subjects were volunteers, who were required to be present during two recording sessions, which on average were separated by about three weeks. In each recording session, the volunteers were asked to record a video of their own face using the front (i.e., selfie) camera of each of the five smartphones mentioned earlier. The face data was additionally captured while the data subjects wore plain (not personalised) hygienic masks, to simulate the scenario where face recognition might need to be performed on a masked face (e.g., during a pandemic like COVID-19).
Attacks:
Personalised hygienic mask attacks: Video recordings of an impostor wearing personalised hygienic masks (one at a time), on which the bottom part of each data subject’s face is printed. Please note that the dataset contains 350 personalised hygienic mask attack videos, whereas the IJCB2023 paper mentioned 345 videos. This is because, for the experiments reported in the paper, we excluded the videos of the attacker wearing their own hygienic mask (since the "attacker" was one of the 70 data subjects).
Print attacks: 1400 video recordings of the data subjects’ face photos printed on A4 matte paper, which was held up to the smartphone’s camera.
Replay attacks: 2800 video recordings of bona-fide face videos that were replayed to the target smartphone’s camera. Different phones were paired, such that one of the pair was used to replay the bona-fide videos while the second (attacked) phone recorded the videos using its front camera.
Reference
If you use the data for your research or publication, please cite the following paper :
Alain Komaty, Vedrana Krivokuca Hahn, Christophe Ecabert and Sébastien Marcel, "Can personalised hygienic masks be used to attack face recognition systems?", in Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), 2023.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
2D Mask Attacks for facial recognition systems
The dataset consists of 4,800+ videos of people wearing or holding 2D printed masks, filmed using 5 devices. It is designed for liveness detection algorithms, specifically aimed at enhancing anti-spoofing capabilities in biometric security systems. By leveraging this dataset, researchers can create more sophisticated recognition systems, crucial for achieving iBeta Level 1 & 2 certification – a key standard for secure and reliable biometric… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/printed-2d-masks-attacks.
Description
The eXtended Custom Silicone Mask Attack Dataset (XCSMAD) consists of 535 short video recordings of both bona fide presentations and presentation attacks (PA) from 72 subjects. The attacks have been created from custom silicone masks. Videos have been recorded in the RGB (visible spectrum), near-infrared (NIR), and thermal (LWIR) channels.
Complete preprocessed data for the aforementioned videos, along with bona fide images used in the vulnerability-assessment experiments, has been provided to facilitate reproducing the experiments from the reference publication, as well as conducting new experiments. The details of the preprocessing can be found in the reference publication.
The implementation of all experiments described in the reference publication is available at https://gitlab.idiap.ch/bob/bob.paper.xcsmad_facepad
Experimental protocols
The reference publication considers two experimental protocols: grandtest and cross-validation (cv). For frame-level evaluation, 50 frames from each video have been used in both protocols. For the grandtest protocol, the videos were divided into train, dev, and eval groups, each consisting of a unique subset of clients (the videos corresponding to any specific subject belong to a single group).
For the cross-validation (cv) experiments, a 5-fold protocol has been devised. Videos from XCSMAD have been split into 5 folds with non-overlapping clients. Using these five partitions, five test protocols (cv0, ..., cv4) have been created such that, in each protocol, four of the partitions are used for training and the remaining one for evaluation.
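Below is a minimal sketch of a subject-disjoint 5-fold split in the spirit of this cv protocol (clients never shared between training and evaluation partitions). It uses scikit-learn's GroupKFold; the video names and client IDs are placeholders, not the official XCSMAD protocol files.

```python
from sklearn.model_selection import GroupKFold

# Placeholder video identifiers and the client (subject) each belongs to.
videos = [f"video_{i:03d}" for i in range(20)]
clients = [i // 4 for i in range(20)]  # 5 clients, 4 videos each

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, eval_idx) in enumerate(gkf.split(videos, groups=clients)):
    train_clients = {clients[i] for i in train_idx}
    eval_clients = {clients[i] for i in eval_idx}
    assert train_clients.isdisjoint(eval_clients)  # non-overlapping clients
    print(f"cv{fold}: {len(train_idx)} train videos, {len(eval_idx)} eval videos")
```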
Reference
If you use this dataset, please cite the following publication:
@article{Kotwal_TBIOM_2019,
  author    = {Kotwal, Ketan and Bhattacharjee, Sushil and Marcel, S\'{e}bastien},
  title     = {Multispectral Deep Embeddings As a Countermeasure To Custom Silicone Mask Presentation Attacks},
  journal   = {IEEE Transactions on Biometrics, Behavior, and Identity Science},
  publisher = {{IEEE}},
  year      = {2019},
}
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
This dataset delivers a single, end-to-end resource for training and benchmarking facial liveness-detection systems. By aggregating live sessions and eleven realistic presentation-attack classes into one collection, it accelerates development toward iBeta Level 1/2 compliance and strengthens model robustness against the full spectrum of spoofing tactics.
Modern certification pipelines demand proof that a system resists all common attack vectors, not just prints or replays. This dataset delivers those vectors in one place, allowing you to:
- Benchmark a model’s true generalisation
- Fine-tune against rare but high-impact threats (e.g., silicone or textile masks)
- Streamline audits by demonstrating coverage of every ISO 30107-3 attack category
Ideal for companies pursuing or maintaining iBeta Level 1/2 certification, research groups exploring new PAD architectures, and vendors stress-testing production face-verification pipelines.
This dataset’s scale, breadth of attack types, and real-world capture conditions make it indispensable for anyone building or evaluating biometric anti-spoofing solutions. Deploy it to harden your systems against today’s and tomorrow’s most sophisticated presentation attacks.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Silicone Mask Attack Dataset - 6,500+ videos
Dataset comprises 6,500+ videos of individuals wearing silicone masks, captured using 5 different devices. It is designed for research in presentation attacks, focusing on 3D masks, spoofing detection, and facial recognition challenges, particularly for achieving iBeta Level 2 certification.
Dataset characteristics:
Description: Videos of people in silicone masks training… See the full description on the dataset page: https://huggingface.co/datasets/ud-ibeta/Silicone-Mask-Dataset.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
2D Mask Attack Dataset - 26,436 videos
The dataset comprises 26,436 videos of real faces, 2D print attacks (printed photos), and replay attacks (faces displayed on screens), captured under varied conditions. Designed for attack detection research, it supports the development of robust face antispoofing and spoofing detection methods, critical for facial recognition security. Ideal for training models and refining anti-spoofing methods, the dataset enhances detection accuracy in… See the full description on the dataset page: https://huggingface.co/datasets/UniDataPro/2d-printed-mask-dataset.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Textile 3D Face Mask Attack Dataset
This Dataset is specifically designed to enhance Face Anti-Spoofing and Liveness Detection models by simulating Nylon Mask Attacks — an accessible alternative to expensive silicone and latex mask datasets. These attacks utilize thin elastic fabric masks worn like a balaclava, featuring printed facial images that conform to the wearer's head shape through textile elasticity. The dataset is particularly valuable for PAD model training and iBeta… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/3d_cloth_face_mask_spoofing_dataset.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Face Spoofing Dataset with 3D Elastic Mesh Masks and Real Videos
This dataset is a specialized face anti-spoofing dataset featuring personalized resin mask attacks targeting facial recognition systems and biometric authentication. This unique dataset employs custom-manufactured rubber face masks designed to replicate specific individuals’ facial features with high precision. What sets the Rubber Mask Attack Dataset apart from conventional PAD datasets is its dual-recording methodology —… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/high_precision_3d_resin_masks_real_faces.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Latex Mask Dataset for Face Anti-Spoofing and Liveness Detection
Anti-spoofing dataset with latex 3D mask attacks (4,000 videos) for iBeta Level 2. The Biometric Attack Dataset offers a robust solution for enhancing security in liveness detection systems by simulating 3D latex mask attacks. This dataset is invaluable for assessing and fine-tuning Passive Liveness Detection models, an essential step toward achieving iBeta Level 2 certification. By integrating diverse realistic presentation… See the full description on the dataset page: https://huggingface.co/datasets/AxonData/Latex_Mask_dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Overview of attack rates in schools, stratified by mask mandates for staff members.
Description
This dataset contains images of 5 subjects. The images have been captured using the Intel RealSense SR300 camera and the Xenics Gobi thermal camera. The SR300 returns 3 kinds of data: color (RGB) images, near-infrared (NIR) images, and depth information.
For four subjects (subject1 – subject4), images have been captured with both cameras under two conditions:
with the face visible
with the subject wearing a rigid (resin-coated) mask
Each subject has used 3 sets of rigid masks, corresponding to three identities (‘id0’, ‘id1’, ‘id2’, not necessarily corresponding to the subjects in this dataset), with two masks (‘mask0’, ‘mask1’) per identity.
For subject5, data has been captured using both cameras under two conditions:
with the face visible
with the subject wearing a flexible (silicone) mask resembling subject5
For each combination (subject, camera, condition), several seconds of video have been captured, and the video frames have been stored in uncompressed .png format. All images have been captured at a resolution of 640x480 pixels.
Reference
If you use this database, please cite the following publication:
Sushil Bhattacharjee and Sébastien Marcel, "What you can't see can help you -- extended-range imaging for 3D-mask presentation attack detection", BIOSIG 2017. DOI: 10.23919/BIOSIG.2017.8053524. https://publications.idiap.ch/index.php/publications/show/3710
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset consists of more than 42,000 video attacks of 7 different types specifically curated for evaluating liveness detection algorithms. The dataset aims to incorporate different scenarios and challenges to enable robust assessment and comparison of liveness detection systems.
The iBeta Liveness Detection Level 1 dataset serves as a benchmark for the development and assessment of liveness detection systems and for evaluating and improving the performance of algorithms.
Each attack was filmed on an Apple iPhone and a Google Pixel. The videos were filmed against various backgrounds and using additional accessories such as fake facial hair, scarves, hats, and others.
The dataset comprises videos of facial presentations captured using various methods, including 3D masks and photos, covering both real and spoofed faces. It supports a novel approach that learns and extracts facial features to prevent spoofing attacks, based on deep neural networks and advanced biometric techniques.
Our results show that this technology works effectively in securing most applications and prevents unauthorized access by distinguishing between genuine and spoofed inputs. Additionally, it addresses the challenging task of identifying unseen spoofing cues, making it one of the most effective techniques in the field of anti-spoofing research.
keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, replay attack dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, presentation attack detection, presentation attack dataset, 2D print attacks, 3D print attacks, phone attack dataset, face anti spoofing, large-scale face anti spoofing, rich annotations anti spoofing dataset, cut prints spoof attack
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset consists of videos of individuals and of attacks with printed 2D masks and silicone masks. The videos are filmed in different lighting conditions (in a dark room, daylight, a bright room, and nightlight). The dataset includes videos of people with different attributes (glasses, mask, hat, hood, wigs, and mustaches for men).