3 datasets found
  1. BED: Biometric EEG dataset

    • zenodo.org
    • producciocientifica.uv.es
    Updated Apr 20, 2022
    Cite
    Pablo Arnau-González; Stamos Katsigiannis; Miguel Arevalillo-Herráez; Naeem Ramzan (2022). BED: Biometric EEG dataset [Dataset]. http://doi.org/10.5281/zenodo.4309472
    Explore at:
    7 scholarly articles cite this dataset (View in Google Scholar)
    Dataset updated
    Apr 20, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Pablo Arnau-González; Stamos Katsigiannis; Miguel Arevalillo-Herráez; Naeem Ramzan
    Description

    The BED dataset

    Version 1.0.0

    Please cite as: Arnau-González, P., Katsigiannis, S., Arevalillo-Herráez, M., Ramzan, N., "BED: A new dataset for EEG-based biometrics", IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12219 - 12230, 2021.

    Disclaimer

    While every care has been taken to ensure the accuracy of the data included in the BED dataset, the authors and the University of the West of Scotland, Durham University, and Universitat de València do not provide any guarantees and disclaim all responsibility and all liability (including, without limitation, liability in negligence) for all expenses, losses, damages (including indirect or consequential damage) and costs which you might incur as a result of the provided data being inaccurate or incomplete in any way and for any reason. © 2020, University of the West of Scotland, Scotland, United Kingdom.

    Contact

    For inquiries regarding the BED dataset, please contact:

    1. Dr Pablo Arnau-González, arnau.pablo [*AT*] gmail.com
    2. Dr Stamos Katsigiannis, stamos.katsigiannis [*AT*] durham.ac.uk
    3. Prof. Miguel Arevalillo-Herráez, miguel.arevalillo [*AT*] uv.es
    4. Prof. Naeem Ramzan, Naeem.Ramzan [*AT*] uws.ac.uk

    Dataset summary

    BED (Biometric EEG Dataset) is a dataset specifically designed to test EEG-based biometric approaches that use relatively inexpensive consumer-grade devices, specifically the Emotiv EPOC+. The dataset includes EEG responses from 21 subjects to 12 different stimuli, across 3 chronologically disjoint sessions. We have also included stimuli aimed at eliciting different affective states, so as to facilitate future research on the influence of emotions on EEG-based biometric tasks. In addition, we provide a baseline performance analysis to outline the potential of consumer-grade EEG devices for subject identification and verification. It must be noted that, in this work, EEG data were acquired in a controlled environment in order to reduce the variability in the acquired data stemming from external conditions.

    The stimuli include:

    • Images selected to elicit specific emotions
    • Mathematical computations (2-digit additions)
    • Resting-state with eyes closed
    • Resting-state with eyes open
    • Visual Evoked Potentials at 2, 5, 7, 10 Hz - Standard checker-board pattern with pattern reversal
    • Visual Evoked Potentials at 2, 5, 7, 10 Hz - Flashing with a plain colour, set as black

    For more details regarding the experimental protocol and the design of the dataset, please refer to the associated publication: Arnau-González, P., Katsigiannis, S., Arevalillo-Herráez, M., Ramzan, N., "BED: A new dataset for EEG-based biometrics", IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12219-12230, 2021.

    Dataset structure and contents

    The BED dataset contains EEG recordings from 21 subjects, acquired during 3 similar sessions for each subject. The sessions were spaced one week apart from each other.

    The BED dataset includes:

    • The raw EEG recordings with no pre-processing and the log files of the experimental procedure, in text format
    • The EEG recordings with no pre-processing, segmented, structured and annotated according to the presented stimuli, in Matlab format
    • The features extracted from each EEG segment, as described in the associated publication

    The dataset is organised in 3 folders:

    • RAW
    • RAW_PARSED
    • Features

    RAW/ Contains the RAW files
    RAW/sN/ Contains the RAW files associated with subject N
    Each folder sN contains the following files:
    - sN_s1.csv, sN_s2.csv, sN_s3.csv -- Files containing the EEG recordings for subject N and session 1, 2, and 3, respectively. These files contain 39 columns:
    COUNTER INTERPOLATED F3 FC5 AF3 F7 T7 P7 O1 O2 P8 T8 F8 AF4 FC6 F4 ...UNUSED DATA... UNIX_TIMESTAMP
    - subject_N_session_1_time_X.log, subject_N_session_2_time_X.log, subject_N_session_3_time_X.log -- Log files containing the sequence of events for subject N and sessions 1, 2, and 3, respectively.
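
    As an illustration (not part of the dataset itself), a RAW session file could be loaded along the following lines in Python; the column handling follows the 39-column layout above, while the path and the absence of a header row are assumptions:

    # Minimal sketch: load one RAW session file. Whether the CSV carries a
    # header row is an assumption; columns are therefore addressed by position.
    import pandas as pd

    # The 16 documented leading columns; the columns between them and the
    # trailing UNIX timestamp (column 39) are unused.
    LEADING = ["COUNTER", "INTERPOLATED",
               "F3", "FC5", "AF3", "F7", "T7", "P7", "O1",
               "O2", "P8", "T8", "F8", "AF4", "FC6", "F4"]

    def load_raw_session(path):
        df = pd.read_csv(path, header=None)
        data = df.iloc[:, :len(LEADING)].copy()
        data.columns = LEADING
        data["UNIX_TIMESTAMP"] = df.iloc[:, -1]
        return data

    # Hypothetical usage:
    # eeg = load_raw_session("RAW/s1/s1_s1.csv")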

    RAW_PARSED/
    Contains Matlab files named sN_sM.mat, holding the recordings for subject N in session M. Each file contains two variables:
    - recording: size (time@256Hz x 17), Columns: COUNTER INTERPOLATED F3 FC5 AF3 F7 T7 P7 O1 O2 P8 T8 F8 AF4 FC6 F4 UNIX_TIMESTAMP
    - events: cell array with size (events x 3) START_UNIX END_UNIX ADDITIONAL_INFO
    START_UNIX is the UNIX timestamp at which the event starts
    END_UNIX is the UNIX timestamp at which the event ends
    ADDITIONAL_INFO contains a struct with additional information regarding the specific event: for the images, the expected and the voted affective scores; for the cognitive task, the subject's input; for the VEPs, the pattern and the frequency; etc.
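
    As an illustration (not part of the dataset itself), a parsed session could be read with SciPy along these lines; the variable names follow the description above, while the path and the event indexing are assumptions:

    # Minimal sketch: read one RAW_PARSED file and slice out one event.
    from scipy.io import loadmat

    mat = loadmat("RAW_PARSED/s1_s1.mat", squeeze_me=True)  # hypothetical path
    recording = mat["recording"]  # (samples @ 256 Hz) x 17
    events = mat["events"]        # n_events x 3: START_UNIX, END_UNIX, ADDITIONAL_INFO

    eeg = recording[:, 2:16]      # the 14 EEG channels (columns 3-16)
    ts = recording[:, 16]         # UNIX_TIMESTAMP

    # Samples belonging to the first event:
    start, end = events[0, 0], events[0, 1]
    segment = eeg[(ts >= start) & (ts <= end)]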

    Features/
    Features/Identification
    Features/Identification/[ARRC|MFCC|SPEC]/: Each of these folders contains the extracted features, ready for classification, for each of the stimuli. Each file contains a feature matrix "feat" and a label matrix "Y", along with event details in "INFO":
    - feat: N x number of features
    - Y: N x 2 (the #subject and the #session)
    - INFO: details about the event, matching ADDITIONAL_INFO above
    Features/Verification: This folder contains 3 files, each holding a different set of extracted features. Each file contains one struct array with the following fields:
    - data: the time-series features, as described in the paper
    - y: the #subject
    - stimuli: the stimuli by name
    - session: the #session
    - INFO: Contains details about the event

    The features are provided in sequential order; that is, consecutive indices (1, 2, ...) are sequential in time if they belong to the same stimulus.
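
    As an illustration (not part of the dataset itself), an identification feature file could be loaded and split by session along these lines; the file name is hypothetical, while the variable layout follows the description above:

    # Minimal sketch: load a feature file and hold out one session for testing.
    from scipy.io import loadmat

    m = loadmat("Features/Identification/MFCC/stimulus.mat")  # hypothetical name
    feat = m["feat"]  # N x number of features
    Y = m["Y"]        # N x 2: #subject, #session

    sessions = Y[:, 1]
    X_train, X_test = feat[sessions < 3], feat[sessions == 3]  # e.g. hold out session 3
    y_train, y_test = Y[sessions < 3, 0], Y[sessions == 3, 0]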

    Additional information

    For additional information regarding the creation of the BED dataset, please refer to the associated publication: Arnau-González, P., Katsigiannis, S., Arevalillo-Herráez, M., Ramzan, N., "BED: A new dataset for EEG-based biometrics", IEEE Internet of Things Journal, vol. 8, no. 15, pp. 12219 - 12230, 2021.

  2. Data_Sheet_1_On the Minimal Amount of EEG Data Required for Learning Distinctive Human Features for Task-Dependent Biometric Applications

    • frontiersin.figshare.com
    pdf
    Updated Jun 6, 2023
    Cite
    Carlos Gómez-Tapia; Bojan Bozic; Luca Longo (2023). Data_Sheet_1_On the Minimal Amount of EEG Data Required for Learning Distinctive Human Features for Task-Dependent Biometric Applications.pdf [Dataset]. http://doi.org/10.3389/fninf.2022.844667.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 6, 2023
    Dataset provided by
    Frontiers
    Authors
    Carlos Gómez-Tapia; Bojan Bozic; Luca Longo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Biometrics is the process of measuring and analyzing human characteristics to verify a given person's identity. Most real-world applications rely on unique human traits such as fingerprints or iris. However, among these unique human characteristics for biometrics, the use of Electroencephalogram (EEG) stands out given its high inter-subject variability. Recent advances in Deep Learning and a deeper understanding of EEG processing methods have led to the development of models that accurately discriminate unique individuals. However, it is still uncertain how much EEG data is required to train such models. This work aims at determining the minimal amount of training data required to develop a robust EEG-based biometric model (+95% and +99% testing accuracies) from a subject for a task-dependent application. This goal is achieved by performing and analyzing 11,780 combinations of training sizes, employing various neural network-based learning techniques of increasing complexity and feature extraction methods on the affective EEG-based DEAP dataset. Findings suggest that if Power Spectral Density or Wavelet Energy features are extracted from the artifact-free EEG signal, 1 and 3 s of data per subject are enough to achieve +95% and +99% accuracy, respectively. These findings contribute to the body of knowledge by paving the way for the application of EEG to real-world ecological biometric applications and by demonstrating methods to determine the minimal amount of data required for such applications.
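
    For context, band-power features of the kind mentioned in the abstract could be computed roughly as follows (a sketch only: the 128 Hz rate and 32 channels match DEAP's preprocessed release, but the band edges, windowing, and the use of Welch's method are assumptions, not necessarily the paper's exact configuration):

    # Sketch of PSD band-power feature extraction from a short EEG window.
    import numpy as np
    from scipy.signal import welch

    FS = 128  # sampling rate of DEAP's preprocessed signals
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

    def psd_band_features(eeg_window):
        """eeg_window: (n_channels, n_samples) artifact-free EEG segment."""
        freqs, psd = welch(eeg_window, fs=FS, nperseg=min(FS, eeg_window.shape[-1]))
        feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)  # mean band power
                 for lo, hi in BANDS.values()]
        return np.concatenate(feats)  # n_channels * n_bands features

    # Hypothetical 1 s window from one DEAP subject (32 channels):
    # x = psd_band_features(np.random.randn(32, FS))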

  3. figure and table from Capturing synchronization with complexity measure of ordinal pattern transition network constructed by crossplot

    • rs.figshare.com
    application/x-rar
    Updated Feb 28, 2024
    + more versions
    Cite
    Xiaobi Chen; Guanghua Xu; Bo He; Sicong Zhang; Zijun Su; Yaguang Jia; Xun Zhang; Zhe Zhao (2024). figure and table from Capturing synchronization with complexity measure of ordinal pattern transition network constructed by crossplot [Dataset]. http://doi.org/10.6084/m9.figshare.23531525.v1
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    Feb 28, 2024
    Dataset provided by
    The Royal Society
    Authors
    Xiaobi Chen; Guanghua Xu; Bo He; Sicong Zhang; Zijun Su; Yaguang Jia; Xun Zhang; Zhe Zhao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Evaluating the synchronization of bivariate time series has been a hot topic, and a number of measures have been proposed. In this work, by introducing the ordinal pattern transition network into the crossplot, a new method for measuring the synchronization of bivariate time series is proposed. After the crossplot has been partitioned and coded, the coded partitions are defined as network nodes, and a directed weighted network is constructed based on the temporal adjacency of the nodes. The crossplot transition entropy of the network is proposed as an indicator of the synchronization between two time series. To test the characteristics and performance of the method, it is used to analyse the unidirectionally coupled Lorenz model and compared with existing methods. The results show that the new method has the advantages of easy parameter setting, efficiency, robustness, good consistency, and suitability for short time series. Finally, EEG data from the auditory evoked potential EEG-Biometric dataset are investigated, and some useful and interesting results are obtained.
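
    To make the construction concrete, a rough sketch of the idea could look as follows (the quantile grid partition, bin count, and entropy normalisation are assumptions for illustration, not the authors' exact algorithm):

    # Sketch: code crossplot points by grid cell, treat temporally adjacent
    # codes as directed weighted edges, and take the Shannon entropy of the
    # resulting transition distribution.
    import numpy as np

    def crossplot_transition_entropy(x, y, n_bins=4):
        x, y = np.asarray(x, float), np.asarray(y, float)
        # Partition each axis by quantiles so cells are comparably populated.
        ex = np.quantile(x, np.linspace(0, 1, n_bins + 1))[1:-1]
        ey = np.quantile(y, np.linspace(0, 1, n_bins + 1))[1:-1]
        codes = np.digitize(x, ex) * n_bins + np.digitize(y, ey)  # cell per sample
        pairs = codes[:-1] * (n_bins ** 2) + codes[1:]  # directed transitions
        _, counts = np.unique(pairs, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()  # Shannon entropy of edge weights

    # Synchronized series visit few cells and transitions, so their entropy is
    # lower than that of independent noise:
    # rng = np.random.default_rng(0)
    # a = rng.standard_normal(5000)
    # print(crossplot_transition_entropy(a, a),
    #       crossplot_transition_entropy(a, rng.standard_normal(5000)))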
