100+ datasets found
  1. EEG driver fatigue detection

    • kaggle.com
    Updated Apr 2, 2023
    Cite
    Jiacheng Xu (2023). EEG driver fatigue detection [Dataset]. https://www.kaggle.com/datasets/jcxuitsme/eeg-driver-fatigue-detection
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 2, 2023
    Dataset provided by
    Kaggle
    Authors
    Jiacheng Xu
    Description

    The original EEG data for driver fatigue detection (DOI: https://doi.org/10.6084/m9.figshare.5202739.v1). This is the original EEG data of twelve healthy subjects for driver fatigue detection. To protect personal privacy, a digit code identifies each participant. The .cnt files were created by a 40-channel Neuroscan amplifier and include the EEG data recorded in two states during driving.

    1. Materials: The data that support the findings of this study are openly available at https://figshare.com/articles/dataset/The_original_EEG_data_for_driver_fatigue_detection/5202739, provided by Jianliang Min et al. [51]. The data were acquired using a static driving simulator in a controlled lab environment at Jiangxi University of Technology, China [52].
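
    Since the recordings ship as Neuroscan .cnt files, one convenient way to inspect them is MNE-Python's .cnt reader. A minimal sketch, assuming MNE is installed; the file name is a placeholder for one of the anonymized files in the archive:

    import mne

    # "1.cnt" is a hypothetical file name; substitute a real file from the archive
    raw = mne.io.read_raw_cnt("1.cnt", preload=True)
    print(raw.info)          # channel names, sampling rate, recording metadata
    data, times = raw[:, :]  # data: (n_channels, n_samples); times in seconds
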
  2. Epilepsy Dataset

    • kaggle.com
    Updated Feb 21, 2025
    Cite
    DatasetEngineer (2025). Epilepsy Dataset [Dataset]. http://doi.org/10.34740/kaggle/dsv/10816261
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 21, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    DatasetEngineer
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    Epilepsy Detection Dataset for Federated Deep Learning

    Overview

    This dataset contains comprehensive EEG-derived features collected for the purpose of developing federated deep learning models aimed at epilepsy detection. The dataset comprises 289,010 records, each representing an EEG recording segment annotated with various time-domain, frequency-domain, wavelet-transform, nonlinear, seizure-specific, and demographic features.

    The primary objective of this dataset is to facilitate research in real-time epilepsy detection while ensuring data privacy through federated learning techniques. The dataset includes both multi-class labels (Normal, Pre-Seizure, Seizure, Post-Seizure) and seizure type classifications (Normal, Generalized Seizure, Focal Seizure).

    Data Collection Details

    The data was collected in real-time from multiple clinical EEG recording systems across diverse hospitals. Signals were recorded at standard sampling rates with consistent preprocessing protocols to ensure data uniformity and high-quality feature extraction. Subjects include patients aged between 1 and 90 years, ensuring a broad demographic representation.

    Features Description

    The dataset includes 75 features categorized into six main groups:

    1. Time-Domain Features (15 features). These features capture statistical properties of the EEG signal in the time domain, providing valuable insights into the signal's amplitude variations and temporal patterns.

    - Mean_EEG_Amplitude: Average amplitude across EEG segments.
    - EEG_Std_Dev: Standard deviation of the EEG signal, reflecting variability.
    - EEG_Skewness: Measures asymmetry of the EEG signal distribution.
    - EEG_Kurtosis: Degree of peakedness or flatness in the EEG signal.
    - Zero_Crossing_Rate: Frequency of signal sign changes.
    - Root_Mean_Square: Signal energy magnitude indicator.
    - Peak_to_Peak_Amplitude: Difference between maximum and minimum amplitude.
    - Signal_Energy: Energy content of EEG segments.
    - Variance_of_EEG_Signals: Variability of signal amplitude.
    - Interquartile_Range: Range between the 25th and 75th percentile amplitudes.
    - Auto_Correlation_of_EEG_Signals: Similarity between signal values at different lags.
    - Cross_Correlation_Between_Channels: Measures inter-channel dependencies.
    - Hjorth_Mobility: Frequency-dependent signal descriptor.
    - Hjorth_Complexity: Complexity of EEG waveform changes.
    - Line_Length_Feature: Cumulative length of the EEG waveform trajectory.

    2. Frequency-Domain Features (10 features). Frequency features highlight spectral content and distribution, essential for capturing seizure-related oscillations.

    - Delta_Band_Power: Power within the delta frequency range.
    - Theta_Band_Power: Theta band power variations.
    - Alpha_Band_Power: EEG activity in the alpha band.
    - Beta_Band_Power: Beta frequency energy (notable for cognitive activity).
    - Gamma_Band_Power: High-frequency brain activity measures.
    - Low_to_High_Frequency_Power_Ratio: Indicator of frequency band shifts during seizures.
    - Power_Spectral_Density: Power distribution across frequencies.
    - Spectral_Edge_Frequency: Frequency below which a certain percentage of power is contained.
    - Spectral_Entropy: Signal complexity in the frequency domain.
    - Fourier_Transform_Features: Global frequency representation through Fourier analysis.

    3. Wavelet Transform Features (5 features). Wavelet-based features capture transient events and non-stationary patterns in the EEG signal.

    - Wavelet_Entropy: Information content using wavelet decomposition.
    - Wavelet_Energy: Energy derived from wavelet coefficients.
    - Discrete_Wavelet_Transform: Detailed frequency analysis at different scales.
    - Continuous_Wavelet_Transform: Continuous frequency-time representation.
    - Wavelet_Based_Shannon_Entropy: Entropy-based wavelet feature.

    4. Nonlinear Features (10 features). Nonlinear measures provide insights into the dynamic and chaotic nature of EEG signals.

    - Sample_Entropy: Complexity and unpredictability of the signal.
    - Approximate_Entropy: Regularity of signal fluctuations.
    - Shannon_Entropy: Signal randomness indicator.
    - Permutation_Entropy: Complexity through sequence ordering.
    - Lyapunov_Exponent: Sensitivity to initial conditions (chaotic behavior).
    - Hurst_Exponent: Long-term memory effect measurement.
    - Detrended_Fluctuation_Analysis: Scale-dependent correlations.
    - Higuchi_Fractal_Dimension: Signal complexity measure using fractal geometry.
    - Katz_Fractal_Dimension: Alternative fractal dimension metric.
    - Lempel_Ziv_Complexity: Signal compressibility and complexity measure.

    5. Seizure-Specific Features (6 features). Features tailored to capture seizure onset, duration, and recovery patterns.

    - Seizure_Duration: Duration of seizure episodes (in seconds).
    - Pre_Seizure_Pattern: Indicators preceding seizure onset.
    - Post_Seizure_Recovery: Recovery patterns after seizure termination.
    - Seizure_Frequency_Per_Hour: Number of seizures occurring per hour.
    - Interictal_Spike_Rate: Frequency of spikes between se...
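
    Many of the time-domain columns above correspond to simple NumPy/SciPy expressions. Below is a minimal sketch recomputing several of them from a 1-D EEG segment; the dataset's exact definitions are not documented here, so treat these formulas as assumptions:

    import numpy as np
    from scipy.stats import skew, kurtosis

    def time_domain_features(x):
        diff1 = np.diff(x)                     # first derivative
        diff2 = np.diff(diff1)                 # second derivative
        var0, var1, var2 = np.var(x), np.var(diff1), np.var(diff2)
        mobility = np.sqrt(var1 / var0)        # Hjorth mobility
        return {
            "Mean_EEG_Amplitude": np.mean(x),
            "EEG_Std_Dev": np.std(x),
            "EEG_Skewness": skew(x),
            "EEG_Kurtosis": kurtosis(x),
            "Zero_Crossing_Rate": np.mean(np.diff(np.sign(x)) != 0),
            "Root_Mean_Square": np.sqrt(np.mean(x ** 2)),
            "Peak_to_Peak_Amplitude": np.ptp(x),
            "Signal_Energy": np.sum(x ** 2),
            "Line_Length_Feature": np.sum(np.abs(diff1)),
            "Hjorth_Mobility": mobility,
            "Hjorth_Complexity": np.sqrt(var2 / var1) / mobility,
        }

    print(time_domain_features(np.random.randn(1000)))  # dummy 1-D segment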

  3. Hypnosis TMS-EEG dataset

    • osf.io
    Updated Feb 13, 2019
    Cite
    Jarno Tuominen; Henry Railo (2019). Hypnosis TMS-EEG dataset [Dataset]. http://doi.org/10.17605/OSF.IO/E2PKT
    Explore at:
    Dataset updated
    Feb 13, 2019
    Dataset provided by
    Center For Open Science
    Authors
    Jarno Tuominen; Henry Railo
    Description

    No description was included in this Dataset collected from the OSF

  4. Features-EEG dataset

    • researchdata.edu.au
    • openneuro.org
    Updated Jun 29, 2023
    Cite
    Grootswagers Tijl; Tijl Grootswagers (2023). Features-EEG dataset [Dataset]. http://doi.org/10.18112/OPENNEURO.DS004357.V1.0.0
    Explore at:
    Dataset updated
    Jun 29, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Western Sydney University
    Authors
    Grootswagers Tijl; Tijl Grootswagers
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0 (http://www.opendatacommons.org/licenses/pddl/1.0/)
    License information was derived automatically

    Description

    Experiment Details: Electroencephalography recordings from 16 subjects viewing fast streams of Gabor-like stimuli. Images were presented in rapid serial visual presentation streams at 6.67 Hz and 20 Hz rates. Participants performed an orthogonal fixation colour change detection task.

    Experiment length: 1 hour. Raw and preprocessed data are available online through OpenNeuro: https://openneuro.org/datasets/ds004357. Supplementary material and analysis scripts are available on GitHub: https://github.com/Tijl/features-eeg
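
    One way to fetch the raw data programmatically is the openneuro-py client (pip install openneuro-py); the dataset ID comes from the OpenNeuro URL above, and the call below is a sketch assuming that package:

    import openneuro

    # downloads ds004357 into a local folder of the same name
    openneuro.download(dataset="ds004357", target_dir="ds004357")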

  5. CHB-MIT Dataset

    • paperswithcode.com
    Updated Aug 2, 2016
    + more versions
    Cite
    (2016). CHB-MIT Dataset [Dataset]. https://paperswithcode.com/dataset/chb-mit
    Explore at:
    Dataset updated
    Aug 2, 2016
    Description

    The CHB-MIT dataset contains EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. The dataset covers 23 patients across 24 cases (one patient has two recordings, 1.5 years apart) and consists of 969 hours of scalp EEG recordings with 173 seizures of various types (clonic, atonic, tonic). The diversity of patients (male and female, 10-22 years old) and seizure types makes the dataset well suited for assessing the performance of automatic seizure detection methods in realistic settings.
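
    The CHB-MIT recordings are distributed as EDF files on PhysioNet, so they can be read directly with MNE-Python. A minimal sketch; the file name below is illustrative:

    import mne

    # chb01_03.edf is one example recording from the chb01 case folder
    raw = mne.io.read_raw_edf("chb01/chb01_03.edf", preload=True)
    print(raw.info["sfreq"], len(raw.ch_names))  # sampling rate and channel count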

  6. EEG Dataset for 'Immediate effects of short-term meditation on sensorimotor...

    • figshare.com
    pdf
    Updated Dec 9, 2022
    Cite
    Jeehyun Kim; Xiyuan Jiang; Dylan Forenzo; Bin He (2022). EEG Dataset for 'Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance' [Dataset]. http://doi.org/10.6084/m9.figshare.21644429.v5
    Explore at:
    Available download formats: pdf
    Dataset updated
    Dec 9, 2022
    Dataset provided by
    figshare
    Authors
    Jeehyun Kim; Xiyuan Jiang; Dylan Forenzo; Bin He
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This database includes the de-identified EEG data from 37 healthy individuals who participated in a brain-computer interface (BCI) study. All but one subject underwent 2 sessions of BCI experiments that involved controlling a computer cursor to move in one-dimensional space using their “intent”. EEG data were recorded with 62 electrodes. In addition to the EEG data, behavioral data including the online success rate and results of BCI cursor control are also included.

    This dataset was collected under support from the National Institutes of Health via grant AT009263 to Dr. Bin He. Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu

    This dataset has been used and analyzed to study the immediate effect of short meditation on BCI performance. The results are reported in: Kim et al., “Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance,” Frontiers in Human Neuroscience, 2022 (https://doi.org/10.3389/fnhum.2022.1019279). Please cite this paper if you use any data included in this dataset.

  7. EEG Dataset

    • kaggle.com
    zip
    Updated Apr 20, 2023
    Cite
    Saksham Dwivedi (2023). EEG Dataset [Dataset]. https://www.kaggle.com/datasets/botsaksham/eeg-dataset/data
    Explore at:
    Available download formats: zip (20363666 bytes)
    Dataset updated
    Apr 20, 2023
    Authors
    Saksham Dwivedi
    Description

    Dataset

    This dataset was created by Saksham Dwivedi

    Contents

  8. MAMEM EEG SSVEP Dataset I (256 channels, 11 subjects, 5 frequencies...

    • figshare.com
    • zenodo.org
    application/x-rar
    Updated May 30, 2023
    + more versions
    Cite
    Spiros Nikolopoulos (2023). MAMEM EEG SSVEP Dataset I (256 channels, 11 subjects, 5 frequencies presented in isolation) [Dataset]. http://doi.org/10.6084/m9.figshare.2068677.v6
    Explore at:
    Available download formats: application/x-rar
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Spiros Nikolopoulos
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    EEG signals with 256 channels captured from 11 subjects executing an SSVEP-based experimental protocol. Five different frequencies (6.66, 7.50, 8.57, 10.00 and 12.00 Hz) were used for the visual stimulation, and an EGI 300 Geodesic EEG System (GES 300) with a 256-channel HydroCel Geodesic Sensor Net (HCGSN) at a sampling rate of 250 Hz was used for capturing the signals. Check https://www.youtube.com/watch?v=8lGBVvCX5d8 for a video demonstrating one trial. Check https://github.com/MAMEM/ssvep-eeg-processing-toolbox for the processing toolbox. Check http://arxiv.org/abs/1602.00904 for the technical report.
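
    A common sanity check on SSVEP data is to verify spectral peaks at the stimulation frequencies. A minimal sketch with SciPy's Welch estimator, assuming the signals have been loaded into an (n_channels, n_samples) array at 250 Hz (random data stands in here):

    import numpy as np
    from scipy.signal import welch

    fs = 250.0
    stim_freqs = [6.66, 7.50, 8.57, 10.00, 12.00]

    eeg = np.random.randn(256, 10 * int(fs))           # placeholder for real data
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    for f in stim_freqs:
        idx = np.argmin(np.abs(freqs - f))             # nearest frequency bin
        print(f"{f:5.2f} Hz -> mean PSD {psd[:, idx].mean():.3e}")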

  9. BMI/OpenBMI dataset for MI

    • paperswithcode.com
    Updated Jan 7, 2019
    Cite
    (2019). BMI/OpenBMI dataset for MI. Dataset [Dataset]. https://paperswithcode.com/dataset/bmi-openbmi-dataset-for-mi
    Explore at:
    Dataset updated
    Jan 7, 2019
    Description

    BMI/OpenBMI dataset for MI.

    Dataset from Lee et al 2019 [1].

    Dataset Description

    EEG signals were recorded with a sampling rate of 1,000 Hz and collected with 62 Ag/AgCl electrodes. The EEG amplifier used in the experiment was a BrainAmp (Brain Products; Munich, Germany). The channels were nasion-referenced and grounded to electrode AFz. Additionally, an EMG electrode recorded from each flexor digitorum profundus muscle, with the olecranon used as reference. The EEG/EMG channel configuration and indexing numbers are described in Fig. 1. The impedances of the EEG electrodes were maintained below 10 kΩ during the entire experiment.

    MI paradigm

    The MI paradigm was designed following a well-established system protocol. For all blocks, the first 3 s of each trial began with a black fixation cross that appeared at the center of the monitor to prepare subjects for the MI task. Afterwards, the subject performed the imagery task of grasping with the appropriate hand for 4 s when the right or left arrow appeared as a visual cue. After each task, the screen remained blank for 6 s (± 1.5 s). The experiment consisted of training and test phases; each phase had 100 trials with balanced right- and left-hand imagery tasks. During the online test phase, the fixation cross appeared at the center of the monitor and moved right or left according to the real-time classifier output of the EEG signal.
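
    A minimal epoching sketch for this trial structure with MNE-Python, using synthetic data in place of the real recordings; the event samples and codes below are illustrative, not the dataset's actual markers:

    import numpy as np
    import mne

    # synthetic stand-in for the 62-channel, 1,000 Hz recordings described above
    info = mne.create_info(ch_names=62, sfreq=1000.0, ch_types="eeg")
    raw = mne.io.RawArray(np.random.randn(62, 60_000), info)

    # one row per cue onset: [sample, 0, code]; the code mapping is assumed
    events = np.array([[5_000, 0, 1], [20_000, 0, 2], [35_000, 0, 1]])
    epochs = mne.Epochs(raw, events, event_id={"left_hand": 1, "right_hand": 2},
                        tmin=0.0, tmax=4.0,  # the 4 s imagery window after the cue
                        baseline=None, preload=True)
    print(epochs)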

    References [1] Lee, M. H., Kwon, O. Y., Kim, Y. J., Kim, H. K., Lee, Y. E., Williamson, J., … Lee, S. W. (2019). EEG dataset and OpenBMI toolbox for three BCI paradigms: An investigation into BCI illiteracy. GigaScience, 8(5), 1–16. https://doi.org/10.1093/gigascience/giz002

  10. Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy...

    • openneuro.org
    Updated Sep 20, 2023
    + more versions
    Cite
    Vasileios Dimakopoulos; Lennart Stieglitz; Lukas Imbach; Johannes Sarnthein (2023). Dataset of intracranial EEG, scalp EEG and beamforming sources from epilepsy patients performing a verbal working memory task [Dataset]. http://doi.org/10.18112/openneuro.ds004752.v1.0.1
    Explore at:
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Vasileios Dimakopoulos; Lennart Stieglitz; Lukas Imbach; Johannes Sarnthein
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Dataset of intracranial EEG, scalp EEG and beamforming sources from human epilepsy patients performing a verbal working memory task

    Description

    We present an electrophysiological dataset recorded from fifteen subjects during a verbal working memory task. Subjects were epilepsy patients undergoing intracranial monitoring for localization of epileptic seizures. Subjects performed a modified Sternberg task in which the encoding of memory items, maintenance, and recall were temporally separated. The dataset includes simultaneously recorded scalp EEG with the 10-20 system, intracranial EEG (iEEG) recorded with depth electrodes, waveforms, and the MNI coordinates and anatomical labels of all intracranial electrodes. The dataset also includes reconstructed virtual sensor data created by performing LCMV beamforming on the EEG at specific brain regions, including the superior temporal lobe, lateral prefrontal cortex, occipital cortex, posterior parietal cortex, and Broca's area. Subject characteristics and information on sessions (set size, match/mismatch, correct/incorrect, response, response time for each trial) are also provided. This dataset enables the investigation of working memory by providing simultaneous scalp EEG and iEEG recordings, which can be used for connectivity analysis, alongside reconstructed beamforming EEG sources that can enable further cognitive analyses such as replay of memory items.

    Repository structure

    Main directory (verbal WM)

    Contains metadata files in the BIDS standard about the participants and the study. Folders are explained below.

    Subfolders

    • verbalWM/sub-/: Contains folders for each subject, named sub- and session information.
    • verbalWM/sub-/ses-/ieeg/: Contains the raw iEEG data in .edf format for each subject. Each subject performed more than 1 working memory session (ses-0x) each of which includes ~50 trials. Each *ieeg.edf file contains continuous iEEG data during the working memory task. Details about the channels are given in the corresponding .tsv file. We also provide the information on the trial start and end in the events.tsv files by specifying the start and end sample of each trial.
    • verbalWM/sub-/ses-/eeg/: Contains the raw scalp EEG data in .edf format for each subject. Each subject performed more than 1 working memory session (ses-0x), each of which includes ~50 trials. Each *eeg.edf file contains continuous EEG data during the working memory task. Details about the channels are given in the corresponding .tsv file. The start and end sample of each trial are likewise specified in the events.tsv files (see the slicing sketch after this list).
    • verbalWM/derivatives/sub-/: Contains the LCMV beamforming sources during encoding and maintenance. The beamforming sources are in the form of virtual EEG sensors, each of which corresponds to a specific brain region. The naming convention used for the virtual sensors is: DLPFC, dorsolateral prefrontal cortex; OFC, orbitofrontal cortex; PPC, posterior parietal cortex; AC, auditory cortex; V1, primary visual cortex.
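
    A minimal sketch for slicing one session's trials using the start and end samples recorded in the events.tsv files, as referenced in the list above; the path and column names are assumptions to be checked against the actual files:

    import mne
    import pandas as pd

    raw = mne.io.read_raw_edf("sub-01/ses-01/ieeg/sub-01_ses-01_ieeg.edf",
                              preload=True)   # illustrative path
    events = pd.read_csv("sub-01/ses-01/ieeg/sub-01_ses-01_events.tsv", sep="\t")

    for _, ev in events.iterrows():
        # "start_sample"/"end_sample" are assumed column names for the trial bounds
        trial = raw.get_data(start=int(ev["start_sample"]),
                             stop=int(ev["end_sample"]))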

    BIDS Conversion

    The bids-starter-kit and custom Matlab scripts were used to convert the dataset into BIDS format.

    References

    [1] Dimakopoulos V, Megevand P, Stieglitz LH, Imbach L, Sarnthein J. Information flows from hippocampus to auditory cortex during replay of verbal working memory items. Elife 2022;11. 10.7554/eLife.78677

    [2] Boran E, Fedele T, Klaver P, Hilfiker P, Stieglitz L, Grunwald T, et al. Persistent hippocampal neural firing and hippocampal-cortical coupling predict verbal working memory load. Science Advances 2019;5(3):eaav3687. 10.1126/sciadv.aav3687

    [3] Boran E, Fedele T, Steiner A, Hilfiker P, Stieglitz L, Grunwald T, et al. Dataset of human medial temporal lobe neurons, scalp and intracranial EEG during a verbal working memory task. Scientific Data 2020;7(1):30. 10.1038/s41597-020-0364-3

  11. MindBigData2023_MNIST-2B

    • kaggle.com
    • huggingface.co
    Updated Jul 22, 2024
    Cite
    Rakesh Chandrashekhar Biswas (2024). MindBigData2023_MNIST-2B [Dataset]. https://www.kaggle.com/datasets/rakeshb4r/bigmind
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Rakesh Chandrashekhar Biswas
    Description

    MindBigData 2023 MNIST-2B is a reduced subset of MindBigData 2023 MNIST-8B, https://huggingface.co/datasets/DavidVivancos/MindBigData2023_MNIST-8B (June 1st 2023), a brain-signals open dataset created for machine learning. It is based on EEG signals from a single subject, captured using a custom 128-channel device, replicating the full 70,000 digits of Yann LeCun et al.'s MNIST dataset. The brain signals were captured while the subject was watching the pixels of the original digits one by one on a screen and simultaneously listening to the spoken number, 0 to 9, from the real label.

    Supporting dataset for paper https://arxiv.org/abs/2306.00455

    The dataset contains 140,000 records from 128 EEG channels, each of 256 samples (a bit more than 1 second) recorded at 250 Hz. (Relative to the original 8-billion-datapoint dataset, the EEG signals were reduced from 500 samples to 256 samples.)

    It consists of 2 main CSV data files:

    - "train.csv" (23.1 GB): header + 120,000 rows, 33,559 columns
    - "test.csv" (3.87 GB): header + 20,000 rows, 33,559 columns

    10 audio files in a folder named "audiolabels": "0.wav", "1.wav", ..., "9.wav"

    And 1 CSV file with the 3D coordinates of the EEG electrodes: "3Dcoords.csv" (4.27 KB): header + 130 rows, 4 columns
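
    At 23.1 GB, train.csv will not fit comfortably in memory on most machines; a chunked pandas read is one practical way to stream it:

    import pandas as pd

    for chunk in pd.read_csv("train.csv", chunksize=1_000):
        print(chunk.shape)  # up to 1,000 rows x 33,559 columns per chunk
        break               # remove this to stream the whole file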

    Dataset Structure: review the supporting paper https://arxiv.org/abs/2306.00455

    Data Fields: review the supporting paper https://arxiv.org/abs/2306.00455

    Citation:

    @article{MindBigData_2023_MNIST-8B,
      title={MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of Brain Signals},
      author={David Vivancos},
      journal={arXiv preprint arXiv:2306.00455},
      year={2023}
    }

  12. STEW Dataset

    • paperswithcode.com
    Updated Jul 10, 2018
    + more versions
    Cite
    (2018). STEW Dataset [Dataset]. https://paperswithcode.com/dataset/stew-dataset
    Explore at:
    Dataset updated
    Jul 10, 2018
    Description

    This dataset consists of raw EEG data from 48 subjects who participated in a multitasking workload experiment using the SIMKAP multitasking test. The subjects' brain activity at rest was also recorded before the test and is included as well. The Emotiv EPOC device, with a sampling frequency of 128 Hz and 14 channels, was used to obtain the data, with 2.5 minutes of EEG recorded for each case. Subjects were also asked to rate their perceived mental workload after each stage on a scale of 1 to 9; the ratings are provided in a separate file.

  13. EEG dataset

    • figshare.com
    bin
    Updated Dec 6, 2019
    Cite
    minho lee (2019). EEG dataset [Dataset]. http://doi.org/10.6084/m9.figshare.8091242.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 6, 2019
    Dataset provided by
    figshare
    Authors
    minho lee
    License

    GNU General Public License (https://www.gnu.org/copyleft/gpl.html)

    Description

    This dataset was collected for the study "Robust Detection of Event-Related Potentials in a User-Voluntary Short-Term Imagery Task".

  14. A multi-subject and multi-session EEG dataset for modelling human visual...

    • openneuro.org
    Updated Oct 26, 2024
    Cite
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu (2024). A multi-subject and multi-session EEG dataset for modelling human visual object recognition [Dataset]. http://doi.org/10.18112/openneuro.ds005589.v1.0.1
    Explore at:
    Dataset updated
    Oct 26, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    We share a multi-subject and multi-session (MSS) dataset with 122-channel electroencephalographic (EEG) signals collected from 32 human participants. The data were obtained during serial visual presentation experiments in two paradigms. The dataset for the first paradigm consists of around 800,000 trials presenting stimulus sequences at 5 Hz; the dataset for the second paradigm comprises around 40,000 trials displaying each image for 1 second. Each participant completed between 1 and 5 sessions on different days, and each session lasted approximately 1.5 hours of EEG recording. The stimulus set used in the experiments included 10,000 images, with 500 images per class, manually selected from the ImageNet and PASCAL image databases. The MSS dataset can be useful for various studies, including but not limited to (1) exploring the characteristics of EEG visual responses, (2) comparing the differences in EEG responses across visual paradigms, and (3) designing machine learning algorithms for cross-subject and cross-session brain-computer interfaces (BCIs) using EEG data from multiple subjects and sessions.

  15. Bipolar EEG dataset - music

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 28, 2021
    Cite
    Octave Etard; Octave Etard; Rémy Ben Messaoud; Gabriel Gaugain; Tobias Reichenbach; Tobias Reichenbach; Rémy Ben Messaoud; Gabriel Gaugain (2021). Bipolar EEG dataset - music [Dataset]. http://doi.org/10.5281/zenodo.4470135
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 28, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Octave Etard; Octave Etard; Rémy Ben Messaoud; Gabriel Gaugain; Tobias Reichenbach; Tobias Reichenbach; Rémy Ben Messaoud; Gabriel Gaugain
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset sets out to investigate neural responses to continuous musical pieces with bipolar EEG. Analysis code (and usage instructions) to derive neural responses to the temporal fine structure of the stimuli is on GitHub. The EEG data processed to this end is provided here, as well as the raw data, to enable different analyses (e.g. slower cortical responses).

    # Introduction

    This dataset contains bipolar scalp EEG responses of 17 subjects listening to continuous musical pieces (Bach's Two-Part Inventions), and performing a vibrato detection task.

    Naming conventions:

    - The subject IDs are EBIP01, EBIP02 ... EBIP17.
    - The different conditions are labelled to indicate the instrument that was being attended: fG and fP for the Guitar and Piano in quiet (Single Instrument (SI) conditions), respectively; and fGc and fPc for Competing conditions where both the instruments are playing together, but where the subjects should be selectively attending to the Guitar or Piano, respectively (Competing Instrument (CI) conditions).
    - An appended index from 2 to 7 designates the invention that was played (index 1 corresponds to the training block, for which no EEG data was recorded). Note that this index does not necessarily correspond to the order in which the stimuli were played (the order was pseudo-randomised).

    For example, the EEG file named EBIP08_fGc_4 contains EEG data from subject EBIP08 performing the competing instrument task (CI condition), attending to the guitar (ignoring the piano), while invention #4 was played.
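
    The naming convention is regular enough to parse mechanically. A small helper written from the description above (the pattern is an assumption, not taken from the dataset's own code):

    import re

    NAME = re.compile(r"^(EBIP\d{2})_f([GP])(c?)_([2-7])$")

    def parse_name(stem):
        m = NAME.match(stem)
        if not m:
            raise ValueError(f"unexpected file name: {stem}")
        subject, instrument, competing, invention = m.groups()
        return {
            "subject": subject,
            "attended": "guitar" if instrument == "G" else "piano",
            "condition": "CI" if competing else "SI",
            "invention": int(invention),
        }

    print(parse_name("EBIP08_fGc_4"))
    # {'subject': 'EBIP08', 'attended': 'guitar', 'condition': 'CI', 'invention': 4}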

    # Content

    The general organisation of the dataset is as follows:

    data
    ├─── behav              folder containing the behavioural data
    ├─── EEG            folder containing the EEG data
    │   ├─── processed
    │   └─── raw
    ├─── linearModelResults       folder containing the results from the analysis code
    └─── stimuli folder containing the stimuli
         ├─── features
         ├─── processedInventions
         └─── rawInventions

    This general organisation is the one expected by the code. The location of the data folder and/or these main folders can be personalised in the functions/+EEGmusic2020/getPath.m function in the GitHub repository. The architecture of the sub-folders in each of these folders is specified by the functions makePathEEGFolder, makePathFeatureFiles and makePathSaveResults. The naming of the files within them is implemented by makeNameEEGDataFile (all these functions are in functions/+EEGmusic2020).

    - The behav folder is structured as follows:

    behav
    ├─── EBIP02
    │ ├─── EBIP02_keyboardInputs_fGc_2.mat file containing variables:
    │ │ ├─── timePressed key press time (in seconds, relative to stimulus onset)
    │ │ └─── keyCode ID of the keys that were pressed
    │ └─── ...
    ├─── ...
    ├─── vibTime
    │ ├─── vibTime_2.mat file containing variables:
    │ │ ├─── idxNoteVib index (in the MIDI files) of the notes in which vibratos were inserted
    │ │ ├─── instrumentOrder order of the instruments in idxNoteVib and vibTiming variables
    │ │ └─── vibTiming timing of vibrato onsets in the track (in s)
    │ ├─── ...
    └─── clickPerformance_RT_2.0.mat file containing behavioural results for all subjects (FPR, TPR, etc.):

    instrumentOrder indicates which instrument each column of the idxNoteVib and vibTiming variables refers to. The data for EBIP01 is missing due to a technical error.

    - The EEG/raw folder contains unprocessed EEG data for all subjects, and files indicating the order in which the inventions were played. It is structured as follows:

    EEG
    ├─── raw
    │ ├─── EBIP01
    │ │ ├─── EBIP01_EEGExpParam.mat file containing variables:
    │ │ │ ├─── conditionOrder whether this subject started by listening to the guitar or piano
    │ │ │ └─── partsOrder order in which the inventions were presented to this subject
    │ │ ├─── EBIP01_fGc_2.[eeg/vhdr/vmrk] raw EEG data files
    │ │ ├─── ...
    │ ├─── ...

    The conditionOrder variable can assume two values: either {'fG','fP'} indicating the subject started by listening to the guitar or {'fP','fG'} indicating the subject started by listening to the piano. The partsOrder variable is a 2 x 6 matrix containing the indices (2 to 7) of the inventions that were played, ordered in the presentation order. During the first block, the instrument conditionOrder{1} was attended, and the invention # partsOrder(1,1) was played. During the second block, the instrument conditionOrder{2} was attended, and the invention #partsOrder(2,1) was played, etc.

    Each EEG file contains 3 channels: 2 are the bipolar electrophysiological channels, and one (labelled Sound) contains a recording of the stimuli that were played, simultaneously recorded at the same sampling rate as the EEG data (5 kHz) by the amplifier through an acoustic adapter. The files also contain triggers that indicate the beginning and end of the stimuli (labelled S 1 and S 2, respectively). The sound channel and triggers can be used to temporally align the EEG data and stimuli.
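
    The .eeg/.vhdr/.vmrk triplets are standard BrainVision files, so MNE-Python can read them. A minimal sketch; the exact spelling of the trigger labels in these files should be checked against the printed event_id:

    import mne

    raw = mne.io.read_raw_brainvision("EBIP01/EBIP01_fGc_2.vhdr", preload=True)
    events, event_id = mne.events_from_annotations(raw)
    print(event_id)                     # look for the S 1 / S 2 start/end triggers
    sound = raw.copy().pick(["Sound"])  # the simultaneously recorded stimulus channel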

    The EEG/processed folder contains processed EEG data for all subjects, as required for the analyses carried out in the code. It is organised as follows:

    EEG
    ├─── processed
    │ └─── Fs-5000 sampling rate
    │ └─── HP-130 processing that was applied
    │ │ ├─── EBIP01
    │ │ │ ├─── ... processed EEG data files
    │ │ ├─── ...
    │ └─── noProc
    │ ├─── ...

    This structure is specified by the makePathEEGFolder function, and the file names by makeNameEEGDataFile. In the files in the noProc folder, the EEG data was simply aligned with the stimuli, but is otherwise unprocessed. Events were added to mark stimulus onset and offset (labelled stimBegin and stimEnd). In the other folders, the EEG data was furthermore high-pass filtered at 130 Hz (HP-130).

    - The linearModelResults folder contains the results from the linear model analyses:

    linearModelResults
    └─── Fs-5000 sampling rate
    │ └─── HP-130 processing of the EEG data
    │ │ └─── LP-2000 processing of the stimulus feature
    │ │ ├─── ... result files
    │ ├─── ...

    This structure and file names are specified by the makePathSaveResults function.

    - The rawInventions folder contains the original data that was used to construct the stimuli:

    rawInventions
    ├─── invent1 invention index
    │ ├─── invent1_60bpm.mid MIDI file
    │ ├─── invent1_60bpm_guitar.wav guitar track
    │ └─── invent1_60bpm_piano.wav piano track

    ├─── ...

    In this folder (and only in this folder), the numbering of the inventions differs from the one otherwise used throughout. The correspondence is as shown below:
    Raw invention # |

  16. EEG Motor Movement/Imagery Dataset

    • openneuro.org
    • opendatalab.com
    • +1more
    Updated Dec 15, 2022
    Cite
    Gerwin Schalk; Dennis J McFarland; Thilo Hinterberger; Niels Birbaumer; Jonathan R Wolpaw (2022). EEG Motor Movement/Imagery Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds004362.v1.0.0
    Explore at:
    Dataset updated
    Dec 15, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Gerwin Schalk; Dennis J McFarland; Thilo Hinterberger; Niels Birbaumer; Jonathan R Wolpaw
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Acknowledgements

    This data set was originally created and contributed to PhysioBank by Gerwin Schalk (schalk at wadsworth dot org) and his colleagues at the BCI R&D Program, Wadsworth Center, New York State Department of Health, Albany, NY. W.A. Sarnacki collected the data. Aditya Joshi compiled the dataset and prepared the documentation. D.J. McFarland and J.R. Wolpaw were responsible for experimental design and project oversight, respectively. This work was supported by grants from NIH/NIBIB (EB006356 (GS) and EB00856 (JRW and GS)).

    To access the initial publication of this dataset, please visit this link to PhysioBank: https://physionet.org/content/eegmmidb/1.0.0/

    Experiment Protocol

    This data set consists of over 1500 one- and two-minute EEG recordings, obtained from 109 volunteers, as described below.

    Subjects performed different motor/imagery tasks while 64-channel EEG was recorded using the BCI2000 system (http://www.bci2000.org). Each subject performed 14 experimental runs: two one-minute baseline runs (one with eyes open, one with eyes closed), and three two-minute runs of each of the four following tasks:

    [Task 1] A target appears on either the left or the right side of the screen. The subject opens and closes the corresponding fist until the target disappears. Then the subject relaxes.
    [Task 2] A target appears on either the left or the right side of the screen. The subject imagines opening and closing the corresponding fist until the target disappears. Then the subject relaxes.
    [Task 3] A target appears on either the top or the bottom of the screen. The subject opens and closes either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears. Then the subject relaxes.
    [Task 4] A target appears on either the top or the bottom of the screen. The subject imagines opening and closing either both fists (if the target is on top) or both feet (if the target is on the bottom) until the target disappears. Then the subject relaxes.

    In summary, the experimental runs were:
    1. Baseline, eyes open
    2. Baseline, eyes closed
    3. Task 1 (open and close left or right fist)
    4. Task 2 (imagine opening and closing left or right fist)
    5. Task 3 (open and close both fists or both feet)
    6. Task 4 (imagine opening and closing both fists or both feet)
    7. Task 1
    8. Task 2
    9. Task 3
    10. Task 4
    11. Task 1
    12. Task 2
    13. Task 3
    14. Task 4

    Each event code includes an event type indicator (T0, T1, or T2) concatenated to the task number it belongs with (e.g., TASK1T2). The meaning of the event type indicators depends on the associated task. For example, TASK1T2 corresponds to the onset of real motion of the right fist, while TASK3T2 corresponds to the onset of real motion of both feet:

    [T0] corresponds to rest

    [T1] corresponds to onset of motion (real or imagined) of:

    • the left fist (in runs 3, 4, 7, 8, 11, and 12; for Task 1 (real) and Task 2 (imagined))
    • both fists (in runs 5, 6, 9, 10, 13, and 14; for Task 3 (real) and Task 4 (imagined))

    [T2] corresponds to onset of motion (real or imagined) of:

    • the right fist (in runs 3, 4, 7, 8, 11, and 12; Task 1 (real) and Task 2 (imagined))
    • both feet (in runs 5, 6, 9, 10, 13, and 14; for Task 3 (real) and Task 4 (imagined))

    Note: The data files in this dataset were converted into the .set format for EEGLAB. The event codes in the .set files of this dataset will contain the concatenated event codes above for all event files for clarity purposes. The non-converted .edf files along with the accompanying PhysioBank-compatible annotation files for all the runs of each subject can be found in the sourcedata folder. In the non-converted .edf files the event codes will only be shown as T0, T1, and T2 regardless of task type. All the Matlab scripts used for the .set conversion and renaming of event codes of the PhysioBank .edf files can be found in the code folder.
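
    A minimal sketch for loading one of the converted .set files with MNE-Python and inspecting the concatenated event codes; the file path is illustrative:

    import mne

    raw = mne.io.read_raw_eeglab("sub-001/eeg/sub-001_run-3_eeg.set", preload=True)
    events, event_id = mne.events_from_annotations(raw)
    print(event_id)  # expect concatenated keys such as TASK1T0, TASK1T1, TASK1T2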

    Montage

    The EEGs were recorded from 64 electrodes as per

  17. Ultra high-density 255-channel EEG-AAD dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jun 13, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Bertrand, Alexander (2024). Ultra high-density 255-channel EEG-AAD dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4518753
    Explore at:
    Dataset updated
    Jun 13, 2024
    Dataset provided by
    Zink, Rob
    Bertrand, Alexander
    Mundanad Narayanan, Abhijith
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
    License information was derived automatically

    Description

    If using this dataset, please cite the following paper and the current Zenodo repository: A. Mundanad Narayanan, R. Zink, and A. Bertrand, "EEG miniaturization limits for stimulus decoding with EEG sensor networks", Journal of Neural Engineering, vol. 18, 2021, doi: 10.1088/1741-2552/ac2629

    Experiment

    This dataset contains 255-channel electroencephalography (EEG) data collected during an auditory attention decoding (AAD) experiment. The EEG was recorded using a SynAmps RT device (Compumedics, Australia) at a sampling rate of 1 kHz, using active Ag/AgCl electrodes. The electrodes were placed on the head according to the international 10-5 (5%) system. Thirty normal-hearing male subjects between 22 and 35 years old participated in the experiment. All of them signed an informed consent form approved by the KU Leuven ethical committee.

    Two Dutch stories, narrated by different male speakers and each divided into two parts of 6 minutes, were used as the stimuli in the experiment [1]. A single trial of the experiment involved the presentation of two such parts (one from each story) to the subject through insert earphones (Etymotic ER3A) at 60 dBA. These speech stimuli were filtered using a head-related transfer function (HRTF) such that the stories seemed to arrive from two distinct spatial locations, namely left and right with respect to the subject, with 180 degrees of separation. In each trial, the subjects were asked to attend to only one ear while ignoring the other. Four trials of 6 minutes each were carried out, in which each story part was used twice. The order of presentation was randomized and balanced over subjects. Thus, approximately 24 minutes of EEG data was recorded per subject.

    File organization and details

    The EEG data of each of the 30 subjects are uploaded as an archive named Sx.tar.gzip, where x = 0, 1, ..., 29. When an archive is extracted, the EEG data are in their original raw format as recorded by the CURRY software [2]. The data files of each recording consist of four files with the same name but different extensions, namely .dat, .dap, .rs3 and .ceo. The name of each file follows the convention Sx_AAD_P, with P taking one of the following values: 1L, 1R, 2L, 2R.

    The letter 'L' or 'R' in P indicates the attended direction of each subject in a recording: left or right, respectively. A MATLAB function to read these files is provided in the directory called scripts; a Python function to read them is available in the GitHub repository [3]. The original version of the stimuli presented to subjects, i.e. without the HRTF filtering, can be found in WAV format after extracting the stimuli.zip file. There are 4 WAV files corresponding to the two parts of each of the two stories, sampled at 44.1 kHz. The order of presentation of these WAV files is given in the table below (stimuli presentation and attention information):

    Trial (P)   Stimuli: Left ear    Stimuli: Right ear   Attention
    1L          part1_track1_dry     part1_track2_dry     Left
    1R          part1_track1_dry     part1_track2_dry     Right
    2L          part2_track2_dry     part2_track1_dry     Left
    2R          part2_track2_dry     part2_track1_dry     Right
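
    For scripting, the table above can be encoded as a small lookup; the values are copied directly from the table, and the file-name helper follows the Sx_AAD_P convention described earlier:

    # trial -> (left-ear stimulus, right-ear stimulus, attended direction)
    TRIALS = {
        "1L": ("part1_track1_dry", "part1_track2_dry", "left"),
        "1R": ("part1_track1_dry", "part1_track2_dry", "right"),
        "2L": ("part2_track2_dry", "part2_track1_dry", "left"),
        "2R": ("part2_track2_dry", "part2_track1_dry", "right"),
    }

    def recording_name(subject_idx, trial):
        # e.g. recording_name(0, "1L") -> "S0_AAD_1L" (.dat/.dap/.rs3/.ceo on disk)
        return f"S{subject_idx}_AAD_{trial}"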

    Additional files (after extracting scripts.zip and misc.zip):

    scripts/sample_script.m: Demonstrates reading an EEG-AAD recording and extracting the start and end of the experiment.

    misc/channel-layout.jpeg: The 255-channel EEG cap layout

    misc/eeg255ch_locs.csv: The channel names, numbers and their spherical (theta and phi) scalp coordinates.

    [1] Radioboeken voor kinderen, http://radioboeken.eu/kinderradioboeken.php?lang=NL, 2007 (Accessed: 8 Feb 2021)

    [2] CURRY 8 X – Data Acquisition and Online Processing, https://compumedicsneuroscan.com/product/curry-data-acquisition-online-processing-x/ (Accessed: 8, Feb, 2021)

    [3] Abhijith Mundanad Narayanan, "EEG analysis in python", 2021. https://github.com/mabhijithn/eeg-analyse (Accessed: 8 Feb 2021)

  18. Video-EEG Encoding-Decoding Dataset KU Leuven

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jun 11, 2024
    Cite
    Yuanyuan Yao; Yuanyuan Yao; Axel Stebner; Axel Stebner; Tinne Tuytelaars; Tinne Tuytelaars; Simon Geirnaert; Simon Geirnaert; Alexander Bertrand; Alexander Bertrand (2024). Video-EEG Encoding-Decoding Dataset KU Leuven [Dataset]. http://doi.org/10.5281/zenodo.10512414
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 11, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yuanyuan Yao; Yuanyuan Yao; Axel Stebner; Axel Stebner; Tinne Tuytelaars; Tinne Tuytelaars; Simon Geirnaert; Simon Geirnaert; Alexander Bertrand; Alexander Bertrand
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Area covered
    Leuven
    Description

    If using this dataset, please cite the following paper and the current Zenodo repository.

    This dataset is described in detail in the following paper:

    Yao, Y., Stebner, A., Tuytelaars, T., Geirnaert, S., & Bertrand, A. (2024). Identifying temporal correlations between natural single-shot videos and EEG signals. Journal of Neural Engineering, 21(1), 016018. doi:10.1088/1741-2552/ad2333

    Introduction

    The research work leading to this dataset was conducted at the Department of Electrical Engineering (ESAT), KU Leuven.

    This dataset contains electroencephalogram (EEG) data collected from 19 young participants with normal or corrected-to-normal eyesight while they watched a series of carefully selected YouTube videos. The videos were muted to avoid the confounds introduced by audio. For synchronization, a square box was encoded outside of the original frames and flashed every 30 seconds in the top right corner of the screen. A photosensor, detecting the light changes from this flashing box, was affixed to that region using black tape to ensure that the box did not distract participants. The EEG data was recorded using a BioSemi ActiveTwo system at a sample rate of 2048 Hz. Participants wore a 64-channel EEG cap, and 4 electrooculogram (EOG) sensors were positioned around the eyes to track eye movements.

    The dataset includes a total of (19 subjects x 63 min) + (9 subjects x 24 min) of data. Further details can be found in the following section.

    Content

    • YouTube Videos: Due to copyright constraints, the dataset includes links to the original YouTube videos along with precise timestamps for the segments used in the experiments.
    • Raw EEG Data: Organized by subject ID, the dataset contains EEG segments corresponding to the presented videos. Both EEGLAB .set files (containing metadata) and .fdt files (containing raw data) are provided, which can also be read by popular EEG analysis Python packages such as MNE (see the loading sketch after this list).
      • The naming convention links each EEG segment to its corresponding video. E.g., the EEG segment 01_eeg corresponds to video 01_Dance_1, 03_eeg corresponds to video 03_Acrob_1, Mr_eeg corresponds to video Mr_Bean, etc.
      • The raw data have 68 channels. The first 64 channels are EEG data, and the last 4 channels are EOG data. The position coordinates of the standard BioSemi headcaps can be downloaded here: https://www.biosemi.com/download/Cap_coords_all.xls.
      • Due to minor synchronization ambiguities, different clocks in the PC and EEG recorder, and missing or extra video frames during video playback (rarely occurred), the length of the EEG data may not perfectly match the corresponding video data. The difference, typically within a few milliseconds, can be resolved by truncating the modality with the excess samples.
    • Signal Quality Information: A supplementary .txt file detailing potential bad channels. Users can opt to create their own criteria for identifying and handling bad channels.
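
    As noted in the list above, the EEGLAB pairs load directly in MNE-Python. A minimal sketch, with an assumed path, splitting the 68 channels into EEG and EOG:

    import mne

    raw = mne.io.read_raw_eeglab("sub-01/01_eeg.set", preload=True)  # path assumed
    data = raw.get_data()
    eeg, eog = data[:64], data[64:]  # first 64 channels are EEG, last 4 are EOG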

    The dataset is divided into two subsets: Single-shot and MrBean, based on the characteristics of the video stimuli.

    Single-shot Dataset

    The stimuli of this dataset consist of 13 single-shot videos (63 min in total), each depicting a single individual engaging in various activities such as dancing, mime, acrobatics, and magic shows. All the participants watched this video collection.

    Video ID     Link                           Start time (s)   End time (s)
    01_Dance_1   https://youtu.be/uOUVE5rGmhM   8.54             231.20
    03_Acrob_1   https://youtu.be/DjihbYg6F2Y   4.24             231.91
    04_Magic_1   https://youtu.be/CvzMqIQLiXE   3.68             348.17
    05_Dance_2   https://youtu.be/f4DZp0OEkK4   5.05             227.99
    06_Mime_2    https://youtu.be/u9wJUTnBdrs   5.79             347.05
    07_Acrob_2   https://youtu.be/kRqdxGPLajs   183.61           519.27
    08_Magic_2   https://youtu.be/FUv-Q6EgEFI   3.36             270.62
    09_Dance_3   https://youtu.be/LXO-jKksQkM   5.61             294.17
    12_Magic_3   https://youtu.be/S84AoWdTq3E   1.76             426.36
    13_Dance_4   https://youtu.be/0wc60tA1klw   14.28            217.18
    14_Mime_3    https://youtu.be/0Ala3ypPM3M   21.87            386.84
    15_Dance_5   https://youtu.be/mg6-SnUl0A0   15.14            233.85
    16_Mime_6    https://youtu.be/8V7rhAJF6Gc   31.64            388.61

    MrBean Dataset

    Additionally, 9 participants watched an extra 24-minute clip from the first episode of Mr. Bean, where multiple (moving) objects may exist and interact, and the camera viewpoint may change. The subject IDs and the signal quality files are inherited from the single-shot dataset.

    Video ID   Link                                          Start time (s)   End time (s)
    Mr_Bean    https://www.youtube.com/watch?v=7Im2I6STbms   39.77            1495.00

    Acknowledgement

    This research is funded by the Research Foundation - Flanders (FWO) project No G081722N, junior postdoctoral fellowship fundamental research of the FWO (for S. Geirnaert, No. 1242524N), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 802895), the Flemish Government (AI Research Program), and the PDM mandate from KU Leuven (for S. Geirnaert, No PDMT1/22/009).

    We also thank the participants for their time and effort in the experiments.

    Contact Information

    Executive researcher: Yuanyuan Yao, yuanyuan.yao@kuleuven.be

    Led by: Prof. Alexander Bertrand, alexander.bertrand@kuleuven.be

  19. Replication Data for: A cross-session motor imagery EEG dataset

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Sep 24, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Pan, Lincong (2024). Replication Data for: A cross-session motor imagery EEG dataset [Dataset]. https://search.dataone.org/view/sha256%3A56f8fe22376e10516022525cb56864e8c5d460296b0b159c8ddc5d1f1fff97e9
    Explore at:
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Pan, Lincong
    Description

    Pan2023 Dataset Documentation

    This is a replication of the "A cross-session motor imagery EEG dataset" dataset; the .mat file version is v7.0.

    Abstract

    The Pan2023 dataset is a collection of electroencephalography (EEG) signals from 14 subjects performing motor imagery (MI) tasks across two sessions. The dataset aims to facilitate the study of cross-session variability in MI-EEG signals and to support the development of robust brain-computer interface (BCI) systems.

    Dataset Composition

    The dataset encompasses EEG recordings from 14 subjects, each participating in two sessions. The sessions involve MI tasks with visual cues for left-handed and right-handed movements. Data acquisition was performed using a Neuroscan SynAmps2 amplifier, equipped with 28 scalp electrodes following the international 10-20 system. The EEG signals were sampled at a frequency of 250 Hz, with a band-pass filter applied from 0.01 to 200 Hz to mitigate power line noise. The collected data is stored in Matlab format, labeled by subject and session number.

    Participants

    The participant cohort includes 14 individuals (five females), aged 22 to 25, with two reporting left-handedness. All subjects were screened for neurological and movement disorders, ensuring a healthy participant profile for the study.

    Experimental Paradigm

    Each experimental session comprised 120 trials, segmented into three distinct phases: Rest, Preparation, and Task. During the Rest period (2 seconds), subjects were instructed to remain relaxed without engaging in mental tasks. The Preparation period (1 second) involved a 'Ready' cue on the monitor, prompting subjects to focus and prepare for the upcoming MI task. The Task period (4 seconds) required subjects to perform the MI task, visualizing the movement corresponding to the provided cues, either left- or right-handed. This paradigm was designed to occur in a controlled, distraction-free environment.

    Data Acquisition and Preprocessing

    EEG signals were captured using a Neuroscan SynAmps2 amplifier and 28 scalp electrodes positioned per the 10-20 system. The sampling rate was set at 1000 Hz, and a band-pass filter from 0.01 to 200 Hz and a notch filter at 50 Hz were employed to exclude power line interference. The signals were downsampled to 250 Hz and archived in Matlab format, systematically named by subject and session identifiers.

    Data Structure

    The dataset's structure is encapsulated in a Matlab file, comprising a struct with the following components:

    - data: A 3D matrix ([n_trials, n_channels, n_samples]) containing the EEG signals.
    - label: A vector ([n_trials]) denoting each trial's label (1 for left-handed, 2 for right-handed movement).
    - trial_info: A struct detailing each trial's phase (1 for Rest, 2 for Preparation, 3 for Task), the visual cue (1 for left-handed, 2 for right-handed movement), and the subject's identifier.
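
    Since the files are Matlab v7.0, scipy.io.loadmat can read them directly. A minimal sketch using the struct fields documented above; the file name and top-level keys are assumptions:

    from scipy.io import loadmat

    mat = loadmat("sub01_session1.mat", squeeze_me=True)  # illustrative file name
    print(mat.keys())      # locate the entries holding data / label / trial_info
    eeg = mat["data"]      # (n_trials, n_channels, n_samples), per the docs above
    labels = mat["label"]  # 1 = left-handed, 2 = right-handed movement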

  20. Data from: Computational methods of EEG signals analysis for Alzheimer's...

    • osf.io
    Updated Jan 27, 2025
    Cite
    Mário L. Vicchietti; Fernando M. Ramos; Luiz E. Betting; Andriana S. L. O. Campanharo (2025). Data from: Computational methods of EEG signals analysis for Alzheimer's disease classification [Dataset]. http://doi.org/10.17605/OSF.IO/2V5MD
    Explore at:
    Dataset updated
    Jan 27, 2025
    Dataset provided by
    Center for Open Science (https://cos.io/)
    Authors
    Mário L. Vicchietti; Fernando M. Ramos; Luiz E. Betting; Andriana S. L. O. Campanharo
    Description

    No description was included in this Dataset collected from the OSF
