100+ datasets found
  1. EEG Signal Dataset

    • ieee-dataport.org
    Updated Jun 11, 2020
    Cite
    Rahul Kher (2020). EEG Signal Dataset [Dataset]. https://ieee-dataport.org/documents/eeg-signal-dataset
    Dataset updated
    Jun 11, 2020
    Authors
    Rahul Kher
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PCA

  2. General-Disorders-EEG-Dataset-v1

    • huggingface.co
    Updated Oct 5, 2024
    + more versions
    Cite
    Neurazum (2024). General-Disorders-EEG-Dataset-v1 [Dataset]. http://doi.org/10.57967/hf/3321
    Available formats: Croissant, a format for machine-learning datasets (learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 5, 2024
    Dataset authored and provided by
    Neurazum
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset

    Synthetic EEG data generated by the ‘bai’ model based on real data.

      Features/Columns:
    

    No: "Number" Sex: "Gender" Age: "Age of participants" EEG Date: "The date of the EEG" Education: "Education level" IQ: "IQ level of participants" Main Disorder: "General class definition of the disorder" Specific Disorder: "Specific class definition of the disorder"

    Total Features/Columns: 1140

      Content:
    

    Obsessive Compulsive Disorder, Bipolar Disorder, Schizophrenia… See the full description on the dataset page: https://huggingface.co/datasets/Neurazum/General-Disorders-EEG-Dataset-v1.
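
    Since the dataset is hosted on the Hugging Face Hub, a minimal Python sketch for inspecting it with the datasets library might look like the following; the repository id is taken from the URL above, while the split and column names are assumptions and may differ:

        # Minimal sketch, assuming the Hugging Face "datasets" package is installed.
        from datasets import load_dataset

        ds = load_dataset("Neurazum/General-Disorders-EEG-Dataset-v1")
        print(ds)                              # available splits and row counts
        first_split = next(iter(ds.values()))  # first split, whatever it is named
        print(first_split.column_names[:12])   # demographic columns plus the first EEG feature columns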

  3. Ultra high-density EEG recording of interictal migraine and controls: sensory and rest

    • kilthub.cmu.edu
    txt
    Updated Jul 21, 2020
    Cite
    Alireza Chaman Zar; Sarah Haigh; Pulkit Grover; Marlene Behrmann (2020). Ultra high-density EEG recording of interictal migraine and controls: sensory and rest [Dataset]. http://doi.org/10.1184/R1/12636731
    Available download formats: txt
    Dataset updated
    Jul 21, 2020
    Dataset provided by
    Carnegie Mellon University
    Authors
    Alireza Chaman Zar; Sarah Haigh; Pulkit Grover; Marlene Behrmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We used a high-density electroencephalography (HD-EEG) system, with 128 customized electrode locations, to record from 17 individuals with migraine (12 female) in the interictal period, and 18 age- and gender-matched healthy control subjects, during visual (vertical grating pattern) and auditory (modulated tone) stimulation which varied in temporal frequency (4 and 6 Hz), and during rest. This dataset includes the raw EEG data related to the paper by Chamanzar, Haigh, Grover, and Behrmann (2020), "Abnormalities in cortical pattern of coherence in migraine detected using ultra high-density EEG". The link to our paper will be made available as soon as it is published online.

  4. EEG Datasets for Naturalistic Listening to "Alice in Wonderland" (Version 2)

    • deepblue.lib.umich.edu
    Updated Sep 1, 2023
    + more versions
    Cite
    Brennan, Jonathan R (2023). EEG Datasets for Naturalistic Listening to "Alice in Wonderland" (Version 2) [Dataset]. http://doi.org/10.7302/746w-g237
    Dataset updated
    Sep 1, 2023
    Dataset provided by
    Deep Blue Data
    Authors
    Brennan, Jonathan R
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These files contain the raw data and processing parameters to go with the paper "Hierarchical structure guides rapid linguistic predictions during naturalistic listening" by Jonathan R. Brennan and John T. Hale. These files include the stimulus (wav files), raw data (BrainVision format), data processing parameters (matlab), and variables used to align the stimuli with the EEG data and for the statistical analyses reported in the paper (csv spreadsheet).

    Updates in Version 2:

    • data in BrainVision format
    • added information about data analysis
    • corrected preprocessing information for S02
  5. EEG Dataset for 'Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance'

    • figshare.com
    pdf
    Updated Dec 9, 2022
    Cite
    Jeehyun Kim; Xiyuan Jiang; Dylan Forenzo; Bin He (2022). EEG Dataset for 'Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance' [Dataset]. http://doi.org/10.6084/m9.figshare.21644429.v5
    Available download formats: pdf
    Dataset updated
    Dec 9, 2022
    Dataset provided by
    figshare
    Authors
    Jeehyun Kim; Xiyuan Jiang; Dylan Forenzo; Bin He
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This database includes the de-identified EEG data from 37 healthy individuals who participated in a brain-computer interface (BCI) study. All but one subject underwent 2 sessions of BCI experiments that involved controlling a computer cursor to move in one-dimensional space using their “intent”. EEG data were recorded with 62 electrodes. In addition to the EEG data, behavioral data including the online success rate and results of BCI cursor control are also included.

    This dataset was collected under support from the National Institutes of Health via grant AT009263 to Dr. Bin He. Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu

    This dataset has been used and analyzed to study the immediate effect of short meditation on BCI performance. The results are reported in: Kim et al., “Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance,” Frontiers in Human Neuroscience, 2022 (https://doi.org/10.3389/fnhum.2022.1019279). Please cite this paper if you use any data included in this dataset.

  6. EEG datasets with different levels of fatigue for personal identification

    • ieee-dataport.org
    Updated May 2, 2023
    Cite
    Haixian Wang (2023). EEG datasets with different levels of fatigue for personal identification [Dataset]. https://ieee-dataport.org/documents/eeg-datasets-different-levels-fatigue-personal-identification
    Dataset updated
    May 2, 2023
    Authors
    Haixian Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The digital numbers represent different participants. The .cnt files were created by a 40-channel Neuroscan amplifier.
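
    Since the recordings are distributed as Neuroscan .cnt files, a minimal MNE-Python sketch for reading one might look like this (the file name is hypothetical; the description only says that files are numbered per participant):

        import mne

        # Hypothetical file name; each participant is identified by a number.
        raw = mne.io.read_raw_cnt("1.cnt", preload=True)
        print(raw.info)          # should list the 40 Neuroscan channels
        raw.plot(duration=10.0)  # quick visual check of the first 10 seconds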

  7. Harvard Electroencephalography Database

    • bdsp.io
    • registry.opendata.aws
    Updated Feb 10, 2025
    + more versions
    Cite
    Sahar Zafar; Tobias Loddenkemper; Jong Woo Lee; Andrew Cole; Daniel Goldenholz; Jurriaan Peters; Alice Lam; Edilberto Amorim; Catherine Chu; Sydney Cash; Valdery Moura Junior; Aditya Gupta; Manohar Ghanta; Marta Fernandes; Haoqi Sun; Jin Jing; M Brandon Westover (2025). Harvard Electroencephalography Database [Dataset]. http://doi.org/10.60508/k85b-fc87
    Dataset updated
    Feb 10, 2025
    Authors
    Sahar Zafar; Tobias Loddenkemper; Jong Woo Lee; Andrew Cole; Daniel Goldenholz; Jurriaan Peters; Alice Lam; Edilberto Amorim; Catherine Chu; Sydney Cash; Valdery Moura Junior; Aditya Gupta; Manohar Ghanta; Marta Fernandes; Haoqi Sun; Jin Jing; M Brandon Westover
    License

    https://github.com/bdsp-core/bdsp-license-and-dua

    Description

    The Harvard EEG Database will encompass data gathered from four hospitals affiliated with Harvard University: Massachusetts General Hospital (MGH), Brigham and Women's Hospital (BWH), Beth Israel Deaconess Medical Center (BIDMC), and Boston Children's Hospital (BCH). The EEG data includes three types:

    rEEG: "routine EEGs" recorded in the outpatient setting.
    EMU: recordings obtained in the inpatient setting, within the Epilepsy Monitoring Unit (EMU).
    ICU/LTM: recordings obtained from acutely and critically ill patients within the intensive care unit (ICU).
    
  8. Data from: A multi-subject and multi-session EEG dataset for modelling human visual object recognition

    • openneuro.org
    Updated Jun 7, 2025
    Cite
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu (2025). A multi-subject and multi-session EEG dataset for modelling human visual object recognition [Dataset]. http://doi.org/10.18112/openneuro.ds005589.v1.0.3
    Dataset updated
    Jun 7, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This multi-subject and multi-session EEG dataset for modelling human visual object recognition (MSS) contains:

    1. 122-channel EEG data collected on 32 participants during natural visual stimulation;
    2. 100 sessions in total, 1.5 hours each;
    3. each session consists of 4 RSVP runs and 4 low-speed presentation runs;
    4. each participant completed between 1 and 5 sessions on different days, around one week apart.

    More details about the dataset are described as follows.

    Participants

    32 participants (4 female, 28 male; age range 21-33 years) were recruited from among college students in Beijing. A total of 100 sessions were conducted. The participants were paid and gave written informed consent. The study was conducted under the approval of the ethical committee of the Institute of Automation at the Chinese Academy of Sciences, with the approval number IA21-2410-020201.

    Experimental Procedures

    1. RSVP experiment: During the RSVP experiment, the participants were shown images at a rate of 5 Hz, and each run consisted of 2,000 trials. There were 20 image categories, with 100 images in each category, making up the 2,000 stimuli. The 100 images in each category were further divided into five image sequences, resulting in 100 image sequences per run. Each sequence was composed of 20 images from the same class, and the 100 sequences were presented in a pseudo-random order.

    After every 50 sequences, there was a break for the participants to rest. Each rapid serial sequence lasted approximately 7.5 seconds, starting with a 750ms blank screen with a white fixation cross, followed by 20 or 21 images presented at 5 Hz with a 50% duty cycle. The sequence ended with another 750ms blank screen.

    After the rapid serial sequence, there was a 2-second interval during which participants were instructed to blink and then report whether a special image appeared in the sequence using a keyboard. During each run, 20 sequences were randomly inserted with additional special images at random positions. The special images are logos for brain-computer interfaces.

    2. Low-speed experiment: During the low-speed experiment, each run consisted of 100 trials, with 1 second per image for a slower paradigm. The 100 stimuli were presented in a pseudo-random order and included 20 image categories, each containing 5 images. A break was given to the participants after every 20 images for them to rest.

    Each image was displayed for 1 second and was followed by 11 choice boxes (1 correct class box, 9 random class boxes, and 1 reject box). Participants were required to select the correct class of the displayed image using a mouse to increase their engagement. After the selection, a white fixation cross was displayed for 1 second in the centre of the screen to remind participants to pay attention to the upcoming task.

    Stimuli

    The stimuli are from two image databases, ImageNet and PASCAL. The final set consists of 10,000 images, with 500 images for each class.

    Annotations

    The derivatives/annotations folder contains additional information about MSS:

    1. Videos of two paradigms.
    2. Dataset_info: Main features of MSS.
    3. Experiment_schedule: Schedule of each session.
    4. Stimuli_source: Source categories of ImageNet and PASCAL.
    5. Subject_info: Age and sex of participants.
    6. Task_event: The meaning of eventID.

    Preprocessing

    The EEG signals were pre-processed using the MNE package, version 1.3.1, with Python 3.9.16. The data was sampled at a rate of 1,000 Hz with a bandpass filter applied between 0.1 and 100 Hz. A notch filter was used to remove 50 Hz power frequency. Epochs were created for each trial ranging from 0 to 500 ms relative to stimulus onset. No further preprocessing or artefact correction methods were applied in technical validation. However, researchers may want to consider widely used preprocessing steps such as baseline correction or eye movement correction. After the preprocessing, each session resulted in two matrices: RSVP EEG data matrix of shape (8,000 image conditions × 122 EEG channels × 125 EEG time points) and low-speed EEG data matrix of shape (400 image conditions × 122 EEG channels × 125 EEG time points).
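
    The preprocessing described above can be approximated in MNE-Python roughly as follows; this is a sketch rather than the authors' exact code, and the BIDS-style file name is a hypothetical example from the OpenNeuro release:

        import mne

        # Hypothetical raw file path; filter and epoch parameters mirror the description above.
        raw = mne.io.read_raw("sub-01/ses-01/eeg/sub-01_ses-01_task-rsvp_eeg.vhdr", preload=True)
        raw.filter(l_freq=0.1, h_freq=100.0)   # band-pass 0.1-100 Hz
        raw.notch_filter(freqs=50.0)           # remove 50 Hz power-line noise

        events, event_id = mne.events_from_annotations(raw)
        epochs = mne.Epochs(raw, events, event_id=event_id,
                            tmin=0.0, tmax=0.5, baseline=None, preload=True)
        print(epochs.get_data().shape)         # (n_trials, n_channels, n_times)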

  9. Features-EEG dataset

    • researchdata.edu.au
    • openneuro.org
    Updated Jun 29, 2023
    Cite
    Grootswagers Tijl; Tijl Grootswagers (2023). Features-EEG dataset [Dataset]. http://doi.org/10.18112/OPENNEURO.DS004357.V1.0.0
    Dataset updated
    Jun 29, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Western Sydney University
    Authors
    Grootswagers Tijl; Tijl Grootswagers
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    Experiment details: Electroencephalography recordings from 16 subjects viewing fast streams of Gabor-like stimuli. Images were presented in rapid serial visual presentation streams at 6.67 Hz and 20 Hz rates. Participants performed an orthogonal fixation colour change detection task.

    Experiment length: 1 hour. Raw and preprocessed data are available online through OpenNeuro: https://openneuro.org/datasets/ds004357. Supplementary material and analysis scripts are available on GitHub: https://github.com/Tijl/features-eeg

  10. Data from: EEG data for ADHD / Control children

    • ieee-dataport.org
    Updated Oct 15, 2024
    Cite
    Ali Motie Nasrabadi (2024). EEG data for ADHD / Control children [Dataset]. https://ieee-dataport.org/open-access/eeg-data-adhd-control-children
    Dataset updated
    Oct 15, 2024
    Authors
    Ali Motie Nasrabadi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    epilepsy

  11. EEGEyeNet Dataset

    • paperswithcode.com
    Updated Feb 1, 2001
    Cite
    Ard Kastrati; Martyna Beata Płomecka; Damián Pascual; Lukas Wolf; Victor Gillioz; Roger Wattenhofer; Nicolas Langer (2001). EEGEyeNet Dataset [Dataset]. https://paperswithcode.com/dataset/eegeyenet
    Dataset updated
    Feb 1, 2001
    Authors
    Ard Kastrati; Martyna Beata Płomecka; Damián Pascual; Lukas Wolf; Victor Gillioz; Roger Wattenhofer; Nicolas Langer
    Description

    EEGEyeNet is a dataset and benchmark with the goal of advancing research at the intersection of brain activity and eye movements. It consists of simultaneous electroencephalography (EEG) and eye-tracking (ET) recordings from 356 different subjects collected from three different experimental paradigms.

  12. SRM Resting-state EEG

    • openneuro.org
    Updated Nov 23, 2022
    + more versions
    Cite
    Christoffer Hatlestad-Hall; Trine Waage Rygvold; Stein Andersson (2022). SRM Resting-state EEG [Dataset]. http://doi.org/10.18112/openneuro.ds003775.v1.2.1
    Dataset updated
    Nov 23, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Christoffer Hatlestad-Hall; Trine Waage Rygvold; Stein Andersson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    SRM Resting-state EEG

    Introduction

    This EEG dataset contains resting-state EEG extracted from the experimental paradigm used in the Stimulus-Selective Response Modulation (SRM) project at the Dept. of Psychology, University of Oslo, Norway.

    The data is recorded with a BioSemi ActiveTwo system, using 64 electrodes following the positional scheme of the extended 10-20 system (10-10). Each datafile comprises four minutes of uninterrupted EEG acquired while the subjects were resting with their eyes closed. The dataset includes EEG from 111 healthy control subjects (the "t1" session), of which a number underwent an additional EEG recording at a later date (the "t2" session). Thus, some subjects have one associated EEG file, whereas others have two.

    Disclaimer

    The dataset is provided "as is". Hereunder, the authors take no responsibility with regard to data quality. The user is solely responsible for ascertaining that the data used for publications or in other contexts fulfil the required quality criteria.

    The data

    Raw data files

    The raw EEG data signals are rereferenced to the average reference. Other than that, no operations have been performed on the data. The files contain no events; the whole continuous segment is resting-state data. The data signals are unfiltered (recorded in Europe, the line noise frequency is 50 Hz). The time points for the subject's EEG recording(s) are listed in the *_scans.tsv file (particularly interesting for the subjects with two recordings).

    Please note that the quality of the raw data has not been carefully assessed. While most data files are of high quality, a few might be of poorer quality. The data files are provided "as is", and it is the user's responsibility to ascertain the quality of the individual data file.

    /derivatives/cleaned_data

    For convenience, a cleaned dataset is provided. The files in this derived dataset have been preprocessed with a basic, fully automated pipeline (see /code/s2_preprocess.m for details). The derived files are stored as EEGLAB .set files in a directory structure identical to that of the raw files. Please note that the *_channels.tsv files associated with the derived files have been updated with status information about each channel ("good" or "bad"). The "bad" channels are – for the sake of consistency – interpolated, and thus still present in the data. It might be advisable to remove these channels in some analyses, as they (by definition) add no information to the EEG data. The cleaned data signals are referenced to the average reference (including the interpolated channels).
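
    Since the cleaned derivatives are EEGLAB .set files with channel status recorded in the *_channels.tsv sidecars, a minimal MNE-Python sketch for loading one recording and dropping the interpolated "bad" channels might look like this (the subject, session and file names in the paths are illustrative, not the actual ones):

        import mne
        import pandas as pd

        set_path = "derivatives/cleaned_data/sub-001/ses-t1/eeg/sub-001_ses-t1_eeg.set"
        tsv_path = "derivatives/cleaned_data/sub-001/ses-t1/eeg/sub-001_ses-t1_channels.tsv"

        raw = mne.io.read_raw_eeglab(set_path, preload=True)

        # The sidecar marks each channel "good" or "bad"; "bad" channels are
        # interpolated but still present, so remove them if desired.
        channels = pd.read_csv(tsv_path, sep="\t")
        bad = channels.loc[channels["status"] == "bad", "name"].tolist()
        raw.drop_channels([ch for ch in bad if ch in raw.ch_names])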

    Please mind the automatic nature of the employed pipeline. It might not perform optimally on all data files (e.g. over-/underestimating proportion of bad channels). For publications, we recommend implementing a more sensitive cleaning pipeline.

    Demographic and cognitive test data

    The participants.tsv file in the root folder contains the variables age, sex, and a range of cognitive test scores. See the sidecar participants.json for more information on the behavioural measures. Please note that these measures were collected in connection with the "t1" session recording.

    How to cite

    All use of this dataset in a publication context requires the following paper to be cited:

    Hatlestad-Hall, C., Rygvold, T. W., & Andersson, S. (2022). BIDS-structured resting-state electroencephalography (EEG) data extracted from an experimental paradigm. Data in Brief, 45, 108647. https://doi.org/10.1016/j.dib.2022.108647

    Contact

    Questions regarding the EEG data may be addressed to Christoffer Hatlestad-Hall (chr.hh@pm.me).

    Question regarding the project in general may be addressed to Stein Andersson (stein.andersson@psykologi.uio.no) or Trine W. Rygvold (t.w.rygvold@psykologi.uio.no).

  13. EEG dataset of individuals with intellectual and developmental disorder and healthy controls while observing rest and music stimuli

    • data.mendeley.com
    Updated Apr 11, 2020
    + more versions
    Cite
    Ekansh Sareen (2020). EEG dataset of individuals with intellectual and developmental disorder and healthy controls while observing rest and music stimuli [Dataset]. http://doi.org/10.17632/fshy54ypyh.2
    Dataset updated
    Apr 11, 2020
    Authors
    Ekansh Sareen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data presents a collection of EEG recordings of seven participants with Intellectual and Developmental Disorder (IDD) and seven Typically Developing Controls (TDC). The data was recorded while the participants were in a resting state and while they listened to a soothing music stimulus. The data was collected using a high-resolution multi-channel dry-electrode system from EMOTIV called EPOC+. This is a 14-channel device with two reference channels and a sampling frequency of 128 Hz. The data was collected in a noise-isolated room. The participants were informed of the experimental procedure and related risks, and were asked to keep their eyes closed throughout the experiment. The data is provided in two formats, (1) raw EEG data and (2) pre-processed and cleaned EEG data, for both groups of participants. This data can be used to explore the functional brain connectivity of the IDD group. In addition, behavioral information like IQ, SQ, music apprehension and facial expressions (emotion) for IDD participants is provided in the file “QualitativeData.xlsx".

    Data usage: The data is arranged as follows:
    1. Raw data:
       Data/RawData/RawData_TDC/Music and Rest
       Data/RawData/RawData_IDD/Music and Rest
    2. Clean data:
       Data/CleanData/CleanData_TDC/Music and Rest
       Data/CleanData/CleanData_IDD/Music and Rest

    The dataset comes along with a fully automated EEG pre-processing pipeline. This pipeline can be used to do batch-processing of raw EEG files to obtain clean and pre-processed EEG files. Key features of this pipeline are: (1) bandpass filtering, (2) line-noise removal, (3) channel selection, (4) Independent Component Analysis (ICA), and (5) automatic artifact rejection. All the required files are present in the Pipeline folder.
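
    The included pipeline should be used to reproduce the published clean data; purely as an illustration, the five stages listed above map onto MNE-Python roughly as follows (the file name, file format and filter cut-offs are assumptions, since the description does not specify them):

        import mne

        raw = mne.io.read_raw_edf("IDD_participant01_rest.edf", preload=True)  # hypothetical raw export
        raw.filter(l_freq=1.0, h_freq=40.0)   # (1) bandpass filtering (cut-offs assumed)
        raw.notch_filter(freqs=50.0)          # (2) line-noise removal
        raw.pick("eeg")                       # (3) channel selection: keep the 14 EEG channels

        ica = mne.preprocessing.ICA(random_state=0)  # (4) Independent Component Analysis
        ica.fit(raw)
        ica.exclude = []                      # (5) indices of artifact components would be marked here
        clean = ica.apply(raw.copy())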

    If you use this dataset and/or the fully automated pre-processing pipeline for your research work, kindly cite these two articles linked to this dataset.

    (1) Sareen, E., Singh, L., Varkey, B., Achary, K., Gupta, A. (2020). EEG dataset of individuals with intellectual and developmental disorder and healthy controls under rest and music stimuli. Data in Brief, 105488, ISSN 2352-3409, DOI:https://doi.org/10.1016/j.dib.2020.105488. (2) Sareen, E., Gupta, A., Verma, R., Achary, G. K., Varkey, B (2019). Studying functional brain networks from dry electrode EEG set during music and resting states in neurodevelopment disorder, bioRxiv 759738 [Preprint]. Available from: https://www.biorxiv.org/content/10.1101/759738v1

  14. Motor and Speech Imagery EEG Dataset

    • drum.um.edu.mt
    docx
    Updated Nov 1, 2023
    Cite
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI (2023). Motor and Speech Imagery EEG Dataset [Dataset]. http://doi.org/10.60809/drum.24465871.v1
    Available download formats: docx
    Dataset updated
    Nov 1, 2023
    Dataset provided by
    University of Malta
    Authors
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview and Methodology

    This dataset contains motor imagery (MI) and speech imagery (SI) electroencephalogram (EEG) data recorded from 5 healthy subjects with a mean age of 24.4 years. MI involves the subject imagining movements in their limbs, whereas SI involves the subject imagining speaking words in their mind (thought-speech). The data was recorded using BioSemi ActiveTwo EEG recording equipment at a sampling frequency of 2.048 kHz. 24 channels of EEG data from the 10-20 system are available in the dataset. Four classes of data were recorded for each of the MI and SI paradigms. In the case of MI, left-hand, right-hand, legs and tongue MI tasks were recorded; in the case of SI, the words 'left', 'right', 'up' and 'down' were recorded. Data for the idle state, when the subject is mentally relaxed and not executing any task, was also recorded.

    Forty trials were recorded for each of the classes. These trials were recorded over four runs, with two runs used to record MI trials and two to record SI trials. The runs were interleaved, meaning that the first and third runs were used to record MI trials, and the second and fourth runs were used to record SI trials. During each run, twenty trials for each class in the paradigm were recorded, in random order. Note that during each run, twenty trials of the idle state were also recorded; this database therefore contains eighty idle state trials in total, with forty recorded during MI runs and forty recorded during SI runs.

    Subjects were guided through the data recording runs by a graphical user interface which issued instructions to them. At the start of a run, subjects were given one minute to settle down before the cued trials began. During a trial, a fixation cross first appears on-screen, indicating to the subject to remain relaxed but aware that the next trial will soon begin. After 2 s, a cue appears on-screen for 1.25 s, indicating the particular task the subject should execute. The subject starts executing the task as soon as they see the cue and continues even after it has disappeared, until the fixation cross appears again. The cues consist of a left-facing arrow (for left-hand MI or 'left' SI), a right-facing arrow (for right-hand MI or 'right' SI), an upward-facing arrow (for tongue MI or 'up' SI) and a downward-facing arrow (for legs MI or 'down' SI). Each trial lasted 4 seconds. Between each run, subjects were given a 3-5 minute break.

    The data was re-referenced using channel Cz and then mean-centered. The data was also passed through an anti-aliasing filter and down-sampled to 1 kHz before being stored in .mat files for the data repository. The anti-aliasing filter was a low-pass filter with a cutoff frequency of 500 Hz, implemented using the lowpass function in MATLAB, which produces a 60 dB attenuation above the cutoff and automatically compensates for filter-induced delays.

    Files

    The dataset consists of 10 MAT-files, named X_Subject_Y.mat, where X is the acronym denoting the brain imagery type, either MI for motor imagery data or SI for speech imagery data, and Y is the subject number. Each file contains the trials for each run in the structure variables 'run_1' and 'run_2'. Within each run structure there are two variables:

    • 'EEG_Data', a matrix containing the EEG data formatted as [number of trials x channels x data samples]. The number of data samples is 4000, since the length of each trial was 4 s, sampled at 1 kHz. The relationship between the EEG channels and the channel number in the second dimension of this matrix is documented in the table stored within the 'ChannelLocations.mat' file, which is included with the dataset.
    • 'labels', a vector indicating which cue was issued, with the following numbers being used to represent the different cues: 1 – Right, 2 – Left, 3 – Up, 4 – Down, 5 – Idle, 6 – Fixation Cross.

    Acknowledgements

    The authors acknowledge that data collection for this project was funded through the project "Setting up of transdisciplinary research and knowledge exchange (TRAKE) complex at the University of Malta (ERDF.01.124)", which is being co-financed by the European Union through the European Regional Development Fund 2014–2020. The data was recorded by the Centre for Biomedical Cybernetics at the University of Malta.
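
    A minimal Python sketch for reading one of these MAT-files with SciPy might look like the following; the exact struct layout may need minor adjustment depending on how the files were saved:

        from scipy.io import loadmat

        # MI_Subject_1.mat follows the X_Subject_Y.mat naming scheme described above.
        mat = loadmat("MI_Subject_1.mat", squeeze_me=True, struct_as_record=False)

        run1 = mat["run_1"]
        eeg = run1.EEG_Data     # (trials, channels, 4000 samples at 1 kHz)
        labels = run1.labels    # 1=Right, 2=Left, 3=Up, 4=Down, 5=Idle, 6=Fixation cross
        print(eeg.shape, labels.shape)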

  15. SEED Dataset

    • paperswithcode.com
    Updated Feb 2, 2021
    Cite
    Wei-Long Zheng; Bao-liang Lu (2021). SEED Dataset [Dataset]. https://paperswithcode.com/dataset/seed-1
    Dataset updated
    Feb 2, 2021
    Authors
    Wei-Long Zheng; Bao-liang Lu
    Description

    The SEED dataset contains subjects' EEG signals recorded while they were watching film clips. The film clips were carefully selected so as to induce different types of emotion: positive, negative, and neutral.

  16. Data from: Siena Scalp EEG Database Dataset

    • paperswithcode.com
    • physionet.org
    Updated Feb 19, 2024
    Cite
    (2024). Siena Scalp EEG Database Dataset [Dataset]. https://paperswithcode.com/dataset/siena-scalp-eeg-database
    Dataset updated
    Feb 19, 2024
    Description

    The database consists of EEG recordings of 14 patients acquired at the Unit of Neurology and Neurophysiology of the University of Siena. Subjects include 9 males (ages 25-71) and 5 females (ages 20-58). Subjects were monitored with a Video-EEG with a sampling rate of 512 Hz, with electrodes arranged on the basis of the international 10-20 System. Most of the recordings also contain 1 or 2 EKG signals. The diagnosis of epilepsy and the classification of seizures according to the criteria of the International League Against Epilepsy were performed by an expert clinician after a careful review of the clinical and electrophysiological data of each patient.

  17. EEG dataset for the analysis of age-related changes in motor-related cortical activity during a series of fine motor tasks performance

    • figshare.com
    png
    Updated Nov 19, 2020
    Cite
    Nikita Frolov; Elena Pitsik; Vadim V. Grubov; Anton R. Kiselev; Vladimir Maksimenko; Alexander E. Hramov (2020). EEG dataset for the analysis of age-related changes in motor-related cortical activity during a series of fine motor tasks performance [Dataset]. http://doi.org/10.6084/m9.figshare.12301181.v2
    Available download formats: png
    Dataset updated
    Nov 19, 2020
    Dataset provided by
    figshare
    Authors
    Nikita Frolov; Elena Pitsik; Vadim V. Grubov; Anton R. Kiselev; Vladimir Maksimenko; Alexander E. Hramov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EEG signals were acquired from 20 healthy right-handed subjects performing a series of fine motor tasks cued by an audio command. The participants were divided equally into two distinct age groups: (i) 10 elderly adults (EA group, aged 55-72, 6 females); (ii) 10 young adults (YA group, aged 19-33, 3 females).

    The active phase of the experimental session included sequential execution of 60 fine motor tasks - squeezing a hand into a fist after the first audio command and holding it until the second audio command (30 repetitions per hand) (see Fig. 1). The duration of the audio command determined the type of motor action to be executed: 0.25 s for left hand (LH) movement and 0.75 s for right hand (RH) movement. The time interval between the two audio signals was selected randomly in the range 4-5 s for each trial. The sequence of motor tasks was randomized, and the pause between tasks was also chosen randomly in the range 6-8 s to exclude possible training or motor-preparation effects caused by sequential execution of the same tasks.

    Acquired EEG signals were then processed via preprocessing tools implemented in the MNE Python package. Specifically, raw EEG signals were filtered by a Butterworth 5th-order filter in the range 1-100 Hz and by a 50 Hz notch filter. Further, Independent Component Analysis (ICA) was applied to remove ocular and cardiac artifacts. Artifact-free EEG recordings were then segmented into 60 epochs according to the experimental protocol. Each epoch was 14 s long, including 3 s of baseline and 11 s of motor-related brain activity, and time-locked to the first audio command indicating the start of motor execution. After visual inspection, epochs that still contained artifacts were rejected. Finally, 15 epochs per movement type were stored for each subject.

    Individual epochs for each subject are stored in the attached MNE .fif files. The prefix EA or YA in the file name identifies the age group the subject belongs to. The postfix LH or RH in the file name indicates the type of motor task.
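
    A minimal MNE-Python sketch for loading one of these epoch files might look like this (the file name is hypothetical, but follows the EA/YA prefix and LH/RH postfix scheme described above):

        import mne

        # MNE warns if an epochs file does not end in -epo.fif but still reads it.
        epochs = mne.read_epochs("EA_subject01_LH-epo.fif")
        print(epochs)               # ~15 epochs of 14 s each (3 s baseline + 11 s task)
        data = epochs.get_data()    # (n_epochs, n_channels, n_times)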

  18. EEG dataset

    • figshare.com
    bin
    Updated Dec 6, 2019
    Cite
    minho lee (2019). EEG dataset [Dataset]. http://doi.org/10.6084/m9.figshare.8091242.v1
    Available download formats: bin
    Dataset updated
    Dec 6, 2019
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    minho lee
    License

    https://www.gnu.org/copyleft/gpl.html

    Description

    This dataset was collected for the study "Robust Detection of Event-Related Potentials in a User-Voluntary Short-Term Imagery Task".

  19. EEG Dataset for 'Decoding of selective attention to continuous speech from the human auditory brainstem response' and 'Neural Speech Tracking in the Theta and in the Delta Frequency Band Differentially Encode Clarity and Comprehension of Speech in Noise'

    • zenodo.org
    zip
    Updated Mar 28, 2023
    Cite
    Octave Etard; Tobias Reichenbach; Octave Etard; Tobias Reichenbach (2023). EEG Dataset for 'Decoding of selective attention to continuous speech from the human auditory brainstem response' and 'Neural Speech Tracking in the Theta and in the Delta Frequency Band Differentially Encode Clarity and Comprehension of Speech in Noise'. [Dataset]. http://doi.org/10.5281/zenodo.7086209
    Available download formats: zip
    Dataset updated
    Mar 28, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Octave Etard; Tobias Reichenbach; Octave Etard; Tobias Reichenbach
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Update: a more accessible version of this dataset is now available: https://doi.org/10.5281/zenodo.7778289

    The repository contains the unprocessed EEG data recorded for the publications [1, 2]. The stimuli are also included. 64-channel EEG was recorded from 18 participants whilst they listened to audiobooks in a number of listening conditions: clean speech in their native language (English), clean speech in three levels of speech-shaped noise, and two competing-speakers conditions (focusing on the male speaker whilst ignoring the female speaker, and vice-versa). Additionally, 12 of the participants listened to speech in a foreign language (Dutch, for which their comprehension scores were 0, as measured via behavioural experiments), in four listening conditions: noiseless conditions, and speech-in-babble-noise conditions at three different SNRs. For more information on the recording protocol and stimuli, please refer to the publications [1, 2].

    The EEG was acquired at 1 kHz via the actiCHamp amplifier (BrainProducts, Germany), and the electrodes were positioned according to the standard 10-20 system via the EasyCap electrode cap (BrainProducts, Germany). To align the stimuli with the EEG recordings, the audio was simultaneously recorded with the EEG (also at 1 kHz), using the StimTrack device (BrainProducts, Germany). The resulting sound channel was then cross-correlated with the (resampled) audio data. The resulting stimulus-onset timestamps are stored as annotations in the VHDR files. The physical EEG reference was located at P04, and the ground was located on the right earlobe.

    The raw VHDR files containing the EEG data for each trial are located in the folder ‘eeg’. The stimulus files are located in the folder stim. For convenience, the raw data are also provided in a format similar to the CND data format [3]. The CND-format files can be found in dataCND. The EEG data for each trial have been extracted so that they align with the raw audio data in dataCND/trialStories.mat. We provide the time-aligned broadband envelopes at 1 kHz in the file dataCND/stim.mat.
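
    A minimal MNE-Python sketch for loading one of the raw BrainVision recordings and inspecting the stimulus-onset annotations might look like this (the file name inside the 'eeg' folder is hypothetical):

        import mne

        raw = mne.io.read_raw_brainvision("eeg/participant01_trial01.vhdr", preload=True)
        print(raw.info["sfreq"])   # acquisition rate, 1 kHz
        print(raw.annotations)     # stimulus-onset timestamps stored in the VHDR file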

    Please note some details on the raw speech-in-noise and Dutch stimuli:

    For the ENGLISH speech-in-noise conditions, the babble noise amplitude increased linearly from zero to the desired SPL for 0.5 seconds. The babble noise continued to play for an additional 0.5 seconds at this SPL, before the story began. After the story finished, the babble noise was again played for 0.5 seconds, and then linearly decreased to 0 amplitude for 0.5 seconds. Therefore, the babble noise track is two seconds longer than the story track.

    For the DUTCH speech-in-noise conditions, the dutch speech was played by itself for one second. Then, the babble noise was also played for one second, linearly increasing in amplitude until it reached the desired SPL. Therefore, the babble noise track is one second shorter than the Dutch story track.

    The relevant tracks in dataCND/stim.mat were padded with zeros so that they are aligned and their lengths are equal.

    In all of the Dutch conditions, a few English sentences were embedded in the Dutch narratives in order to help participants maintain their attention. Participants were asked behavioural questions related to the English sentences after each trial. These English sentence tracks are also time-aligned and provided in dataCND/stim.mat.

    If you use this data, please cite the original publications, as well as this repository [1,2, 4].

    [1] Etard O, Kegler M, Braiman C, Forte A E and Reichenbach T. “Decoding of selective attention to continuous speech from the human auditory brainstem response” 2019. NeuroImage 200 1–11

    [2] Etard O and Reichenbach T. “Neural speech tracking in the theta and in the delta frequency band differentially encode clarity and comprehension of speech in noise” 2019. J. Neurosci. 39 5750–9

    [3] Giovanni DL and Nidiffer, A. "The Continuous-event Neural Data structure (CND) Specifications and guidelines". 2022 Jul. https://data.cnspworkshop.net/CND_Specifications.pdf

    [4] Etard O and Reichenbach T. "EEG Dataset for 'Decoding of selective attention to continuous speech from the human auditory brainstem response' and 'Neural Speech Tracking in the Theta and in the Delta Frequency Band Differentially Encode Clarity and Comprehension of Speech in Noise". Doi: 10.5281/zenodo.7086208

  20. Fourteen-channel EEG with Imagined Speech (FEIS) dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 24, 2020
    Cite
    Scott Wellington; Jonathan Clayton; Scott Wellington; Jonathan Clayton (2020). Fourteen-channel EEG with Imagined Speech (FEIS) dataset [Dataset]. http://doi.org/10.5281/zenodo.3554128
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Scott Wellington; Jonathan Clayton; Scott Wellington; Jonathan Clayton
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    Welcome to the FEIS (Fourteen-channel EEG with Imagined Speech) dataset.
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The FEIS dataset comprises Emotiv EPOC+ [1] EEG recordings of:
    
    * 21 participants listening to, imagining speaking, and then actually speaking
     16 English phonemes (see supplementary, below)
    
    * 2 participants listening to, imagining speaking, and then actually speaking
     16 Chinese syllables (see supplementary, below)
    
    For replicability and for the benefit of further research, this dataset
    includes the complete experiment set-up, including participants' recorded
    audio and 'flashcard' screens for audio-visual prompts, Lua script and .mxs
    scenario for the OpenVibe [2] environment, as well as all Python scripts
    for the preparation and processing of data as used in the supporting
    studies (submitted in support of completion of the MSc Speech and Language
    Processing with the University of Edinburgh):
    
    * J. Clayton, "Towards phone classification from imagined speech using
     a lightweight EEG brain-computer interface," M.Sc. dissertation,
     University of Edinburgh, Edinburgh, UK, 2019.
    
    * S. Wellington, "An investigation into the possibilities and limitations
     of decoding heard, imagined and spoken phonemes using a low-density,
     mobile EEG headset," M.Sc. dissertation, University of Edinburgh,
     Edinburgh, UK, 2019.
    
    Each participant's data comprise 5 .csv files -- these are the 'raw'
    (unprocessed) EEG recordings for the 'stimuli', 'articulators' (see
    supplementary, below) 'thinking', 'speaking' and 'resting' phases per epoch
    for each trial -- alongside a 'full' .csv file with the end-to-end
    experiment recording (for the benefit of calculating deltas).
    
    To guard against software deprecation or inaccessibility, the full repository
    of open-source software used in the above studies is also included.
    
    We hope for the FEIS dataset to be of some utility for future researchers,
    due to the sparsity of similar open-access databases. As such, this dataset
    is made freely available for all academic and research purposes (non-profit).
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    REFERENCING
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    If you use the FEIS dataset, please reference:
    
    * S. Wellington, J. Clayton, "Fourteen-channel EEG with Imagined Speech
     (FEIS) dataset," v1.0, University of Edinburgh, Edinburgh, UK, 2019.
     doi:10.5281/zenodo.3369178
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    LEGAL
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The research supporting the distribution of this dataset has been approved by
    the PPLS Research Ethics Committee, School of Philosophy, Psychology and
    Language Sciences, University of Edinburgh (reference number: 435-1819/2).
    
    This dataset is made available under the Open Data Commons Attribution License
    (ODC-BY): http://opendatacommons.org/licenses/by/1.0
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    ACKNOWLEDGEMENTS
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The FEIS database was compiled by:
    
    Scott Wellington (MSc Speech and Language Processing, University of Edinburgh)
    Jonathan Clayton (MSc Speech and Language Processing, University of Edinburgh)
    
    Principal Investigators:
    
    Oliver Watts (Senior Researcher, CSTR, University of Edinburgh)
    Cassia Valentini-Botinhao (Senior Researcher, CSTR, University of Edinburgh)
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    METADATA
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    For participants, dataset refs 01 to 21:
    
    01 - NNS
    02 - NNS
    03 - NNS, Left-handed
    04 - E
    05 - E, Voice heard as part of 'stimuli' portions of trials belongs to
       participant 04, due to microphone becoming damaged and unusable prior to
       recording
    06 - E
    07 - E
    08 - E, Ambidextrous
    09 - NNS, Left-handed
    10 - E
    11 - NNS
    12 - NNS, Only sessions one and two recorded (out of three total), as
       participant had to leave the recording session early
    13 - E
    14 - NNS
    15 - NNS
    16 - NNS
    17 - E
    18 - NNS
    19 - E
    20 - E
    21 - E
    
    E = native speaker of English
    NNS = non-native speaker of English (>= C1 level)
    
    For participants, dataset refs chinese-1 and chinese-2:
    
    chinese-1 - C
    chinese-2 - C, Voice heard as part of 'stimuli' portions of trials belongs to
          participant chinese-1
    
    C = native speaker of Chinese
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    SUPPLEMENTARY
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    Under the international 10-20 system, the Emotiv EPOC+ headset's 14 channels are:
    
    F3 FC5 AF3 F7 T7 P7 O1 O2 P8 T8 F8 AF4 FC6 F4
    
    The 16 English phonemes investigated in dataset refs 01 to 21:
    
    /i/ /u:/ /æ/ /ɔ:/ /m/ /n/ /ŋ/ /f/ /s/ /ʃ/ /v/ /z/ /ʒ/ /p/ /t/ /k/
    
    The 16 Chinese syllables investigated in dataset refs chinese-1 and chinese-2:
    
    mā má mǎ mà mēng méng měng mèng duō duó duǒ duò tuī tuí tuǐ tuì
    
    All references to 'articulators' (e.g. as part of filenames) refer to the
    1-second 'fixation point' portion of trials. The name is a holdover from
    preliminary trials which were modelled on the KARA ONE database
    (http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html) [3].
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    [1] Emotiv EPOC+. https://emotiv.com/epoc. Accessed online 14/08/2019.
    
    [2] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy,
      O. Bertrand, A. Lécuyer. “OpenViBE: An Open-Source Software Platform
      to Design, Test and Use Brain-Computer Interfaces in Real and Virtual
      Environments”, Presence: teleoperators and virtual environments,
      vol. 19, no 1, 2010.
    
    [3] S. Zhao, F. Rudzicz. "Classifying phonological categories in imagined
      and articulated speech." In Proceedings of ICASSP 2015, Brisbane
      Australia, 2015.