100+ datasets found
  1. EEG Signal Dataset

    • ieee-dataport.org
    Updated Jun 11, 2020
    Cite
    Rahul Kher (2020). EEG Signal Dataset [Dataset]. https://ieee-dataport.org/documents/eeg-signal-dataset
    Dataset updated
    Jun 11, 2020
    Authors
    Rahul Kher
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    PCA

  2. General-Disorders-EEG-Dataset-v1

    • huggingface.co
    Updated Oct 5, 2024
    Cite
    Neurazum (2024). General-Disorders-EEG-Dataset-v1 [Dataset]. http://doi.org/10.57967/hf/3321
    Dataset updated
    Oct 5, 2024
    Dataset authored and provided by
    Neurazum
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset

    Synthetic EEG data generated by the ‘bai’ model based on real data.

      Features/Columns:
    

    No: "Number" Sex: "Gender" Age: "Age of participants" EEG Date: "The date of the EEG" Education: "Education level" IQ: "IQ level of participants" Main Disorder: "General class definition of the disorder" Specific Disorder: "Specific class definition of the disorder"

    Total Features/Columns: 1140

      Content:
    

    Obsessive Compulsive Disorder, Bipolar Disorder, Schizophrenia… See the full description on the dataset page: https://huggingface.co/datasets/Neurazum/General-Disorders-EEG-Dataset-v1.
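
    A minimal loading sketch, assuming the standard Hugging Face datasets API; the repository id comes from the dataset page above, and the split name is an assumption:

    # pip install datasets -- hypothetical usage sketch
    from datasets import load_dataset

    ds = load_dataset("Neurazum/General-Disorders-EEG-Dataset-v1")
    print(ds)                             # available splits and row counts
    row = ds["train"][0]                  # assumes a default "train" split
    print(row["Main Disorder"], row["Specific Disorder"])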

  3. Ultra high-density EEG recording of interictal migraine and controls:...

    • kilthub.cmu.edu
    Updated Jul 21, 2020
    Cite
    Alireza Chaman Zar; Sarah Haigh; Pulkit Grover; Marlene Behrmann (2020). Ultra high-density EEG recording of interictal migraine and controls: sensory and rest [Dataset]. http://doi.org/10.1184/R1/12636731
    Available download formats: txt
    Dataset updated
    Jul 21, 2020
    Dataset provided by
    Carnegie Mellon University
    Authors
    Alireza Chaman Zar; Sarah Haigh; Pulkit Grover; Marlene Behrmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We used a high-density electroencephalography (HD-EEG) system, with 128 customized electrode locations, to record from 17 individuals with migraine (12 female) in the interictal period, and 18 age- and gender-matched healthy control subjects, during visual (vertical grating pattern) and auditory (modulated tone) stimulation which varied in temporal frequency (4 and 6 Hz), and during rest. This dataset includes the raw EEG data related to the paper by Chamanzar, Haigh, Grover, and Behrmann (2020), "Abnormalities in cortical pattern of coherence in migraine detected using ultra high-density EEG." The link to our paper will be made available as soon as it is published online.

  4. EEG Datasets for Naturalistic Listening to "Alice in Wonderland" (Version 2)...

    • deepblue.lib.umich.edu
    Updated Sep 1, 2023
    Cite
    Brennan, Jonathan R (2023). EEG Datasets for Naturalistic Listening to "Alice in Wonderland" (Version 2) [Dataset]. http://doi.org/10.7302/746w-g237
    Dataset updated
    Sep 1, 2023
    Dataset provided by
    Deep Blue Data
    Authors
    Brennan, Jonathan R
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These files contain the raw data and processing parameters to go with the paper "Hierarchical structure guides rapid linguistic predictions during naturalistic listening" by Jonathan R. Brennan and John T. Hale. These files include the stimulus (wav files), raw data (BrainVision format), data processing parameters (matlab), and variables used to align the stimuli with the EEG data and for the statistical analyses reported in the paper (csv spreadsheet).

    Updates in Version 2:

    • data in BrainVision format
    • added information about data analysis
    • corrected preprocessing information for S02
  5. Data from: UC San Diego Resting State EEG Data from Patients with...

    • openneuro.org
    Updated Dec 10, 2021
    Cite
    Alexander P. Rockhill; Nicko Jackson; Jobi George; Adam Aron; Nicole C. Swann (2021). UC San Diego Resting State EEG Data from Patients with Parkinson's Disease [Dataset]. http://doi.org/10.18112/openneuro.ds002778.v1.0.5
    Dataset updated
    Dec 10, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Alexander P. Rockhill; Nicko Jackson; Jobi George; Adam Aron; Nicole C. Swann
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    San Diego
    Description

    Welcome to the resting state EEG dataset collected at the University of California, San Diego and curated by Alex Rockhill at the University of Oregon.

    Please email arockhil@uoregon.edu before submitting a manuscript to be published in a peer-reviewed journal using this data. We wish to ensure that the data are analyzed and interpreted with scientific integrity, so as not to mislead the public about findings that may have clinical relevance. The purpose of this is to be responsible stewards of the data without an "available upon reasonable request" clause that we feel doesn't fully represent the open-source, reproducible ethos. The data is freely available to download, so we cannot stop your publication if we don't support your methods and interpretation of findings; however, in being good data stewards, we would like to offer suggestions in the pre-publication stage so as to reduce conflict in the published scientific literature. As far as credit, there is precedent for receiving a mention in the acknowledgements section for reading and providing feedback on the paper or, for more involved consulting, being included as an author. The purpose of asking for this is not to inflate our number of authorships; we take ethical considerations of the best way to handle intellectual property in the form of manuscripts very seriously, and, again, sharing is at the discretion of the author, although we strongly recommend it. Please be ethical and considerate in your use of this data and all open-source data, and be sure to credit authors by citing them.

    An example of an analysis that we would consider problematic, and would strongly advise be corrected before submission for publication, is using machine learning to classify Parkinson's patients versus healthy controls with this dataset. There are far too few patients for proper statistics: Parkinson's disease presents heterogeneously across patients, and, with a proper test-training split, there would be fewer than 8 patients in the testing set. Statistics on 8 or fewer patients for such a complicated disease would be inaccurate due to the small sample size. Furthermore, if multiple machine learning algorithms were to be tested, a third split would be required to choose the best method, further lowering the number of patients in the testing set. We strongly advise against any such approach because it would mislead patients and people who are interested in knowing if they have Parkinson's disease.

    Note that UPDRS rating scales were collected by laboratory personnel who had completed online training, not by a board-certified neurologist. Results should be interpreted accordingly; in particular, analyses based largely on these ratings should be treated with the appropriate amount of uncertainty.

    In addition to contacting the aforementioned email, please cite the following papers:

    Nicko Jackson, Scott R. Cole, Bradley Voytek, Nicole C. Swann. Characteristics of Waveform Shape in Parkinson's Disease Detected with Scalp Electroencephalography. eNeuro 20 May 2019, 6 (3) ENEURO.0151-19.2019; DOI: 10.1523/ENEURO.0151-19.2019.

    Swann NC, de Hemptinne C, Aron AR, Ostrem JL, Knight RT, Starr PA. Elevated synchrony in Parkinson disease detected with electroencephalography. Ann Neurol. 2015 Nov;78(5):742-50. doi: 10.1002/ana.24507. Epub 2015 Sep 2. PMID: 26290353; PMCID: PMC4623949.

    George JS, Strunk J, Mak-McCully R, Houser M, Poizner H, Aron AR. Dopaminergic therapy in Parkinson's disease decreases cortical beta band coherence in the resting state and increases cortical beta band power during executive control. Neuroimage Clin. 2013 Aug 8;3:261-70. doi: 10.1016/j.nicl.2013.07.013. PMID: 24273711; PMCID: PMC3814961.

    Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896.

    Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8.

    Note: see this discussion on the structure of the json files that is sufficient but not optimal and will hopefully be changed in future versions of BIDS: https://neurostars.org/t/behavior-metadata-without-tsv-event-data-related-to-a-neuroimaging-data/6768/25.

  6. EEG datasets with different levels of fatigue for personal identification

    • ieee-dataport.org
    Updated May 2, 2023
    Cite
    Haixian Wang (2023). EEG datasets with different levels of fatigue for personal identification [Dataset]. https://ieee-dataport.org/documents/eeg-datasets-different-levels-fatigue-personal-identification
    Dataset updated
    May 2, 2023
    Authors
    Haixian Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The digital numbers represent different participants. The .cnt files were created by a 40-channel Neuroscan amplifier.

  7. Harvard Electroencephalography Database

    • bdsp.io
    • registry.opendata.aws
    Updated Feb 10, 2025
    Cite
    Sahar Zafar; Tobias Loddenkemper; Jong Woo Lee; Andrew Cole; Daniel Goldenholz; Jurriaan Peters; Alice Lam; Edilberto Amorim; Catherine Chu; Sydney Cash; Valdery Moura Junior; Aditya Gupta; Manohar Ghanta; Marta Fernandes; Haoqi Sun; Jin Jing; M Brandon Westover (2025). Harvard Electroencephalography Database [Dataset]. http://doi.org/10.60508/k85b-fc87
    Dataset updated
    Feb 10, 2025
    Authors
    Sahar Zafar; Tobias Loddenkemper; Jong Woo Lee; Andrew Cole; Daniel Goldenholz; Jurriaan Peters; Alice Lam; Edilberto Amorim; Catherine Chu; Sydney Cash; Valdery Moura Junior; Aditya Gupta; Manohar Ghanta; Marta Fernandes; Haoqi Sun; Jin Jing; M Brandon Westover
    License

    https://github.com/bdsp-core/bdsp-license-and-dua

    Description

    The Harvard EEG Database will encompass data gathered from four hospitals affiliated with Harvard University: Massachusetts General Hospital (MGH), Brigham and Women's Hospital (BWH), Beth Israel Deaconess Medical Center (BIDMC), and Boston Children's Hospital (BCH). The EEG data includes three types:

    rEEG: "routine EEGs" recorded in the outpatient setting.
    EMU: recordings obtained in the inpatient setting, within the Epilepsy Monitoring Unit (EMU).
    ICU/LTM: recordings obtained from acutely and critically ill patients within the intensive care unit (ICU).
    
  8. Features-EEG dataset

    • researchdata.edu.au
    • openneuro.org
    Updated Jun 29, 2023
    Cite
    Tijl Grootswagers (2023). Features-EEG dataset [Dataset]. http://doi.org/10.18112/OPENNEURO.DS004357.V1.0.0
    Dataset updated
    Jun 29, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Western Sydney University
    Authors
    Tijl Grootswagers
    License

    ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
    License information was derived automatically

    Description

    Experiment Details: Electroencephalography recordings from 16 subjects viewing fast streams of gabor-like stimuli. Images were presented in rapid serial visual presentation streams at 6.67 Hz and 20 Hz rates. Participants performed an orthogonal fixation colour change detection task.

    Experiment length: 1 hour. Raw and preprocessed data are available online through OpenNeuro: https://openneuro.org/datasets/ds004357. Supplementary material and analysis scripts are available on GitHub: https://github.com/Tijl/features-eeg

  9. Data from: A multi-subject and multi-session EEG dataset for modelling human...

    • openneuro.org
    Updated Jun 7, 2025
    Cite
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu (2025). A multi-subject and multi-session EEG dataset for modelling human visual object recognition [Dataset]. http://doi.org/10.18112/openneuro.ds005589.v1.0.3
    Dataset updated
    Jun 7, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Shuning Xue; Bu Jin; Jie Jiang; Longteng Guo; Jin Zhou; Changyong Wang; Jing Liu
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This multi-subject and multi-session EEG dataset for modelling human visual object recognition (MSS) contains:

    1. 122-channel EEG data collected from 32 participants during natural visual stimulation;
    2. 100 sessions in total, each lasting 1.5 hours;
    3. each session consists of 4 RSVP runs and 4 low-speed presentation runs;
    4. each participant completed between 1 and 5 sessions on different days, around one week apart.

    More details about the dataset are described as follows.

    Participants

    32 participants were recruited from college students in Beijing, of whom 4 were female and 28 were male, with an age range of 21-33 years. 100 sessions were conducted. The participants were paid and gave written informed consent. The study was conducted under the approval of the ethical committee of the Institute of Automation at the Chinese Academy of Sciences, with the approval number IA21-2410-020201.

    Experimental Procedures

    1. RSVP experiment: During the RSVP experiment, the participants were shown images at a rate of 5 Hz, and each run consisted of 2,000 trials. There were 20 image categories, with 100 images in each category, making up the 2,000 stimuli. The 100 images in each category were further divided into five image sequences, resulting in 100 image sequences per run. Each sequence was composed of 20 images from the same class, and the 100 sequences were presented in a pseudo-random order.

    After every 50 sequences, there was a break for the participants to rest. Each rapid serial sequence lasted approximately 7.5 seconds, starting with a 750ms blank screen with a white fixation cross, followed by 20 or 21 images presented at 5 Hz with a 50% duty cycle. The sequence ended with another 750ms blank screen.

    After the rapid serial sequence, there was a 2-second interval during which participants were instructed to blink and then report whether a special image appeared in the sequence using a keyboard. During each run, 20 sequences were randomly inserted with additional special images at random positions. The special images are logos for brain-computer interfaces.

    2. Low-speed experiment: During the low-speed experiment, each run consisted of 100 trials, with 1 second per image for a slower paradigm. The 100 stimuli were presented in a pseudo-random order and included 20 image categories, each containing 5 images. A break was given to the participants after every 20 images for them to rest.

    Each image was displayed for 1 second and was followed by 11 choice boxes (1 correct class box, 9 random class boxes, and 1 reject box). Participants were required to select the correct class of the displayed image using a mouse to increase their engagement. After the selection, a white fixation cross was displayed for 1 second in the centre of the screen to remind participants to pay attention to the upcoming task.

    Stimuli

    The stimuli are from two image databases, ImageNet and PASCAL. The final set consists of 10,000 images, with 500 images for each class.

    Annotations

    In the derivatives/annotations folder, there is additional information about MSS:

    1. Videos of two paradigms.
    2. Dataset_info: Main features of MSS.
    3. Experiment_schedule: Schedule of each session.
    4. Stimuli_source: Source categories of ImageNet and PASCAL.
    5. Subject_info: Age and sex of participants.
    6. Task_event: The meaning of eventID.

    Preprocessing

    The EEG signals were pre-processed using the MNE package, version 1.3.1, with Python 3.9.16. The data was sampled at a rate of 1,000 Hz with a bandpass filter applied between 0.1 and 100 Hz. A notch filter was used to remove 50 Hz power frequency. Epochs were created for each trial ranging from 0 to 500 ms relative to stimulus onset. No further preprocessing or artefact correction methods were applied in technical validation. However, researchers may want to consider widely used preprocessing steps such as baseline correction or eye movement correction. After the preprocessing, each session resulted in two matrices: RSVP EEG data matrix of shape (8,000 image conditions × 122 EEG channels × 125 EEG time points) and low-speed EEG data matrix of shape (400 image conditions × 122 EEG channels × 125 EEG time points).
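
    A minimal sketch of the preprocessing described above, using MNE-Python; the file name and event extraction are placeholders, since the dataset's exact file layout is not shown here:

    import mne

    # Hypothetical path; consult the dataset's BIDS layout for real names.
    raw = mne.io.read_raw("sub-01_ses-01_task-rsvp_eeg.vhdr", preload=True)
    raw.filter(l_freq=0.1, h_freq=100.0)     # band-pass 0.1-100 Hz
    raw.notch_filter(freqs=50.0)             # remove 50 Hz power-line noise
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, tmin=0.0, tmax=0.5,
                        baseline=None, preload=True)   # 0-500 ms epochs, no baseline correction
    print(epochs.get_data().shape)           # (trials, channels, time points)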

  10. SRM Resting-state EEG

    • openneuro.org
    Updated Nov 23, 2022
    Cite
    Christoffer Hatlestad-Hall; Trine Waage Rygvold; Stein Andersson (2022). SRM Resting-state EEG [Dataset]. http://doi.org/10.18112/openneuro.ds003775.v1.2.1
    Dataset updated
    Nov 23, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Christoffer Hatlestad-Hall; Trine Waage Rygvold; Stein Andersson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    SRM Resting-state EEG

    Introduction

    This EEG dataset contains resting-state EEG extracted from the experimental paradigm used in the Stimulus-Selective Response Modulation (SRM) project at the Dept. of Psychology, University of Oslo, Norway.

    The data is recorded with a BioSemi ActiveTwo system, using 64 electrodes following the positional scheme of the extended 10-20 system (10-10). Each datafile comprises four minutes of uninterrupted EEG acquired while the subjects were resting with their eyes closed. The dataset includes EEG from 111 healthy control subjects (the "t1" session), of which a number underwent an additional EEG recording at a later date (the "t2" session). Thus, some subjects have one associated EEG file, whereas others have two.

    Disclaimer

    The dataset is provided "as is". Hereunder, the authors take no responsibility with regard to data quality. The user is solely responsible for ascertaining that the data used for publications or in other contexts fulfil the required quality criteria.

    The data

    Raw data files

    The raw EEG data signals are re-referenced to the average reference. Other than that, no operations have been performed on the data. The files contain no events; the whole continuous segment is resting-state data. The data signals are unfiltered (recorded in Europe; the line-noise frequency is 50 Hz). The time points for each subject's EEG recording(s) are listed in the *_scans.tsv file (particularly interesting for the subjects with two recordings).

    Please note that the quality of the raw data has not been carefully assessed. While most data files are of high quality, a few might be of poorer quality. The data files are provided "as is", and it is the user's responsibility to ascertain the quality of the individual data file.

    /derivatives/cleaned_data

    For convenience, a cleaned dataset is provided. The files in this derived dataset have been preprocessed with a basic, fully automated pipeline (see /code/s2_preprocess.m for details). The derived files are stored as EEGLAB .set files in a directory structure identical to that of the raw files. Please note that the *_channels.tsv files associated with the derived files have been updated with status information about each channel ("good" or "bad"). The "bad" channels are – for the sake of consistency – interpolated, and thus still present in the data. It might be advisable to remove these channels in some analyses, as they (by definition) do not add information to the EEG data. The cleaned data signals are referenced to the average reference (including the interpolated channels).

    Please mind the automatic nature of the employed pipeline. It might not perform optimally on all data files (e.g. over-/underestimating proportion of bad channels). For publications, we recommend implementing a more sensitive cleaning pipeline.
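
    For orientation, a minimal sketch of loading one of the cleaned EEGLAB .set files with MNE-Python; the path is hypothetical and should be taken from the actual BIDS directory tree:

    import mne

    raw = mne.io.read_raw_eeglab(
        "derivatives/cleaned_data/sub-001/ses-t1/eeg/sub-001_ses-t1_eeg.set",
        preload=True)  # hypothetical path
    print(raw.info["nchan"], raw.info["sfreq"])  # 64 channels; interpolated "bad" channels still present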

    Demographic and cognitive test data

    The participants.tsv file in the root folder contains the variables age, sex, and a range of cognitive test scores. See the sidecar participants.json for more information on the behavioural measures. Please note that these measures were collected in connection with the "t1" session recording.

    How to cite

    All use of this dataset in a publication context requires the following paper to be cited:

    Hatlestad-Hall, C., Rygvold, T. W., & Andersson, S. (2022). BIDS-structured resting-state electroencephalography (EEG) data extracted from an experimental paradigm. Data in Brief, 45, 108647. https://doi.org/10.1016/j.dib.2022.108647

    Contact

    Questions regarding the EEG data may be addressed to Christoffer Hatlestad-Hall (chr.hh@pm.me).

    Question regarding the project in general may be addressed to Stein Andersson (stein.andersson@psykologi.uio.no) or Trine W. Rygvold (t.w.rygvold@psykologi.uio.no).

  11. EEG driver drowsiness dataset

    • figshare.com
    Updated Sep 8, 2021
    Cite
    Jian Cui (2021). EEG driver drowsiness dataset [Dataset]. http://doi.org/10.6084/m9.figshare.14273687.v3
    Available download formats: bin
    Dataset updated
    Sep 8, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jian Cui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains EEG signals from 11 subjects, with labels of alert and drowsy. It can be opened with Matlab. We extracted the data for our own research purposes from another public dataset: Cao, Z., et al., Multi-channel EEG recordings during a sustained-attention driving task. Scientific Data, 2019. 6(1): p. 1-8. If you find the dataset useful, please give credit to their work.

    The details of how the data were extracted are described in our paper: "Jian Cui, Zirui Lan, Yisi Liu, Ruilin Li, Fan Li, Olga Sourina, Wolfgang Müller-Wittig, A Compact and Interpretable Convolutional Neural Network for Cross-Subject Driver Drowsiness Detection from Single-Channel EEG, Methods, 2021, ISSN 1046-2023, https://doi.org/10.1016/j.ymeth.2021.04.017." The code for the paper above is accessible from: https://github.com/cuijiancorbin/A-Compact-and-Interpretable-Convolutional-Neural-Network-for-Single-Channel-EEG

    The data file contains 3 variables: EEGsample, substate and subindex.

    "EEGsample" contains 2022 EEG samples of size 20x384 from 11 subjects. Each sample is 3 s of EEG data sampled at 128 Hz from 30 EEG channels.

    "subindex" is an array of 2022x1. It contains the subject indexes from 1-11 corresponding to each EEG sample.

    "substate" is an array of 2022x1. It contains the labels of the samples: 0 corresponds to the alert state and 1 corresponds to the drowsy state.

    The unbalanced version of this dataset is accessible from: https://figshare.com/articles/dataset/EEG_driver_drowsiness_dataset_unbalanced_/16586957
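
    A minimal sketch of opening the data file outside Matlab with SciPy, using the three variables named above (the file name is hypothetical):

    from scipy.io import loadmat

    mat = loadmat("EEG_driver_drowsiness_dataset.mat")  # hypothetical file name
    eeg = mat["EEGsample"]      # 2022 samples, channels x time points each
    state = mat["substate"]     # 2022x1: 0 = alert, 1 = drowsy
    subject = mat["subindex"]   # 2022x1: subject index, 1-11
    print(eeg.shape, state.ravel()[:5], subject.ravel()[:5])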

  12. EEGEyeNet Dataset

    • paperswithcode.com
    Cite
    Ard Kastrati; Martyna Beata Płomecka; Damián Pascual; Lukas Wolf; Victor Gillioz; Roger Wattenhofer; Nicolas Langer, EEGEyeNet Dataset [Dataset]. https://paperswithcode.com/dataset/eegeyenet
    Authors
    Ard Kastrati; Martyna Beata Płomecka; Damián Pascual; Lukas Wolf; Victor Gillioz; Roger Wattenhofer; Nicolas Langer
    Description

    EEGEyeNet is a dataset and benchmark with the goal of advancing research at the intersection of brain activity and eye movements. It consists of simultaneous electroencephalography (EEG) and eye-tracking (ET) recordings from 356 different subjects, collected across three different experimental paradigms.

  13. EEG dataset of individuals with intellectual and developmental disorder and...

    • data.mendeley.com
    Updated Apr 11, 2020
    Cite
    Ekansh Sareen (2020). EEG dataset of individuals with intellectual and developmental disorder and healthy controls while observing rest and music stimuli [Dataset]. http://doi.org/10.17632/fshy54ypyh.2
    Dataset updated
    Apr 11, 2020
    Authors
    Ekansh Sareen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset presents a collection of EEG recordings of seven participants with Intellectual and Developmental Disorder (IDD) and seven Typically Developing Controls (TDC). The data was recorded while the participants observed a resting state and a soothing music stimulus. The data was collected using a high-resolution multi-channel dry-electrode system from EMOTIV called EPOC+. This is a 14-channel device with two reference channels and a sampling frequency of 128 Hz. The data was collected in a noise-isolated room. The participants were informed of the experimental procedure and related risks, and were asked to keep their eyes closed throughout the experiment. The data is provided in two formats: (1) raw EEG data and (2) pre-processed and cleaned EEG data, for both groups of participants. This data can be used to explore the functional brain connectivity of the IDD group. In addition, behavioral information like IQ, SQ, music apprehension and facial expressions (emotion) for IDD participants is provided in the file "QualitativeData.xlsx".

    Data Usage: The data is arranged as follows:
    1. Raw Data:
       Data/RawData/RawData_TDC/Music and Rest
       Data/RawData/RawData_IDD/Music and Rest
    2. Clean Data:
       Data/CleanData/CleanData_TDC/Music and Rest
       Data/CleanData/CleanData_IDD/Music and Rest

    The dataset comes along with a fully automated EEG pre-processing pipeline. This pipeline can be used to batch-process raw EEG files to obtain clean, pre-processed EEG files. Key features of this pipeline are: (1) bandpass filtering, (2) line noise removal, (3) channel selection, (4) Independent Component Analysis (ICA), and (5) automatic artifact rejection. All the required files are present in the Pipeline folder.
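
    The shipped pipeline itself is Matlab-based; purely for orientation, a rough MNE-Python sketch of the same five steps (the file name, filter cutoffs, and rejection step are assumptions, not the pipeline's actual parameters):

    import mne

    raw = mne.io.read_raw("RawData_IDD_Music_sub01.edf", preload=True)  # hypothetical file
    raw.filter(l_freq=1.0, h_freq=40.0)    # (1) bandpass filtering (cutoffs assumed)
    raw.notch_filter(freqs=50.0)           # (2) line noise removal
    raw.pick(["AF3", "F7", "F3", "FC5", "T7", "P7", "O1",
              "O2", "P8", "T8", "FC6", "F4", "F8", "AF4"])  # (3) the 14 EPOC+ channels
    ica = mne.preprocessing.ICA(random_state=0)
    ica.fit(raw)                           # (4) ICA
    ica.exclude = []                       # (5) mark artifact components here, then:
    clean = ica.apply(raw.copy())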

    If you use this dataset and/or the fully automated pre-processing pipeline for your research work, kindly cite these two articles linked to this dataset.

    (1) Sareen, E., Singh, L., Varkey, B., Achary, K., Gupta, A. (2020). EEG dataset of individuals with intellectual and developmental disorder and healthy controls under rest and music stimuli. Data in Brief, 105488, ISSN 2352-3409, DOI:https://doi.org/10.1016/j.dib.2020.105488. (2) Sareen, E., Gupta, A., Verma, R., Achary, G. K., Varkey, B (2019). Studying functional brain networks from dry electrode EEG set during music and resting states in neurodevelopment disorder, bioRxiv 759738 [Preprint]. Available from: https://www.biorxiv.org/content/10.1101/759738v1

  14. Motor and Speech Imagery EEG Dataset

    • drum.um.edu.mt
    Updated Nov 1, 2023
    Cite
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI (2023). Motor and Speech Imagery EEG Dataset [Dataset]. http://doi.org/10.60809/drum.24465871.v1
    Available download formats: docx
    Dataset updated
    Nov 1, 2023
    Dataset provided by
    University of Malta
    Authors
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview and Methodology

    This dataset contains motor imagery (MI) and speech imagery (SI) electroencephalogram (EEG) data recorded from 5 healthy subjects with a mean age of 24.4 years. MI involves the subject imagining movements in their limbs, whereas SI involves the subject imagining speaking words in their mind (thought-speech). The data was recorded using the BioSemi ActiveTwo EEG recording equipment, at a sampling frequency of 2.048 kHz. 24 channels of EEG data from the 10-20 system are available in the dataset. Four classes of data were recorded for each of the MI and SI paradigms. In the case of MI, left-hand, right-hand, legs and tongue MI tasks were recorded; in the case of SI, the words 'left', 'right', 'up' and 'down' were recorded. Data for the idle state, when the subject is mentally relaxed and not executing any tasks, was also recorded.

    Forty trials were recorded for each of the classes. These trials were recorded over four runs, with two runs used to record MI trials and two to record SI trials. The runs were interleaved, meaning that the first and third runs were used to record MI trials, and the second and fourth runs were used to record SI trials. During each run, twenty trials for each class in the paradigm were recorded, in random order. Note that during each run, twenty trials of the idle state were also recorded. This means that this database actually contains eighty idle state trials, with forty recorded during MI runs and forty recorded during SI runs.

    Subjects were guided through the data recording runs by a graphical user interface which issued instructions to them. At the start of a run, subjects were given one minute to settle down before the cued trials began. During a trial, a fixation cross first appears on-screen, indicating to the subject to remain relaxed but aware that the next trial will soon begin. After 2 s, a cue appears on-screen for 1.25 s, indicating the particular task the subject should execute. The subject starts executing the task as soon as they see the cue, and continues even after it has disappeared, until the fixation cross appears again. The cues consist of a left-facing arrow (for left-hand MI or 'left' SI), a right-facing arrow (for right-hand MI or 'right' SI), an upward-facing arrow (for tongue MI or 'up' SI) and a downward-facing arrow (for legs MI or 'down' SI). Each trial lasted 4 seconds. Between each run, subjects were given a 3-5 minute break.

    The data was re-referenced using channel Cz and then mean-centered. The data was also passed through an anti-aliasing filter and down-sampled to 1 kHz before being stored in .mat files for the data repository. The anti-aliasing filter was a low-pass filter with a cutoff frequency of 500 Hz, implemented using the lowpass function in MATLAB, which produces a 60 dB attenuation above the cutoff and automatically compensates for filter-induced delays.

    Files

    The dataset consists of 10 MAT-files, named X_Subject_Y.mat, where X is the acronym denoting the brain imagery type, either MI for motor imagery data or SI for speech imagery data, and Y is the subject number. Each file contains the trials for each run in the structure variables 'run_1' and 'run_2'. Within each run structure there are two variables:

    'EEG_Data', a matrix containing the EEG data formatted as [number of trials x channels x data samples]. The number of data samples is 4000, since the length of each trial was 4 s, sampled at 1 kHz. The relationship between the EEG channels and the channel number in the second dimension of this matrix is documented in the table stored within the 'ChannelLocations.mat' file, which is included with the dataset.

    'labels', a vector indicating which cue was issued, with the following numbers representing the different cues: 1 - Right, 2 - Left, 3 - Up, 4 - Down, 5 - Idle, 6 - Fixation Cross. A minimal reading sketch for these files is given at the end of this entry.

    Acknowledgements

    The authors acknowledge that data collection for this project was funded through the project "Setting up of transdisciplinary research and knowledge exchange (TRAKE) complex at the University of Malta (ERDF.01.124)", which is co-financed by the European Union through the European Regional Development Fund 2014-2020. The data was recorded by the Centre for Biomedical Cybernetics at the University of Malta.
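
    As promised above, a minimal reading sketch for the MAT-file layout just described, using SciPy (simplify_cells flattens the Matlab structs into Python dicts):

    from scipy.io import loadmat

    f = loadmat("MI_Subject_1.mat", simplify_cells=True)  # naming pattern from the description
    run = f["run_1"]
    X = run["EEG_Data"]    # trials x channels x samples; 4000 samples = 4 s at 1 kHz
    y = run["labels"]      # 1=Right, 2=Left, 3=Up, 4=Down, 5=Idle, 6=Fixation Cross
    print(X.shape, y[:10])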

  15. EMG-EEG dataset for Upper-Limb Gesture Classification

    • ieee-dataport.org
    Updated Jun 22, 2023
    Cite
    Boreom Lee (2023). EMG-EEG dataset for Upper-Limb Gesture Classification [Dataset]. https://ieee-dataport.org/documents/emg-eeg-dataset-upper-limb-gesture-classification
    Dataset updated
    Jun 22, 2023
    Authors
    Boreom Lee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    fatigue

  16. EEG dataset

    • figshare.com
    Updated Dec 6, 2019
    Cite
    minho lee (2019). EEG dataset [Dataset]. http://doi.org/10.6084/m9.figshare.8091242.v1
    Available download formats: bin
    Dataset updated
    Dec 6, 2019
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    minho lee
    License

    GNU General Public License: https://www.gnu.org/copyleft/gpl.html

    Description

    This dataset was collected for the study "Robust Detection of Event-Related Potentials in a User-Voluntary Short-Term Imagery Task".

  17. Complete EEG dataset

    • kaggle.com
    Updated Aug 25, 2021
    Cite
    Aman Anand (2021). Complete EEG dataset [Dataset]. https://www.kaggle.com/amananandrai/complete-eeg-dataset/code
    Dataset updated
    Aug 25, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Aman Anand
    Description

    Introduction

    The database contains EEG recordings of subjects before and during the performance of mental arithmetic tasks.

    Method of Recording Signals

    The EEGs were recorded monopolarly using a Neurocom 23-channel EEG system (XAI-MEDICA, Ukraine). The silver/silver chloride electrodes were placed on the scalp according to the International 10/20 scheme. All electrodes were referenced to interconnected ear reference electrodes.

    A high-pass filter with a 30 Hz cut-off frequency and a power line notch filter (50 Hz) were used. All recordings are artifact-free EEG segments of 60 seconds duration. At the data preprocessing stage, Independent Component Analysis (ICA) was used to eliminate artifacts (eye movements, muscle activity, and cardiac pulsation). The arithmetic task was the serial subtraction of two numbers. Each trial started with the oral communication of a 4-digit number (minuend) and a 2-digit number (subtrahend), e.g. 3141 and 42.

    Data Description

    The data contains 36 csv files, one per subject, each with 19 channels. It was converted from EDF to csv from the original data source. The channels selected are Fp1, Fp2, F3, F4, F7, F8, T3, T4, C3, C4, T5, T6, P3, P4, O1, O2, Fz, Cz, Pz. These are the columns in the dataset.
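
    A minimal sketch of reading one subject's recording with pandas; the file name is hypothetical, and the 19 channel names are the columns listed above:

    import pandas as pd

    channels = ["Fp1", "Fp2", "F3", "F4", "F7", "F8", "T3", "T4", "C3", "C4",
                "T5", "T6", "P3", "P4", "O1", "O2", "Fz", "Cz", "Pz"]
    df = pd.read_csv("Subject00_1.csv")    # hypothetical file name
    eeg = df[channels].to_numpy()          # shape: (n_samples, 19)
    print(eeg.shape)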

    Acknowledgements

    The data is taken from the following resources

    https://www.mdpi.com/2306-5729/4/1/14

    Goldberger, A., et al. "PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals." Circulation [Online]. 101(23), pp. e215-e220 (2000).

  18. Fourteen-channel EEG with Imagined Speech (FEIS) dataset

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Scott Wellington; Jonathan Clayton (2020). Fourteen-channel EEG with Imagined Speech (FEIS) dataset [Dataset]. http://doi.org/10.5281/zenodo.3554128
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Scott Wellington; Jonathan Clayton
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    Welcome to the FEIS (Fourteen-channel EEG with Imagined Speech) dataset.
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The FEIS dataset comprises Emotiv EPOC+ [1] EEG recordings of:
    
    * 21 participants listening to, imagining speaking, and then actually speaking
     16 English phonemes (see supplementary, below)
    
    * 2 participants listening to, imagining speaking, and then actually speaking
     16 Chinese syllables (see supplementary, below)
    
    For replicability and for the benefit of further research, this dataset
    includes the complete experiment set-up, including participants' recorded
    audio and 'flashcard' screens for audio-visual prompts, Lua script and .mxs
    scenario for the OpenVibe [2] environment, as well as all Python scripts
    for the preparation and processing of data as used in the supporting
    studies (submitted in support of completion of the MSc Speech and Language
    Processing with the University of Edinburgh):
    
    * J. Clayton, "Towards phone classification from imagined speech using
     a lightweight EEG brain-computer interface," M.Sc. dissertation,
     University of Edinburgh, Edinburgh, UK, 2019.
    
    * S. Wellington, "An investigation into the possibilities and limitations
     of decoding heard, imagined and spoken phonemes using a low-density,
     mobile EEG headset," M.Sc. dissertation, University of Edinburgh,
     Edinburgh, UK, 2019.
    
    Each participant's data comprise 5 .csv files -- these are the 'raw'
    (unprocessed) EEG recordings for the 'stimuli', 'articulators' (see
    supplementary, below) 'thinking', 'speaking' and 'resting' phases per epoch
    for each trial -- alongside a 'full' .csv file with the end-to-end
    experiment recording (for the benefit of calculating deltas).
    
    To guard against software deprecation or inaccessibility, the full repository
    of open-source software used in the above studies is also included.
    
    We hope for the FEIS dataset to be of some utility for future researchers,
    due to the sparsity of similar open-access databases. As such, this dataset
    is made freely available for all academic and research purposes (non-profit).
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    REFERENCING
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    If you use the FEIS dataset, please reference:
    
    * S. Wellington, J. Clayton, "Fourteen-channel EEG with Imagined Speech
     (FEIS) dataset," v1.0, University of Edinburgh, Edinburgh, UK, 2019.
     doi:10.5281/zenodo.3369178
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    LEGAL
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The research supporting the distribution of this dataset has been approved by
    the PPLS Research Ethics Committee, School of Philosophy, Psychology and
    Language Sciences, University of Edinburgh (reference number: 435-1819/2).
    
    This dataset is made available under the Open Data Commons Attribution License
    (ODC-BY): http://opendatacommons.org/licenses/by/1.0
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    ACKNOWLEDGEMENTS
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    The FEIS database was compiled by:
    
    Scott Wellington (MSc Speech and Language Processing, University of Edinburgh)
    Jonathan Clayton (MSc Speech and Language Processing, University of Edinburgh)
    
    Principal Investigators:
    
    Oliver Watts (Senior Researcher, CSTR, University of Edinburgh)
    Cassia Valentini-Botinhao (Senior Researcher, CSTR, University of Edinburgh)
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    METADATA
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    For participants, dataset refs 01 to 21:
    
    01 - NNS
    02 - NNS
    03 - NNS, Left-handed
    04 - E
    05 - E, Voice heard as part of 'stimuli' portions of trials belongs to
       participant 04, due to microphone becoming damaged and unusable prior to
       recording
    06 - E
    07 - E
    08 - E, Ambidextrous
    09 - NNS, Left-handed
    10 - E
    11 - NNS
    12 - NNS, Only sessions one and two recorded (out of three total), as
       participant had to leave the recording session early
    13 - E
    14 - NNS
    15 - NNS
    16 - NNS
    17 - E
    18 - NNS
    19 - E
    20 - E
    21 - E
    
    E = native speaker of English
    NNS = non-native speaker of English (>= C1 level)
    
    For participants, dataset refs chinese-1 and chinese-2:
    
    chinese-1 - C
    chinese-2 - C, Voice heard as part of 'stimuli' portions of trials belongs to
          participant chinese-1
    
    C = native speaker of Chinese
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    
    SUPPLEMENTARY
    
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    Under the international 10-20 system, the Emotiv EPOC+ headset 14 channels:
    
    F3 FC5 AF3 F7 T7 P7 O1 O2 P8 T8 F8 AF4 FC6 F4
    
    The 16 English phonemes investigated in dataset refs 01 to 21:
    
    /i/ /u:/ /æ/ /ɔ:/ /m/ /n/ /ŋ/ /f/ /s/ /ʃ/ /v/ /z/ /ʒ/ /p/ /t/ /k/
    
    The 16 Chinese syllables investigated in dataset refs chinese-1 and chinese-2:
    
    mā má mǎ mà mēng méng měng mèng duō duó duǒ duò tuī tuí tuǐ tuì
    
    All references to 'articulators' (e.g. as part of filenames) refer to the
    1-second 'fixation point' portion of trials. The name is a holdover from
    preliminary trials which were modelled on the KARA ONE database
    (http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html) [3].
    
    <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <>< <><
    ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><> ><>
    
    [1] Emotiv EPOC+. https://emotiv.com/epoc. Accessed online 14/08/2019.
    
    [2] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy,
      O. Bertrand, A. Lécuyer. “OpenViBE: An Open-Source Software Platform
      to Design, Test and Use Brain-Computer Interfaces in Real and Virtual
      Environments”, Presence: teleoperators and virtual environments,
      vol. 19, no 1, 2010.
    
    [3] S. Zhao, F. Rudzicz. "Classifying phonological categories in imagined
      and articulated speech." In Proceedings of ICASSP 2015, Brisbane
      Australia, 2015.
  19. Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG...

    • openneuro.org
    Updated May 22, 2025
    Cite
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León (2025). Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006018.v1.2.2
    Dataset updated
    May 22, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Cognitive Electrophysiology in Socioeconomic Context in Adulthood Dataset

    Data Description

    This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics. The EEG data were recorded with tasks directly acquired from the Event-Related Potentials Compendium of Open Resources and Experiments ERP CORE (Kappenman et al., 2021), or adapted from these tasks (Isbell et al., 2024). These tasks were optimized to capture neural activity manifest in perception, cognition, and action, in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist, consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. The detailed description of the dataset is accepted for publication in Scientific Data, with the title: "Cognitive Electrophysiology in Socioeconomic Context in Adulthood."

    EEG Recording

    EEG data were recorded using the Brain Products actiCHamp Plus system, in combination with BrainVision Recorder (Version 1.25.0101). We used a 32-channel actiCAP slim active electrode system, with electrodes mounted on elastic snap caps (Brain Products GmbH, Gilching, Germany). The ground electrode was placed at FPz. From the electrode bundle, we repurposed 2 electrodes by placing them on the mastoid bones behind the left and right ears, to be used for re-referencing after data collection. We also repurposed 3 additional electrodes to record the electrooculogram (EOG). To capture eye artifacts, we placed the horizontal EOG (HEOG) electrodes lateral to the external canthus of each eye. We also placed one vertical EOG (VEOG) electrode below the right eye. The remaining 27 electrodes were used as scalp electrodes, mounted per the international 10/20 system. EEG data were recorded at a sampling rate of 500 Hz and referenced to the Cz electrode. StimTrak was used to assess stimulus presentation delays for both the monitor and headphones. The results indicated that both the visual and auditory stimuli had a delay of approximately 20 ms. Therefore, users should shift the event codes by 20 ms when conducting stimulus-locked analyses.
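
    A minimal sketch of the recommended 20 ms event-code shift in MNE-Python (the file name and epoch window are hypothetical):

    import mne

    raw = mne.io.read_raw_brainvision("sub-001_task-example_eeg.vhdr", preload=True)
    events, event_id = mne.events_from_annotations(raw)
    shift = int(round(0.020 * raw.info["sfreq"]))  # 20 ms at 500 Hz = 10 samples
    events[:, 0] += shift    # compensate the measured stimulus presentation delay
    epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8, preload=True)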

    Notes

    Before the data were publicly shared, all identifiable information was removed, including date of birth, date of session, race/ethnicity, zip code, occupation (self and parent), and names of the languages the participants reported speaking and understanding fluently. Date of birth and date of session were used to compute age in years, which is included in the dataset. Furthermore, several variables were recoded based on re-identification risk assessments. Users who would like to establish secure access to components of the dataset that we could not publicly share due to re-identification risks should contact the corresponding researcher as described below. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded, and the raw EEG files were renamed to make the dataset BIDS compatible.

    The ERP CORE experimental tasks can be found on OSF, under Experiment Control Files: https://osf.io/thsqg/

    Examples of EEGLAB/ERPLAB data processing scripts that can be used with the EEG data shared here can be found on OSF:

    osf.io/thsqg
    osf.io/43H75

    Contact

    If you have any questions, comments, or requests, please contact:
    Elif Isbell: eisbell@ucmerced.edu

    Copyright and License

    This dataset is licensed under CC0.

    References

    Isbell, E., Peters, A. N., Richardson, D. M., & Rodas De León, N. E. (2025). Cognitive electrophysiology in socioeconomic context in adulthood. Scientific Data, 12(1), 1–9. https://doi.org/10.1038/s41597-025-05209-z

    Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PloS One, 19(8), e0307406.

    Isbell, E., De León, N. E. R. & Richardson, D. M. Childhood family socioeconomic status is linked to adult brain electrophysiology - accompanying analytic data and code. OSF https://doi.org/10.17605/osf.io/43H75 (2024).

    Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465.

    Kappenman, E. S., Farrens, J., Zhang, W., Stewart, A. X. & Luck, S. J. ERP CORE. https://osf.io/thsqg (2020).

    Kappenman, E., Farrens, J., Zhang, W., Stewart, A. & Luck, S. Experiment control files. https://osf.io/47uf2 (2020).

  20. An EEG dataset recorded during affective music listening

    • openneuro.org
    Updated Apr 23, 2020
    Cite
    Ian Daly; Nicoletta Nicolaou; Duncan Williams; Faustina Hwang; Alexis Kirke; Eduardo Miranda; Slawomir J. Nasuto (2020). An EEG dataset recorded during affective music listening [Dataset]. http://doi.org/10.18112/openneuro.ds002721.v1.0.1
    Dataset updated
    Apr 23, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Ian Daly; Nicoletta Nicolaou; Duncan Williams; Faustina Hwang; Alexis Kirke; Eduardo Miranda; Slawomir J. Nasuto
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    0. Sections

    1. Project
    2. Dataset
    3. Terms of Use
    4. Contents
    5. Method and Processing

    1. PROJECT

    Title: Brain-Computer Music Interface for Monitoring and Inducing Affective States (BCMI-MIdAS)

    Dates: 2012-2017

    Funding organisation: Engineering and Physical Sciences Research Council (EPSRC)

    Grant no.: EP/J003077/1 and EP/J002135/1.

    2. DATASET

    Title: EEG data investigating neural correlates of music-induced emotion.

    Description: This dataset accompanies the publication by Daly et al. (2018) and has been analysed in Daly et al. (2014; 2015a; 2015b) (please see Section 5 for full references). The purpose of the research activity in which the data were collected was to investigate the EEG neural correlates of music-induced emotion. For this purpose, 31 healthy adult participants listened to 40 music clips of 12 s duration each, targeting a range of emotional states. The music clips comprised excerpts from film scores spanning a range of styles and rated on induced emotion. The dataset contains unprocessed EEG data from all 31 participants (age range 18-66, 18 female) while listening to the music clips, together with the reported induced emotional responses. The paradigm involved 6 runs of EEG recordings. The first and last runs were resting-state runs, during which participants were instructed to sit still and rest for 300 s. The other 4 runs each contained 10 music listening trials.

    Publication Year: 2018

    Creator: Nicoletta Nicolaou, Ian Daly.

    Contributors: Isil Poyraz Bilgin, James Weaver, Asad Malik.

    Principal Investigator: Slawomir Nasuto (EP/J003077/1).

    Co-Investigator: Eduardo Miranda (EP/J002135/1).

    Organisation: University of Reading

    Rights-holders: University of Reading

    Source: The musical stimuli were taken from Eerola & Vuoskoski, "A comparison of the discrete and dimensional models of emotion in music", Psychol. Music, 39:18-49, 2010 (doi: 10.1177/0305735610362821).

    3. TERMS OF USE

    Copyright University of Reading, 2018. This dataset is licensed by the rights-holder(s) under a Creative Commons Attribution 4.0 International Licence: https://creativecommons.org/licenses/by/4.0/.

    4. CONTENTS

    BIDS File listing: The dataset comprises data from 31 participants, named using the convention sub_s_number, where s_number is a random participant number from 1 to 31. For example, 'sub-08' contains data obtained from participant 8.

    The data is in BIDS format and contains EEG and associated metadata. The sampling rate is 1 kHz and the EEG corresponding to a music clip is 20 s long (the duration of the clips).

    Each data folder contains the following data (please note that the number of runs varies between participants):

    EEG data in .tsv format. Event codes (JSON) and timings (tsv). EEG channel information.
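
    A minimal sketch of inspecting one run's EEG .tsv file with pandas; the path is hypothetical and should be taken from the dataset's BIDS layout:

    import pandas as pd

    eeg = pd.read_csv("sub-08/eeg/sub-08_task-music_run-03_eeg.tsv", sep="\t")
    print(eeg.shape)          # samples x channels, sampled at 1 kHz
    print(list(eeg.columns))  # channel names from the accompanying channel information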

    5. METHOD and PROCESSING

    This information is available in the following publications:

    [1] Daly, I., Nicolaou, N., Williams, D., Hwang, F., Kirke, A., Miranda, E., Nasuto, S.J., "Neural and physiological data from participants listening to affective music", Scientific Data, 2018.
    [2] Daly, I., Malik, A., Hwang, F., Roesch, E., Weaver, J., Kirke, A., Williams, D., Miranda, E. R., Nasuto, S. J., "Neural correlates of emotional responses to music: an EEG study", Neuroscience Letters, 573: 52-7, 2014; doi: 10.1016/j.neulet.2014.05.003.
    [3] Daly, I., Hallowell, J., Hwang, F., Kirke, A., Malik, A., Roesch, E., Weaver, J., Williams, D., Miranda, E., Nasuto, S.J., "Changes in music tempo entrain movement related brain activity", Proc. IEEE EMBC 2014, pp. 4595-8; doi: 10.1109/EMBC.2014.6944647.
    [4] Daly, I., Williams, D., Hallowell, J., Hwang, F., Kirke, A., Malik, A., Weaver, J., Miranda, E., Nasuto, S.J., "Music-induced emotions can be predicted from a combination of brain activity and acoustic features", Brain and Cognition, 101:1-11, 2015b; doi: 10.1016/j.bandc.2015.08.003.

    Please cite these references if you use this dataset in your study.

    Thank you for your interest in our work.
