Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
This database, collected at the Children’s Hospital Boston, consists of EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. The recordings are grouped into 23 cases and were collected from 22 subjects (5 males, ages 3–22; and 17 females, ages 1.5–19).
https://github.com/bdsp-core/bdsp-license-and-dua
The Harvard EEG Database will encompass data gathered from four hospitals affiliated with Harvard University: Massachusetts General Hospital (MGH), Brigham and Women's Hospital (BWH), Beth Israel Deaconess Medical Center (BIDMC), and Boston Children's Hospital (BCH). The EEG data includes three types:
rEEG: "routine EEGs" recorded in the outpatient setting.
EMU: recordings obtained in the inpatient setting, within the Epilepsy Monitoring Unit (EMU).
ICU/LTM: recordings obtained from acutely and critically ill patients within the intensive care unit (ICU).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
'Univ. of Bonn' and 'CHB-MIT Scalp EEG Database' are publicly available datasets that are among the most sought after by researchers. The Bonn dataset is very small compared to CHB-MIT, but researchers still prefer Bonn because it comes in a simple '.txt' format. The dataset being published here is a preprocessed form of CHB-MIT, available in '.csv' format.
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on November 22, 2022. Data set collected at the Children's Hospital Boston, of EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. All signals were sampled at 256 samples per second with 16-bit resolution. Most files contain 23 EEG signals (24 or 26 in a few cases).
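A minimal loading sketch (not part of the original resource description), assuming MNE-Python and a locally downloaded recording; the file path is illustrative:

```python
# Inspect one CHB-MIT recording with MNE-Python.
import mne

raw = mne.io.read_raw_edf("chb01/chb01_01.edf", preload=False, verbose="error")
print(raw.info["sfreq"])   # expected: 256.0 (samples per second)
print(len(raw.ch_names))   # typically 23 EEG signals (24 or 26 in a few cases)
```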
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
C4
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This dataset is a BIDS-compatible version of the CHB-MIT Scalp EEG Database: the file structure has been reorganized to comply with the BIDS specification.
The dataset is released under the Open Data Commons Attribution License v1.0.
The original Physionet CHB-MIT Scalp EEG Database was published by Ali Shoeb. This BIDS-compatible version of the dataset was published by Jonathan Dan.
The original Physionet CHB-MIT Scalp EEG Database is available on the Physionet website.
CHB-MIT Scalp EEG Database
2010
This database, collected at the Children's Hospital Boston, consists of EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention.
Each folder (sub-01, sub-02, etc.) contains between 9 and 42 continuous .edf files from a single subject. Hardware limitations resulted in gaps between consecutively numbered .edf files, during which the signals were not recorded; in most cases, the gaps are 10 seconds or less, but occasionally there are much longer gaps. In order to protect the privacy of the subjects, all protected health information (PHI) in the original .edf files has been replaced with surrogate information in the files provided here. Dates in the original .edf files have been replaced by surrogate dates, but the time relationships between the individual files belonging to each case have been preserved. In most cases, the .edf files contain exactly one hour of digitized EEG signals, although those belonging to case sub-10 are two hours long, and those belonging to cases sub-04, sub-06, sub-07, sub-09, and sub-23 are four hours long; occasionally, files in which seizures are recorded are shorter.
The EEG is recorded at 256 Hz with 16-bit resolution. The recordings use a double-banana bipolar montage with 18 channels derived from the 10-20 electrode system.
The dataset also contains seizure annotations as start and stop times.
The dataset contains 664 `.edf` recordings, 129 of which contain one or more seizures. In all, these records include 198 seizures.
23 pediatric subjects with intractable seizures (5 males, ages 3–22; 17 females, ages 1.5–19; 1 n/a).
Recordings were performed at the Children's Hospital Boston using the International 10-20 system of EEG electrode positions. Signals were sampled at 256 samples per second with 16-bit resolution.
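For illustration, a hedged sketch of loading one subject of this BIDS-formatted release with MNE-BIDS; the task and run entity names below are guesses, not taken from this description:

```python
# Load one subject of the BIDS-formatted dataset with MNE-BIDS.
from mne_bids import BIDSPath, read_raw_bids

# "szMonitoring" and run "01" are illustrative guesses; check the dataset's
# actual entity names before use.
bids_path = BIDSPath(subject="01", task="szMonitoring", run="01",
                     datatype="eeg", root="CHB-MIT-BIDS")
raw = read_raw_bids(bids_path)
print(raw.annotations)  # seizure start/stop times arrive as annotations
```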
THIS RESOURCE IS NO LONGER IN SERVICE. Documented on April 29, 2025. Electroencephalogram (EEG) data recorded from invasive and scalp electrodes. The EEG database contains invasive EEG recordings of 21 patients suffering from medically intractable focal epilepsy. The data were recorded during invasive pre-surgical epilepsy monitoring at the Epilepsy Center of the University Hospital of Freiburg, Germany. In eleven patients, the epileptic focus was located in neocortical brain structures, in eight patients in the hippocampus, and in two patients in both. In order to obtain a high signal-to-noise ratio and fewer artifacts, and to record directly from focal areas, intracranial grid, strip, and depth electrodes were utilized. The EEG data were acquired using a Neurofile NT digital video EEG system with 128 channels, a 256 Hz sampling rate, and a 16-bit analogue-to-digital converter. No notch or band-pass filters were applied. For each patient, there are datasets called 'ictal' and 'interictal': the former contains files with epileptic seizures and at least 50 min of pre-ictal data; the latter contains approximately 24 hours of EEG recordings without seizure activity. At least 24 h of continuous interictal recordings are available for 13 patients. For the remaining patients, interictal invasive EEG segments of less than 24 h were joined together to provide at least 24 h per patient. An interdisciplinary project between:
* Epilepsy Center, University Hospital Freiburg
* Bernstein Center for Computational Neuroscience (BCCN), Freiburg
* Freiburg Center for Data Analysis and Modeling (FDM)
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
Dataset
Synthetic EEG data generated by the ‘bai’ model based on real data.
Features/Columns:
No: "Number" Sex: "Gender" Age: "Age of participants" EEG Date: "The date of the EEG" Education: "Education level" IQ: "IQ level of participants" Main Disorder: "General class definition of the disorder" Specific Disorder: "Specific class definition of the disorder"
Total Features/Columns: 1140
Content:
Obsessive Compulsive Disorder, Bipolar Disorder, Schizophrenia… See the full description on the dataset page: https://huggingface.co/datasets/Neurazum/General-Disorders-EEG-Dataset-v1.
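A hedged sketch of pulling this dataset from the Hugging Face Hub with the `datasets` library; the split name "train" is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("Neurazum/General-Disorders-EEG-Dataset-v1", split="train")
print(ds.column_names[:8])               # No, Sex, Age, EEG Date, Education, IQ, ...
print(sorted(set(ds["Main Disorder"])))  # general disorder classes
```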
Data set from a large study to examine EEG correlates of genetic predisposition to alcoholism. It contains measurements from 64 electrodes placed on the scalp, sampled at 256 Hz (3.9-msec epoch) for 1 second. There were two groups of subjects: alcoholic and control. Each subject was exposed to either a single stimulus (S1) or to two stimuli (S1 and S2), which were pictures of objects chosen from the 1980 Snodgrass and Vanderwart picture set. When two stimuli were shown, they were presented either in a matched condition, where S1 was identical to S2, or in a non-matched condition, where S1 differed from S2. There were 122 subjects, and each subject completed 120 trials in which different stimuli were shown. The electrode positions were located at standard sites (Standard Electrode Position Nomenclature, American Electroencephalographic Association 1990). Zhang et al. (1995) describe the data collection process in detail. There are three versions of the EEG data set:
* The Small Data Set (smni97_eeg_data.tar.gz) contains data for 2 subjects, alcoholic a_co2a0000364 and control c_co2c0000337. For each of the 3 matching paradigms, c_1 (one presentation only), c_m (match to previous presentation), and c_n (no-match to previous presentation), 10 runs are shown.
* The Large Data Set (SMNI_CMI_TRAIN.tar.gz and SMNI_CMI_TEST.tar.gz) contains data for 10 alcoholic and 10 control subjects, with 10 runs per subject per paradigm. The test data used the same 10 alcoholic and 10 control subjects as the training data, but with 10 out-of-sample runs per subject per paradigm.
* The Full Data Set contains all 120 trials for 122 subjects. The entire set of data is about 700 MBytes.
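A minimal sketch of unpacking the Small Data Set with the Python standard library; the extraction directory is illustrative:

```python
import tarfile
from pathlib import Path

# Unpack the Small Data Set archive named in the description above.
with tarfile.open("smni97_eeg_data.tar.gz", "r:gz") as tar:
    tar.extractall("smni97")

# List a few of the extracted per-trial files.
for p in sorted(Path("smni97").rglob("*"))[:10]:
    print(p)
```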
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This database includes the de-identified EEG data from 37 healthy individuals who participated in a brain-computer interface (BCI) study. All but one subject underwent 2 sessions of BCI experiments that involved controlling a computer cursor to move in one-dimensional space using their “intent”. EEG data were recorded with 62 electrodes. In addition to the EEG data, behavioral data including the online success rate and results of BCI cursor control are also included.

This dataset was collected under support from the National Institutes of Health via grant AT009263 to Dr. Bin He. Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu.

This dataset has been used and analyzed to study the immediate effect of short meditation on BCI performance. The results are reported in: Kim et al., “Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance,” Frontiers in Human Neuroscience, 2022 (https://doi.org/10.3389/fnhum.2022.1019279). Please cite this paper if you use any data included in this dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Artificial intelligence (AI) based automated epilepsy diagnosis aims to ease the burden of manual detection, prediction, and management of seizure- and epilepsy-specific EEG signals for medical specialists. With increasingly large, raw, open-source EEG datasets, there is a need for data standardization that supports patient- and seizure-sensitive AI analysis with reduced redundant information. This work releases balanced, annotated, fixed-time-and-length metadata for the CHB-MIT Scalp EEG database v1.0.0.0.
The work releases patient-specific (inter- and intra-patient) and patient non-specific EEG data extracted using the ictal, pre-ictal, post-ictal, peri-ictal, and non-seizure time stamps provided in the original dataset's annotations. Further details of this metadata can be found in the provided csv file (CHB-MIT DB timestamp.csv). The released EEG data are available in csv format, and class labels are provided in the last row of each csv file. Data for ch06, ch12, ch23, and ch24 in the patient-specific set and chb24_11 in the patient non-specific set have not been included. The importance of peri-ictal EEGs is elucidated in Handa, P., & Goel, N. (2021). Peri-ictal and non-seizure EEG event detection using generated metadata. Expert Systems, e12929.
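A minimal sketch of separating signals from labels in one released csv file, following the description above (labels in the last row); the file name is hypothetical:

```python
import numpy as np

# Hypothetical file name; if the file has a header row, pass skip_header=1.
arr = np.genfromtxt("chb01_03_ictal.csv", delimiter=",")
signals, labels = arr[:-1, :], arr[-1, :]  # last row holds the class labels
print(signals.shape, np.unique(labels))
```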
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
This data set consists of electroencephalography (EEG) data from 50 participants (Subject1 – Subject50) with acute ischemic stroke, aged between 30 and 77 years. The participants included 39 males and 11 females. The time after stroke ranged from 1 day to 30 days. 22 participants had right-hemisphere hemiplegia and 28 had left-hemisphere hemiplegia. All participants were originally right-handed. Each participant sat in front of a computer screen with an arm resting on a pillow on their lap or on a table, and carried out the instructions given on the screen. At the start of each trial, a picture with a text description cueing the left or right hand was presented for 2 s. Participants were asked to focus on the instructed hand motor imagery; at the same time, a video of the ipsilateral hand movement was displayed on the screen for 4 s. Each trial ended with a 2 s break.
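As a worked illustration of the trial timeline (2 s cue, 4 s imagery video, 2 s break), a sketch that computes sample indices for the imagery window; the sampling rate is a placeholder, not stated in this description:

```python
def imagery_window(trial_onset_s, fs):
    """Sample indices of the 4 s imagery period within one 8 s trial."""
    start = int((trial_onset_s + 2.0) * fs)  # imagery begins after the 2 s cue
    stop = int((trial_onset_s + 6.0) * fs)   # and ends before the 2 s break
    return start, stop

# fs=500.0 is a placeholder value, not taken from the dataset description.
print(imagery_window(trial_onset_s=0.0, fs=500.0))  # -> (1000, 3000)
```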
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
These files contain the raw data and processing parameters accompanying the paper "Hierarchical structure guides rapid linguistic predictions during naturalistic listening" by Jonathan R. Brennan and John T. Hale. They include the stimulus (wav files), raw data (MATLAB format for the FieldTrip toolbox), data processing parameters (MATLAB), and variables used to align the stimuli with the EEG data and for the statistical analyses reported in the paper.
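A hedged sketch for opening the MATLAB-format files from Python; this assumes a pre-v7.3 MAT file and an illustrative file name (v7.3 files would need an HDF5 reader such as h5py):

```python
from scipy.io import loadmat

# squeeze_me/struct_as_record make MATLAB structs easier to navigate in Python.
mat = loadmat("S01.mat", squeeze_me=True, struct_as_record=False)
print([k for k in mat if not k.startswith("__")])  # FieldTrip structs inside
```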
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
PCA
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
epilepsy
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
The dataset comprised 14 patients with paranoid schizophrenia and 14 healthy controls. Data were acquired at a sampling frequency of 250 Hz using the standard 10-20 EEG montage with 19 EEG channels: Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2. The reference electrode was placed between electrodes Fz and Cz.
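A minimal sketch that encodes the stated recording parameters as an MNE-Python Info object (channel names and the 250 Hz rate are taken from the description above):

```python
import mne

ch_names = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8", "T3", "C3", "Cz",
            "C4", "T4", "T5", "P3", "Pz", "P4", "T6", "O1", "O2"]
info = mne.create_info(ch_names, sfreq=250.0, ch_types="eeg")
print(info)
```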
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This dataset, collected in 2020, combines high-density electroencephalography (HD-EEG, 128 channels) and mouse-tracking, and is intended as a resource for examining the dynamic decision processes of semantic and preference choices in the human brain. The dataset includes high-density resting-state and task-related (food preference choices and semantic judgments) EEG acquired from 31 individuals (ages: 18-33).
The EEG data were acquired using a 128-channel cap based on the standard 10/20 system with an Electrical Geodesics Inc. (EGI; Eugene, Oregon) system. During recording, the sampling rate was 1000 Hz, and the E129 (Cz) electrode was used as the reference. Electrode impedances were kept below 50 kΩ for each electrode throughout the experiment.
sub-*: EEG (.set) and behavior data in BIDS format.
sourcedata/rawdata: Raw .mff EGI data and behavior data with de-identified subject information.
sourcedata/psychopy: Stimuli and PsychoPy scripts for presentation.
derivatives/eeglab-preproc: Preprocessed continuous EEG data with EEGLAB (easy to set different epoch time windows for further analysis).
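A minimal sketch (assuming MNE-Python) for reading one subject's .set recording from the layout above; the exact entity names in the path are illustrative:

```python
import mne

# Path entities (task name, etc.) are illustrative guesses.
raw = mne.io.read_raw_eeglab("sub-01/eeg/sub-01_task-rest_eeg.set",
                             preload=True)
print(raw.info["sfreq"])  # expected: 1000.0, with E129 (Cz) as reference
```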
Please refer to the corresponding paper and GitHub code for more details.
Chen, K., Wang, R., Huang, J., Gao, F., Yuan, Z., Qi, Y., & Wu, H. (2022). A resource for assessing dynamic binary choices in the adult brain using EEG and mouse-tracking. Scientific Data, 9(1), 416. https://doi.org/10.1038/s41597-022-01538-5
Appelhoff, S., Sanderson, M., Brooks, T., van Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A., & Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4(44), 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
Welcome to the FEIS (Fourteen-channel EEG with Imagined Speech) dataset.

The FEIS dataset comprises Emotiv EPOC+ [1] EEG recordings of:
* 21 participants listening to, imagining speaking, and then actually speaking 16 English phonemes (see supplementary, below)
* 2 participants listening to, imagining speaking, and then actually speaking 16 Chinese syllables (see supplementary, below)

For replicability and for the benefit of further research, this dataset includes the complete experiment set-up, including participants' recorded audio and 'flashcard' screens for audio-visual prompts, the Lua script and .mxs scenario for the OpenViBE [2] environment, as well as all Python scripts for the preparation and processing of data as used in the supporting studies (submitted in support of completion of the MSc Speech and Language Processing at the University of Edinburgh):
* J. Clayton, "Towards phone classification from imagined speech using a lightweight EEG brain-computer interface," M.Sc. dissertation, University of Edinburgh, Edinburgh, UK, 2019.
* S. Wellington, "An investigation into the possibilities and limitations of decoding heard, imagined and spoken phonemes using a low-density, mobile EEG headset," M.Sc. dissertation, University of Edinburgh, Edinburgh, UK, 2019.

Each participant's data comprise 5 .csv files -- the 'raw' (unprocessed) EEG recordings for the 'stimuli', 'articulators' (see supplementary, below), 'thinking', 'speaking' and 'resting' phases per epoch for each trial -- alongside a 'full' .csv file with the end-to-end experiment recording (for the benefit of calculating deltas). To guard against software deprecation or inaccessibility, the full repository of open-source software used in the above studies is also included.

We hope the FEIS dataset will be of utility for future researchers, given the sparsity of similar open-access databases. As such, this dataset is made freely available for all academic and research purposes (non-profit).

REFERENCING

If you use the FEIS dataset, please reference:
* S. Wellington, J. Clayton, "Fourteen-channel EEG with Imagined Speech (FEIS) dataset," v1.0, University of Edinburgh, Edinburgh, UK, 2019. doi:10.5281/zenodo.3369178

LEGAL

The research supporting the distribution of this dataset has been approved by the PPLS Research Ethics Committee, School of Philosophy, Psychology and Language Sciences, University of Edinburgh (reference number: 435-1819/2). This dataset is made available under the Open Data Commons Attribution License (ODC-BY): http://opendatacommons.org/licenses/by/1.0

ACKNOWLEDGEMENTS

The FEIS database was compiled by Scott Wellington (MSc Speech and Language Processing, University of Edinburgh) and Jonathan Clayton (MSc Speech and Language Processing, University of Edinburgh). Principal Investigators: Oliver Watts (Senior Researcher, CSTR, University of Edinburgh) and Cassia Valentini-Botinhao (Senior Researcher, CSTR, University of Edinburgh).

METADATA

For participants, dataset refs 01 to 21 (E = native speaker of English; NNS = non-native speaker of English, >= C1 level):
01 - NNS
02 - NNS
03 - NNS, left-handed
04 - E
05 - E, voice heard in the 'stimuli' portions of trials belongs to participant 04, due to a microphone becoming damaged and unusable prior to recording
06 - E
07 - E
08 - E, ambidextrous
09 - NNS, left-handed
10 - E
11 - NNS
12 - NNS, only sessions one and two recorded (out of three total), as the participant had to leave the recording session early
13 - E
14 - NNS
15 - NNS
16 - NNS
17 - E
18 - NNS
19 - E
20 - E
21 - E

For participants, dataset refs chinese-1 and chinese-2 (C = native speaker of Chinese):
chinese-1 - C
chinese-2 - C, voice heard in the 'stimuli' portions of trials belongs to participant chinese-1

SUPPLEMENTARY

Under the international 10-20 system, the Emotiv EPOC+ headset's 14 channels are: F3, FC5, AF3, F7, T7, P7, O1, O2, P8, T8, F8, AF4, FC6, F4.

The 16 English phonemes investigated in dataset refs 01 to 21: /i/ /u:/ /æ/ /ɔ:/ /m/ /n/ /ŋ/ /f/ /s/ /ʃ/ /v/ /z/ /ʒ/ /p/ /t/ /k/

The 16 Chinese syllables investigated in dataset refs chinese-1 and chinese-2: mā má mǎ mà mēng méng měng mèng duō duó duǒ duò tuī tuí tuǐ tuì

All references to 'articulators' (e.g. as part of filenames) refer to the 1-second 'fixation point' portion of trials. The name is a holdover from preliminary trials, which were modelled on the KARA ONE database (http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html) [3].

REFERENCES

[1] Emotiv EPOC+. https://emotiv.com/epoc. Accessed online 14/08/2019.
[2] Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, A. Lécuyer, "OpenViBE: An Open-Source Software Platform to Design, Test and Use Brain-Computer Interfaces in Real and Virtual Environments," Presence: Teleoperators and Virtual Environments, vol. 19, no. 1, 2010.
[3] S. Zhao, F. Rudzicz, "Classifying phonological categories in imagined and articulated speech," in Proceedings of ICASSP 2015, Brisbane, Australia, 2015.
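A hedged sketch of loading one phase recording from FEIS; the directory layout and the presence of a header row are assumptions about the release, not guaranteed by the description above:

```python
import pandas as pd

# Assumed path: participant 01, 'thinking' phase; adjust to the actual layout.
df = pd.read_csv("01/thinking.csv")
print(df.shape)              # samples x (14 EEG channels + any metadata columns)
print(list(df.columns)[:5])  # inspect the first few column names
```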
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This is the raw EEG data for the study. Data are in BioSemi Data Format (BDF). Files with only "II" in the file name were recorded during the reported 1-Exemplar categorization task; "RB-II" files were recorded during the reported 2-Exemplar categorization task. "Resting" files were recorded during wakeful resting state.
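A minimal sketch for opening one of the .bdf recordings with MNE-Python; the file name is illustrative:

```python
import mne

raw = mne.io.read_raw_bdf("sub01_II.bdf", preload=False)  # illustrative name
print(raw.info)
```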
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
This is the raw EEG data for the study. Data is in text file format (rows: data points, columns: channels). Data was converted from original hexadecimal format. Labeling is as follows: "Lab" = Laboratory recording; "Natural" = Natural (outside) recordings; "Eyes Open" = Eyes Open Resting State Condition; "Eyes Closed" = Eyes Closed Resting State Condition; "Math" = PASAT task.
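A minimal sketch for loading one of the text-format recordings described above (rows = data points, columns = channels); the file name is illustrative:

```python
import numpy as np

data = np.loadtxt("Lab_EyesOpen.txt")  # illustrative name
print(data.shape)  # (n_samples, n_channels)
```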