License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
**Overview:**
The Bonn EEG Dataset is a widely recognized dataset in the field of biomedical signal processing and machine learning, specifically designed for research in epilepsy detection and EEG signal analysis. It contains electroencephalogram (EEG) recordings from both healthy individuals and patients with epilepsy, making it suitable for tasks such as seizure detection and classification of brain activity states. The dataset is structured into five distinct subsets (labeled A, B, C, D, and E), each comprising 100 single-channel EEG segments, resulting in a total of 500 segments. Each segment represents 23.6 seconds of EEG data, sampled at a frequency of 173.61 Hz, yielding 4,097 data points per segment, stored in ASCII format as text files.
**Structure and Labels:**
**Key Characteristics**
**Applications**
The Bonn EEG Dataset is ideal for machine learning and signal processing tasks, including:
- Developing algorithms for epileptic seizure detection and prediction.
- Exploring feature extraction techniques, such as wavelet transforms, for EEG signal analysis.
- Classifying brain states (healthy vs. epileptic, interictal vs. ictal).
- Supporting research in neuroscience and medical diagnostics, particularly for epilepsy monitoring and treatment.
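Because each segment is a plain ASCII text file, a minimal feature-extraction sketch needs only NumPy. The synthetic signal below is an illustrative stand-in for a real segment (which would be loaded with something like `np.loadtxt("Z001.txt")`; the file name follows the dataset's own naming scheme and is shown only as an assumption):

```python
import numpy as np

FS = 173.61   # Bonn sampling rate (Hz)
N = 4097      # samples per 23.6 s segment

# Synthetic stand-in for one segment; a real segment from subset A
# would instead be loaded with np.loadtxt("Z001.txt").
rng = np.random.default_rng(0)
t = np.arange(N) / FS
segment = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(N)

def band_power(x, fs, lo, hi):
    """Mean spectral power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Classic EEG bands, a common feature set for seizure classifiers.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: band_power(segment, FS, lo, hi) for name, (lo, hi) in bands.items()}
```

Band powers like these can feed directly into the traditional classifiers (SVM, Random Forest, Logistic Regression) mentioned below.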
**Source**
**Citation**
When using this dataset, researchers are required to cite the original publication: Andrzejak, R. G., Lehnertz, K., Mormann, F., Rieke, C., David, P., & Elger, C. E. (2001). Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Physical Review E, 64(6), 061907. DOI: 10.1103/PhysRevE.64.061907.
**Additional Notes**
The dataset is randomized, with no specific information provided about patients or electrode placements, ensuring simplicity and focus on signal characteristics.
The data is not hosted on Kaggle or Hugging Face but is accessible directly from the University of Bonn’s repository or mirrored sources.
Researchers may need to apply preprocessing steps, such as filtering or normalization, to optimize the data for machine learning tasks.
The dataset’s balanced structure and clear labels make it an excellent choice for a one-week machine learning project, particularly for tasks involving traditional algorithms like SVM, Random Forest, or Logistic Regression.
This dataset provides a robust foundation for learning signal processing, feature extraction, and machine learning techniques while addressing a real-world medical challenge in epilepsy detection.
License: Open Data Commons Attribution License (ODC-By) v1.0 (https://www.opendatacommons.org/licenses/by/1.0/)
This database, collected at the Children’s Hospital Boston, consists of EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. The recordings are grouped into 23 cases and were collected from 22 subjects (5 males, ages 3–22; and 17 females, ages 1.5–19).
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
This Siena Sleep EEG dataset contains multi-channel EEG recordings collected during sleep, specifically curated for epilepsy detection and sleep stage analysis. Electroencephalography (EEG) is one of the most reliable methods for studying brain activity during sleep, and it plays a crucial role in diagnosing neurological disorders such as epilepsy.
The dataset is formatted as a large-scale time-series table where each row represents a sampled time point, and each column corresponds to an EEG electrode channel. An additional diagnosis label column indicates whether the signal segment belongs to a healthy control or an epilepsy patient.
Dataset Structure
Number of Records: 944,640 samples
Number of Features: 20 EEG channels + 1 diagnosis label
File Format: CSV
Memory Size: ~150 MB
Columns
EEG Channels (20):
Fp1, F3, C3, P3, O1, F7, T3, T5, Fc1, Fc5, Cp1, Cp5, F9, Fz, Cz, Pz, Pf2, F4, C4, P4
These correspond to standard 10–20 EEG electrode placements, covering frontal, central, parietal, occipital, and temporal lobes.
diagnosis:
0 → Non-epileptic (healthy control)
1 → Epileptic case
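Given the documented layout (one row per sampled time point, 20 channel columns plus a diagnosis column), loading and splitting the table can be sketched as follows. The CSV file name would be whatever the distribution uses; here a tiny synthetic frame stands in for the real file:

```python
import pandas as pd
import numpy as np

CHANNELS = ["Fp1", "F3", "C3", "P3", "O1", "F7", "T3", "T5", "Fc1", "Fc5",
            "Cp1", "Cp5", "F9", "Fz", "Cz", "Pz", "Pf2", "F4", "C4", "P4"]

# Tiny synthetic frame with the documented layout; the real file would
# be read with pd.read_csv(<path to the Siena sleep CSV>).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.standard_normal((8, len(CHANNELS))), columns=CHANNELS)
df["diagnosis"] = [0, 0, 0, 0, 1, 1, 1, 1]

X = df[CHANNELS].to_numpy()      # (n_samples, 20) EEG feature matrix
y = df["diagnosis"].to_numpy()   # 0 = healthy control, 1 = epileptic
```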
License: https://github.com/MIT-LCP/license-and-dua/tree/master/drafts
This data set consists of over 240 two-minute EEG recordings obtained from 20 volunteers. Resting-state and auditory stimuli experiments are included in the data. The goal is to develop an EEG-based Biometric system.
The data includes resting-state EEG signals in both conditions: eyes open and eyes closed. The auditory-stimuli part consists of six experiments: three with in-ear auditory stimuli and three with bone-conducting auditory stimuli. The three stimuli in each case are a native song, a non-native song, and neutral music.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
The database consists of EEG recordings of 14 patients acquired at the Unit of Neurology and Neurophysiology of the University of Siena. Subjects include 9 males (ages 25-71) and 5 females (ages 20-58). Subjects were monitored with a Video-EEG with a sampling rate of 512 Hz, with electrodes arranged on the basis of the international 10-20 System. Most of the recordings also contain 1 or 2 EKG signals. The diagnosis of epilepsy and the classification of seizures according to the criteria of the International League Against Epilepsy were performed by an expert clinician after a careful review of the clinical and electrophysiological data of each patient.
THINGS-EEG
This dataset is a processed version of THINGS-EEG, derived from the paper Bridging the Vision-Brain Gap with an Uncertainty-Aware Blur Prior (CVPR 2025). In this version, the EEG data is stored in float16 format, reducing the storage size by half. The original official dataset can be accessed from the OSF repository:
A large and rich EEG dataset for modeling human visual object recognition [THINGS-EEG]
Citation… See the full description on the dataset page: https://huggingface.co/datasets/Haitao999/things-eeg.
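The halved storage footprint from the float16 repackaging can be illustrated directly; the array shape below is hypothetical, not the actual THINGS-EEG layout:

```python
import numpy as np

# Hypothetical (channels, trials, samples) block of float32 EEG data.
rng = np.random.default_rng(0)
eeg32 = rng.standard_normal((17, 100, 250)).astype(np.float32)

# Casting to float16 halves the storage at reduced numeric precision.
eeg16 = eeg32.astype(np.float16)
saving = 1 - eeg16.nbytes / eeg32.nbytes   # fraction of bytes saved
```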
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset contains 848,640 records with 17 columns, representing EEG (electroencephalogram) signals recorded from multiple electrode positions on the scalp, along with a status label. The dataset is related to the study of Alzheimer’s Disease (AD).
Features (16 continuous variables, float64): Each feature corresponds to the electrical activity recorded from standard EEG electrode placements based on the international 10-20 system:
Fp1, Fp2, F7, F3, Fz, F4, F8
T3, C3, Cz, C4, T4
T5, P3, Pz, P4
These channels measure brain activity in different cortical regions (frontal, temporal, central, and parietal lobes).
Target variable (1 categorical variable, int64):
status: Represents the condition or classification of the subject at the time of recording (e.g., patient vs. control, or stage of Alzheimer’s disease).
Size & Integrity:
Rows: 848,640 samples
Columns: 17 (16 EEG features + 1 status label)
Data types: 16 float features, 1 integer label
Missing values: None (clean dataset)
This dataset is suitable for machine learning and deep learning applications such as:
EEG signal classification (AD vs. healthy subjects)
Brain activity pattern recognition
Feature extraction and dimensionality reduction (e.g., PCA, wavelet transforms)
Time-series analysis of EEG recordings
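As one example of the dimensionality-reduction use case above, PCA via SVD can be sketched on data with the documented 16-channel layout. The data here is fabricated for illustration; a real analysis would substitute the actual feature matrix:

```python
import numpy as np

CHANNELS = ["Fp1", "Fp2", "F7", "F3", "Fz", "F4", "F8",
            "T3", "C3", "Cz", "C4", "T4", "T5", "P3", "Pz", "P4"]

# Synthetic stand-in for the 16 EEG feature columns.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, len(CHANNELS)))

# PCA via SVD: center the columns, decompose, keep the top k components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 4
X_reduced = Xc @ Vt[:k].T                      # (500, 4) projection
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of variance kept
```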
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This article presents an EEG dataset collected using the EMOTIV EEG 5-Channel Sensor kit during five different types of stimulation: complex mathematical problem solving, the Trier mental challenge test, the Stroop colour-word test, horror video stimulation, and listening to relaxing music. The dataset consists of EEG recordings from 22 subjects for complex mathematical problem solving, 24 for the Trier mental challenge test, 24 for the Stroop colour-word test, 22 for horror video stimulation, and 20 for relaxed-state recordings. The data was collected in order to investigate the neural correlates of stress and to develop models for stress detection based on EEG data. The dataset presented in this article can be used for various applications, including stress management, healthcare, and workplace safety. The dataset provides a valuable resource for researchers and developers working on stress detection using EEG data, while the stress detection method provides a useful tool for evaluating the effectiveness of different stress detection models. Overall, this article contributes to the growing body of research on stress detection and management using EEG data and provides a useful resource for researchers and practitioners working in this field.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
There is a growing imperative to understand the neurophysiological impact of our rapidly changing and diverse technological, social, chemical, and physical environments. Untangling the multidimensional and interacting effects requires data at scale across diverse populations, taking measurement out of a controlled lab environment and into the field. Electroencephalography (EEG), which has correlates with various environmental factors as well as cognitive and mental health outcomes, has the advantage of both portability and cost-effectiveness for this purpose. However, with numerous field researchers spread across diverse locations, data quality issues and researcher idle time due to insufficient participants can quickly become unmanageable and expensive problems. In programs we have established in India and Tanzania, we demonstrate that with appropriate training, structured teams, and daily automated analysis and feedback on data quality, nonspecialists can reliably collect EEG data alongside various surveys and assessments with consistently high throughput and quality. Over a 30-week period, research teams were able to maintain an average of 25.6 participants per week, collecting data from a diverse sample of 7,933 participants ranging from Hadzabe hunter-gatherers to office workers. Furthermore, data quality, computed on the first 5,831 records using two common methods, PREP and FASTER, was comparable to benchmark datasets from controlled lab conditions. Altogether, this resulted in a cost per participant of under $50, a fraction of the cost typical of such data collection, opening up the possibility for large-scale programs particularly in low- and middle-income countries.
A subset of large-scale EEG recordings from India and Tanzania are uploaded here along with metadata like age, mental health quotient (MHQ) score, income and sex. This BIDS dataset was generated using MNE-BIDS from EDF source files.
Vianney JM, Swaminathan S, Newson JJ, Parameshwaran D, Subramaniyam NP, Roy SS, Machunda R, Sapuli A, Pramanik S, Kumar JV, Tiwari P. EEG Data Quality in Large-Scale Field Studies in India and Tanzania. Eneuro. 2025 Jul 1;12(7).
Newson JJ, Pastukh V, Thiagarajan TC. Assessment of population well-being with the Mental Health Quotient: validation study. JMIR Mental Health. 2022 Apr 20;9(4):e34105.
Appelhoff, S., Sanderson, M., Brooks, T., Vliet, M., Quentin, R., Holdgraf, C., Chaumon, M., Mikulan, E., Tavabi, K., Höchenberger, R., Welke, D., Brunner, C., Rockhill, A., Larson, E., Gramfort, A. and Jas, M. (2019). MNE-BIDS: Organizing electrophysiological data into the BIDS format and facilitating their analysis. Journal of Open Source Software, 4, 1896. https://doi.org/10.21105/joss.01896
Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., Flandin, G., Phillips, C., Delorme, A., Oostenveld, R. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. https://doi.org/10.1038/s41597-019-0104-8
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Introduction: This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117). This dataset was originally acquired and shared by Daniel Wakeman and Richard Henson (https://pubmed.ncbi.nlm.nih.gov/25977808/). The MEG and EEG data were simultaneously recorded; the sMRI scans were preserved to support M/EEG source localization. Following event log augmentation, reorganization, and HED (v8.0.0) annotation, the EEG data have been repackaged in EEGLAB format.
Overview of the experiment: Eighteen participants completed two recording sessions spaced three months apart – one session recorded fMRI and the other simultaneously recorded MEG and EEG data. During each session, participants performed the same simple perceptual task, responding to presented photographs of famous, unfamiliar, and scrambled faces by pressing one of two keyboard keys to indicate a subjective yes or no decision as to the relative spatial symmetry of the viewed face. Famous faces were feature-matched to unfamiliar faces; half the faces were female. The two sessions (MEEG, fMRI) had different organizations of event timing and presentation because of technological requirements of the respective imaging modalities. Each individual face was presented twice during the session. For half of the presented faces, the second presentation followed immediately after the first. For the other half, the second presentation was delayed by 5-15 face presentations.
Preprocessing: Multi-subject, multi-modal (sMRI+EEG) neuroimaging dataset on face processing. The original data are described at https://www.nature.com/articles/sdata20151. This is a repackaged version of the EEG data in EEGLAB format. The data have gone through minimal preprocessing (see wh_extracteeg_BIDS.m), including:
- Ignoring fMRI and MEG data (sMRI preserved for EEG source localization)
- Extracting EEG channels out of the MEG/EEG fif data
- Adding fiducials
- Renaming EOG and EKG channels
- Extracting events from the event channel
- Removing spurious events 5, 6, 7, 13, 14, 15, 17, 18 and 19
- Removing spurious event 24 for subject 3, run 4
- Renaming events taking into account the button assigned to each subject
- Correcting event latencies (events have a shift of 34 ms)
- Resampling data to 250 Hz (done because this dataset is used as an EEGLAB tutorial and needs to be lightweight)
- Merging runs 1 to 6
- Removing the event fields urevent and duration
- Filling in empty fields for the events boundary and stim_file
- Saving in EEGLAB .set format
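The 34 ms latency correction mentioned above amounts to subtracting a fixed offset from each event onset. A minimal sketch, assuming latencies expressed in seconds (EEGLAB itself stores latencies in samples, so a real script would first divide by the sampling rate):

```python
SHIFT_S = 0.034                      # documented 34 ms trigger delay
latencies_s = [1.234, 2.750, 4.034]  # hypothetical uncorrected onsets (s)

# Shift every event back by the constant delay.
corrected = [t - SHIFT_S for t in latencies_s]
```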
Original and related datasets: This data is a mapping of the original OpenfMRI dataset ds000117, which is no longer available (although a copy is available in the sourcedata folder of the ds003645 repository). The ds000117 dataset on OpenNeuro contains only 16 subjects. The original OpenfMRI dataset is described at the bottom of this README file https://openneuro.org/datasets/ds000117/versions/1.0.4/file-display/README along with the correspondence with the 16 subjects in ds000117. Note that sub-001 data on OpenfMRI was corrupted, so it is not included here.
The openneuro dataset ds003645 is similar to this one but also contains MEG data and HED events. Also, it does not have the different runs merged.
Import warning: Make sure to import the channel locations from the BIDS electrodes.tsv files. The EEGLAB .set files also contain channel locations, but for subjects 8 and 14 the .set versions are wrong (rotated by 90 degrees). When using the EEGLAB EEG BIDS plugin, the default behavior is to import channel locations from BIDS.
Data curators: Ramon Martinez, Dung Truong, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA)
License: Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
This dataset contains EEG recordings from 18 subjects listening to one of two competing speech audio streams. Continuous speech in trials of ~50 sec. was presented to normal hearing listeners in simulated rooms with different degrees of reverberation. Subjects were asked to attend one of two spatially separated speakers (one male, one female) and ignore the other. Repeated trials with presentation of a single talker were also recorded. The data were recorded in a double-walled soundproof booth at the Technical University of Denmark (DTU) using a 64-channel Biosemi system and digitized at a sampling rate of 512 Hz. Full details can be found in:
and
The data is organized in the format of the publicly available COCOHA Matlab Toolbox. The preproc_script.m demonstrates how to import and align the EEG and audio data. The script also demonstrates some EEG preprocessing steps as used in the Wong et al. paper above. The AUDIO.zip contains wav files with the speech audio used in the experiment. The EEG.zip contains MAT-files with the EEG/EOG data for each subject. The EEG/EOG data are found in data.eeg with the following channels:
The expinfo table contains information about experimental conditions, including which speaker the listener was attending to in different trials. The expinfo table contains the following information:
DATA_preproc.zip contains the preprocessed EEG and audio data as output from preproc_script.m.
The dataset was created within the COCOHA Project: Cognitive Control of a Hearing Aid
License: https://github.com/bdsp-core/bdsp-license-and-dua
The Harvard EEG Database will encompass data gathered from four hospitals affiliated with Harvard University: Massachusetts General Hospital (MGH), Brigham and Women's Hospital (BWH), Beth Israel Deaconess Medical Center (BIDMC), and Boston Children's Hospital (BCH). The EEG data includes three types:
rEEG: "routine EEGs" recorded in the outpatient setting.
EMU: recordings obtained in the inpatient setting, within the Epilepsy Monitoring Unit (EMU).
ICU/LTM: recordings obtained from acutely and critically ill patients within the intensive care unit (ICU).
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
We used a high-density electroencephalography (HD-EEG) system, with 128 customized electrode locations, to record from 17 individuals with migraine (12 female) in the interictal period, and 18 age- and gender-matched healthy control subjects, during visual (vertical grating pattern) and auditory (modulated tone) stimulation which varied in temporal frequency (4 and 6 Hz), and during rest. This dataset includes the raw EEG data related to the paper Chamanzar, Haigh, Grover, and Behrmann (2020), Abnormalities in cortical pattern of coherence in migraine detected using ultra high-density EEG. The link to our paper will be made available as soon as it is published online.
License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
This database includes the de-identified EEG data from 62 healthy individuals who participated in a brain-computer interface (BCI) study. All subjects underwent 7-11 sessions of BCI training, which involved controlling a computer cursor to move in one-dimensional and two-dimensional spaces using the subject’s “intent”. EEG data were recorded with 62 electrodes. In addition to the EEG data, behavioral data including the online success rate of BCI cursor control are also included. This dataset was collected under support from the National Institutes of Health via grants AT009263, EB021027, NS096761, MH114233, RF1MH to Dr. Bin He. Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu. This dataset has been used and analyzed to study the learning of BCI control and the effects of mind-body awareness training on this process. The results are reported in: Stieger et al., “Mindfulness Improves Brain Computer Interface Performance by Increasing Control over Neural Activity in the Alpha Band,” Cerebral Cortex, 2020 (https://doi.org/10.1093/cercor/bhaa234). Please cite this paper if you use any data included in this dataset.
License: GNU General Public License (https://www.gnu.org/copyleft/gpl.html)
This dataset was collected for the study "Robust Detection of Event-Related Potentials in a User-Voluntary Short-Term Imagery Task."
License: Attribution 3.0 (CC BY 3.0) (https://creativecommons.org/licenses/by/3.0/)
This dataset contains scalp-recorded EEG responses from ten human participants viewing a set of photographs of objects with a planned category structure. EEG was recorded using the Electrical Geodesics, Inc. (EGI) GES 300 platform. Each participant viewed each of the 72 images in the stimulus set 72 times, for a total of 5,184 experimental trials per participant. Data files are split into six recordings per participant, each comprising 864 trials, 12 of each stimulus. In addition to the 60 primary recordings analyzed in the Kaneshiro et al. (2015) PLoS ONE paper, the dataset also includes 12 additional EEG recordings from three of the study participants. Data are published in Matlab (.mat) format. Each data file is around 1GB in size.
License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset includes 33 healthy participants, collected at Pennsylvania State University with informed consent. Simultaneously recorded EEG and BOLD signals for each participant are organized in per-participant folders.
Each scanning session consisted of an anatomical session, two 10-min resting-state sessions, and several 15-min sleep sessions. The first resting-state session was conducted before a visual-motor adaptation task (Albouy et al., Journal of Sleep Research, 2013) and the second resting-state session was conducted after the task.
The scored sleep stages for these 33 subjects are organized under the 'sourcedata' folder. Each TSV file contains the sleep stages for each 30-sec epoch across the sessions for each subject. In the TSV files, “w” represents wakefulness and “1”, “2”, “3” represent NREM1, NREM2, and NREM3, respectively. Epochs scored with uncertainty are noted as “uncertain”, and epochs with artifacts too large to score reliably are noted as “unscorable”.
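Parsing the per-epoch stage labels can be sketched with the standard library; the column name `stage` and the excerpt below are assumptions for illustration, not the actual file schema:

```python
import csv
import io

# Hypothetical excerpt of one sourcedata TSV (column name assumed).
tsv = "stage\nw\n1\n2\n3\nuncertain\nunscorable\n2\n"
stages = [row["stage"] for row in csv.DictReader(io.StringIO(tsv), delimiter="\t")]

# Keep only confidently scored 30 s epochs for downstream analysis.
scorable = [s for s in stages if s not in ("uncertain", "unscorable")]
```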
MR imaging data were collected on a 3 Tesla Siemens Prisma Fit scanner using a Siemens 20-channel receive-array coil. Anatomical images were acquired using a MPRAGE sequence (TR: 2300 milliseconds, TE: 2.28 milliseconds, 1mm isotropic spatial resolution, FOV: 256 millimeters, flip angle: 8 degrees, matrix size: 256×256×192, acceleration factor: 2). Blood oxygenation level-dependent (BOLD) fMRI data were acquired using an EPI sequence (TR: 2100 milliseconds, TE: 25 milliseconds, slice thickness: 4mm, slices: 35, FOV: 240mm, in-plane resolution: 3mm×3mm).
EEG data were collected using a 32-channel MR-compatible EEG system from Brain Products, Germany. Electrodes were placed based on the 10-20 international system. EOG and ECG recorded eye movement and cardiac signal, respectively. EEG data were collected at a sampling rate of 5000 Hz with a band-pass filter of 0-250 Hz. R128 in the EEG signals corresponds to the BOLD fMRI volume trigger. S1 markers in the EEG during sleep sessions correspond to participants hitting buttons indicating wakefulness state. S2 and S3 markers during sleep sessions represent no button hitting and can be ignored.
For more information or any questions about this dataset, please see the two papers listed in the References and Links section or contact Dr. Yameng Gu (ymgu95@gmail.com).