This dataset was collected by Shahed University and released through IEEE DataPort.
The columns are: Fz, Cz, Pz, C3, T3, C4, T4, Fp1, Fp2, F3, F4, F7, F8, P3, P4, T5, T6, O1, O2, Class, ID.
The first 19 columns are the EEG channel names.
Class: ADHD/Control
ID: Patient ID
Participants were 61 children with ADHD and 60 healthy controls (boys and girls, ages 7-12). The ADHD children were diagnosed by an experienced psychiatrist according to DSM-IV criteria and had taken Ritalin for up to 6 months. None of the children in the control group had a history of psychiatric disorders, epilepsy, or any report of high-risk behaviors.
EEG was recorded from 19 channels (Fz, Cz, Pz, C3, T3, C4, T4, Fp1, Fp2, F3, F4, F7, F8, P3, P4, T5, T6, O1, O2) according to the 10-20 standard at a 128 Hz sampling frequency. The A1 and A2 reference electrodes were placed on the earlobes.
Since one of the deficits in ADHD children is visual attention, the EEG recording protocol was based on visual attention tasks. In the task, a set of pictures of cartoon characters was shown to the children and they were asked to count the characters. The number of characters in each image was randomly selected between 5 and 16, and the size of the pictures was large enough to be easily visible and countable by children. To have a continuous stimulus during the signal recording, each image was displayed immediately and uninterrupted after the child’s response. Thus, the duration of EEG recording throughout this cognitive visual task was dependent on the child’s performance (i.e. response speed).
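A minimal sketch of separating the 19 channel columns from the labels, assuming the recordings have been exported to a single CSV file with the columns listed above (the file name is a placeholder):

```python
import pandas as pd

channels = ["Fz", "Cz", "Pz", "C3", "T3", "C4", "T4", "Fp1", "Fp2", "F3",
            "F4", "F7", "F8", "P3", "P4", "T5", "T6", "O1", "O2"]

df = pd.read_csv("adhd_control_eeg.csv")   # placeholder file name
X = df[channels].to_numpy()                # EEG samples (128 Hz)
y = df["Class"].to_numpy()                 # ADHD / Control
ids = df["ID"].to_numpy()                  # patient ID, useful for subject-wise splits
print(X.shape, set(y))
```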
Citation Author(s): Ali Motie Nasrabadi, Armin Allahverdy, Mehdi Samavati, Mohammad Reza Mohammadi
DOI: 10.21227/rzfh-zn36
License: Creative Commons Attribution
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
These files contain the raw data and processing parameters to go with the paper "Hierarchical structure guides rapid linguistic predictions during naturalistic listening" by Jonathan R. Brennan and John T. Hale. These files include the stimulus (wav files), raw data (MATLAB format for the FieldTrip toolbox), data processing parameters (MATLAB), and variables used to align the stimuli with the EEG data and for the statistical analyses reported in the paper.
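A minimal sketch of inspecting one of the FieldTrip-format raw data files from Python; the file and variable names are placeholders, and this assumes a pre-v7.3 .mat file (otherwise an HDF5 reader such as h5py would be needed):

```python
from scipy.io import loadmat

mat = loadmat("S01.mat", squeeze_me=True, struct_as_record=False)  # placeholder file name
ft_raw = mat["raw"]                 # hypothetical variable holding a FieldTrip raw structure
print(float(ft_raw.fsample))        # sampling rate
print(list(ft_raw.label)[:5])       # first few channel names
```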
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This data set consists of electroencephalography (EEG) data from 50 participants (Subject1 - Subject50) with acute ischemic stroke, aged between 30 and 77 years. The participants included 39 males and 11 females. The time after stroke ranged from 1 day to 30 days. 22 participants had right-hemisphere hemiplegia and 28 had left-hemisphere hemiplegia. All participants were originally right-handed. Each participant sat in front of a computer screen with an arm resting on a pillow on their lap or on a table and carried out the instructions given on the screen. At the start of each trial, a picture with a text cue indicating the left or right hand was presented for 2 s. Participants were asked to focus on motor imagery of the instructed hand while a video of the ipsilateral hand movement was displayed on the screen for 4 s, followed by a 2 s break.
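A minimal sketch of cutting the 4 s motor-imagery window out of one 8 s trial (2 s cue + 4 s imagery + 2 s break, as described above); the sampling rate and the trial array are placeholders, since they are not stated in this summary:

```python
import numpy as np

def imagery_window(trial: np.ndarray, fs: float) -> np.ndarray:
    """Return the samples from t = 2 s to t = 6 s of a (channels, samples) trial."""
    return trial[:, int(2 * fs):int(6 * fs)]

demo = imagery_window(np.random.randn(30, 8 * 500), fs=500.0)  # illustrative 500 Hz trial
print(demo.shape)   # (30, 2000)
```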
License (information derived automatically): Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
Synthetic EEG data generated by the ‘bai’ model based on real data.
Features/Columns:
No: "Number" Sex: "Gender" Age: "Age of participants" EEG Date: "The date of the EEG" Education: "Education level" IQ: "IQ level of participants" Main Disorder: "General class definition of the disorder" Specific Disorder: "Specific class definition of the disorder"
Total Features/Columns: 1140
Content:
Obsessive Compulsive Disorder, Bipolar Disorder, Schizophrenia… See the full description on the dataset page: https://huggingface.co/datasets/Neurazum/General-Disorders-EEG-Dataset-v1.
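A minimal sketch of pulling the dataset from the Hugging Face Hub with the `datasets` library (the split name is an assumption; check the dataset page):

```python
from datasets import load_dataset

ds = load_dataset("Neurazum/General-Disorders-EEG-Dataset-v1", split="train")  # split assumed
print(ds.num_rows, len(ds.column_names))   # expect 1140 columns per the card above
print(ds.column_names[:8])                 # demographic columns listed above
```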
License: CC0 1.0 Universal (Public Domain Dedication), https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains eyes-closed resting-state EEG recordings from 88 subjects in total: 36 were diagnosed with Alzheimer's disease (AD group), 23 were diagnosed with frontotemporal dementia (FTD group) and 29 were healthy subjects (CN group). Cognitive and neuropsychological state was evaluated with the international Mini-Mental State Examination (MMSE). The MMSE score ranges from 0 to 30, with lower scores indicating more severe cognitive decline. Disease duration was measured in months; the median was 25 months with an IQR (Q1-Q3) of 24-28.5 months. For the AD group, no dementia-related comorbidities were reported. The average MMSE was 17.75 (sd = 4.5) for the AD group, 22.17 (sd = 8.22) for the FTD group and 30 for the CN group. The mean age was 66.4 (sd = 7.9) for the AD group, 63.6 (sd = 8.2) for the FTD group, and 67.9 (sd = 5.4) for the CN group.
Recordings: Recordings were acquired at the 2nd Department of Neurology of AHEPA General Hospital of Thessaloniki by an experienced team of neurologists. A Nihon Kohden EEG 2100 clinical device was used, with 19 scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2) placed according to the 10-20 international system and 2 reference electrodes (A1 and A2) placed on the mastoids for impedance checking, according to the manual of the device. Each recording was performed according to the clinical protocol with participants in a sitting position with their eyes closed. Before the start of each recording, the skin impedance was ensured to be below 5 kΩ. The sampling rate was 500 Hz with 10 µV/mm resolution. The recording montages were anterior-posterior bipolar and referential, using Cz as the common reference; the referential montage is included in this dataset. The recordings were made with the following amplifier parameters: sensitivity 10 µV/mm, time constant 0.3 s, and high-frequency filter at 70 Hz. Each recording lasted approximately 13.5 minutes for the AD group (min = 5.1, max = 21.3), 12 minutes for the FTD group (min = 7.9, max = 16.9) and 13.8 minutes for the CN group (min = 12.5, max = 16.5). In total, 485.5 minutes of AD, 276.5 minutes of FTD and 402 minutes of CN recordings were collected and are included in the dataset.
Preprocessing: The EEG recordings were exported in .eeg format and converted to the BIDS-accepted .set format for inclusion in the dataset. The automatic annotations of the Nihon Kohden EEG device marking artifacts (muscle activity, blinking, swallowing) have not been included for language-compatibility reasons (if this is an issue, please use the preprocessed dataset in the derivatives folder). The unprocessed EEG recordings are included in folders named sub-0XX. Folders named sub-0XX in the derivatives subfolder contain the preprocessed and denoised EEG recordings. The preprocessing pipeline of the EEG signals is as follows. First, a Butterworth band-pass filter (0.5-45 Hz) was applied and the signals were re-referenced to A1-A2. Then, the Artifact Subspace Reconstruction (ASR) routine, an EEG artifact-correction method included in the EEGLAB MATLAB software, was applied, removing bad data periods that exceeded the maximum acceptable 0.5-second-window standard deviation of 17, which is considered a conservative threshold. Next, Independent Component Analysis (ICA, RunICA algorithm) was performed, transforming the 19 EEG signals into 19 ICA components. ICA components classified as "eye artifacts" or "jaw artifacts" by the automatic classification routine ICLabel in the EEGLAB platform were automatically rejected. It should be noted that, even though the recording was performed in a resting-state, eyes-closed condition, eye-movement artifacts were still found in some EEG recordings.
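A minimal sketch of opening one unprocessed recording with MNE-Python and applying the 0.5-45 Hz band-pass described above (the file path is a placeholder, and MNE's default filter is FIR rather than the Butterworth filter used by the authors):

```python
import mne

# Placeholder path; the dataset stores raw recordings under folders named sub-0XX.
raw = mne.io.read_raw_eeglab("sub-001/eeg/sub-001_task-eyesclosed_eeg.set", preload=True)
raw.filter(l_freq=0.5, h_freq=45.0)   # band-pass as in the described pipeline
print(raw.info["sfreq"], raw.ch_names[:5])
```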
A complete analysis of this dataset can be found in the published Data Descriptor paper "A Dataset of Scalp EEG Recordings of Alzheimer’s Disease, Frontotemporal Dementia and Healthy Subjects from Routine EEG", https://doi.org/10.3390/data8060095. ***** I am not the original creator of this dataset; it was published at https://openneuro.org/datasets/ds004504/versions/1.0.6 and I just ported it here for ease of use. *****
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
We used a high-density electroencephalography (HD-EEG) system with 128 customized electrode locations to record from 17 individuals with migraine (12 female) in the interictal period, and 18 age- and gender-matched healthy control subjects, during visual (vertical grating pattern) and auditory (modulated tone) stimulation that varied in temporal frequency (4 and 6 Hz), and during rest. This dataset includes the raw EEG data related to the paper by Chamanzar, Haigh, Grover, and Behrmann (2020), "Abnormalities in cortical pattern of coherence in migraine detected using ultra high-density EEG". The link to our paper will be made available as soon as it is published online.
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
The 'Univ. of Bonn' and 'CHB-MIT Scalp EEG Database' are publicly available datasets and among the most sought after by researchers. The Bonn dataset is very small compared to CHB-MIT, but researchers still prefer Bonn because it comes in a simple '.txt' format. The dataset published here is a preprocessed form of CHB-MIT, available in '.csv' format.
License: CC0 1.0 Universal (Public Domain Dedication), https://creativecommons.org/publicdomain/zero/1.0/
This dataset contains 848,640 records with 17 columns, representing EEG (electroencephalogram) signals recorded from multiple electrode positions on the scalp, along with a status label. The dataset is related to the study of Alzheimer’s disease (AD).
Features (16 continuous variables, float64): Each feature corresponds to the electrical activity recorded from standard EEG electrode placements based on the international 10-20 system:
Fp1, Fp2, F7, F3, Fz, F4, F8
T3, C3, Cz, C4, T4
T5, P3, Pz, P4
These channels measure brain activity in different cortical regions (frontal, temporal, central, and parietal lobes).
Target variable (1 categorical variable, int64):
status: Represents the condition or classification of the subject at the time of recording (e.g., patient vs. control, or stage of Alzheimer’s disease).
Size & Integrity:
Rows: 848,640 samples
Columns: 17 (16 EEG features + 1 status label)
Data types: 16 float features, 1 integer label
Missing values: None (clean dataset)
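A minimal sketch of splitting the 16 channel columns from the status label with pandas (the CSV file name is a placeholder, not part of the dataset description):

```python
import pandas as pd

df = pd.read_csv("eeg_alzheimers.csv")        # placeholder; 848,640 rows x 17 columns
X = df.drop(columns=["status"]).to_numpy()    # 16 float EEG features
y = df["status"].to_numpy()                   # integer condition label
print(X.shape, df["status"].value_counts())
```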
This dataset is suitable for machine learning and deep learning applications such as:
EEG signal classification (AD vs. healthy subjects)
Brain activity pattern recognition
Feature extraction and dimensionality reduction (e.g., PCA, wavelet transforms)
Time-series analysis of EEG recordings
License (information derived automatically): CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
The dataset provides resting-state EEG data (eyes open, partially eyes closed) from 71 participants who underwent two experiments involving normal sleep (NS, session 1) and sleep deprivation (SD, session 2). The dataset also provides information on participants' sleepiness and mood states. (Please note that Session 1 (NS) and Session 2 (SD) do not reflect the time order; the order was counterbalanced across participants and is listed in the metadata.)
The data collection was initiated in March 2019 and terminated in December 2020. A detailed description of the dataset is currently being prepared by Chuqin Xiang, Xinrui Fan, Duo Bai, Ke Lv and Xu Lei, and will be submitted to Scientific Data for publication.
* If you have any questions or comments, please contact:
* Xu Lei: xlei@swu.edu.cn
Xiang, C., Fan, X., Bai, D. et al. A resting-state EEG dataset for sleep deprivation. Sci Data 11, 427 (2024). https://doi.org/10.1038/s41597-024-03268-2
SAM 40: Dataset of 40 subject EEG recordings to monitor the induced-stress while performing Stroop color-word test, arithmetic task, and mirror image recognition task
The dataset presents a collection of electroencephalogram (EEG) data recorded from 40 subjects (female: 14, male: 26, mean age: 21.5 years). The dataset was recorded from the subjects while performing various tasks such as the Stroop color-word test, solving arithmetic questions, identification of symmetric mirror images, and a state of relaxation. The experiment was primarily conducted to monitor the short-term stress elicited in an individual while performing the aforementioned cognitive tasks. The individual tasks were carried out for 25 s and were repeated to record three trials. The EEG was recorded using a 32-channel Emotiv Epoc Flex gel kit. The EEG data were then segmented into non-overlapping epochs of 25 s depending on the various tasks performed by the subjects. The EEG data were further processed to remove baseline drifts by subtracting the average trend obtained using the Savitzky-Golay filter. Furthermore, artifacts were removed from the EEG data by applying wavelet thresholding. The dataset proposed in this paper can aid and support research activities in the field of brain-computer interfaces and can also be used in the identification of patterns in EEG data elicited due to stress.
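A minimal sketch of the baseline-drift removal step described above, subtracting a Savitzky-Golay trend from each channel (the window length and polynomial order are illustrative choices, not the authors' values):

```python
import numpy as np
from scipy.signal import savgol_filter

def remove_baseline(eeg: np.ndarray, window: int = 501, polyorder: int = 3) -> np.ndarray:
    """eeg shaped (channels, samples); subtract the slow trend estimated per channel."""
    trend = savgol_filter(eeg, window_length=window, polyorder=polyorder, axis=-1)
    return eeg - trend

clean = remove_baseline(np.random.randn(32, 25 * 128))  # one 25 s epoch at an illustrative 128 Hz
print(clean.shape)
```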
License (information derived automatically): Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
This dataset contains EEG recordings from 18 subjects listening to one of two competing speech audio streams. Continuous speech in trials of ~50 sec. was presented to normal hearing listeners in simulated rooms with different degrees of reverberation. Subjects were asked to attend one of two spatially separated speakers (one male, one female) and ignore the other. Repeated trials with presentation of a single talker were also recorded. The data were recorded in a double-walled soundproof booth at the Technical University of Denmark (DTU) using a 64-channel Biosemi system and digitized at a sampling rate of 512 Hz. Full details can be found in:
and
The data is organized in the format of the publicly available COCOHA Matlab Toolbox. The preproc_script.m demonstrates how to import and align the EEG and audio data. The script also demonstrates some EEG preprocessing steps as used in the Wong et al. paper above. The AUDIO.zip contains wav files with the speech audio used in the experiment. The EEG.zip contains MAT files with the EEG/EOG data for each subject. The EEG/EOG data are found in data.eeg with the following channels:
The expinfo table contains information about experimental conditions, including which speaker the listener was attending to in different trials.
DATA_preproc.zip contains the preprocessed EEG and audio data as output from preproc_script.m.
The dataset was created within the COCOHA Project: Cognitive Control of a Hearing Aid
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This data is linked to the publication "Electrophysiological signatures of brain aging in autism spectrum disorder" by Dickinson, Jeste and Milne, in which it is referenced as Dataset 1. EEG data were acquired via a BioSemi ActiveTwo EEG system. The original recordings have been converted to .set and .fdt files via EEGLAB and uploaded here. There is a .fdt and a .set file for each recording; the .fdt file contains the data, and the .set file contains information about the parameters of the recording (see https://eeglab.org/tutorials/ for further information). The files can be opened within EEGLAB software. The data were acquired from 28 individuals with a diagnosis of an autism spectrum condition and 28 neurotypical controls aged between 18 and 68 years. The paradigm that generated the data was a 2.5-minute (150-second) period of eyes-closed resting. Ethical approval for data collection and data sharing was given by the Health Research Authority [IRAS ID = 212171]. Only data from participants who provided signed consent for data sharing were included in this work and uploaded here.
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
EEG signals with 256 channels captured from 11 subjects executing an SSVEP-based experimental protocol. Five different frequencies (6.66, 7.50, 8.57, 10.00 and 12.00 Hz) were used for the visual stimulation, and the EGI 300 Geodesic EEG System (GES 300), with a 256-channel HydroCel Geodesic Sensor Net (HCGSN) and a sampling rate of 250 Hz, was used for capturing the signals. Check https://www.youtube.com/watch?v=8lGBVvCX5d8&feature=youtu.be for a video demonstrating one trial. Check https://github.com/MAMEM/ssvep-eeg-processing-toolbox for the processing toolbox. Check http://arxiv.org/abs/1602.00904 for the technical report.
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This data presents a collection of EEG recordings of seven participants with Intellectual and Developmental Disorder (IDD) and seven Typically Developing Controls (TDC). The data were recorded while the participants observed a resting state and a soothing music stimulus. The data were collected using a high-resolution multi-channel dry-electrode system from EMOTIV called EPOC+, a 14-channel device with two reference channels and a sampling frequency of 128 Hz. The data were collected in a noise-isolated room. The participants were informed of the experimental procedure and related risks and were asked to keep their eyes closed throughout the experiment. The data are provided in two formats: (1) raw EEG data and (2) pre-processed and cleaned EEG data, for both groups of participants. These data can be used to explore the functional brain connectivity of the IDD group. In addition, behavioral information such as IQ, SQ, music apprehension and facial expressions (emotion) for IDD participants is provided in the file “QualitativeData.xlsx".
Data Usage: The data is arranged as follows:
1. Raw Data:
   Data/RawData/RawData_TDC/Music and Rest
   Data/RawData/RawData_IDD/Music and Rest
2. Clean Data:
   Data/CleanData/CleanData_TDC/Music and Rest
   Data/CleanData/CleanData_IDD/Music and Rest
The dataset comes along with a fully automated EEG pre-processing pipeline. This pipeline can be used to batch-process raw EEG files to obtain clean, pre-processed EEG files. Key features of this pipeline are: (1) bandpass filtering, (2) line-noise removal, (3) channel selection, (4) Independent Component Analysis (ICA), and (5) automatic artifact rejection. All the required files are present in the Pipeline folder.
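An illustrative sketch of the listed steps with MNE-Python; this is not the authors' pipeline from the Pipeline folder, and the file name, band edges, and line frequency are assumptions:

```python
import mne

raw = mne.io.read_raw_edf("subject01_music.edf", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=40.0)                  # (1) bandpass filtering
raw.notch_filter(freqs=50.0)                         # (2) line-noise removal
raw.pick("eeg")                                      # (3) channel selection
ica = mne.preprocessing.ICA(n_components=14, random_state=0)
ica.fit(raw)                                         # (4) ICA decomposition
ica.exclude = [0]                                    # (5) reject components marked as artifacts
clean = ica.apply(raw.copy())
```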
If you use this dataset and/or the fully automated pre-processing pipeline for your research work, kindly cite these two articles linked to this dataset.
(1) Sareen, E., Singh, L., Varkey, B., Achary, K., Gupta, A. (2020). EEG dataset of individuals with intellectual and developmental disorder and healthy controls under rest and music stimuli. Data in Brief, 105488, ISSN 2352-3409, DOI:https://doi.org/10.1016/j.dib.2020.105488. (2) Sareen, E., Gupta, A., Verma, R., Achary, G. K., Varkey, B (2019). Studying functional brain networks from dry electrode EEG set during music and resting states in neurodevelopment disorder, bioRxiv 759738 [Preprint]. Available from: https://www.biorxiv.org/content/10.1101/759738v1
License (information derived automatically): ODC Public Domain Dedication and Licence (PDDL) v1.0, http://www.opendatacommons.org/licenses/pddl/1.0/
Experiment Details: Electroencephalography recordings from 16 subjects in response to fast streams of Gabor-like stimuli. Images were presented in rapid serial visual presentation streams at 6.67 Hz and 20 Hz rates. Participants performed an orthogonal fixation colour change detection task.
Experiment length: 1 hour. Raw and preprocessed data are available online through OpenNeuro: https://openneuro.org/datasets/ds004357. Supplementary material and analysis scripts are available on GitHub: https://github.com/Tijl/features-eeg
License (information derived automatically): CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
This is Release 3 of HBN-EEG, the EEG and (soon-to-be-released) eye-tracking section of the Child Mind Institute Healthy Brain Network (HBN) Project, curated into the Brain Imaging Data Structure (BIDS) format. This dataset is part of a larger initiative to advance the understanding of child and adolescent mental health through collecting and analyzing neuroimaging, behavioral, and genetic data (Alexander et al., Sci Data 2017).
This dataset comprises electroencephalogram (EEG) data and behavioral responses collected during EEG experiments from >3000 participants (5-21 yo) involved in the HBN project. The data has been released in 11 separate Releases, each containing data from a different set of participants.
The HBN-EEG dataset includes EEG recordings from participants performing six distinct tasks, which are categorized into passive and active tasks based on the presence of user input and interaction in the experiment.
Resting State: Participants rested with their heads on a chin rest, following instructions to open or close their eyes and fixate on a central cross.
Surround Suppression: Participants viewed flashing peripheral disks with contrasting backgrounds, while event markers and conditions were recorded.
Movie Watching: Participants watched four short movies with different themes, with event markers recording the start and stop times of presentations.
Contrast Change Detection: Participants identified flickering disks with dominant contrast changes and received feedback based on their responses.
Sequence Learning: Participants memorized and repeated sequences of flashed circles on the screen, designed for different age groups.
Symbol Search: Participants performed a computerized symbol search task, identifying target symbols from rows of search symbols.
Task events are provided in events.tsv files and participant information in the participants.tsv file. To access all releases of the HBN-EEG dataset, follow the dataset link on NEMAR.org. The links to the individual releases are below:
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R1
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R2
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R3
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R4
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R5
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R6
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R7
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R8
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R9
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R10
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_R11
s3://fcp-indi/data/Projects/HBN/BIDS_EEG/cmi_bids_NC
The HBN-EEG dataset is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0), except for the Not-for-Commercial-Use dataset. Please cite the dataset paper (https://doi.org/10.1101/2024.10.03.615261) as well as the original HBN publication (https://dx.doi.org/10.1038/sdata.2017.181).
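A minimal sketch of reading one recording from a downloaded release with MNE-BIDS (the release folder, subject label, and task name are placeholders; check participants.tsv and the events files for the actual values):

```python
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(root="cmi_bids_R3", subject="NDARAA000AAA",  # placeholders
                     task="RestingState", datatype="eeg")
raw = read_raw_bids(bids_path)
print(raw.info["sfreq"], len(raw.ch_names))
```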
We would like to express our gratitude to all participants and their families, whose contributions have made this project possible. We also thank our dedicated team of researchers and clinicians for their efforts in collecting, processing, and curating this data.
License (information derived automatically): CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
EEG data for comparison to PIR-estimated sleep in the Wellcome Open Research article:
'COMPASS: Continuous Open Mouse Phenotyping of Activity and Sleep Status'
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
EmoKey Moments Muse EEG Dataset (EKM-ED): A Comprehensive Collection of Muse S EEG Data and Key Emotional Moments
Dataset Description:
The EmoKey Moments EEG Dataset (EKM-ED) is an intricately curated dataset amassed from 47 participants, detailing EEG responses as they engage with emotion-eliciting video clips. Covering a spectrum of emotions, this dataset holds immense value for those diving deep into human cognitive responses, psychological research, and emotion-based analyses.
Dataset Highlights:
Precise Timestamps: Capturing the exact millisecond of EEG data acquisition, ensuring unparalleled granularity.
Brainwave Metrics: Illuminating the variety of cognitive states through the prism of Delta, Theta, Alpha, Beta, and Gamma waves.
Motion Data: Encompassing the device's movement in three dimensions for enhanced contextuality.
Auxiliary Indicators: Key elements like the device's positioning, battery metrics, and user-specific actions are meticulously logged.
Consent and Ethics: The dataset respects and upholds privacy and ethical standards. Every participant provided informed consent. This endeavor has received the green light from the Ethics Committee at the University of Granada, documented under the reference: 2100/CEIH/2021.
A pivotal component of this dataset is its focus on "key moments" within the selected video clips, honing in on periods anticipated to evoke heightened emotional responses.
Curated Video Clips within Dataset:
Film                 Emotion     Duration (seconds)
The Lover            Baseline    43
American History X   Anger       106
Cry Freedom          Sadness     166
Alive                Happiness   310
Scream               Fear        395
The cornerstone of EKM-ED is its innovative emphasis on these key moments, bringing to light the correlation between distinct cinematic events and specific EEG responses.
Key Emotional Moments in Dataset:
Film                 Emotion     Key moment timestamps (seconds)
American History X   Anger       36, 57, 68
Cry Freedom          Sadness     112, 132, 154
Alive                Happiness   227, 270, 289
Scream               Fear        23, 42, 79, 226, 279, 299, 334
Citation: Gilman, T. L., et al. (2017). A film set for the elicitation of emotion in research. Behavior Research Methods, 49(6).
With its unparalleled depth and focus, the EmoKey Moments EEG Dataset aims to advance research in fields such as neuroscience, psychology, and affective computing, providing a comprehensive platform for understanding and analyzing human emotions through EEG data.
——————————————————————————————————— FOLDER STRUCTURE DESCRIPTION ———————————————————————————————————
questionnaires: all the response questionnaires (Spanish); raw and preprocessed, including SAM
  preprocessed:
    Ficha_Evaluacion_Participante_SAM_Refactored.csv: the SAM responses for every film clip
key_moments: the key moment timestamps for every emotion’s clip
muse_wearable_data: XXXX
  raw
    1: subject ID = 1
      muse: EEG data of the Muse device
        ANGER_XXX.csv: EEG data of the anger elicitation
        FEAR_XXX.csv: EEG data of the fear elicitation
        HAPPINESS_XXX.csv: EEG data of the happiness elicitation
        SADNESS_XXX.csv: EEG data of the sadness elicitation
      order: film elicitation play order, for example: HAPPINESS, SADNESS, ANGER, FEAR …
  preprocessed
    unclean-signals: without removing EEG artifacts, noise, etc.
      muse: EEG data of the Muse device
        0.0078125: data downsampled to 128 Hz from the recorded 256 Hz
    clean-signals: EEG artifacts, noise, etc. removed
      muse: EEG data of the Muse device
        0.0078125: data downsampled to 128 Hz from the recorded 256 Hz
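A minimal sketch of inspecting one subject's anger-elicitation file; the exact file name is elided above ("ANGER_XXX.csv"), so the path is a placeholder and column names are printed rather than assumed:

```python
import pandas as pd

df = pd.read_csv("muse_wearable_data/raw/1/muse/ANGER_XXX.csv")  # placeholder path
print(df.shape)
print(list(df.columns))   # expect timestamps, EEG/band-power columns, and motion data
```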
The ethical consent for this dataset was provided by La Comisión de Ética en Investigación de la Universidad de Granada, as documented in the approval titled 'DETECCIÓN AUTOMÁTICA DE LAS EMOCIONES BÁSICAS Y SU INFLUENCIA EN LA TOMA DE DECISIONES MEDIANTE WEARABLES Y MACHINE LEARNING' ('Automatic detection of basic emotions and their influence on decision-making using wearables and machine learning'), registered under 2100/CEIH/2021.
License (information derived automatically): Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
This dataset consists of EEG recordings and Brain-Computer Interface (BCI) data from 25 different human subjects performing BCI experiments. More information can be found in the corresponding manuscript:
Dylan Forenzo, Yixuan Liu, Jeehyun Kim, Yidan Ding, Taehyung Yoon, Bin He: “Integrating Simultaneous Motor Imagery and Spatial Attention for EEG-BCI Control”, IEEE Transactions on Biomedical Engineering (10.1109/TBME.2023.3298957).
Please cite this paper if you use any data included in this dataset.
The dataset was collected under the support of NIH grants AT009263, EB021027, EB029354, NS096761, NS124564 to Dr. Bin He at Carnegie Mellon University.
Each file is a MATLAB object (.mat file) which contains data from a single run of BCI control. The MATLAB files are grouped into folders based on the Subject, one for each of the 25 subjects studied. Each subject completed 5 sessions of BCI experiments and each session consisted of either 18 (sessions 1 and 2) or 15 (sessions 3-5) runs, for a total of 81 runs per subject or 2025 total BCI runs.
Each of the MATLAB files contains a single structure with the following fields (a loading sketch follows the list):
data: An array containing the EEG recordings with the size (channels x time points)
times: A vector containing the timestamps in milliseconds with the size (1 x time points)
fs: sampling frequency (1000 Hz)
labels: A cell array containing the label for each channel
targets: A list of target codes. For LR: 1 is right, 2 is left. For UD: 1 is up, 2 is down. For 2D: 1 is right, 2 is left, 3 is up, and 4 is down
event: A structure of events from BCI2000. Each index corresponds to the start of a trial and includes the time (latency) of when the trial starts, and how long each trial lasted (duration).
results: A vector of which target was hit for each trial (0 if the trial was aborted before a target was hit)
outcome: A vector indicating the outcome of each trial (1: hit, 0: abort, -1: miss)
subject: The coded subject number
session: The session number. Please note that the session numbers are for specific tasks, so even though 2D sessions began on the third day of experiments, the 2D runs are listed as session 1, 2, and 3 as they are the first, second, and third 2D sessions.
axis: The axis of control. Either LR (horizontal only, Left-Right), UD (vertical only, Up-Down), or 2D (both horizontal and vertical control).
task: The control paradigm used. Options are MI (motor imagery), OSA (overt spatial attention), MIOSA (MI and OSA together), MIOSA1 (MI controls horizontal axis, OSA controls vertical. Referred to as MI/OSA in the paper), or MIOSA2 (MI controls vertical axis, OSA controls horizontal. Referred to as OSA/MI in the paper).
run: The run number
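A minimal sketch of loading one run file with SciPy (the file name and the top-level variable lookup are placeholders; the fields follow the list above):

```python
from scipy.io import loadmat

mat = loadmat("Subject01/run01.mat", squeeze_me=True, struct_as_record=False)  # placeholder path
run = next(v for k, v in mat.items() if not k.startswith("__"))  # the single structure in the file
print(run.data.shape)          # channels x time points
print(run.fs, run.axis, run.task, run.run)
```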