100+ datasets found
  1. SignalFlowEEG Example Data

    • figshare.com
    Updated Mar 15, 2024
    Cite
    Ernest Pedapati (2024). SignalFlowEEG Example Data [Dataset]. http://doi.org/10.6084/m9.figshare.25414042.v1
    Available download formats: bin
    Dataset updated
    Mar 15, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Ernest Pedapati
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The SignalFlowEEG Example Data dataset contains sample EEG recordings that demonstrate the capabilities and usage of the SignalFlowEEG Python package. This package provides a comprehensive set of tools for processing, analyzing, and visualizing electroencephalography (EEG) data, with a focus on neuroscience research applications.

    The example dataset includes EEG recordings from several paradigms:
    • Resting-state EEG: a 5-minute recording where the subject relaxed with eyes closed.
    • Auditory chirp stimulation: EEG recorded while the subject listened to chirp sounds with varying frequencies.
    • Visual evoked potentials: EEG recorded as the subject viewed checkerboard pattern stimuli to elicit visual responses.

    These recordings were collected at the Cincinnati Children's Hospital Medical Center and are made available for educational and testing purposes. SignalFlowEEG builds upon MNE-Python, a popular open-source library for EEG analysis, and offers additional functionality tailored for clinical research workflows. This example dataset allows users to explore SignalFlowEEG's features and gain hands-on experience analyzing EEG data with this Python package.

    The dataset consists of .set files, a format used by the EEGLAB toolbox. Each file contains raw EEG data, channel info, and event markers for a specific experimental paradigm. Files can be loaded using mne.io.read_raw_eeglab() from MNE-Python, a SignalFlowEEG dependency, as sketched below. The dataset has no missing data or special abbreviations. Channel names and event markers follow standard EEGLAB conventions.
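
    A minimal loading sketch with MNE-Python, as the description suggests; the file name is hypothetical:

```python
# Hypothetical sketch: load one of the EEGLAB .set files with MNE-Python.
# "resting_state.set" is an illustrative name, not a file from the archive.
import mne

raw = mne.io.read_raw_eeglab("resting_state.set", preload=True)
print(raw.info)  # channel names, sampling rate, etc.

# EEGLAB event markers are exposed as annotations:
events, event_id = mne.events_from_annotations(raw)
```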

  2. Motor and Speech Imagery EEG Dataset

    • drum.um.edu.mt
    Updated Nov 1, 2023
    Cite
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI (2023). Motor and Speech Imagery EEG Dataset [Dataset]. http://doi.org/10.60809/drum.24465871.v1
    Available download formats: docx
    Dataset updated
    Nov 1, 2023
    Dataset provided by
    University of Malta
    Authors
    Natasha Padfield; KENNETH P CAMILLERI; TRACEY CAMILLERI; MARVIN K BUGEJA; SIMON G FABRI
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Overview and Methodology

    This dataset contains motor imagery (MI) and speech imagery (SI) electroencephalogram (EEG) data recorded from 5 healthy subjects with a mean age of 24.4 years. MI involves the subject imagining movements in their limbs, whereas SI involves the subject imagining speaking words in their mind (thought-speech). The data was recorded using the BioSemi ActiveTwo EEG recording equipment, at a sampling frequency of 2.048 kHz. 24 channels of EEG data from the 10-20 system are available in the dataset. Four classes of data were recorded for each of the MI and SI paradigms. For MI, left-hand, right-hand, legs and tongue MI tasks were recorded; for SI, the words ‘left’, ‘right’, ‘up’ and ‘down’ were recorded. Data for the idle state, when the subject is mentally relaxed and not executing any task, was also recorded.

    Forty trials were recorded for each of the classes. These trials were recorded over four runs, with two runs used to record MI trials and two to record SI trials. The runs were interleaved: the first and third runs recorded MI trials, and the second and fourth runs recorded SI trials. During each run, twenty trials for each class in the paradigm were recorded, in random order. Note that during each run, twenty trials of the idle state were also recorded. This means that the database actually contains eighty idle-state trials, with forty recorded during MI runs and forty during SI runs.

    Subjects were guided through the data recording runs by a graphical user interface which issued instructions to them. At the start of a run, subjects were given one minute to settle down before the cued trials began. During a trial, a fixation cross first appears on-screen, indicating to the subject to remain relaxed but aware that the next trial will soon begin. After 2 s a cue appears on-screen for 1.25 s, indicating the particular task the subject should execute. The subject starts executing the task as soon as they see the cue, and continues even after it has disappeared, until the fixation cross appears again. The cues consist of a left-facing arrow (for left-hand MI or ‘left’ SI), a right-facing arrow (for right-hand MI or ‘right’ SI), an upward-facing arrow (for tongue MI or ‘up’ SI) and a downward-facing arrow (for legs MI or ‘down’ SI). Each trial lasted 4 seconds. Between each run, subjects were given a 3–5-minute break.

    The data was re-referenced using channel Cz and then mean-centered. The data was also passed through an anti-aliasing filter and down-sampled to 1 kHz before being stored in .mat files for the data repository. The anti-aliasing filter was a low-pass filter with a cutoff frequency of 500 Hz, implemented using the lowpass function in MATLAB, which produces 60 dB of attenuation above the cutoff and automatically compensates for filter-induced delays.

    Files

    The dataset consists of 10 MAT-files named X_Subject_Y.mat, where X is the acronym denoting the brain imagery type (MI for motor imagery data or SI for speech imagery data) and Y is the subject number. Each file contains the trials for each run in the structure variables ‘run_1’ and ‘run_2’. Within each run structure there are two variables (a loading sketch is shown below, after the acknowledgements):
    • ‘EEG_Data’, a matrix containing the EEG data formatted as [number of trials x channels x data samples]. The number of data samples is 4000, since each trial was 4 s long, sampled at 1 kHz. The relationship between the EEG channels and the channel number in the second dimension of this matrix is documented in the table stored within the ‘ChannelLocations.mat’ file, which is included with the dataset;
    • ‘labels’, a vector indicating which cue was issued, with the following numbers representing the different cues: 1 – Right, 2 – Left, 3 – Up, 4 – Down, 5 – Idle, 6 – Fixation Cross.

    Acknowledgements

    The authors acknowledge that data collection for this project was funded through the project “Setting up of transdisciplinary research and knowledge exchange (TRAKE) complex at the University of Malta (ERDF.01.124)”, co-financed by the European Union through the European Regional Development Fund 2014–2020. The data was recorded by the Centre for Biomedical Cybernetics at the University of Malta.
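
    A hedged sketch of reading one subject's file with SciPy, using the field names documented above; the exact MAT-file nesting is an assumption:

```python
# Sketch: load MI data for subject 1; file/field names follow the dataset
# description (X_Subject_Y.mat, run_1/run_2, EEG_Data, labels).
from scipy.io import loadmat

mat = loadmat("MI_Subject_1.mat", squeeze_me=True, struct_as_record=False)
run_1 = mat["run_1"]
eeg = run_1.EEG_Data      # [n_trials x 24 channels x 4000 samples]
labels = run_1.labels     # 1=Right, 2=Left, 3=Up, 4=Down, 5=Idle, 6=Fixation
print(eeg.shape, labels[:5])
```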

  3. The raw EEG data, 4 files (EEG_A to D), in European data format (.edf)

    • zenodo.org
    • explore.openaire.eu
    Updated Jan 24, 2020
    Cite
    Laurence A. Brown; Sibah Hasan; Russell G. Foster; Stuart N. Peirson (2020). The raw EEG data, 4 files (EEG_A to D), in European data format (.edf) [Dataset]. http://doi.org/10.5281/zenodo.160118
    Available download formats: bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Laurence A. Brown; Sibah Hasan; Russell G. Foster; Stuart N. Peirson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    EEG data for comparison to PIR-estimated sleep in the Wellcome Open Research article:

    'COMPASS: Continuous Open Mouse Phenotyping of Activity and Sleep Status'

  4. Dataset containing resting EEG for a sample of 103 normal infants in the first year of life

    • openneuro.org
    Updated May 24, 2023
    Cite
    Thalía Harmony (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México); Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México); Eduardo Aubert (Centro de Neurociencias de Cuba); Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México); Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México) (2023). Dataset containing resting EEG for a sample of 103 normal infants in the first year of life [Dataset]. http://doi.org/10.18112/openneuro.ds004577.v1.0.0
    Dataset updated
    May 24, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Thalía Harmony (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México); Gloria Otero-Ojeda (Facultad de Medicina; Universidad Autónoma del Estado de México); Eduardo Aubert (Centro de Neurociencias de Cuba); Thalía Fernández (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México); Lourdes Cubero-Rego (Neurodevelopment Research Unit; Instituto de Neurobiología; Universidad Nacional Autónoma de México)
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    May 25th, 2023. Neurodevelopment Research Unit, Instituto de Neurobiología, Universidad Nacional Autónoma de México.

    This is a dataset containing resting EEG for a sample of 103 normal infants (41 female and 62 male) in the first year of life.

    81 subjects with 1 EEG recording
    18 subjects with 2 EEG recordings
    3 subjects with 3 EEG recordings
    1 subject with 4 EEG recordings

    130 EEG recordings in total, distributed across 4 sessions

  5. EEG Alpha Waves dataset

    • zenodo.org
    • search.datacite.org
    Updated Jan 24, 2020
    Cite
    Grégoire Cattan; Pedro L. C. Rodrigues; Marco Congedo (2020). EEG Alpha Waves dataset [Dataset]. http://doi.org/10.5281/zenodo.2348892
    Available download formats: bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Grégoire Cattan; Pedro L. C. Rodrigues; Marco Congedo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary:

    This dataset contains electroencephalographic recordings of subjects in a simple resting-state eyes open/closed experimental protocol. Data were recorded during a pilot experiment taking place in the GIPSA-lab, Grenoble, France, in 2017 [1]. Python code is available at https://github.com/plcrodrigues/Alpha-Waves-Dataset for manipulating the data.

    Principal Investigators: Eng. Grégoire CATTAN, Eng. Pedro L. C. RODRIGUES
    Scientific Supervisor: Dr. Marco Congedo

    Introduction:

    The occipital dominant rhythm (commonly referred to as occipital ‘Alpha’) is prominent in occipital and parietal regions when a subject is free of visual stimulation, such as when keeping the eyes closed (2). In normal subjects its peak frequency is in the range 8-12 Hz. The detection of alpha waves in the ongoing electroencephalography (EEG) is a useful indicator of the subject’s level of stress, concentration, relaxation or mental load (3,4), and an easy marker to detect in the recorded signals because of its high signal-to-noise ratio. This experiment was conducted to provide a simple yet reliable set of EEG signals carrying very distinct signatures in each experimental condition. It can be useful for researchers and students looking for an EEG dataset on which to test signal processing and machine learning algorithms. An example of an application of this dataset can be seen in (5).

    I. Participants

    A total of 20 volunteers participated in the experiment (7 females), with mean (sd) age 25.8 (5.27) and median 25.5. 18 subjects were between 19 and 28 years old; two participants, aged 33 and 44, were outside this range.

    II. Procedures

    EEG signals were acquired using a standard research grade amplifier (g.USBamp, g.tec, Schiedlberg, Austria) and the EC20 cap equipped with 16 wet electrodes (EasyCap, Herrsching am Ammersee, Germany), placed according to the 10-20 international system. The locations of the electrodes were FP1, FP2, FC5, FC6, FZ, T7, CZ, T8, P7, P3, PZ, P4, P8, O1, Oz, and O2. The reference was placed on the right earlobe and the ground at the AFZ scalp location. The amplifier was linked by USB connection to the PC where the data were acquired by means of the OpenVibe software (6,7). The data were acquired with no digital filter, at a sampling frequency of 512 samples per second. For ensuing analyses, the experimenter was able to tag the EEG signal using an in-house application based on a C/C++ library (8). The tags were sent by the application to the amplifier through the USB port of the PC, and were recorded along with the EEG signal as a supplementary channel.

    For each recording we provide the age, gender and fatigue of each participant. Fatigue was self-evaluated by the subjects on a scale ranging from 0 to 10, where 10 represents exhaustion. Each participant underwent one session consisting of ten blocks of ten seconds of EEG data recording. Five blocks were recorded while the subject kept their eyes closed (condition 1) and the other five while their eyes were open (condition 2). The two conditions were alternated. Before the onset of each block, the subject was asked to close or open their eyes according to the experimental condition. The experimenter then tagged the EEG signal using the in-house application and started a 10-second countdown for the block.

    III. Organization of the dataset

    For each subject we provide a single .mat file containing the complete recording of the session. The file is a 2D matrix where the rows contain the observations at each time sample. Columns 2 to 17 contain the recordings from each of the 16 EEG electrodes. The first column of the matrix represents the timestamp of each observation, and columns 18 and 19 contain the triggers for experimental conditions 1 and 2, respectively. The rows in column 18 (resp. 19) are filled with zeros, except at the timestamp corresponding to the beginning of a block of condition 1 (resp. 2), where the row takes a value of one.

    We supply an online and open-source example working with Python (9).
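
    In the same spirit, a minimal NumPy/SciPy sketch of slicing eyes-closed blocks out of one recording, following the matrix layout described above (the file name is illustrative):

```python
# Sketch: extract the 10-second eyes-closed blocks (condition 1) from one
# subject's .mat file, per the column layout in Section III.
import numpy as np
from scipy.io import loadmat

mat = loadmat("subject_01.mat")  # illustrative name
data = next(v for k, v in mat.items() if not k.startswith("__"))
fs = 512                               # sampling rate (Section II)
eeg = data[:, 1:17]                    # columns 2-17: the 16 EEG electrodes
starts = np.flatnonzero(data[:, 17])   # column 18: condition-1 block onsets
blocks = [eeg[s:s + 10 * fs] for s in starts]
```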

    IV. References

    1. Cattan G, Andreev A, Mendoza C, Congedo M. The Impact of Passive Head-Mounted Virtual Reality Devices on the Quality of EEG Signals. In Delft: The Eurographics Association; 2018 [cited 2018 Apr 16]. Available from: https://diglib.eg.org:443/handle/10.2312/vriphys20181064

    2. Pfurtscheller G, Stancák A, Neuper C. Event-related synchronization (ERS) in the alpha band — an electrophysiological correlate of cortical idling: A review. Int J Psychophysiol. 1996 Nov 1;24(1):39–46.

    3. Banquet JP. Spectral analysis of the EEG in meditation. Electroencephalogr Clin Neurophysiol. 1973 Aug 1;35(2):143–51.

    4. Antonenko P, Paas F, Grabner R, van Gog T. Using Electroencephalography to Measure Cognitive Load. Educ Psychol Rev. 2010 Dec 1;22(4):425–38.

    5. Rodrigues PLC, Congedo M, Jutten C. Multivariate Time-Series Analysis Via Manifold Learning. In: 2018 IEEE Statistical Signal Processing Workshop (SSP). 2018. p. 573–7.

    6. Renard Y, Lotte F, Gibert G, Congedo M, Maby E, Delannoy V, et al. OpenViBE: An Open-Source Software Platform to Design, Test, and Use Brain–Computer Interfaces in Real and Virtual Environments. Presence Teleoperators Virtual Environ. 2010 Feb 1;19(1):35–53.

    7. Arrouët C, Congedo M, Marvie J-E, Lamarche F, Lécuyer A, Arnaldi B. Open-ViBE: A Three Dimensional Platform for Real-Time Neuroscience. J Neurother. 2005 Jul 8;9(1):3–25.

    8. Mandal MK. C++ Library for Serial Communication with Arduino [Internet]. 2016 [cited 2018 Dec 15]. Available from: https://github.com/manashmndl/SerialPort

    9. Rodrigues PLC. Alpha-Waves-Dataset [Internet]. Grenoble: GIPSA-lab; 2018. Available from: https://github.com/plcrodrigues/Alpha-Waves-Dataset

  6. Replication Data for: A cross-session motor imagery EEG dataset

    • dataverse.harvard.edu
    Updated Apr 8, 2024
    Cite
    Lincong Pan (2024). Replication Data for: A cross-session motor imagery EEG dataset [Dataset]. http://doi.org/10.7910/DVN/O5CQFA
    Available download formats: Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Apr 8, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Lincong Pan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Pan2023 Dataset Documentation

    This is a replication of the "A cross-session motor imagery EEG dataset" dataset; the .mat file version is v7.0.

    Abstract

    The Pan2023 dataset is a collection of electroencephalography (EEG) signals from 14 subjects performing motor imagery (MI) tasks across two sessions. The dataset aims to facilitate the study of cross-session variability in MI-EEG signals and to support the development of robust brain-computer interface (BCI) systems.

    Dataset Composition

    The dataset encompasses EEG recordings from 14 subjects, each participating in two sessions. The sessions involve MI tasks with visual cues for left-handed and right-handed movements. Data acquisition was performed using a Neuroscan SynAmps2 amplifier, equipped with 28 scalp electrodes following the international 10-20 system. The EEG signals are provided at a sampling frequency of 250 Hz, band-pass filtered from 0.01 to 200 Hz, with power line noise mitigated. The collected data is stored in Matlab format, labeled by subject and session number.

    Participants

    The participant cohort includes 14 individuals (five females), aged 22 to 25, with two reporting left-handedness. All subjects were screened for neurological and movement disorders, ensuring a healthy participant profile for the study.

    Experimental Paradigm

    Each experimental session comprised 120 trials, segmented into three distinct phases: Rest, Preparation, and Task. During the Rest period (2 seconds), subjects were instructed to remain relaxed without engaging in mental tasks. The Preparation period (1 second) involved a 'Ready' cue on the monitor, prompting subjects to focus and prepare for the upcoming MI task. The Task period (4 seconds) required subjects to perform the MI task, visualizing the movement corresponding to the provided cue, either left- or right-handed. This paradigm was designed to occur in a controlled, distraction-free environment.

    Data Acquisition and Preprocessing

    EEG signals were captured using a Neuroscan SynAmps2 amplifier and 28 scalp electrodes positioned per the 10-20 system. The sampling rate was set at 1000 Hz, and a band-pass filter from 0.01 to 200 Hz and a notch filter at 50 Hz were employed to exclude power line interference. The signals were downsampled to 250 Hz and archived in Matlab format, systematically named by subject and session identifiers.

    Data Structure

    The dataset's structure is encapsulated in a Matlab file, comprising a struct with the following components:
    • data: a 3D matrix ([n_trials, n_channels, n_samples]) containing the EEG signals;
    • label: a vector ([n_trials]) denoting each trial's label (1 for left-handed, 2 for right-handed movement);
    • trial_info: a struct detailing each trial's phase (1 for Rest, 2 for Preparation, 3 for Task), the visual cue (1 for left-handed, 2 for right-handed movement), and the subject's identifier.
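
    A hedged sketch of loading one session with SciPy (v7.0 MAT-files are readable by scipy.io.loadmat); the file name and exact struct nesting are assumptions:

```python
# Sketch: read one subject/session file and pull out the documented fields.
import numpy as np
from scipy.io import loadmat

mat = loadmat("sub01_sess01.mat", squeeze_me=True, struct_as_record=False)
rec = next(v for k, v in mat.items() if not k.startswith("__"))
X = rec.data     # [n_trials, 28 channels, n_samples]
y = rec.label    # 1 = left-handed, 2 = right-handed MI
print(X.shape, np.unique(y))
```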

  7. Tactile Stimulation in a Healthy Mexican Sample: An EEG Database

    • data.mendeley.com
    Updated Nov 12, 2024
    Cite
    Luis Cepeda (2024). Tactile Stimulation in a Healthy Mexican Sample: An EEG Database [Dataset]. http://doi.org/10.17632/6jhz2vpxtt.2
    Dataset updated
    Nov 12, 2024
    Authors
    Luis Cepeda
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Tecnologico de Monterrey, School of Engineering and Sciences, NeuroTechs Research Group

    Context: This dataset includes electroencephalographic (EEG) recordings from 34 healthy, young adults in Mexico, collected to study the somatosensory system's responses to a range of tactile stimuli. The study employs innovative NeuroSense tactile stimulators to explore how the brain processes touch sensations when subjected to stimuli such as air, vibration, and caress at four distinct intensity levels.

    Objective: The objective of this database is to understand the cortical processing of tactile stimuli, including air, vibration and caress, using EEG.

    Main Outcome Measure: The main outcome measure is the EEG recordings, which include the evoked responses of the somatosensory system to each type and intensity of stimulus. These measurements allow for an in-depth analysis of the cortical dynamics involved in processing touch.

    Limitations: One limitation of the database is its focus on a relatively small and specific population, which could affect the generalizability of the findings. Additionally, the data is dependent on the accuracy and consistency of the stimulus delivery and EEG recording during the experimental sessions.

    Generalizability: While the findings provide significant insights into the neural processing of tactile stimuli within the central nervous system, their generalizability might be limited due to the specialized nature of the stimuli and the controlled experimental conditions. However, the dataset serves as a valuable resource for developing diagnostic and therapeutic strategies for somatosensory impairments and advancing research in neuroscience and somatosensory rehabilitation.

  8. Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset

    • openneuro.org
    Updated May 22, 2025
    Cite
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León (2025). Cognitive Electrophysiology in Socioeconomic Context in Adulthood: An EEG dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006018.v1.2.2
    Dataset updated
    May 22, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Elif Isbell; Amanda N. Peters; Dylan M. Richardson; Nancy E. R. De León
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Cognitive Electrophysiology in Socioeconomic Context in Adulthood Dataset

    Data Description

    This dataset comprises electroencephalogram (EEG) data collected from 127 young adults (18-30 years), along with retrospective objective and subjective indicators of childhood family socioeconomic status (SES), as well as SES indicators in adulthood, such as educational attainment, individual and household income, food security, and home and neighborhood characteristics. The EEG data were recorded with tasks directly acquired from the Event-Related Potentials Compendium of Open Resources and Experiments ERP CORE (Kappenman et al., 2021), or adapted from these tasks (Isbell et al., 2024). These tasks were optimized to capture neural activity manifest in perception, cognition, and action, in neurotypical young adults. Furthermore, the dataset includes a symptoms checklist, consisting of questions that were found to be predictive of symptoms consistent with attention-deficit/hyperactivity disorder (ADHD) in adulthood, which can be used to investigate the links between ADHD symptoms and neural activity in a socioeconomically diverse young adult sample. The detailed description of the dataset is accepted for publication in Scientific Data, with the title: "Cognitive Electrophysiology in Socioeconomic Context in Adulthood."

    EEG Recording

    EEG data were recorded using the Brain Products actiCHamp Plus system, in combination with BrainVision Recorder (Version 1.25.0101). We used a 32-channel actiCAP slim active electrode system, with electrodes mounted on elastic snap caps (Brain Products GmbH, Gilching, Germany). The ground electrode was placed at FPz. From the electrode bundle, we repurposed 2 electrodes by placing them on the mastoid bones behind the left and right ears to be used for re-referencing after data collection. We also repurposed 3 additional electrodes to record electrooculogram (EOG). To capture eye artifacts, we placed the horizontal EOG (HEOG) electrodes lateral to the external canthus of each eye. We also placed one vertical EOG (VEOG) electrode below the right eye. The remaining 27 electrodes were used as scalp electrodes, which were mounted per the international 10/20 system. EEG data were recorded at a sampling rate of 500 Hz and referenced to the Cz electrode. StimTrak was used to assess stimulus presentation delays for both the monitor and headphones. The results indicated that both the visual and auditory stimuli had a delay of approximately 20 ms. Therefore, users should shift the event-codes by 20 ms when conducting stimulus-locked analyses.
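
    A hedged sketch of that 20 ms shift with MNE-Python; the file name and task label are illustrative, and a BrainVision .vhdr file is assumed from the recording software noted above:

```python
# Sketch: shift stimulus event codes by the ~20 ms presentation delay.
import mne

raw = mne.io.read_raw_brainvision("sub-001_task-demo_eeg.vhdr", preload=True)
events, event_id = mne.events_from_annotations(raw)
shift = int(round(0.020 * raw.info["sfreq"]))  # 10 samples at 500 Hz
events[:, 0] += shift                          # move codes to true stimulus onset
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8)
```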

    Notes

    Before the data were publicly shared, all identifiable information was removed, including date of birth, date of session, race/ethnicity, zip code, occupation (self and parent), and names of the languages the participants reported speaking and understanding fluently. Date of birth and date of session were used to compute age in years, which is included in the dataset. Furthermore, several variables were recoded based on re-identification risk assessments. Users who would like to establish secure access to components of the dataset that we could not publicly share due to re-identification risks should contact the corresponding researcher as described below. The dataset consists of participants recruited for studies on adult cognition in context. To provide the largest sample size, we included all participants who completed at least one of the EEG tasks of interest. Each participant completed each EEG task only once. The original participant IDs with which the EEG data were saved were recoded and the raw EEG files were renamed to make the dataset BIDS compatible.

    The ERP CORE experimental tasks can be found on OSF, under Experiment Control Files: https://osf.io/thsqg/

    Examples of EEGLAB/ERPLAB data processing scripts that can be used with the EEG data shared here can be found on OSF:

    osf.io/thsqg and osf.io/43H75

    Contact: If you have any questions, comments, or requests, please contact Elif Isbell: eisbell@ucmerced.edu

    Copyright and License

    This dataset is licensed under CC0.

    References

    Isbell, E., Peters, A. N., Richardson, D. M., & Rodas De León, N. E. (2025). Cognitive electrophysiology in socioeconomic context in adulthood. Scientific Data, 12(1), 1–9. https://doi.org/10.1038/s41597-025-05209-z

    Isbell, E., De León, N. E. R., & Richardson, D. M. (2024). Childhood family socioeconomic status is linked to adult brain electrophysiology. PloS One, 19(8), e0307406.

    Isbell, E., De León, N. E. R. & Richardson, D. M. Childhood family socioeconomic status is linked to adult brain electrophysiology - accompanying analytic data and code. OSF https://doi.org/10.17605/osf.io/43H75 (2024).

    Kappenman, E. S., Farrens, J. L., Zhang, W., Stewart, A. X., & Luck, S. J. (2021). ERP CORE: An open resource for human event-related potential research. NeuroImage, 225, 117465.

    Kappenman, E. S., Farrens, J., Zhang, W., Stewart, A. X. & Luck, S. J. ERP CORE. https://osf.io/thsqg (2020).

    Kappenman, E., Farrens, J., Zhang, W., Stewart, A. & Luck, S. Experiment control files. https://osf.io/47uf2 (2020).

  9. Infant Sibling Project: Sample Files

    • zenodo.org
    • explore.openaire.eu
    Updated Jan 24, 2020
    Cite
    April Robyn Levin; Laurel Joy Gabard-Durnam; Adriana Sofia Mendez Leal; Heather Marie O'Leary; Carol Lee Wilkinson; Helen Tager-Flusberg; Charles Alexander Nelson (2020). Infant Sibling Project: Sample Files [Dataset]. http://doi.org/10.5281/zenodo.998965
    Available download formats: zip, bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    April Robyn Levin; Laurel Joy Gabard-Durnam; Adriana Sofia Mendez Leal; Heather Marie O'Leary; Carol Lee Wilkinson; Helen Tager-Flusberg; Charles Alexander Nelson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    These electroencephalography (EEG) data files were collected through the Infant Sibling Project (ISP), a prospective investigation examining infants at high versus low familial risk for autism spectrum disorder over the first 3 years of life. Here we provide a subset of the full dataset, as example files for the Batch EEG Automated Processing Pipeline (BEAPP), and the Harvard Automated Processing Pipeline for EEG (HAPPE). Both BEAPP and HAPPE may be downloaded at www.github.com.

    Baseline EEG data was collected while a young child sat in a parent’s lap watching a research assistant blow bubbles or show toys for several minutes [1]. Twelve sample baseline EEGs are provided here, in .mat format. Auditory (event-related) EEG data was collected using an auditory double oddball paradigm, in which a stream of consonant-vowel stimuli was presented. Stimuli included a “Standard” /ɖa/ sound 80% of the time, a “Native” /ta/ sound 10% of the time, and a “Non-Native” /da/ sound 10% of the time [2]. Ten sample auditory EEGs are provided here, in .mff format.
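
    Both file types can plausibly be opened from Python; a hedged sketch (file names are illustrative, and mne.io.read_raw_egi is MNE-Python's reader for EGI .mff recordings):

```python
# Sketch: open one auditory (.mff) and one baseline (.mat) example file.
import mne
from scipy.io import loadmat

auditory = mne.io.read_raw_egi("sample_auditory.mff", preload=True)
print(auditory.info["sfreq"])  # 250 or 500 Hz, depending on acquisition

baseline = loadmat("sample_baseline.mat")  # inspect the stored variables
print([k for k in baseline if not k.startswith("__")])
```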

    This sample dataset was chosen for demonstration of BEAPP and HAPPE for several reasons. First, the longitudinal nature of the study led to data collection with different sampling rates (250 Hz and 500 Hz) and acquisition setups (64-channel Geodesic Sensor Net v2.0, and 128-channel HydroCel Geodesic Sensor Net, both from Electrical Geodesics, Inc., Eugene, OR). Additionally, because young children cannot follow instructions to “rest” or remain still, EEG in these children typically contains greater amounts of artifact than EEG in typical adults. BEAPP and HAPPE are targeted towards addressing these challenges.

    The Infant Sibling Project was carried out in accordance with the recommendations of the Institutional Review Board at Boston University and Boston Children’s Hospital (#X06-08-0374), with written informed consent from all caregivers prior to their child’s participation in the study. All files here have been deidentified, including alteration of exact acquisition dates. Acquisition times have not been altered.

    For additional information about data collection paradigms, and sample studies published on the larger ISP data set, please see the following references:

    1. Levin, A. R., Varcin, K. J., O’Leary, H. M., Tager-Flusberg, H., and Nelson, C. A. (2017). EEG power at 3 months in infants at high familial risk for autism. J. Neurodev. Disord. 9, 1–13.

    2. Seery A, Tager-Flusberg H, Nelson CA. Event-related potentials to repeated speech in 9-month-old infants at risk for autism spectrum disorder. J. Neurodev. Disord. 2014;6:43.

  10. Replication Data for: A cross-session motor imagery EEG dataset

    • search.dataone.org
    Updated Sep 24, 2024
    Cite
    Pan, Lincong (2024). Replication Data for: A cross-session motor imagery EEG dataset [Dataset]. http://doi.org/10.7910/DVN/O5CQFA
    Dataset updated
    Sep 24, 2024
    Dataset provided by
    Harvard Dataverse
    Authors
    Pan, Lincong
    Description

    Pan2023 Dataset Documentation

    This is a replication of the "A cross-session motor imagery EEG dataset" dataset; the .mat file version is v7.0.

    Abstract

    The Pan2023 dataset is a collection of electroencephalography (EEG) signals from 14 subjects performing motor imagery (MI) tasks across two sessions. The dataset aims to facilitate the study of cross-session variability in MI-EEG signals and to support the development of robust brain-computer interface (BCI) systems.

    Dataset Composition

    The dataset encompasses EEG recordings from 14 subjects, each participating in two sessions. The sessions involve MI tasks with visual cues for left-handed and right-handed movements. Data acquisition was performed using a Neuroscan SynAmps2 amplifier, equipped with 28 scalp electrodes following the international 10-20 system. The EEG signals are provided at a sampling frequency of 250 Hz, band-pass filtered from 0.01 to 200 Hz, with power line noise mitigated. The collected data is stored in Matlab format, labeled by subject and session number.

    Participants

    The participant cohort includes 14 individuals (five females), aged 22 to 25, with two reporting left-handedness. All subjects were screened for neurological and movement disorders, ensuring a healthy participant profile for the study.

    Experimental Paradigm

    Each experimental session comprised 120 trials, segmented into three distinct phases: Rest, Preparation, and Task. During the Rest period (2 seconds), subjects were instructed to remain relaxed without engaging in mental tasks. The Preparation period (1 second) involved a 'Ready' cue on the monitor, prompting subjects to focus and prepare for the upcoming MI task. The Task period (4 seconds) required subjects to perform the MI task, visualizing the movement corresponding to the provided cue, either left- or right-handed. This paradigm was designed to occur in a controlled, distraction-free environment.

    Data Acquisition and Preprocessing

    EEG signals were captured using a Neuroscan SynAmps2 amplifier and 28 scalp electrodes positioned per the 10-20 system. The sampling rate was set at 1000 Hz, and a band-pass filter from 0.01 to 200 Hz and a notch filter at 50 Hz were employed to exclude power line interference. The signals were downsampled to 250 Hz and archived in Matlab format, systematically named by subject and session identifiers.

    Data Structure

    The dataset's structure is encapsulated in a Matlab file, comprising a struct with the following components:
    • data: a 3D matrix ([n_trials, n_channels, n_samples]) containing the EEG signals;
    • label: a vector ([n_trials]) denoting each trial's label (1 for left-handed, 2 for right-handed movement);
    • trial_info: a struct detailing each trial's phase (1 for Rest, 2 for Preparation, 3 for Task), the visual cue (1 for left-handed, 2 for right-handed movement), and the subject's identifier.

  11. The Nencki-Symfonia EEG/ERP dataset

    • openneuro.org
    Updated Jun 21, 2025
    Cite
    Dzianok Patrycja; Antonova Ingrida; Wojciechowski Jakub; Dreszer Joanna; Kublik Ewa (2025). The Nencki-Symfonia EEG/ERP dataset [Dataset]. http://doi.org/10.18112/openneuro.ds004621.v1.0.4
    Dataset updated
    Jun 21, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Dzianok Patrycja; Antonova Ingrida; Wojciechowski Jakub; Dreszer Joanna; Kublik Ewa
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The Nencki-Symfonia EEG/ERP dataset (dataset DOI: doi.org/10.5524/100990)

    IMPORTANT NOTE: The dataset contains no errors (BIDS-1). The numerous warnings currently displayed are a result of OpenNeuro updating its validator to BIDS-2. The OpenNeuro team is actively working on refining the validator to display only meaningful warnings (more information on OpenNeuro GitHub page). At this time, as dataset owners, we are unable to take any action to resolve these warnings.

    Description: mixed cognitive tasks [(i) an extended multi-source interference task, MSIT+; (ii) a 3-stimuli oddball task; (iii) a control, simple reaction task, SRT; and (iv) a resting-state protocol]
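
    Since the dataset is organized in BIDS, a hedged MNE-BIDS loading sketch; the subject and task entity labels below are assumptions, not verified names from the dataset:

```python
# Sketch: read one task's EEG via MNE-BIDS from a local copy of ds004621.
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(root="ds004621", subject="01", task="MSIT",
                     datatype="eeg", suffix="eeg")
raw = read_raw_bids(bids_path)  # returns an mne.io.Raw object
print(raw.info)
```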

    Please cite the following references if you use these data:
    1. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. The Nencki-Symfonia electroencephalography/event-related potential dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults. Gigascience. 2022 Mar 7;11:giac015. doi: 10.1093/gigascience/giac015.
    2. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. Supporting data for "The Nencki-Symfonia EEG/ERP dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults." GigaScience Database, 2022. http://doi.org/10.5524/100990

    Release history:

    26/01/2022: Initial release (GigaDB)

    15/06/2023: Added to OpenNeuro; updated README and dataset_description.json; minor updates to .json files related to BIDS errors/warnings. Updated events files (ms changed to s).

    12/10/2023: Public release on OpenNeuro after deleting some additional, unneeded system information from raw logfiles.

    10/2024: minor correction of logfiles in the /sourcedata directory (MSIT and SRT) for sub-01 to sub-03

    02/2025 (v1.0.3): corrections to REST files for subjects sub-20 and sub-23 (EEG and .tsv files) – corrected marker names and removed redundant markers

  12. EEG 128E example dataset ANT CNT

    • figshare.com
    Updated May 3, 2020
    Cite
    Benedikt Ehinger (2020). EEG 128E example dataset ANT CNT [Dataset]. http://doi.org/10.6084/m9.figshare.12236966.v2
    Available download formats: bin
    Dataset updated
    May 3, 2020
    Dataset provided by
    figshare
    Authors
    Benedikt Ehinger
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A raw EEG dataset contaminated with a 16 2/3 Hz artefact.
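
    A hedged sketch of suppressing that artefact with a notch filter at 16 2/3 Hz and its harmonics, assuming the .cnt file can be loaded into an MNE Raw object (the ANT Neuro reader's availability depends on the MNE version):

```python
# Sketch: notch out the 16 2/3 Hz interference and its harmonics.
import numpy as np
import mne

# Reader for ANT Neuro .cnt files; availability varies by MNE release.
raw = mne.io.read_raw_ant("example_128ch.cnt", preload=True)
raw.notch_filter(freqs=np.arange(1, 4) * (50.0 / 3.0))  # 16.67, 33.33, 50 Hz
```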

  13. Ultra high-density 255-channel EEG-AAD dataset

    • data.niaid.nih.gov
    Updated Jun 13, 2024
    Cite
    Bertrand, Alexander (2024). Ultra high-density 255-channel EEG-AAD dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4518753
    Dataset updated
    Jun 13, 2024
    Dataset provided by
    Zink, Rob
    Bertrand, Alexander
    Mundanad Narayanan, Abhijith
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    If using this dataset, please cite the current Zenodo repository and the following paper: A. Mundanad Narayanan, R. Zink, and A. Bertrand, "EEG miniaturization limits for stimulus decoding with EEG sensor networks", Journal of Neural Engineering, vol. 18, 2021, doi: 10.1088/1741-2552/ac2629

    Experiment

    This dataset contains 255-channel electroencephalography (EEG) data collected during an auditory attention decoding (AAD) experiment. The EEG was recorded using a SynAmps RT device (Compumedics, Australia) at a sampling rate of 1 kHz, using active Ag/Cl electrodes. The electrodes were placed on the head according to the international 10-5 (5%) system. 30 normal-hearing male subjects between 22 and 35 years old participated in the experiment. All of them signed an informed consent form approved by the KU Leuven ethical committee.

    Two Dutch stories narrated by different male speakers, each divided into two parts of 6 minutes, were used as the stimuli in the experiment [1]. A single trial of the experiment involved the presentation of two parts (one from each story) to the subject through insert phones (Etymotic ER3A) at 60 dBA. These speech stimuli were filtered using a head-related transfer function (HRTF) such that the stories seemed to arrive from two distinct spatial locations, namely left and right with respect to the subject, with 180 degrees separation. In each trial, the subjects were asked to attend to only one ear while ignoring the other. Four trials of 6 minutes each were carried out, in which each story part was used twice. The order of presentations was randomized and balanced over different subjects. Thus approximately 24 minutes of EEG data was recorded per subject.

    File organization and details

    The EEG data of each of the 30 subjects are uploaded as an archive named Sx.tar.gzip, where x = 0, 1, 2, ..., 29. When an archive is extracted, the EEG data are in their original raw format as recorded by the CURRY software [2]. Each recording consists of four files with the same name but different extensions, namely .dat, .dap, .rs3 and .ceo. Each file name follows the convention Sx_AAD_P, with P taking one of the following values: 1L, 1R, 2L, 2R.

    The letter 'L' or 'R' in P indicates the attended direction of each subject in a recording: left or right, respectively. A MATLAB function to read the data is provided in the directory called scripts. A Python function to read the files is available in the GitHub repository [3]. The original version of the stimuli presented to subjects, i.e. without the HRTF filtering, can be found after extracting the stimuli.zip file in WAV format. There are 4 WAV files corresponding to the two parts of each of the two stories, sampled at 44.1 kHz. The order of presentation of these WAV files is given in the table below.

    Stimuli presentation and attention information of files:

    Trial (P) | Left-ear stimulus | Right-ear stimulus | Attention
    1L | part1_track1_dry | part1_track2_dry | Left
    1R | part1_track1_dry | part1_track2_dry | Right
    2L | part2_track2_dry | part2_track1_dry | Left
    2R | part2_track2_dry | part2_track1_dry | Right

    Additional files (after extracting scripts.zip and misc.zip):

    scripts/sample_script.m: Demonstrates reading an EEG-AAD recording and extracting the start and end of the experiment.

    misc/channel-layout.jpeg: The 255-channel EEG cap layout

    misc/eeg255ch_locs.csv: The channel names, numbers and their spherical (theta and phi) scalp coordinates.
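
    A small sketch of unpacking one subject archive and reading the channel locations, using the standard library and pandas (paths follow the naming described above):

```python
# Sketch: extract subject 0's archive and load the 255-channel layout.
import tarfile
import pandas as pd

with tarfile.open("S0.tar.gzip", "r:gz") as tar:
    tar.extractall("S0")  # yields S0_AAD_{1L,1R,2L,2R}.{dat,dap,rs3,ceo}

locs = pd.read_csv("misc/eeg255ch_locs.csv")  # names + spherical coordinates
print(locs.head())
```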

    [1] Radioboeken voor kinderen, http://radioboeken.eu/kinderradioboeken.php?lang=NL, 2007 (Accessed: 8 Feb 2021)

    [2] CURRY 8 X – Data Acquisition and Online Processing, https://compumedicsneuroscan.com/product/curry-data-acquisition-online-processing-x/ (Accessed: 8 Feb 2021)

    [3] Abhijith Mundanad Narayanan, "EEG analysis in python", 2021. https://github.com/mabhijithn/eeg-analyse (Accessed: 8 Feb 2021)

  14. Raw EEG Data Files

    • dataverse.tdl.org
    Updated Jun 6, 2024
    Cite
    Logan Trujillo; Logan Trujillo (2024). Raw EEG Data Files [Dataset]. http://doi.org/10.18738/T8/9TTLK8
    Available download formats: txt (34 files, approx. 28-90 MB each)
    Dataset updated
    Jun 6, 2024
    Dataset provided by
    Texas Data Repository
    Authors
    Logan Trujillo; Logan Trujillo
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the raw EEG data for the study. Data is in BioSemi Data Format (BDF). Files with only "II" in the file name were recorded during the reported 1-Exemplar categorization task; "RB-II" files were recorded during the reported 2-Exemplar categorization task. "Resting" files were recorded during wakeful resting state.
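
    A minimal sketch of opening one of these files with MNE-Python; the file name is hypothetical, following the naming scheme above:

```python
# Sketch: read a BDF recording from the 1-Exemplar ("II") task.
import mne

raw = mne.io.read_raw_bdf("subject01_II.bdf", preload=True)
print(raw.info["sfreq"], len(raw.ch_names))
```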

  15. Video-EEG Encoding-Decoding Dataset KU Leuven

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 24, 2025
    Cite
    Stebner, Axel (2025). Video-EEG Encoding-Decoding Dataset KU Leuven [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10512413
    Dataset updated
    Feb 24, 2025
    Dataset provided by
    Tuytelaars, Tinne
    Geirnaert, Simon
    Yao, Yuanyuan
    Stebner, Axel
    Bertrand, Alexander
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Leuven
    Description

    If using this dataset, please cite the following paper and the current Zenodo repository.

    This dataset is described in detail in the following paper:

    [1] Yao, Y., Stebner, A., Tuytelaars, T., Geirnaert, S., & Bertrand, A. (2024). Identifying temporal correlations between natural single-shot videos and EEG signals. Journal of Neural Engineering, 21(1), 016018. doi:10.1088/1741-2552/ad2333

    The associated code is available at: https://github.com/YYao-42/Identifying-Temporal-Correlations-Between-Natural-Single-shot-Videos-and-EEG-Signals?tab=readme-ov-file

    Introduction

    The research work leading to this dataset was conducted at the Department of Electrical Engineering (ESAT), KU Leuven.

    This dataset contains electroencephalogram (EEG) data collected from 19 young participants with normal or corrected-to-normal eyesight when they were watching a series of carefully selected YouTube videos. The videos were muted to avoid the confounds introduced by audio. For synchronization, a square box was encoded outside of the original frames and flashed every 30 seconds in the top right corner of the screen. A photosensor, detecting the light changes from this flashing box, was affixed to that region using black tape to ensure that the box did not distract participants. The EEG data was recorded using a BioSemi ActiveTwo system at a sample rate of 2048 Hz. Participants wore a 64-channel EEG cap, and 4 electrooculogram (EOG) sensors were positioned around the eyes to track eye movements.

    The dataset includes a total of (19 subjects x 63 min + 9 subjects x 24 min) of data. Further details can be found in the following section.

    Content

    YouTube Videos: Due to copyright constraints, the dataset includes links to the original YouTube videos along with precise timestamps for the segments used in the experiments. The features proposed in [1] have been extracted and can be downloaded here: https://drive.google.com/file/d/1J1tYrxVizrl1xP-W1imvlA_v-DPzZ2Qh/view?usp=sharing.

    Raw EEG Data: Organized by subject ID, the dataset contains EEG segments corresponding to the presented videos. Both EEGLAB .set files (containing metadata) and .fdt files (containing raw data) are provided, which can also be read by popular EEG analysis Python packages such as MNE.

    The naming convention links each EEG segment to its corresponding video. E.g., the EEG segment 01_eeg corresponds to video 01_Dance_1, 03_eeg corresponds to video 03_Acrob_1, Mr_eeg corresponds to video Mr_Bean, etc.

    The raw data have 68 channels. The first 64 channels are EEG data, and the last 4 channels are EOG data. The position coordinates of the standard BioSemi headcaps can be downloaded here: https://www.biosemi.com/download/Cap_coords_all.xls.

    Due to minor synchronization ambiguities, different clocks in the PC and EEG recorder, and missing or extra video frames during video playback (which rarely occurred), the length of the EEG data may not perfectly match the corresponding video data. The difference, typically within a few milliseconds, can be resolved by truncating the modality with the excess samples, as sketched below.
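
    A minimal sketch of that truncation fix (variable contents are stand-ins; in practice the EEG and video features would first be resampled to a common rate):

```python
# Sketch: trim both modalities to a common length before correlating them.
import numpy as np

def align_lengths(eeg: np.ndarray, video: np.ndarray):
    """Truncate the longer modality so both span the same samples."""
    n = min(len(eeg), len(video))
    return eeg[:n], video[:n]

eeg = np.random.randn(100_000, 64)    # stand-in EEG segment (samples x channels)
video = np.random.randn(99_998, 10)   # stand-in per-frame video features
eeg, video = align_lengths(eeg, video)
```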

    Signal Quality Information: A supplementary .txt file detailing potential bad channels. Users can opt to create their own criteria for identifying and handling bad channels.

    The dataset is divided into two subsets: Single-shot and MrBean, based on the characteristics of the video stimuli.

    Single-shot Dataset

    The stimuli of this dataset consist of 13 single-shot videos (63 min in total), each depicting a single individual engaging in various activities such as dancing, mime, acrobatics, and magic shows. All the participants watched this video collection.

    Video ID Link Start time (s) End time (s)

    01_Dance_1 https://youtu.be/uOUVE5rGmhM 8.54 231.20

    03_Acrob_1 https://youtu.be/DjihbYg6F2Y 4.24 231.91

    04_Magic_1 https://youtu.be/CvzMqIQLiXE 3.68 348.17

    05_Dance_2 https://youtu.be/f4DZp0OEkK4 5.05 227.99

    06_Mime_2 https://youtu.be/u9wJUTnBdrs 5.79 347.05

    07_Acrob_2 https://youtu.be/kRqdxGPLajs 183.61 519.27

    08_Magic_2 https://youtu.be/FUv-Q6EgEFI 3.36 270.62

    09_Dance_3 https://youtu.be/LXO-jKksQkM 5.61 294.17

    12_Magic_3 https://youtu.be/S84AoWdTq3E 1.76 426.36

    13_Dance_4 https://youtu.be/0wc60tA1klw 14.28 217.18

    14_Mime_3 https://youtu.be/0Ala3ypPM3M 21.87 386.84

    15_Dance_5 https://youtu.be/mg6-SnUl0A0 15.14 233.85

    16_Mime_6 https://youtu.be/8V7rhAJF6Gc 31.64 388.61

    MrBean Dataset

    Additionally, 9 participants watched an extra 24-minute clip from the first episode of Mr. Bean, where multiple (moving) objects may exist and interact, and the camera viewpoint may change. The subject IDs and the signal quality files are inherited from the single-shot dataset.

    Video ID Link Start time (s) End time (s)

    Mr_Bean https://www.youtube.com/watch?v=7Im2I6STbms 39.77 1495.00

    Acknowledgement

    This research is funded by the Research Foundation - Flanders (FWO) project No G081722N, junior postdoctoral fellowship fundamental research of the FWO (for S. Geirnaert, No. 1242524N), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 802895), the Flemish Government (AI Research Program), and the PDM mandate from KU Leuven (for S. Geirnaert, No PDMT1/22/009).

    We also thank the participants for their time and effort in the experiments.

    Contact Information

    Executive researcher: Yuanyuan Yao, yuanyuan.yao@kuleuven.be

    Led by: Prof. Alexander Bertrand, alexander.bertrand@kuleuven.be

  16. MEG-BIDS Brainstorm data sample

    • openneuro.org
    Updated Apr 23, 2024
    Cite
    Elizabeth Bock; Peter Donhauser; Francois Tadel; Guiomar Niso; Sylvain Baillet (2024). MEG-BIDS Brainstorm data sample [Dataset]. http://doi.org/10.18112/openneuro.ds000246.v1.0.1
    Dataset updated
    Apr 23, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Elizabeth Bock; Peter Donhauser; Francois Tadel; Guiomar Niso; Sylvain Baillet
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Brainstorm - Auditory Dataset

    License

    This dataset (MEG and MRI data) was collected by the MEG Unit Lab, McConnell Brain Imaging Center, Montreal Neurological Institute, McGill University, Canada. The original purpose was to serve as a tutorial data example for the Brainstorm software project (http://neuroimage.usc.edu/brainstorm). It is presently released in the Public Domain, and is not subject to copyright in any jurisdiction.

    We would appreciate though that you reference this dataset in your publications: please acknowledge its authors (Elizabeth Bock, Peter Donhauser, Francois Tadel and Sylvain Baillet) and cite the Brainstorm project seminal publication (also in open access): http://www.hindawi.com/journals/cin/2011/879716/

    Presentation of the experiment

    Experiment

    • One subject, two acquisition runs of 6 minutes each
    • Subject stimulated binaurally with intra-aural earphones (air tubes+transducers)
    • Each run contains:
      • 200 regular beeps (440Hz)
      • 40 easy deviant beeps (554.4Hz, 4 semitones higher)
    • Random inter-stimulus interval: between 0.7 s and 1.7 s, uniformly distributed
    • The subject presses a button when detecting a deviant with the right index finger
    • Auditory stimuli generated with the Matlab Psychophysics toolbox
    • The specifications of this dataset were discussed initially on the FieldTrip bug tracker

    MEG acquisition

    • Acquisition at 2400Hz, with a CTF 275 system, subject in seating position
    • Recorded at the Montreal Neurological Institute in December 2013
    • Anti-aliasing low-pass filter at 600Hz, files saved with the 3rd order gradient
    • Recorded channels (340):
      • 1 Stim channel indicating the presentation times of the audio stimuli: UPPT001 (#1)
      • 1 Audio signal sent to the subject: UADC001 (#316)
      • 1 Response channel recording the finger taps in response to the deviants: UDIO001 (#2)
      • 26 MEG reference sensors (#5-#30)
      • 274 MEG axial gradiometers (#31-#304)
      • 2 EEG electrodes: Cz, Pz (#305 and #306)
      • 1 ECG bipolar (#307)
      • 2 EOG bipolar (vertical #308, horizontal #309)
      • 12 Head tracking channels: Nasion XYZ, Left XYZ, Right XYZ, Error N/L/R (#317-#328)
      • 20 Unused channels (#3, #4, #310-#315, #329-#340)
    • 3 datasets:

      • S01_AEF_20131218_01.ds: Run #1, 360s, 200 standard + 40 deviants

      • S01_AEF_20131218_02.ds: Run #2, 360s, 200 standard + 40 deviants

      • S01_Noise_20131218_01.ds: Empty room recordings, 30s long

      • File name: S01=Subject01, AEF=Auditory evoked field, 20131218=date(Dec 18 2013), 01=run

    • Use of the .ds format, not the AUX format (standard at the MNI), because .ds files are easier to manipulate in FieldTrip

    Stimulation delays

    • Delay #1: Production of the sound.
      Between the stim markers (channel UDIO001) and the moment where the sound card plays the sound (channel UADC001). This is mostly due to the software running on the computer (stimulation software, operating system, sound card drivers, sound card electronics). The delay can be measured from the recorded files by comparing the triggers in the two channels: delay between 11.5 ms and 12.8 ms (std = 0.3 ms). This delay is not constant, so we will need to correct for it.
    • Delay #2: Transmission of the sound.
      Between when the sound card plays the sound and when the subject receives the sound in the ears. This is the time it takes for the transducer to convert the analog audio signal into a sound, plus the time it takes the sound to travel through the air tubes from the transducer to the subject's ears. This delay cannot be estimated from the recorded signals: before the acquisition, we placed a sound meter at the extremity of the tubes to record when the sound was delivered. Delay between 4.8ms and 5.0ms (std = 0.08ms). At a sampling rate of 2400Hz, this delay can be considered constant, so we will not compensate for it.
    • Delay #3: Recording of the signals.
      The CTF MEG systems have a constant delay of 4 samples between the MEG/EEG channels and the analog channels (such as the audio signal UADC001), because an anti-aliasing filter is applied to the former but not the latter. This translates here to a constant delay of 1.7ms.
    • Delay #4: Over-compensation of delay #1.
      When correcting for delay #1, the process we use to detect the beginning of the triggers on the audio signal (UADC001) sets the trigger in the middle of the ramp between silence and the beep. We therefore "over-compensate" delay #1 by 1.7ms. This can be considered a constant delay of about -1.7ms.
    • Uncorrected delays: We will correct for delay #1 and keep the other delays (#2, #3 and #4). After we compensate for delay #1, our MEG signals will have a constant delay of about 4.9 + 1.7 - 1.7 = 4.9 ms. We decided not to compensate for these delays because they do not introduce any jitter in the responses and will not change anything in the interpretation of the data. A sketch of the delay #1 correction is given below.
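
    For reference, the correction of delay #1 could be sketched as follows with MNE-Python: each trigger is re-aligned to the measured sound onset on the audio channel. The CTF reader and event functions are standard MNE calls, but the onset threshold and the 50ms search window are illustrative assumptions, not the exact Brainstorm procedure.

    import numpy as np
    import mne

    raw = mne.io.read_raw_ctf("S01_AEF_20131218_01.ds", preload=True)

    # Stimulus triggers, read from the stim channel listed above (UPPT001).
    events = mne.find_events(raw, stim_channel="UPPT001")

    # Audio waveform actually played to the subject.
    audio = raw.get_data(picks=["UADC001"])[0]
    window = int(0.05 * raw.info["sfreq"])    # 50ms search window (assumption)
    threshold = 0.5 * np.max(np.abs(audio))   # illustrative onset threshold

    corrected = events.copy()
    for i, sample in enumerate(events[:, 0]):
        # The first supra-threshold audio sample after the trigger marks the
        # sound onset; delay #1 is the 11.5-12.8ms gap in between.
        segment = np.abs(audio[sample:sample + window])
        corrected[i, 0] = sample + int(np.argmax(segment > threshold))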

    Head shape and fiducial points

    • 3D digitization using a Polhemus Fastrak device driven by Brainstorm (S01_20131218_*.pos)
    • More information: Digitize EEG electrodes and head shape
    • The output file is copied to each .ds folder and contains the following entries:

      • The position of the center of CTF coils
      • The position of the anatomical references we use in Brainstorm: Nasion and connections tragus/helix, as illustrated here.
    • Around 150 head points distributed on the hard parts of the head (no soft tissues)

    Subject anatomy

    • 1.5T MRI scan of the subject
    • Marker on the left cheek
    • Processed with FreeSurfer 5.3
  17. EEG data and psychometric results from children with learning difficulties

    • search.kg.ebrains.eu
    Updated May 28, 2024
    + more versions
    Cite
    César E. Corona-González; Claudia Rebeca De Stefano Ramos; Moramay Ramos-Flores; Luz María Alonso-Valerdi; David I. Ibarra-Zarate; Fabiola R. Gómez-Velázquez (2024). EEG data and psychometric results from children with learning difficulties [Dataset]. http://doi.org/10.17632/bnfgrn5jbb.4
    Explore at:
    Dataset updated
    May 28, 2024
    Authors
    César E. Corona-González; Claudia Rebeca De Stefano Ramos; Moramay Ramos-Flores; Luz María Alonso-Valerdi; David I. Ibarra-Zarate; Fabiola R. Gómez-Velázquez
    Description

    1. Objective

    To perform a psychophysiological evaluation of reading and mathematical skills in children with low academic performance.

    2. Sample

    One hundred and four children participated in this study. Each child was allocated into one of the following groups:

    a) Reading difficulties (RD): 54 Children with low academic performance in reading.

    b) Math difficulties (MD): 50 Children with low academic performance in math.

    2.1. Sample selection criteria

    • Children aged between 7 and 13 years.

    • No diagnosis of any neurological disorder.

    • Ability to read a simple text and solve arithmetic facts.

    • Any gender and socioeconomic level.

    3. Methods

    Data collection was carried out across two stages:

    I) Psychometric evaluation, where reading, spelling, math, attention levels, and IQ were assessed. These results determined which academic skill was the most affected.

    II) EEG acquisition, using two experimental paradigms. For children in the RD group, three passages were displayed and read out loud; afterwards, three multiple-choice reading comprehension questions were presented. For children in the MD group, forty arithmetic facts were displayed in two blocks (20 each); each operation was answered by choosing one of three displayed options.

    4. Data description.

    4.1. EEG recordings.

    • Baseline: a cross was displayed on the screen, and a 3-min recording was taken for each child.

    • Reading: children read aloud three texts while the EEG was being recorded. Then, three comprehension questions were answered.

    • Math: each child performed two blocks of 20 operations.

    4.2. EEG events.

    • Baseline: no events.

    • Reading: reading aloud (between 'condition' events); answers for reading comprehension questions ('33026' option 1; '33027' option 2; '33028' option 3).

    • Math: block 1 (from '33101' to '33120'); block 2 (from '33121' to '33140'); answers ('33026' option 1; '33027' option 2; '33028' option 3). A sketch of selecting these events follows below.
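
    Assuming the recordings can be opened with MNE-Python (the exact recording format is not stated here, so the .set file name below is hypothetical), the math-block events could be isolated along these lines:

    import mne

    # Hypothetical file name; channel locations ship as 32Ch_gTec.ced.
    raw = mne.io.read_raw_eeglab("MD_subject01.set", preload=True)
    events, event_id = mne.events_from_annotations(raw)

    # Block 1 operations carry the tags '33101' to '33120' (see above).
    block1 = {tag: code for tag, code in event_id.items()
              if tag.isdigit() and 33101 <= int(tag) <= 33120}
    epochs = mne.Epochs(raw, events, event_id=block1, tmin=-0.2, tmax=1.0)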

    4.3. Files attached

    • 32Ch_gTec.ced/.txt - Channel location.

    • Arithmetical Facts and Answers.xlsx: event tags within EEG data, including each operation and answer from the MD group.

    • Comprehension Questions and Answers.xlsx: includes the event tags for each answer to the comprehension questions in the RD group. In addition, this file describes the sequence of appearance of the questions.

    • EEG files.xlsx: lists the EEG files available in the database, as well as missing EEG data.

    • Psychometric Variables.xlsx: description of each psychometric variable and categories associated with each numerical value.

    • Psychometric_Math.xlsx: psychometric results for children in the math group.

    • Psychometric_Reading.xlsx: psychometric results for children in the reading group.

  18. EEG driver drowsiness dataset (unbalanced)

    • figshare.com
    bin
    Updated Sep 8, 2021
    + more versions
    Cite
    Jian Cui (2021). EEG driver drowsiness dataset (unbalanced) [Dataset]. http://doi.org/10.6084/m9.figshare.16586957.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Sep 8, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jian Cui
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains EEG signals from 11 subjects with labels of alert and drowsy. It can be opened with Matlab. We extracted the data for our own research purposes from another public dataset: Cao, Z., et al., Multi-channel EEG recordings during a sustained-attention driving task. Scientific Data, 2019. 6(1): p. 1-8. If you find the dataset useful, please give credit to their work.

    The full version of the pre-processed dataset from the original authors is accessible at: https://figshare.com/articles/dataset/Multi-channel_EEG_recordings_during_a_sustained-attention_driving_task_preprocessed_dataset_/7666055

    This dataset is the unbalanced version of our previous dataset: https://figshare.com/articles/dataset/EEG_driver_drowsiness_dataset/14273687. The difference from the previous version is that the samples have not been balanced for each subject. The details of how the data were extracted are described in our paper (except for step 3 on page 3): "Jian Cui, Zirui Lan, Yisi Liu, Ruilin Li, Fan Li, Olga Sourina, Wolfgang Müller-Wittig, A Compact and Interpretable Convolutional Neural Network for Cross-Subject Driver Drowsiness Detection from Single-Channel EEG, Methods, 2021, ISSN 1046-2023, https://doi.org/10.1016/j.ymeth.2021.04.017."

    The data file contains 3 variables: EEGsample, substate and subindex.

    "EEGsample" contains 2952 EEG samples of size 20x384 from 11 subjects. Each sample is 3s of EEG data sampled at 128Hz from 30 EEG channels.

    "subindex" is an array of 2952x1. It contains the subject indexes from 1-11 corresponding to each EEG sample.

    "substate" is an array of 2952x1. It contains the labels of the samples: 0 corresponds to the alert state and 1 corresponds to the drowsy state.
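
    Since the file opens in Matlab, it can equally be read from Python; below is a minimal sketch with SciPy, where the file name is hypothetical:

    from scipy.io import loadmat

    mat = loadmat("EEG_driver_drowsiness_unbalanced.mat")  # hypothetical name
    X = mat["EEGsample"]               # one 3s EEG window per sample
    y = mat["substate"].ravel()        # 0 = alert, 1 = drowsy
    subj = mat["subindex"].ravel()     # subject indexes 1-11

    # Example: select all drowsy samples from subject 3.
    drowsy_s3 = X[(y == 1) & (subj == 3)]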

  19. sample eeg

    • kaggle.com
    Updated Oct 24, 2024
    Cite
    Gayathri Rajavelu (2024). sample eeg [Dataset]. https://www.kaggle.com/datasets/gayathrirajavelu/sample-eeg/discussion
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 24, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Gayathri Rajavelu
    Description

    Dataset

    This dataset was created by Gayathri Rajavelu

    Contents

  20. CHB-MIT Scalp EEG Database

    • physionet.org
    Updated Jun 9, 2010
    Cite
    John Guttag (2010). CHB-MIT Scalp EEG Database [Dataset]. http://doi.org/10.13026/C2K01R
    Explore at:
    Dataset updated
    Jun 9, 2010
    Authors
    John Guttag
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    This database, collected at the Children’s Hospital Boston, consists of EEG recordings from pediatric subjects with intractable seizures. Subjects were monitored for up to several days following withdrawal of anti-seizure medication in order to characterize their seizures and assess their candidacy for surgical intervention. The recordings are grouped into 23 cases and were collected from 22 subjects (5 males, ages 3–22; and 17 females, ages 1.5–19).
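
    The records are distributed as EDF files, so a single case can be inspected with MNE-Python's EDF reader (the file path below is illustrative):

    import mne

    raw = mne.io.read_raw_edf("chb01/chb01_03.edf", preload=False)
    print(raw.info["sfreq"], len(raw.ch_names))  # sampling rate and channel count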
