13 datasets found
  1. Raw EEG data for the experiment reported in "Understanding the effects of constraint and predictability in ERP"

    • data.niaid.nih.gov
    Updated Aug 19, 2022
    Cite
    Stone, Kate; Nicenboim, Bruno; Vasishth, Shravan; Rösler, Frank (2022). Raw EEG data for the experiment reported in "Understanding the effects of constraint and predictability in ERP" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6992084
    Dataset updated
    Aug 19, 2022
    Dataset provided by
    University of Hamburg
    University of Potsdam
    Tilburg University
    Authors
    Stone, Kate; Nicenboim, Bruno; Vasishth, Shravan; Rösler, Frank
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains the raw EEG data for the above-named paper. Preprocessing scripts are stored at: https://osf.io/fndk5/

    The raw EEG data are the files *.eeg, *.vmrk and *.vhdr (BrainVision format EEG data). The numeric prefix indicates the participant ID. All three files must be stored in the same directory to work with the preprocessing script. Individual participant log files from the experimental presentation paradigm are stored in the zipped subdirectory opensesame_logs.zip. To work with the preprocessing script, these must be unzipped into a folder called opensesame_logs, stored in the folder containing the raw EEG files.
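
    For orientation, here is a minimal sketch of loading one participant's BrainVision triplet with MNE-Python; the file name below is a hypothetical example (the numeric prefix is the participant ID), and the actual preprocessing pipeline is in the OSF scripts linked above.

        import mne

        # MNE locates the matching .eeg and .vmrk files from the .vhdr header,
        # so all three files must sit in the same directory (as noted above).
        raw = mne.io.read_raw_brainvision("01_experiment.vhdr", preload=True)
        print(raw.info)  # channels, sampling rate, etc.

        # Stimulus markers from the .vmrk file become annotations; convert them to events.
        events, event_id = mne.events_from_annotations(raw)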

    A repository of intermediate preprocessing files based on the raw data is at https://zenodo.org/record/7002697.

    Note that the following raw files are included in the dataset for transparency but were not preprocessed for the final analysis: subject 15 due to a recording software crash mid-experiment, and subjects 40:43 as their EEG data were corrupted.

  2. EEG and EMG dataset for the detection of errors introduced by an active...

    • zenodo.org
    txt, zip
    Updated Dec 1, 2023
    + more versions
    Cite
    Niklas Kueper; Kartik Chari; Judith Bütefür; Julia Habenicht; Tobias Rossol; Su Kyoung Kim; Marc Tabie; Frank Kirchner; Elsa Andrea Kirchner (2023). EEG and EMG dataset for the detection of errors introduced by an active orthosis device (IJCAI Competition) [Dataset]. http://doi.org/10.5281/zenodo.7966275
    Dataset updated
    Dec 1, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Niklas Kueper; Kartik Chari; Judith Bütefür; Julia Habenicht; Tobias Rossol; Su Kyoung Kim; Marc Tabie; Frank Kirchner; Elsa Andrea Kirchner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the IJCAI 2023 competition CC6: IntEr-HRI: Intrinsic Error Evaluation during Human-Robot Interaction (see the IJCAI'23 official website). This dataset repository is divided into 2 versions:

    • Version 1: Training data + Metadata
    • Version 2: Test data

    N.B.: After conducting a small survey to determine the willingness of the participating teams to travel to Macao, it became evident that a significant number of them preferred not to travel. With this in mind, we have decided to modify the initial plan for the online stage of the competition so that participating teams can take part from anywhere. We hope that this motivates more teams to participate. For more detailed information, please visit our competition webpage.

    Although the registration for the offline stage is officially closed, if you still wish to participate, please reach out to us via the contact form available on our webpage.

    This dataset contains recordings of the electroencephalogram (EEG) data from eight subjects who were assisted in moving their right arm by an active orthosis. It is only part of the complete dataset, which also contains electromyogram (EMG) data; the complete dataset will be made public after the end of the competition.

    The orthosis-supported movements were elbow joint movements, i.e., flexion and extension of the right arm. While the orthosis was actively moving the subject's arm, some errors were deliberately introduced for a short duration of time. During this time, the orthosis moved in the opposite direction. The errors are very simple and easy to detect. EEG and EMG data are provided. The recorded EEG data follows the BrainVision Core Data Format 1.0, consisting of a binary data file (.eeg), a header file (.vhdr), and a marker file (.vmrk) (https://www.brainproducts.com/support-resources/brainvision-core-data-format-1-0/). For ease of use, the data can be exported into the widely adopted BIDS format. Furthermore, for data analysis, processing, and classification, two popular options are available - MNE (Python) and EEGLAB (MATLAB).
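
    As one illustration of that BIDS export path, here is a minimal MNE-BIDS sketch; the file name, subject label, task name, and power-line frequency are hypothetical placeholders, not values taken from this dataset.

        import mne
        from mne_bids import BIDSPath, write_raw_bids

        # Read one BrainVision recording lazily (hypothetical file name).
        raw = mne.io.read_raw_brainvision("subject01_orthosis.vhdr")
        raw.info["line_freq"] = 50  # power-line frequency expected in the BIDS sidecar (assumed value)

        # Describe where the recording should live inside a BIDS tree and write it out.
        bids_path = BIDSPath(subject="01", task="orthosis", root="bids_dataset")
        write_raw_bids(raw, bids_path, overwrite=True)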

    If you use our dataset, please cite our paper.

    arXiv-issued DOI: https://doi.org/10.48550/arXiv.2305.11996

    BibTeX citation:

    @misc{kueper2023eeg,
    title={EEG and EMG dataset for the detection of errors introduced by an active orthosis device},
    author={Niklas Kueper and Kartik Chari and Judith Bütefür and Julia Habenicht and Su Kyoung Kim and Tobias Rossol and Marc Tabie and Frank Kirchner and Elsa Andrea Kirchner},
    year={2023},
    eprint={2305.11996},
    archivePrefix={arXiv},
    primaryClass={cs.HC}
    }

  3. EEG of Alzheimer's and Frontotemporal dementia

    • kaggle.com
    zip
    Updated Jan 28, 2024
    + more versions
    Cite
    yosf tag (2024). EEG of Alzheimer's and Frontotemporal dementia [Dataset]. https://www.kaggle.com/datasets/yosftag/open-nuro-dataset
    Available download formats: zip (4479288286 bytes)
    Dataset updated
    Jan 28, 2024
    Authors
    yosf tag
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset contains resting-state, eyes-closed EEG recordings from 88 subjects in total. Participants: 36 were diagnosed with Alzheimer's disease (AD group), 23 were diagnosed with Frontotemporal Dementia (FTD group), and 29 were healthy subjects (CN group). Cognitive and neuropsychological state was evaluated by the international Mini-Mental State Examination (MMSE). The MMSE score ranges from 0 to 30, with lower MMSE indicating more severe cognitive decline. The duration of the disease was measured in months; the median value was 25 months with an IQR (Q1-Q3) of 24-28.5 months. Concerning the AD group, no dementia-related comorbidities have been reported. The average MMSE for the AD group was 17.75 (sd = 4.5), for the FTD group 22.17 (sd = 8.22), and for the CN group 30. The mean age of the AD group was 66.4 (sd = 7.9), of the FTD group 63.6 (sd = 8.2), and of the CN group 67.9 (sd = 5.4).

    Recordings: Recordings were acquired from the 2nd Department of Neurology of AHEPA General Hospital of Thessaloniki by an experienced team of neurologists. For recording, a Nihon Kohden EEG 2100 clinical device was used, with 19 scalp electrodes (Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, and O2) according to the 10-20 international system and 2 reference electrodes (A1 and A2) placed on the mastoids for impedance checks, according to the manual of the device. Each recording was performed according to the clinical protocol with participants in a sitting position with their eyes closed. Before the initialization of each recording, the skin impedance value was ensured to be below 5 kΩ. The sampling rate was 500 Hz with 10 µV/mm resolution. The recording montages were anterior-posterior bipolar and a referential montage using Cz as the common reference. The referential montage was included in this dataset. The recordings were received under the following amplifier parameters: sensitivity 10 µV/mm, time constant 0.3 s, and high-frequency filter at 70 Hz. Each recording lasted approximately 13.5 minutes for the AD group (min = 5.1, max = 21.3), 12 minutes for the FTD group (min = 7.9, max = 16.9), and 13.8 minutes for the CN group (min = 12.5, max = 16.5). In total, 485.5 minutes of AD, 276.5 minutes of FTD, and 402 minutes of CN recordings were collected and are included in the dataset.

    Preprocessing: The EEG recordings were exported in .eeg format and were transformed to the BIDS-accepted .set format for inclusion in the dataset. Automatic annotations of the Nihon Kohden EEG device marking artifacts (muscle activity, blinking, swallowing) have not been included for language compatibility purposes (if this is an issue, please use the preprocessed dataset in the folder: derivatives). The unprocessed EEG recordings are included in folders named sub-0XX. Folders named sub-0XX in the subfolder derivatives contain the preprocessed and denoised EEG recordings. The preprocessing pipeline of the EEG signals is as follows. First, a Butterworth band-pass filter of 0.5-45 Hz was applied and the signals were re-referenced to A1-A2. Then, the Artifact Subspace Reconstruction (ASR) routine, an EEG artifact correction method included in the EEGLab MATLAB software, was applied to the signals, removing bad data periods that exceeded the maximum acceptable 0.5-second-window standard deviation of 17, which is considered a conservative setting. Next, the Independent Component Analysis (ICA) method (RunICA algorithm) was performed, transforming the 19 EEG signals into 19 ICA components. ICA components that were classified as "eye artifacts" or "jaw artifacts" by the automatic classification routine "ICLabel" in the EEGLAB platform were automatically rejected. It should be noted that, even though the recording was performed in a resting-state, eyes-closed condition, eye artifacts from eye movements were still found in some EEG recordings.
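
    The first two steps of that pipeline (band-pass filtering and re-referencing) can be approximated in MNE-Python as sketched below. This is only a sketch, since the ASR and ICLabel stages described above were run in EEGLAB, and the file name and the presence of A1/A2 channels in the exported file are assumptions.

        import mne

        # Load one unprocessed recording exported to EEGLAB .set format (hypothetical file name).
        raw = mne.io.read_raw_eeglab("sub-001_task-eyesclosed_eeg.set", preload=True)

        # Band-pass 0.5-45 Hz with an IIR (Butterworth-type) filter, as in the described pipeline.
        raw.filter(l_freq=0.5, h_freq=45.0, method="iir")

        # Re-reference to the mastoid electrodes A1 and A2, assuming they are present in the file.
        raw.set_eeg_reference(ref_channels=["A1", "A2"])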

    A complete analysis of this dataset can be found in the published Data Descriptor paper "A Dataset of Scalp EEG Recordings of Alzheimer's Disease, Frontotemporal Dementia and Healthy Subjects from Routine EEG", https://doi.org/10.3390/data8060095. Note: I am not the original creator of this dataset; it was published on https://openneuro.org/datasets/ds004504/versions/1.0.6, and I ported it here for ease of use.

  4. Raw EEG Data for: Learning from Label Proportions in Brain-Computer Interfaces

    • data.niaid.nih.gov
    Updated Jan 28, 2022
    Cite
    David Hübner (2022). Raw EEG Data for: Learning from Label Proportions in Brain-Computer Interfaces [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5831825
    Dataset updated
    Jan 28, 2022
    Dataset provided by
    Universität Freiburg
    Authors
    David Hübner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    If you prefer to use the preprocessed and epoched data, please refer to: https://zenodo.org/record/192684

    Note that this repository contains only the visual paradigm with the N=13 subjects recorded at 31 EEG channels, as described in the above link. We copied the relevant section of the description below:

    This data repository contains raw EEG of an EEG experiment utilizing visual event-related potentials (ERPs) with N=13 healthy subjects.

    The dataset is used and described in the following journal article:

    Hübner, D., Verhoeven, T., Schmid, K., Müller, K. R., Tangermann, M., & Kindermans, P. J. (2017). Learning from label proportions in brain-computer interfaces: online unsupervised learning with guarantees. PloS one, 12(4), e0175856.

    Please cite the above article when using the data.

    The data set with N=13 subjects differs from ordinary ERP datasets in that the train of stimuli to spell one character (68) is divided into repetitions of two interleaved sequences of length 8 and 18, respectively. We added '#' symbols to the spelling matrix which should never be attended by the subject and hence are non-targets by definition. The first, shorter sequence highlights only ordinary characters, while the second sequence also highlights '#' -- visual blank symbols. By construction, sequence 1 has a higher target ratio than sequence 2. These known, but different, target and non-target proportions are then used to reconstruct the target and non-target class means. This approach, which does not need explicit class labels, is termed Learning from Label Proportions (LLP). It can be used to decode brain signals without a prior calibration session. More details can be found in the article (see also the toy sketch below).
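
    To make the LLP idea concrete, here is a toy sketch of recovering the target and non-target class means from the two sequence means and their known target proportions; the proportion values and array sizes are illustrative placeholders, not the paradigm's actual numbers.

        import numpy as np

        # Illustrative target proportions of the two interleaved sequences (sequence 1 has the higher ratio).
        p1, p2 = 0.20, 0.10

        # Toy "true" class means (channels x time points), used only to simulate the observed sequence means.
        rng = np.random.default_rng(0)
        mu_target_true = rng.normal(size=(4, 10))
        mu_nontarget_true = rng.normal(size=(4, 10))

        # What LLP observes: each sequence mean is a known mixture of the two class means.
        mean_seq1 = p1 * mu_target_true + (1 - p1) * mu_nontarget_true
        mean_seq2 = p2 * mu_target_true + (1 - p2) * mu_nontarget_true

        # Invert the 2x2 mixture system; its determinant is p1 - p2, non-zero because the proportions differ.
        det = p1 - p2
        mu_target = ((1 - p2) * mean_seq1 - (1 - p1) * mean_seq2) / det
        mu_nontarget = (p1 * mean_seq2 - p2 * mean_seq1) / det

        assert np.allclose(mu_target, mu_target_true) and np.allclose(mu_nontarget, mu_nontarget_true)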

    In another study, the above data set was used to simulate a new unsupervised mixture approach which combines the mean estimation of the unsupervised expectation-maximization algorithm by Kindermans et al. (2012, PLoS One) with the means obtained with the LLP approach. This leads to an unsupervised solution for which the performance is as good as in the supervised scenario. Please find more details in the following article:

    Verhoeven, T., Hübner, D., Tangermann, M., Müller, K. R., Dambre, J., & Kindermans, P. J. (2017). Improving zero-training brain-computer interfaces by mixing model estimators. Journal of neural engineering, 14(3), 036021.

    The data was recorded with BrainVision recorder. A new file was recorded for every group of 7 characters. The .eeg file contains the RAW EEG data in the format as described in the .vhdr file. Events / stimuli markers are provided in the .vmrk files. Note that there is a wrapper available to use this data in MOABB here: TODO INSERT LINK

    The subjects had the task to spell a specific sentence with 63 letters. In the online experiment, this was repeated 3 times and each time the online unsupervised classifier was reset at the start of the sentence.

  5. Capacity for movement is an organisational principle in object...

    • openneuro.org
    Updated Nov 16, 2021
    + more versions
    Cite
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson (2021). Capacity for movement is an organisational principle in object representations: EEG data from Experiment 3 [Dataset]. http://doi.org/10.18112/openneuro.ds003887.v1.0.2
    Dataset updated
    Nov 16, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 3 (EEG: movement). Access the preprint here: https://psyarxiv.com/3x2qh/

    Abstract: The ability to perceive moving objects is crucial for survival and threat identification. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. The association between the ability to move and being alive is learned early in childhood, yet evidently not all moving objects are alive. Natural, non-agentive movement (e.g., in clouds, or fire) causes confusion in children and adults under time pressure. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistaken to be alive, and often classified as not moving. Movement explained significant variance in the EEG data, during both a classification task and passive viewing. These results highlight that capacity for movement is an important dimension in the structure of human visual object representations.

    In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "can move" or "still". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid (RSVP) streams of images, and pressed a button to indicate when the fixation cross changed colour.

    Contents of the dataset:
    - Raw EEG data is available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data is available in the derivatives folders in EEGlab (.set, .fdt) and cosmoMVPA dataset (.mat) format (see the loading sketch after this list). This experiment has 24 subjects.
    - Scripts for data analysis and running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate analyses.
    - Stimuli are also available (400 CC0 images).
    - Results of decoding analyses are available in the derivatives folder.
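
    If you prefer to inspect the pre-processed derivatives in Python rather than MATLAB, a minimal MNE sketch is shown below; the file path is a hypothetical example of the derivatives naming, not a verified path in this dataset.

        import mne

        # Pre-processed EEGLAB data (.set with a companion .fdt) can be read directly by MNE.
        # Use mne.read_epochs_eeglab instead if the derivative file turns out to be epoched.
        raw = mne.io.read_raw_eeglab("derivatives/sub-01/eeg/sub-01_task-classify_eeg.set", preload=True)
        print(raw.info["sfreq"], raw.ch_names[:5])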

    Further notes:

    Note that the code is designed to run analyses for data and its partner data (experiments 2 and 3 of the paper). Copies in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script)

    Further explanations of the code:

    1. Run pre-processing of EEG (analyse_EEG_preprocessing.m), and behavioural data (analyse_behavioural_EEG.m)
    2. Ensure that the MTurk data has been run (analyse_behavioural_MTurk.m), from the Experiment 1 folder.
    3. Run RSA (analyse_rsa.m; reliant on behavioural data and pre-processed EEG data), and run decoding (analyse_decoding.m; reliant on pre-processed EEG data)
    4. Run GLMs (analyse_glms.m; reliant on RSA, behavioural)

    If you only want to look at the results, the results of each of these analyses are already saved in the derivatives folder, so there is no need to run any of them again.

    Each file named plot_X.m will create a graph as in the paper. Each is reliant on saved data from the above analyses, which are saved in the derivatives folder.

    Citing this dataset: If using this data, please cite the associated paper.

    Contact: Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379

  6. Capacity for movement is an organisational principle in object...

    • openneuro.org
    Updated Nov 9, 2021
    + more versions
    Cite
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson (2021). Capacity for movement is an organisational principle in object representations: EEG data from Experiment 2 [Dataset]. http://doi.org/10.18112/openneuro.ds003885.v1.0.2
    Dataset updated
    Nov 9, 2021
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 2 (EEG: aliveness).

    Abstract: The ability to perceive moving objects is crucial for survival and threat identification. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. The association between the ability to move and being alive is learned early in childhood, yet evidently not all moving objects are alive. Natural, non-agentive movement (e.g., in clouds, or fire) causes confusion in children and adults under time pressure. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistaken to be alive, and often classified as not moving. Movement explained significant variance in the EEG data, during both a classification task and passive viewing. These results highlight that capacity for movement is an important dimension in the structure of human visual object representations.

    In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "alive" or "not alive". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid (RSVP) streams of images, and pressed a button to indicate when the fixation cross changed colour.

    Contents of the dataset:
    - Raw EEG data is available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data is available in the derivatives folders in EEGlab (.set, .fdt) and cosmoMVPA dataset (.mat) format. This experiment has 24 subjects.
    - Scripts for data analysis and running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate analyses.
    - Stimuli are also available (400 CC0 images).
    - Results of decoding analyses are available in the derivatives folder.

    Further notes:

    Note that the code is designed to run analyses for data and its partner data (experiments 2 and 3 of the paper). Copies in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script)

    Further explanations of the code:

    1. Run pre-processing of EEG (analyse_EEG_preprocessing.m), and behavioural data (analyse_behavioural_EEG.m)
    2. Ensure that the MTurk data has been run (analyse_behavioural_MTurk.m), from the Experiment 1 folder.
    3. Run RSA (analyse_rsa.m; reliant on behavioural data and pre-processed EEG data), and run decoding (analyse_decoding.m; reliant on pre-processed EEG data)
    4. Run GLMs (analyse_glms.m; reliant on RSA, behavioural)

    If you only want to look at the results, the results of each of these analyses are already saved in the derivatives folder, so there is no need to run any of them again.

    Each file named plot_X.m will create a graph as in the paper. Each is reliant on saved data from the above analyses, which are saved in the derivatives folder.

    Citing this dataset: If using this data, please cite the associated paper.

    Contact: Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379

  7. Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences

    • su.figshare.com
    • demo.researchdata.se
    • +1more
    pdf
    Updated May 31, 2023
    Cite
    Stefan Wiens; Rasmus Eklund; Billy Gerdfeldter (2023). Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences [Dataset]. http://doi.org/10.17045/sthlmuni.13067018.v1
    Dataset updated
    May 31, 2023
    Dataset provided by
    Stockholm University
    Authors
    Stefan Wiens; Rasmus Eklund; Billy Gerdfeldter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GENERAL INFORMATION

    1. Title of Dataset: Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences.
    2. Author Information
       A. Principal Investigator: Stefan Wiens, Department of Psychology, Stockholm University, Sweden. Internet: https://www.su.se/profiles/swiens-1.184142. Email: sws@psychology.su.se
       B. Associate or Co-investigator: Rasmus Eklund, Department of Psychology, Stockholm University, Sweden. Internet: https://www.su.se/profiles/raek2031-1.223133. Email: rasmus.eklund@psychology.su.se
       C. Associate or Co-investigator: Billy Gerdfeldter, Department of Psychology, Stockholm University, Sweden. Internet: https://www.su.se/profiles/bige1544-1.403208. Email: billy.gerdfeldter@psychology.su.se
    3. Date of data collection: Subjects (N = 28) were tested between 2020-03-04 and 2020-09-18.
    4. Geographic location of data collection: Department of Psychology, Stockholm, Sweden
    5. Information about funding sources that supported the collection of the data: Marianne and Marcus Wallenberg (Grant 2019-0102)

    SHARING/ACCESS INFORMATION

    1. Licenses/restrictions placed on the data: CC BY 4.0
    2. Links to publications that cite or use the data: Eklund R., Gerdfeldter B., & Wiens S. (2021). The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Neuropsychologia. https://doi.org/ The study was preregistered: https://doi.org/10.17605/OSF.IO/PSRJF
    3. Links to other publicly accessible locations of the data: N/A
    4. Links/relationships to ancillary data sets: N/A
    5. Was data derived from another source? No
    6. Recommended citation for this dataset: Eklund R., Gerdfeldter B., & Wiens S. (2020). Open data: The early but not the late neural correlate of auditory awareness reflects lateralized experiences. Stockholm: Stockholm University. https://doi.org/10.17045/sthlmuni.13067018

    DATA & FILE OVERVIEW

    File list (the files contain the downsampled data in BIDS format, scripts, and results of the main and supplementary analyses of the electroencephalography (EEG) study; links to the hardware and software are provided under methodological information):
    - AAN_LRclick_experiment_scripts.zip: contains the Python files to run the experiment
    - AAN_LRclick_bids_EEG.zip: contains EEG data files for each subject in .eeg format
    - AAN_LRclick_behavior_log.zip: contains log files of the EEG session (generated by Python)
    - AAN_LRclick_EEG_scripts.zip: Python-MNE scripts to process and to analyze the EEG data
    - AAN_LRclick_results.zip: contains summary data files, figures, and tables that are created by Python-MNE

    METHODOLOGICAL INFORMATION

    1. Description of methods used for collection/generation of data: The auditory stimuli were 4-ms clicks. The experiment was programmed in Python (https://www.python.org/) and used extra functions from https://github.com/stamnosslin/mn. The EEG data were recorded with an Active Two BioSemi system (BioSemi, Amsterdam, Netherlands; www.biosemi.com) and converted to .eeg format. For more information, see the linked publication.
    2. Methods for processing the data: We computed event-related potentials. See the linked publication.
    3. Instrument- or software-specific information needed to interpret the data: MNE-Python (Gramfort A., et al., 2013): https://mne.tools/stable/index.html
    4. Standards and calibration information, if appropriate: For information, see the linked publication.
    5. Environmental/experimental conditions: For information, see the linked publication.
    6. Quality-assurance procedures performed on the data: For information, see the linked publication.
    7. People involved with sample collection, processing, analysis and/or submission:
    - Data collection: Rasmus Eklund with assistance from Billy Gerdfeldter.
    - Data processing, analysis, and submission: Rasmus Eklund

    DATA-SPECIFIC INFORMATION

    All relevant information can be found in the MNE-Python scripts (in the EEG_scripts folder) that process the EEG data. For example, we added notes to explain what different variables mean. The folder structure needs to be as follows:

    AAN_LRclick (main folder)
        data
            bids (AAN_LRclick_bids_EEG)
            log (AAN_LRclick_behavior_log)
        MNE (AAN_LRclick_EEG_scripts)
        results (AAN_LRclick_results)

    To run the MNE-Python scripts: Anaconda was used with MNE-Python 0.22 (see installation at https://mne.tools/stable/index.html). For preprocess.py and analysis.py, the complete scripts should be run (from the Anaconda prompt).

  8. Data from: Physiotherapist-Assisted Wrist Movement Protocol for EEG-Based...

    • figshare.com
    7z
    Updated Sep 29, 2025
    Cite
    Fanni Kovács; Adél Ernhaft; Gábor Fazekas; János Horváth (2025). Physiotherapist-Assisted Wrist Movement Protocol for EEG-Based Corticokinematic Coherence Assessment [Dataset]. http://doi.org/10.6084/m9.figshare.29589566.v1
    Dataset updated
    Sep 29, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Fanni Kovács; Adél Ernhaft; Gábor Fazekas; János Horváth
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset accompanies the manuscript titled 'Physiotherapist-Assisted Wrist Movement Protocol for EEG-Based Corticokinematic Coherence Assessment' and includes synchronized EEG and movement recordings collected during a physiotherapist-assisted wrist movement task for corticokinematic coherence (CKC) analysis. The data are organized per participant and trial.

    Contents: Each participant's data are stored in files using the following naming convention: SXXX_ckc_HH_SS_BB, where:
    - SXXX: anonymized subject ID (random 3-digit code, e.g., S113, S479)
    - HH: movement hand (bal = left hand, jobb = right hand)
    - SS: session number (t1)
    - BB: trial block number (01, 02)

    File types: Each trial includes the following two files.

    Movement file (no extension, e.g., S113_ckc_bal_t1_01): custom binary format containing synchronized hand acceleration data collected via a 3-axis accelerometer placed on the wrist. Each entry contains:
    - time: a 32-bit unsigned integer indicating the timestamp in milliseconds
    - x, y, z: 16-bit signed integers representing acceleration along the respective axes
    - trigger: an 8-bit unsigned integer used to mark event-related triggers for synchronization with the EEG data (e.g., movement onset)
    (A sketch for reading these records follows at the end of this entry.)

    EEG files (.hed/.flo pair): EEG was recorded with a 64-channel Synamp RT system (Compumedics Neuroscan, Victoria, Australia) at 2000 Hz and converted from the original Neuroscan CNT format into an open binary format.
    - .hed: plain-text header describing the dataset (channels, sampling rate, etc.). Each line starts with a keyword such as: Datatype (always float, 4-byte floats), Number of timepoints (total samples), Starting timepoint (always 0), Sampling rate (1000 Hz), Ch (channel labels in order of appearance in the .flo file). Other fields (e.g., Number of channels hint) can be ignored.
    - .flo: binary file with EEG samples stored as little-endian 4-byte floats in channel-major order: [Channel 1: sample 1 ... last sample] [Channel 2: sample 1 ... last sample] ...

    Example (Python, using numpy):

        import numpy as np, os

        filename = "S113_ckc_bal_t1_01"

        # Read channel labels from .hed
        channels = []
        with open(filename + ".hed", "r") as f:
            for line in f:
                if line.startswith("Ch:"):
                    channels.append(line.split(":")[1].strip())
        n_channels = len(channels)

        # Determine number of samples
        filesize = os.path.getsize(filename + ".flo")
        n_samples = filesize // (4 * n_channels)

        # Load EEG data (little-endian 4-byte floats, channel-major order)
        raweeg = np.fromfile(filename + ".flo", dtype="<f4")
        raweeg = raweeg.reshape(n_channels, n_samples)

    Notes: Data are organized by movement side (bal = left, jobb = right), trial (t1), and block repetition (01, 02). The EEG and movement files are time-locked to enable CKC computation. Stimuli were delivered manually by a trained physiotherapist with visual pacing using a screen-based metronome.

    Usage: These data can be used to replicate the analyses reported in the manuscript, or for methodological and clinical studies of proprioception using EEG-based CKC. Please cite the associated paper when using this dataset.
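
    A minimal sketch for reading the custom movement files into numpy follows; the record layout mirrors the field list above, while the byte order and tight packing (no padding) are assumptions not stated in the description.

        import numpy as np

        # One record = time (uint32, ms), x/y/z acceleration (int16 each), trigger (uint8).
        # Little-endian and tightly packed: both are assumptions about the on-disk layout.
        movement_dtype = np.dtype([
            ("time", "<u4"),
            ("x", "<i2"),
            ("y", "<i2"),
            ("z", "<i2"),
            ("trigger", "u1"),
        ])

        records = np.fromfile("S113_ckc_bal_t1_01", dtype=movement_dtype)
        accel = np.stack([records["x"], records["y"], records["z"]], axis=1)  # (n_samples, 3)
        trigger_times_ms = records["time"][records["trigger"] > 0]            # timestamps of marked events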

  9. Capacity for movement is an organisational principle in object...

    • openneuro.org
    Updated May 24, 2023
    Cite
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson (2023). Capacity for movement is an organisational principle in object representations: EEG data from Experiment 1 [Dataset]. http://doi.org/10.18112/openneuro.ds003885.v1.0.8
    Dataset updated
    May 24, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Sophia M. Shatek; Amanda K. Robinson; Tijl Grootswagers; Thomas A. Carlson
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 1 (EEG: aliveness). The paper is now published in NeuroImage: https://doi.org/10.1016/j.neuroimage.2022.119517

    Abstract: The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.

    In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "alive" or "not alive". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid (RSVP) streams of images, and pressed a button to indicate when the fixation cross changed colour.

    Contents of the dataset:
    - Raw EEG data is available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data is available in the derivatives folders in EEGlab (.set, .fdt) and cosmoMVPA dataset (.mat) format. This experiment has 24 subjects.
    - Scripts for data analysis and running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate analyses.
    - Stimuli are also available (400 CC0 images).
    - Results of decoding analyses are available in the derivatives folder.

    Further notes:

    Note that the code is designed to run analyses for data and its partner data (experiments 2 and 3 of the paper). Copies in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script)

    Further explanations of the code:

    1. Run pre-processing of EEG (analyse_EEG_preprocessing.m), and behavioural data (analyse_behavioural_EEG.m)
    2. Ensure that the MTurk data has been run (analyse_behavioural_MTurk.m), from the Experiment 1 folder.
    3. Run RSA (analyse_rsa.m; reliant on behavioural data and pre-processed EEG data), and run decoding (analyse_decoding.m; reliant on pre-processed EEG data)
    4. Run GLMs (analyse_glms.m; reliant on RSA, behavioural)

    If you only want to look at the results, the results of each of these analyses are already saved in the derivatives folder, so there is no need to run any of them again.

    Each file named plot_X.m will create a graph as in the paper. Each is reliant on saved data from the above analyses, which are saved in the derivatives folder.

    Citing this dataset: If using this data, please cite the associated paper: Shatek, S. M., Robinson, A. K., Grootswagers, T., & Carlson, T. A. (2022). Capacity for movement is an organisational principle in object representations. NeuroImage, 261, 119517. https://doi.org/10.1016/j.neuroimage.2022.119517

    Contact: Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379

  10. EEG Mortality Dataset in Parkinson's Disease

    • openneuro.org
    Updated Dec 2, 2025
    Cite
    Simin Jamshidi; Arturo Espinoza; Soura Dasgupta; Nandakumar Narayanan (2025). EEG Mortality Dataset in Parkinson's Disease [Dataset]. http://doi.org/10.18112/openneuro.ds007020.v1.0.0
    Dataset updated
    Dec 2, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Simin Jamshidi; Arturo Espinoza; Soura Dasgupta; Nandakumar Narayanan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset contains de-identified resting-state EEG recordings from individuals with Parkinson's disease (PD) and age-matched healthy control subjects. All EEG data were recorded using standard clinical EEG systems at a neurology clinic.

    Dataset purpose: This dataset was originally used to evaluate whether resting-state EEG can help distinguish subjects who were later deceased from those who remained living (mortality classification). Only de-identified EEG data and mortality labels are included.

    Participant Information: - Participants are labeled as either "living" or "deceased" in participants.tsv - No other demographic or clinical information (age, cognition, UPDRS, disease duration, etc.) is included per data-sharing guidelines. - All participant IDs are anonymized following BIDS convention (e.g., sub-PD1301).

    EEG Acquisition Details: - Recording type: Resting-state EEG (eyes open) - Device: Clinical BrainVision EEG system - File formats: .vhdr, .eeg, .vmrk - Sampling rate: 500 Hz - Montage: Standard 10–20 international system - Recording condition: “task-rest” (no task)

    Data Organization: Data are structured following the BIDS (Brain Imaging Data Structure) EEG standard: sub-

    Mortality Label Format: - Living subjects: survival_status = "living" - Deceased subjects: survival_status = "deceased"
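
    A minimal sketch of splitting subjects by the mortality label in participants.tsv; the survival_status column is taken from the description above, while the participant_id column name is assumed from the BIDS convention.

        import pandas as pd

        # participants.tsv is tab-separated, per the BIDS standard.
        participants = pd.read_csv("participants.tsv", sep="\t")
        living = participants.loc[participants["survival_status"] == "living", "participant_id"]
        deceased = participants.loc[participants["survival_status"] == "deceased", "participant_id"]
        print(len(living), "living,", len(deceased), "deceased")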

    Ethics & Privacy: All subjects provided consent for EEG recording at the University of Iowa Hospitals and Clinics. The publicly shared version here is fully de-identified and contains no clinical or personal health information other than mortality classification.

    Suggested Use: This dataset can be used to explore EEG biomarkers of mortality risk, EEG signal characteristics in PD, or to build machine learning models for classification.

    Questions or requests: Please contact nandakumar-narayanan@uiowa.edu.

  11. Data from: Chimeric music reveals an interaction of pitch and time in...

    • openneuro.org
    Updated Sep 30, 2025
    + more versions
    Cite
    Tong Shan; Edmund C. Lalor; Ross K. Maddox (2025). Chimeric music reveals an interaction of pitch and time in electrophysiological signatures of music encoding [Dataset]. http://doi.org/10.18112/openneuro.ds006735.v1.0.3
    Dataset updated
    Sep 30, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Tong Shan; Edmund C. Lalor; Ross K. Maddox
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Details related to access to the data

    Please contact the following authors for further information:

    Tong Shan (email: tongshan@stanford.edu)

    Ross K. Maddox (email: rkmaddox@med.umich.edu)

    Overview

    This study examines pitch-time interactions in music processing by introducing “chimeric music,” which pairs two distinct melodies and exchanges their pitch contours and note onset-times to create two new melodies, thereby distorting musical patterns while maintaining the marginal statistics of the original pieces’ pitch and temporal sequences.
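
    Purely to illustrate that construction (this is not code from the study), a toy sketch of exchanging pitch contours and note onsets between two equal-length melodies:

        # Toy melodies as (onset_time_in_s, midi_pitch) pairs; the values are made up for illustration.
        melody_a = [(0.0, 60), (0.5, 62), (1.0, 64), (1.75, 65)]
        melody_b = [(0.0, 67), (0.25, 65), (0.75, 64), (1.5, 60)]

        onsets_a, pitches_a = zip(*melody_a)
        onsets_b, pitches_b = zip(*melody_b)

        # Chimera 1 keeps melody A's pitch contour but melody B's note onsets; chimera 2 is the converse.
        # Equal note counts are assumed here for simplicity.
        chimera_1 = list(zip(onsets_b, pitches_a))
        chimera_2 = list(zip(onsets_a, pitches_b))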

    Data collected from Sep to Nov, 2023.

    The details of the experiment can be found in Shan et al. (2024). There were two phases in this experiment. In the first phase, ten trials of one-minute clicks were presented to the subjects. In the second phase, clips of the 2 types of monophonic music (original and chimeric) were presented. There were 33 trials of each type, presented in shuffled order. Between trials, there was a 0.5 s pause.

    Format

    This dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from subjects 001 to 027 in raw BrainVision format (the .eeg, .vhdr, and .vmrk triplet).

    Subjects

    27 subjects participated in this study.

    Subject inclusion criteria:

    - Age between 18-40.
    - Normal hearing: audiometric thresholds of 20 dB HL or better from 500 to 8000 Hz.
    - Speak English as their primary language.
    - Self-reported normal or correctable-to-normal vision.

    Twenty-seven participants participated in this experiment with an age of 22.9 ± 3.9 (mean ± STD) years.

    Apparatus

    Subjects were seated in a sound-isolating booth on a chair in front of a 24-inch BenQ monitor with a viewing distance of approximately 60 cm. Stimuli were presented at an average level of 60 dB SPL and a sampling rate of 48000 Hz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. The stimulus presentation for the experiment was controlled by a Python script using a custom package, expyfun.

    Following the experimental session, participants completed a self-reported musicianship questionnaire (adapted from Whiteford et al., 2025). The questionnaire is included in this repository.

  12. Data from: Fundamental frequency predominantly drives talker differences in...

    • openneuro.org
    Updated Nov 22, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Melissa J. Polonenko; Ross K. Maddox (2024). Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech [Dataset]. http://doi.org/10.18112/openneuro.ds005340.v1.0.4
    Dataset updated
    Nov 22, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Melissa J. Polonenko; Ross K. Maddox
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    Details related to access to the data

    Please contact the following authors for further information: Melissa Polonenko (email: mpolonen@umn.edu); Ross Maddox (email: rkmaddox@med.umich.edu)

    Overview

    This is the "peaky_pitchshift"" dataset for the paper Polonenko MJ & Maddox RK (2024), with citation listed below.

    Peer-reviewed manuscript:

    Melissa J. Polonenko, Ross K. Maddox; Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. JASA Express Lett. 1 November 2024; 4 (11): 114401. https://doi.org/10.1121/10.0034329

    BioRxiv pre-print:

    Melissa Jane Polonenko, Ross K Maddox (2024). Fundamental frequency predominantly drives talker differences in auditory brainstem responses to continuous speech. bioRxiv 2024.07.12.603125; doi: https://doi.org/10.1101/2024.07.12.603125

    Auditory brainstem responses (ABRs) were derived to continuous peaky speech from two talkers with different fundamental frequencies (f0s) and from clicks that have mean stimulus rates set to the mean f0s. Data was collected from May to June 2021.

    Aims: 1) replicate the male/female talker effect with each at their natural f0 2) systematically determine if f0 is the main driver of this talker difference 3) evaluate if the f0 effect resembles the click rate effect

    The details of the experiment can be found at Polonenko & Maddox (2024).

    Stimuli: 1) randomized click trains at 3 stimulus rates (123, 150, 183 Hz), 30 x 10 s trials each for a total of 90 trials (15 min, 5 min each rate) 2) peaky speech for a male and female narrator at 3 f0s (123, 150, 183 Hz), 120 x 10 s trials each of the 6 narrator-f0 combo for a total of 720 trials (2 hours, 20 min each)

    NOTE: The f0s used were each narrator's original f0 (low and high, respectively), the f0 shifted to the other narrator's f0, and an f0 at the midpoint between the two. The click rates were set to the mean f0s used for the speech.

    The code for stimulus preprocessing and EEG analysis is available on Github: https://github.com/polonenkolab/peaky_pitchshift

    Format

    The dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from participants 01 to 15 in raw BrainVision format (3 files: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. The stimulus files contain the audio ('x') and regressors for the deconvolution ('pinds' are the pulse indices; 'anm' is an auditory nerve model regressor, which was used during analyses but was not included as part of the article).
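
    A minimal sketch of reading one stimulus file with h5py, assuming the keys named above ('x', 'pinds', 'anm') are stored as top-level HDF5 datasets; if the files were written with the helper used by the analysis code on GitHub, use that code's own reader instead. The file name follows the example given later in this entry.

        import h5py

        # 'x' is the audio, 'pinds' the pulse indices, 'anm' the auditory nerve model regressor.
        with h5py.File("male_low_000_regress.hdf5", "r") as f:
            audio = f["x"][:]
            pulse_indices = f["pinds"][:]
            anm_regressor = f["anm"][:]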

    Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format.

    Participants

    15 participants, mean ± SD age of 24.1 ± 6.1 years (19-35 years)

    Inclusion criteria: 1) Age between 18-40 years 2) Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz 3) Speak English as their primary language

    Please see participants.tsv for more information.

    Apparatus

    Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom python scripts using expyfun were used to control the experiment and stimulus presentation.

    Details about the experiment

    For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied task-peaky_pitch_eeg.json file. The 6 peaky speech conditions (2 narrators x 3 f0s) were randomly interleaved for each block of trials (i.e., for trial 1, the 6 conditions were randomized) and the story token was randomized. This means that the participant would not be able to follow the story. For clicks the trials were not randomized (already random clicks).

    Trigger onset times in the tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files). Triggers with values of "1" were recorded at the onset of the 10 s audio, and shortly after, triggers with values of "4" or "8" were stamped to indicate the overall trial number out of 120 for each speech condition and out of 30 for each click condition. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 ** (b + 2). We've specified these trial numbers and more metadata of the events in each "*_eeg_events.tsv" file, which is sufficient to know which trial corresponded to which type of stimulus (clicks, male narrator, female narrator), which f0 (low, mid, high), and which file - e.g., male_low_000_regress.hdf5 for the male narrator with the low f0.
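
    A small sketch of decoding those trial-number triggers, under the reading that each binary digit of the trial number was stamped as a separate trigger of value 2 ** (bit + 2), so "4" encodes a 0 bit and "8" encodes a 1 bit; the bit order is an assumption, not stated above.

        # Hypothetical decoder: a run of trigger values 4/8 following a trial-onset trigger "1"
        # is read as the binary trial number, most significant bit first (assumed).
        def decode_trial_number(trigger_values):
            bits = "".join("1" if v == 8 else "0" for v in trigger_values)
            return int(bits, 2)

        print(decode_trial_number([4, 8, 8]))  # binary 011 -> trial 3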

  13. The effect of speech masking on the subcortical response to speech

    • openneuro.org
    Updated Aug 10, 2024
    Cite
    Melissa J. Polonenko; Ross K. Maddox (2024). The effect of speech masking on the subcortical response to speech [Dataset]. http://doi.org/10.18112/openneuro.ds005407.v1.0.0
    Dataset updated
    Aug 10, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Melissa J. Polonenko; Ross K. Maddox
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    Details related to access to the data

    Please contact the following authors for further information: Melissa Polonenko (email: mpolonen@umn.edu); Ross Maddox (email: rkmaddox@med.umich.edu)

    Overview

    This is the "peaky_snr" dataset for the paper Polonenko MJ & Maddox RK (2024), with citation listed below.

    BioRxiv: The effect of speech masking on the subcortical response to speech

    Auditory brainstem responses (ABRs) were derived to continuous peaky speech from one to five simultaneously presented talkers and from clicks. Data was collected from June to July 2021.

    Goal: To better understand masking’s effects on the subcortical neural encoding of naturally uttered speech in human listeners.

    To do this we leveraged our recently developed method for determining the auditory brainstem response (ABR) to speech (Polonenko and Maddox, 2021). Whereas our previous work was aimed at encoding of single talkers, here we determined the ABR to speech in quiet as well as in the presence of varying numbers of other talkers.

    The details of the experiment can be found at Polonenko & Maddox (2024).

    Stimuli: 1) randomized click trains at an average rate of 40 Hz, 60 x 10 s trials for a total of 10 minutes; 2) peaky speech for up to 5 male narrators. 30 minutes of each SNR (clean, 0 dB, -3 dB, -6 dB), corresponding to 1, 2, 3, and 5 talkers presented simultaneously, each set to 65 dB.

    NOTE: Files for each story were completely randomized. Random combinations were created so that each story was equally represented in the data.

    The code for stimulus preprocessing and EEG analysis is available on Github: https://github.com/polonenkolab/peaky_snr

    Format

    The dataset is formatted according to the EEG Brain Imaging Data Structure (BIDS). It includes EEG recordings from participants 01 to 25 in raw BrainVision format (3 files: .eeg, .vhdr, .vmrk) and stimulus files in .hdf5 format. The stimulus files contain the audio ('audio') and regressors for the deconvolution ('pinds' are the pulse indices; 'anm' is an auditory nerve model regressor, which was used during analyses but was not included as part of the article).

    Generally, you can find detailed event data in the .tsv files and descriptions in the accompanying .json files. Raw EEG files are provided in the Brain Products format.

    Participants

    25 participants, mean ± SD age of 23.4 ± 5.5 years (19-37 years)

    Inclusion criteria: 1) Age between 18-40 years 2) Normal hearing: audiometric thresholds 20 dB HL or better from 500 to 8000 Hz 3) Speak English as their primary language

    Please see participants.tsv for more information.

    Apparatus

    Participants sat in a darkened sound-isolating booth and rested or watched silent videos with closed captioning. Stimuli were presented at an average level of 65 dB SPL (per story; total for 5 talkers = 71 dB) and a sampling rate of 48 kHz through ER-2 insert earphones plugged into an RME Babyface Pro digital sound card. Custom python scripts using expyfun were used to control the experiment and stimulus presentation.

    Details about the experiment

    For a detailed description of the task, see Polonenko & Maddox (2024) and the supplied task-peaky_snr_eeg.json file. The 4 SNR speech conditions and the story tokens were randomized. This means that the participant would not be able to follow the stories. For clicks the trials were not randomized (already random clicks).

    Trigger onset times in the tsv files have already been corrected for the tubing delay of the insert earphones (but not in the events of the raw files). Triggers with values of "1" were recorded at the onset of the 10 s audio, and shortly after, triggers with values of "4" or "8" were stamped to indicate info about the trial. This was done by converting the decimal trial number to bits, denoted b, then calculating 2 ** (b + 2). We've specified these trial triggers and more metadata of the events in each "*_eeg_events.tsv" file, which is sufficient to know which trial corresponded to which type of stimulus (clicks or speech), which SNR, and which files of which stories were presented - e.g., alice_000_peaky_diotic_regress.hdf5 for the first file of the story called 'alice' (Alice in Wonderland).
