CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Multi-subject, multi-modal (sMRI+EEG) neuroimaging dataset on face processing. The original data (described at https://www.nature.com/articles/sdata20151) have been repackaged here as EEG data in EEGLAB format. The data have gone through minimal preprocessing (see wh_extracteeg_BIDS.m), including:
- Ignoring fMRI and MEG data (sMRI preserved for EEG source localization)
- Extracting EEG channels from the MEG/EEG fif data
- Adding fiducials
- Renaming EOG and EKG channels
- Extracting events from the event channel
- Removing spurious events 5, 6, 7, 13, 14, 15, 17, 18 and 19
- Removing spurious event 24 for subject 3, run 4
- Renaming events taking into account the button assigned to each subject
- Correcting event latencies (events have a shift of 34 ms)
- Resampling the data to 250 Hz (this step is performed because the dataset is used as an EEGLAB tutorial and needs to be lightweight)
- Merging runs 1 to 6
- Removing the event fields urevent and duration
- Filling in empty fields for the boundary and stim_file events
- Saving in EEGLAB .set format
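As a quick orientation, here is a minimal MATLAB/EEGLAB sketch for loading one of the repackaged .set files and checking the cleaned events; the file name and path below are assumptions, not the actual names used in this dataset.

  [ALLEEG, EEG, CURRENTSET, ALLCOM] = eeglab;                                % start EEGLAB and add its paths
  EEG = pop_loadset('filename', 'sub-01_task-facerecognition_eeg.set', ...  % hypothetical file name
                    'filepath', fullfile(pwd, 'sub-01', 'eeg'));             % hypothetical path
  EEG = eeg_checkset(EEG);
  disp(EEG.srate);                  % 250 Hz after resampling
  disp(unique({EEG.event.type}));   % renamed event codes, spurious events removed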
Ramon Martinez, Dung Truong, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Overview
——————————————————
This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 2 (EEG: movement). The paper is now published in NeuroImage: https://doi.org/10.1016/j.neuroimage.2022.119517
Abstract: The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.
In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "can move" or "still". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid serial visual presentation (RSVP) streams of images and pressed a button to indicate when the fixation cross changed colour.
Contents of the dataset:
- Raw EEG data are available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data are available in the derivatives folders in EEGLAB (.set, .fdt) and CoSMoMVPA dataset (.mat) formats. This experiment has 24 subjects.
- Scripts for data analysis and for running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate the analyses.
- Stimuli are also available (400 CC0 images).
- Results of the decoding analyses are available in the derivatives folder.
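For orientation, here is a minimal MATLAB sketch for loading one subject's pre-processed data in each derivative format; the file and folder names are assumptions, not the actual names used in this dataset.

  % EEGLAB derivative (.set/.fdt); requires EEGLAB on the MATLAB path
  EEG = pop_loadset('filename', 'sub-01_task-classification_eeg.set', ...
                    'filepath', 'derivatives/eeglab/sub-01/');
  % CoSMoMVPA derivative (.mat); the variable layout inside the file is an assumption
  tmp = load('derivatives/cosmomvpa/sub-01_cosmo.mat');
  % Raw BrainVision files (.vhdr/.vmrk/.eeg) can be imported with the bva-io
  % plugin's pop_loadbv function, e.g. EEG = pop_loadbv('sub-01/', 'sub-01.vhdr');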
Further notes:
Note that the code is designed to run analyses on this dataset and its partner dataset (experiments 2 and 3 of the paper). Copies of the code in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script).
Further explanations of the code:
If you only want to inspect the results, the output of each of these analyses is already saved in the derivatives folder, so there is no need to run any of them again.
Each file named plot_X.m will create a graph as in the paper. Each relies on the output of the above analyses, which is saved in the derivatives folder.
Citing this dataset
———————————————————
If using this data, please cite the associated paper:
Contact ———————————————————
Contact Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set contains intracranial EEG data, analysis code and results associated with the manuscript, "Hippocampal Sharp-wave Ripples Linked to Visual Episodic Recollection in Humans". [DOI: 10.1126/science.aax1030]
Data files (.mat) and associated scripts (.m) are divided into folders according to the subject of the analysis (e.g., ripple detection, ripple-triggered averages, multivariate pattern analysis, etc.) and are all contained in the .zip file "Norman_et_al_2019_data_and_code_zenodo.zip".
The code is written in MATLAB R2018b and was run on a desktop computer with a 3.4 GHz Intel Core i7-6700 CPU and 64 GB RAM.
MATLAB's Signal Processing Toolbox is required.
General notes:
1) The data contain neither identifying details about the patients nor any voice recordings.
2) Before running the analyses, make sure you set the correct paths in the "startup_script.m" located in the main folder where the zip file was extracted.
3) To run the code, the following open-source toolboxes are required:
EEGLAB (https://sccn.ucsd.edu/eeglab/download.php), version: "eeglab14_1_2b".
A. Delorme, S. Makeig, EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods. 134, 9–21 (2004).
Mass Univariate ERP Toolbox (https://openwetware.org/wiki/Mass_Univariate_ERP_Toolbox), version: "dmgroppe-Mass_Univariate_ERP_Toolbox-d1e60d4".
D. M. Groppe, T. P. Urbach, M. Kutas, Mass univariate analysis of event-related brain potentials/fields I: A critical tutorial review. Psychophysiology. 48, 1711–1725 (2011).
*** Make sure you download the relevant toolboxes and save them in the "path_to_toolboxes" folder before running the analysis scripts (see "startup_script.m").
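As a hedged illustration only (the authoritative path definitions live in startup_script.m; the variable and folder names here are assumptions), adding the toolboxes to the MATLAB path typically looks like:

  path_to_toolboxes = fullfile(pwd, 'toolboxes');                 % assumed location
  addpath(fullfile(path_to_toolboxes, 'eeglab14_1_2b')); eeglab;  % launching EEGLAB adds its subfolders
  addpath(genpath(fullfile(path_to_toolboxes, ...
          'dmgroppe-Mass_Univariate_ERP_Toolbox-d1e60d4')));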
4) Code developed by other authors (redistributed here as part of the analysis code):
DRtoolbox (https://lvdmaaten.github.io/drtoolbox/), version: 0.8.1b.
L.J.P. van der Maaten, E.O. Postma, and H.J. van den Herik. Dimensionality Reduction: A Comparative Review. Tilburg University Technical Report, TiCC-TR 2009-005, 2009.
Scott Lowe / superbar (https://github.com/scottclowe/superbar), version: 1.5.0.
Oliver J. Woodford, Yair M. Altman / export_fig (https://github.com/altmany/export_fig).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises EEG recordings collected from 10 healthy participants (9 male, 1 female) aged 18–23 years from the Department of Information Technology, Gauhati University, India, with the aim of analyzing brain activity associated with visually induced emotions, specifically happiness, sadness, and fear. Visual stimuli were drawn from the Open Images Dataset v7 and Extensions, with each emotion block consisting of three images (5 seconds each), separated by relaxation intervals. EEG data were recorded using a 32-channel Emotiv EPOC Flex gel-based system at a sampling rate of 128 Hz, following the international 10–20 electrode placement standard. Reference electrodes were placed at P3 (CMS) and P4 (DRL), as per Emotiv's default setup. Participants first viewed a "Relax" screen (10 seconds) before emotional stimuli were presented in two phases using different image sets to increase variability. Verbal feedback was collected after each phase to confirm the participants' emotional experiences; only data from those whose self-reported emotions matched the intended labels were retained. Participants were asked to minimize movement, and multiple relaxation periods were incorporated to maintain a calm baseline. All participants provided written informed consent.

The raw EEG signals were preprocessed through a structured pipeline to ensure artifact-free data suitable for analysis. First, a bandpass filter (0.5–45 Hz) was applied using zero-phase forward-backward FIR filtering to preserve the integrity of frequency components relevant to emotional states. Next, Savitzky–Golay smoothing (frame length 127, order 5) was used to remove slow-varying trends by subtracting the smoothed reference from the EEG signals. Finally, Independent Component Analysis (ICA) via EEGLAB's runica function was employed to identify and remove ocular artifacts, retaining only clean components for analysis.
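The following MATLAB/EEGLAB lines are a minimal sketch of that pipeline, assuming the raw recording has already been imported into an EEG structure; the component indices and any file handling are illustrative, not the authors' actual script.

  EEG   = pop_eegfiltnew(EEG, 0.5, 45);             % zero-phase FIR band-pass, 0.5-45 Hz
  trend = sgolayfilt(double(EEG.data)', 5, 127)';   % Savitzky-Golay smoothing (order 5, frame 127) along time
  EEG.data = EEG.data - single(trend);              % subtract the smoothed reference to remove slow trends
  EEG   = pop_runica(EEG, 'icatype', 'runica');     % ICA decomposition with runica
  EEG   = pop_subcomp(EEG, [1 2]);                  % remove ocular components (indices illustrative only)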
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Experimental procedure: The participants were comfortably seated on a chair with their arms lying symmetrically ahead of them on a table. Auditory cues of 70 dB, 1000 Hz, and 50 ms duration (Kida et al., 2006) were used to trigger right index finger movements. All subjects underwent three different conditions (80 trials each) in randomized order. Each condition lasted approx. 13 min, with 10 min breaks between conditions. The whole session, including EEG net preparation, training, pain threshold measurement, and the experiment, lasted approx. 2.5 hours. In the experimental condition (Unilateral with Mirror, or UM+), the mirror was placed on the desk, perpendicular to the subjects' midsagittal plane, with the reflecting face on the right side (Figure 1). Subjects were instructed to move the right index finger in response to the auditory cue while watching the reflection of the moving hand in the mirror, giving the illusion of simultaneous left index finger movement. The position of the left hand behind the mirror corresponded to the image of the left hand reflected in the mirror. In the control conditions, the mirror was removed from the experimental setting and the left hand was directly visible to the participants. In one control condition (Unilateral without Mirror, UM-), subjects were asked to perform the same unilateral right index finger movements. In the other control condition (Bilateral without Mirror, BM-), subjects had to perform synchronous movements of both index fingers. The movements consisted of a double extension of the index finger with a slow release toward the bottom (approx. 1 s). All participants received brief training to perform the movement correctly and to keep the left hand as still as possible during the unilateral conditions. In each condition, an electrical stimulus was delivered to the tip of the left index finger 100 ms after the auditory cue to induce cortical sensory-motor interaction (Figure 2). The interval between stimuli was fixed at 10 s, a sufficient period to reset the desynchronization of the alpha rhythms (Babiloni et al., 2008). The fixed interval also made the upcoming auditory and electrical stimuli somewhat predictable, which is optimal for studying anticipatory alpha ERD/ERS responses. However, subjects were not informed of this fixed interval, to avoid any counting strategies.

EEG data are provided in .hdf5 format (g.Recorder amplifier) and can be analyzed with the EEGLAB toolbox for MATLAB. The raw data coding differs slightly from that presented in the original manuscript:
no-M+P = UM-
no-M+P-bil = BM-
yes-M+P = UM+
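As a hedged sketch (the file name and the internal HDF5 layout below are assumptions), the structure of a recording can be inspected with MATLAB's built-in HDF5 tools before importing it into EEGLAB:

  fname = 'subject01_UMplus.hdf5';   % hypothetical file name
  info  = h5info(fname);             % lists groups, datasets and attributes
  h5disp(fname);                     % prints the same structure to the command window
  % Once the dataset path is known from the listing, the samples can be read with
  % h5read(fname, '/path/to/dataset'), or the file can be imported into EEGLAB
  % with a suitable import plugin.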
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Overview
——————————————————
This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 3 (EEG: movement).
Abstract: The ability to perceive moving objects is crucial for survival and threat identification. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. The association between the ability to move and being alive is learned early in childhood, yet evidently not all moving objects are alive. Natural, non-agentive movement (e.g., in clouds or fire) causes confusion in children and adults under time pressure. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistaken to be alive, and often classified as not moving. Movement explained significant variance in the EEG data, during both a classification task and passive viewing. These results highlight that capacity for movement is an important dimension in the structure of human visual object representations.
In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "can move" or "still". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid serial visual presentation (RSVP) streams of images and pressed a button to indicate when the fixation cross changed colour.
Contents of the dataset:
- Raw EEG data are available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data are available in the derivatives folders in EEGLAB (.set, .fdt) and CoSMoMVPA dataset (.mat) formats. This experiment has 24 subjects.
- Scripts for data analysis and for running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate the analyses.
- Stimuli are also available (400 CC0 images).
- Results of the decoding analyses are available in the derivatives folder.
Further notes:
Note that the code is designed to run analyses on this dataset and its partner dataset (experiments 2 and 3 of the paper). Copies of the code in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script).
Further explanations of the code:
If you only want to inspect the results, the output of each of these analyses is already saved in the derivatives folder, so there is no need to run any of them again.
Each file named plot_X.m will create a graph as in the paper. Each relies on the output of the above analyses, which is saved in the derivatives folder.
Citing this dataset
———————————————————
If using this data, please cite the associated paper:
Contact ———————————————————
Contact Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
127-channel (FCz-referenced) EEG snippets contaminated with semi-synthetic artefacts, used to run benchmarks on cleaning algorithms. The artefact types reproduced here are muscular, blink, and tACS-induced. ~ARTREM.zip is an archive containing data saved in MATLAB format. Each dataset comprises:
a "clean" field, with a clean snippet containing only EEG data (60 seconds).;
a "data" field, with data snippet containing EEG data summed with the artefact (60 seconds);
a "arte" field, with artefact snippet containing only artefactual data (60 seconds);
a "fs" field, with the sampling frequency of the data;
an "artFreq" field, only populated for the tACS artefacts, containing information about the tACS stimulation frequency.
bp128_corr.sfp is a simple text file with an EEG electrode position template. It can easily be read and imported into MATLAB using, e.g., the EEGLAB "readlocs" function.
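A minimal MATLAB sketch for working with one of these files (the .mat file name, and whether the fields are top-level variables or nested in a struct, are assumptions):

  S = load('artrem_dataset01.mat');        % hypothetical file name
  fs    = S.fs;                            % sampling frequency
  resid = S.data - S.clean;                % should equal S.arte if the artefact is purely additive
  chanlocs = readlocs('bp128_corr.sfp');   % electrode positions via the EEGLAB readlocs function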
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Overview
——————————————————
This data is from the paper "Capacity for movement is a major organisational principle in object representations". This is the data of Experiment 3 (EEG: movement). Access the preprint here: https://psyarxiv.com/3x2qh/
Abstract: The ability to perceive moving objects is crucial for survival and threat identification. Recent neuroimaging evidence has shown that the visual system processes objects on a spectrum according to their ability to engage in self-propelled, goal-directed movement. The association between the ability to move and being alive is learned early in childhood, yet evidently not all moving objects are alive. Natural, non-agentive movement (e.g., in clouds or fire) causes confusion in children and adults under time pressure. In the current study, we investigated the relationship between movement and aliveness using both behavioural and neural measures. We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Behavioural classification showed two key categorisation biases: moving natural things were often mistaken to be alive, and often classified as not moving. Movement explained significant variance in the EEG data, during both a classification task and passive viewing. These results highlight that capacity for movement is an important dimension in the structure of human visual object representations.
In this experiment, participants completed two tasks - classification and passive viewing. In the classification task, participants classified single images that appeared on the screen as "can move" or "still". This task was time-pressured, and trials timed out after 1 second. In the passive viewing task, participants viewed rapid serial visual presentation (RSVP) streams of images and pressed a button to indicate when the fixation cross changed colour.
Contents of the dataset:
- Raw EEG data are available in individual subject folders (BrainVision raw formats .eeg, .vmrk, .vhdr). Pre-processed EEG data are available in the derivatives folders in EEGLAB (.set, .fdt) and CoSMoMVPA dataset (.mat) formats. This experiment has 24 subjects.
- Scripts for data analysis and for running the experiment are available in the code folder. Note that all code runs on both EEG experiments together, so you must download both this and the movement experiment data in order to replicate the analyses.
- Stimuli are also available (400 CC0 images).
- Results of the decoding analyses are available in the derivatives folder.
Further notes:
Note that the code is designed to run analyses on this dataset and its partner dataset (experiments 2 and 3 of the paper). Copies of the code in both folders are identical. Scripts need to be run in a particular order (detailed at the top of each script).
Further explanations of the code:
If you only want to inspect the results, the output of each of these analyses is already saved in the derivatives folder, so there is no need to run any of them again.
Each file named plot_X.m will create a graph as in the paper. Each relies on the output of the above analyses, which is saved in the derivatives folder.
Citing this dataset
———————————————————
If using this data, please cite the associated paper:
Contact ———————————————————
Contact Sophia Shatek (sophia.shatek@sydney.edu.au) for additional information. ORCID: 0000-0002-7787-1379
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset consists of the MEEG (sMRI+MEG+EEG) portion of the multi-subject, multi-modal face processing dataset (ds000117). This dataset was originally acquired and shared by Daniel Wakeman and Richard Henson (https://pubmed.ncbi.nlm.nih.gov/25977808/). The data have been repackaged in EEGLAB format and have undergone minimal preprocessing, as well as reorganization and annotation of the dataset events. The MEG and EEG were recorded simultaneously, and the sMRI was preserved for EEG source localization.
The preprocessing, which was performed using the wh_extracteeg_BIDS.m script located in the code directory, includes the following steps:
- Ignore MRI data except for sMRI
- Extract EEG channels from the MEG/EEG fif data
- Add fiducials
- Rename EOG and EKG channels
- Extract events from the event channel
- Remove spurious events 5, 6, 7, 13, 14, 15, 17, 18 and 19
- Remove spurious event 24 for subject 3, run 4
- Rename events taking into account the button assigned to each subject
- Correct event latencies (events have a shift of 34 ms)
- Resample data to 250 Hz (this step is performed because this dataset is used in a tutorial for EEGLAB and needs to be lightweight)
- Remove event fields urevent and duration
- Fill empty fields for the boundary and stim_file events
- Save in EEGLAB .set format
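As an illustration only (not the actual wh_extracteeg_BIDS.m code, and the sign of the latency correction is an assumption), two of these steps look roughly as follows in EEGLAB, given an already-loaded EEG structure:

  EEG = pop_resample(EEG, 250);                       % downsample to 250 Hz
  shift_ms = 34;                                      % constant event shift reported above
  for iEv = 1:length(EEG.event)                       % shift event latencies (direction assumed)
      EEG.event(iEv).latency = EEG.event(iEv).latency - shift_ms/1000*EEG.srate;
  end
  EEG = eeg_checkset(EEG, 'eventconsistency');
  EEG = pop_saveset(EEG, 'filename', 'sub-01_task-facerecognition_eeg.set');  % assumed file name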
Ramon Martinez, Dung Truong, Kay Robbins, Scott Makeig, Arnaud Delorme (UCSD, La Jolla, CA, USA)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Data collection took place at the Meditation Research Institute (MRI) in Rishikesh, India under the supervision of Arnaud Delorme, PhD. The project was approved by the local MRI Indian ethical committee and the ethical committee of the University of California San Diego (IRB project # 090731).
Participants sat either on a blanket on the floor or on a chair for both experimental periods depending on their personal preference. They were asked to keep their eyes closed and all lighting in the room was turned off during data collection. An intercom allowed communication between the experimental and the recording room.
Participants performed three identical sessions of 13 minutes each. 750 stimuli were presented, 70% of them standards (500 Hz pure tone lasting 60 ms), 15% oddballs (1000 Hz pure tone lasting 60 ms), and 15% distractors (1000 Hz white noise lasting 60 ms). All sounds took 5 ms to ramp up and 5 ms to ramp down. Sounds were presented at a rate of one per second with a random Gaussian jitter of 25 ms standard deviation. Participants were instructed to respond to oddball tones by pressing a key on a keypad resting on their lap.
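For illustration, the stimulus parameters above can be reconstructed in a few lines of plain MATLAB (the audio sampling rate and the linear ramp shape are assumptions; the actual presentation code in the code folder should be treated as authoritative):

  fsAudio = 44100;                                  % assumed audio sampling rate
  t       = 0:1/fsAudio:0.060;                      % 60 ms stimuli
  nRamp   = round(0.005*fsAudio);                   % 5 ms on/off ramps
  env     = ones(size(t));
  env(1:nRamp)         = linspace(0, 1, nRamp);
  env(end-nRamp+1:end) = linspace(1, 0, nRamp);
  standard   = sin(2*pi* 500*t) .* env;             % 70% of stimuli
  oddball    = sin(2*pi*1000*t) .* env;             % 15% of stimuli
  distractor = randn(size(t))   .* env;             % 15% of stimuli (white-noise burst)
  soa = 1 + 0.025*randn(750, 1);                    % ~1 s onset asynchrony, 25 ms Gaussian jitter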
Data collection was performed with a Biosemi ActiveTwo system (Biosemi, Inc.) at 1024 Hz, using standard 10-20 caps from the same company tailored to the subject's head size. Stimuli were presented with the Psychophysics Toolbox for MATLAB. The code for presenting the stimuli and all the data are made available (see the code and stimuli folders). Before making the data public, the raw data were resampled to 256 Hz using the standalone tool provided by Biosemi, then converted to the EEGLAB data format. No further data manipulation was performed.