CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Cortical flat maps for three subjects, derived from the anatomical MRI images. Cortical surfaces were reconstructed from the T1-weighted and T2-weighted anatomical images with FreeSurfer's recon-all procedure. Relaxation cuts were placed manually to allow flattening of each hemisphere's surface. Results of any analysis of the fMRI data can be viewed on these flat maps with pycortex.
Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior
See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151
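As a minimal illustration of viewing an analysis result on these flat maps, the pycortex sketch below assumes the flat-mapped subjects have been imported into the local pycortex database; the subject and transform names are placeholders, not identifiers confirmed by the dataset:

import cortex

# Hypothetical subject/transform names; substitute those shipped with this dataset.
vol = cortex.Volume.random("sub-01", "example_xfm")  # stand-in for a real statistical map
cortex.quickshow(vol, with_curvature=True)           # draw the map on the flattened surface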
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This repository contains structural and functional MRI data of 126 monolingual and bilingual participants with varying language backgrounds and proficiencies.
This README is organized into two sections. If you just want access to the processed brain and language data and want to jump immediately into analyzing participants and their language profiles, go to Quick Start. If instead you are looking to go from low-level MRI data to cleaned CSVs with various brain measure types, either to learn the process or to double-check our work, go to Data Replication.
If you just want access to cleaned brain measure and language history data of 126 participants, they can be found in the following folders:
Each folder has a metadata.xlsx file that gives more information on the files and their fields. Have fun, go nuts.
If you are looking to go through the steps required to create the data from start to finish, start with the raw structural and functional MRI data, which can be found in ./sub-EBE{XXXX}. Information on the data in this folder, which follows the BIDS standard, can be found here.
The data in ./sub-EBE{XXXX} is then inputted into various processing pipelines, the versions for which can be found at Dependency versions. The following processing pipelines are used:
fMRIprep is a neuroimaging preprocessing tool for task-based and resting-state fMRI data. fMRIprep is not used directly to create the brain measure CSVs used in analysis; instead, it creates processed fMRI data used in the CONN toolbox. For more information on fMRIprep and how to use it, click here.
We use the CAT12 toolbox (Computational Anatomy Toolbox) to calculate brain region volumes using voxel-based morphometry (VBM). CAT12 works through SPM12 and Matlab and requires that both be installed. We have included the Matlab scripts used to create the files in ./derivatives/CAT12 in preprocessing_scripts/cat12. To use them, install the necessary dependencies (CAT12, SPM12, and Matlab) and run preprocessing_scripts/cat12/CAT12_segmentation_n2.m in Matlab. You will also need to update lines 12, 24, and 41 of that script with your local Matlab path. For more information on CAT12 and how to use it to calculate brain region volumes using VBM, click here.
CONN is a functional connectivity toolbox, which we used to generate participant brain connectivity measures. CONN requires first that you run the fMRIprep pipeline, as it uses some of fMRIprep's outputs as input. Like CAT12, CONN works through SPM12 and Matlab and requires that both be installed. For more information on CONN and how to use it, click here.
We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion weighted images. For more information on FDT and how to use it, click here.
FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes and cortical thickness through surface-based morphometry (SBM). For more information on Freesurfer and how to use it, click here.
The results from these pipelines, which use the data in ./sub-EBE{XXXX} as input, are then outputted into folders in ./derivatives. For information on which folder stores each pipeline result, see Directories.
After running these pipelines, we can take their outputs and convert them into CSVs for analysis. To do this, we use preprocessing_scripts/brain_data_preprocessing.ipynb. This Python notebook takes the data in ./derivatives as input and outputs CSVs to processing_output, containing brain volumes, cortical thicknesses, fractional anisotropy values, and connectivity measures (see the sketch below). Information on the output CSVs can be found in processing_output/metadata.xlsx.
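As a rough sketch of the kind of conversion the notebook performs (the paths, file choices, and column handling here are illustrative assumptions, not the notebook's actual logic), FreeSurfer aseg.stats tables under ./derivatives could be collected into a participant-by-region volume CSV like this:

import glob, os
import pandas as pd

rows = []
for stats_path in glob.glob("derivatives/freesurfer/sub-*/stats/aseg.stats"):
    participant = stats_path.split(os.sep)[2]          # e.g. "sub-EBE0001"
    for line in open(stats_path):
        if line.startswith("#"):                       # skip header comments
            continue
        fields = line.split()
        # aseg.stats body columns: Index, SegId, NVoxels, Volume_mm3, StructName, ...
        rows.append({"participant": participant,
                     "region": fields[4],
                     "volume_mm3": float(fields[3])})

(pd.DataFrame(rows)
   .pivot(index="participant", columns="region", values="volume_mm3")
   .to_csv("processing_output/example_volumes.csv"))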
Also included in this dataset is code used in the analyses of Chen, Salvadore, & Blanco-Elorrieta (submitted). If you are interested in running the analyses used in that paper, see the README in chen_salvadore_elorrieta/code.
participants.json: Describes participants.tsv.
Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier. For more information on the subfolders, see the BIDS information here.
This directory contains outputs of common processing pipelines run on the raw MRI data from ./sub-EBE{XXXX}.
Results of the CAT12 toolbox, which stands for Computational Anatomy Toolbox, and is used to calculate brain region volumes using voxel-based morphometry (VBM).
Results of the CONN toolbox, used to generate data on functional connectivity from brain fMRI sequences.
Results of the FMRIB's Diffusion Toolbox (FDT), used for extracting values from diffusion weighted images.
Results from fMRIprep, a preprocessing pipeline for task-based and resting-state functional MRI data.
Results from FreeSurfer, a software package for the analysis and visualization of structural and functional neuroimaging data.
Participant information is kept on the first level of the dataset and includes information on language history, demographics, and their composite multilingualism score. Below is a list of all participant information files.
language_background.csv: Full subject language information and history.
metadata.xlsx: Metadata on each file in this directory.
multilingual_measure.csv: Each participant’s composite multilingualism score specified in Chen & Blanco-Elorrieta (in review).
This directory contains processed brain measure data for brain volumes, cortical thickness, FA, and connectivity. The CSVs are created from scripts in the directory processing_scripts using files in the derivatives directory as input. Descriptions of each file can be found below.
connectivity_network.csv: Contains 36 Network-to-Network connectivity values for each participant.
connectivity_roi.csv: Contains 13,336 ROI-to-ROI connectivity values for each participant.
dti.csv: Contains averaged white matter FA values for 76 brain regions for each participant based on Diffusion tensor imaging.
metadata.xlsx: Metadata on each file in this directory.
sbm_thickness.csv: Contains cortical thickness values for 68 brain regions for each participant based on Surface-based morphometry.
sbm_volume.csv: Contains volume values for 165 brain regions for each participant based on Surface-based morphometry.
tiv.csv: Contains two total intracranial volumes for each subject, calculated using SBM and VBM respectively.
vbm_volume.csv: Contains volume values for 153 brain regions for each participant based on Voxel-based morphometry.
Code involved in processing raw MRI data.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Summary
In everyday environments, partially occluded objects are more common than fully visible ones. Despite their visual incompleteness, the human brain can reconstruct these objects to form coherent perceptual representations, a phenomenon referred to as amodal completion. However, current computer vision systems still struggle to accurately infer the hidden portions of occluded objects. While the neural mechanisms underlying amodal completion have been partially explored, existing findings often lack consistency, likely due to limited sample sizes and varied stimulus materials. To address these gaps, we introduce a novel fMRI dataset, the Occluded Image Interpretation Dataset (OIID), which captures human perception during image interpretation under different levels of occlusion. This dataset includes fMRI responses and behavioral data from 65 participants. The OIID enables researchers to identify the brain regions involved in processing occluded images and to examine individual differences in functional responses. Our work contributes to a deeper understanding of how the human brain interprets incomplete visual information and offers valuable insights for advancing both theoretical research and related practical applications in amodal completion.
Data record
The data were organized according to the Brain-Imaging-Data-Structure (BIDS) Specification version 1.0.2 and can be accessed from the OpenNeuro public repository (accession number: XXX). In short, raw data of each subject were stored in “sub-
Stimulus. The stimuli for the different fMRI experiments are stored in “stimuli”.
Raw MRI data. Each participant's folder comprises 2 session folders: “sub-
FreeSurfer recon-all. The results of cortical surface reconstruction were saved as “derivatives/recon-all-FreeSurfer/sub-
Pre-processed volume data. The pre-processed volume-based fMRI data were saved as “pre-processed_volume_data/sub-
Pre-processed surface-based data. The pre-processed surface-based data were saved as “derivatives/volumetosurface/sub-
Technical validation. The results of the technical validation were saved as “derivatives/validation/results” for each participant. The code for the technical validation was saved as “derivatives/validation/code”.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Summary
Facial expression is among the most natural ways for human beings to convey emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied using lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, data of this type, specifically for large-scale natural facial expression videos, are currently missing. We describe here the Natural Facial Expressions Dataset (NFED), an fMRI dataset including responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimulus features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.
Data Records
The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-
Stimulus. Distinct folders store the stimuli for the distinct fMRI experiments: "stimuli/face-video", "stimuli/floc", and "stimuli/prf" (Fig. 2b). The category labels and metadata corresponding to the video stimuli are stored in "videos-stimuli_category_metadata.tsv". The "videos-stimuli_description.json" file describes the category and metadata information of the video stimuli (Fig. 2b).
Raw MRI data. Each participant's folder comprises 11 session folders: “sub-
Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-
Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-
FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-
Surface-based GLM analysis data. We ran GLMsingle on the data from the main experiment. There is a file named “sub-
Validation. The code for the technical validation was saved in the “derivatives/validation/code” folder. The results of the technical validation were saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” describes the code and results in detail.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Real-time fMRI (rtfMRI) has enormous potential for both mechanistic brain imaging studies and treatment-oriented neuromodulation. However, the adoption of rtfMRI has been limited due to technical difficulties in implementing an efficient computational framework. Here, we introduce a Python library for rtfMRI data processing systems, Real-Time Processing System in python (RTPSpy), to provide building blocks for a custom rtfMRI application with extensive and advanced functionalities. RTPSpy is a library package including (1) fast, comprehensive, and flexible online fMRI image processing modules comparable to offline denoising, (2) utilities for fast and accurate anatomical image processing to define an anatomical target region, (3) a simulation system of online fMRI processing to optimize a pipeline and target signal calculation, (4) a simple interface to an external application for feedback presentation, and (5) a boilerplate graphical user interface (GUI) integrating operations with the RTPSpy library. The fast and accurate anatomical image processing utility wraps external tools, including FastSurfer, ANTs, and AFNI, to create tissue segmentation and region-of-interest masks. We confirmed that the quality of the output masks was comparable with FreeSurfer, and the anatomical image processing can complete in a few minutes. The modular nature of RTPSpy makes it possible to run a simulation analysis to optimize a processing pipeline and target signal calculation. We present a sample script for building a real-time processing pipeline and running a simulation using RTPSpy. The library also offers a simple signal exchange mechanism with an external application using a TCP/IP socket. While the main components of RTPSpy are the library modules, we also provide a GUI class for easy access to the RTPSpy functions. The boilerplate GUI application provided with the package allows users to develop a customized rtfMRI application with minimal scripting labor. The limitations of the package as they relate to environment-specific implementations are discussed. These library components can be customized and can be used in parts. Taken together, RTPSpy is an efficient and adaptable option for developing rtfMRI applications. Code available at: https://github.com/mamisaki/RTPSpy
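For the TCP/IP signal exchange mentioned above, a generic Python illustration of an external application receiving per-TR feedback values is sketched below; the port number and JSON message format are assumptions for illustration only and do not describe RTPSpy's actual wire protocol:

import json
import socket

# Hypothetical port and one-JSON-object-per-line message format.
with socket.create_connection(("localhost", 63212)) as sock:
    stream = sock.makefile("r")
    for line in stream:
        msg = json.loads(line)                      # e.g. {"tr": 42, "roi_signal": 0.83}
        print(f"TR {msg['tr']}: feedback value {msg['roi_signal']:.2f}")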
Artificial neural networks (ANNs) are sensitive to perturbations and adversarial attacks. One hypothesized route to adversarial robustness is to align manifolds in the embedded space of neural networks with biologically grounded manifolds. Recent state-of-the-art works that emphasize learning robust neural representations, rather than optimizing for a specific target task like classification, support the idea that researchers should investigate this hypothesis. While prior works have shown that fine-tuning ANNs to coincide with biological vision does increase robustness to both perturbations and adversarial attacks, these works have relied on proprietary datasets; the lack of publicly available biological benchmarks makes it difficult to evaluate the efficacy of these claims. Here, we deliver a curated dataset consisting of biological representations of images taken from two commonly used computer vision datasets, ImageNet and COCO, that can be easily integrated into model training and eva...
BOLD5000 Additional ROIs and RDMs for Neural Network Research
This dataset is made available as part of the publication of the following journal article:
Pickard W, Sikes K, Jamil H, Chaffee N, Blanchard N, Kirby M and Peterson C (2023) Exploring fMRI RDMs: enhancing model robustness through neurobiological data. Front. Comput. Sci. 5:1275026. doi: 10.3389/fcomp.2023.1275026
This dataset is a derivative of the BOLD5000 Release 2.0. Additional post-processing steps were performed to make the data more accessible for machine learning (ML) research using representational similarity analysis (RSA).
As a general overview, the following additional post-processing steps were performed with the results made available here:
FreeSurfer was used to create the new ROIs. FreeSurfer derivatives for...
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Data Acquisition
The cohort consists of a total of 27 healthy participants (age 35 ± 6.8 years) and 27 patients with schizophrenia (age 41 ± 9.6 years), scanned in a 3-Tesla MRI scanner (Trio, Siemens Medical, Germany) using a 32-channel head coil. The patients are from the Service of General Psychiatry at the Lausanne University Hospital (CHUV). All of them were diagnosed with schizophrenia and schizoaffective disorders after meeting the DSM-IV criteria (American Psychiatric Association (2000): Diagnostic and Statistical Manual of Mental Disorders, 4th ed. DSM-IV-TR. American Psychiatric Pub, Arlington, VA 22209, USA). The Diagnostic Interview for Genetic Studies assessment was used to recruit the healthy controls (Preisig et al. 1999). 24 of the 27 patients were under medication, with a mean chlorpromazine-equivalent dose (CPZ) of 431 ± 288 mg. Written consent was obtained from all subjects in accordance with the institutional guidelines of the Ethics Committee of Clinical Research of the Faculty of Biology and Medicine, University of Lausanne, Switzerland (#82/14, #382/11, #26.4.2005). All subjects were fully anonymised.
The session protocol consisted of (1) a magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence sensitive to white/gray matter contrast (1-mm in-plane resolution, 1.2-mm slice thickness), (2) a Diffusion Spectrum Imaging (DSI) sequence (128 diffusion-weighted volumes and a single b0 volume, maximum b-value 8,000 s/mm2, 2.2x2.2x3.0 mm voxel size), and (3) a gradient echo EPI sequence sensitive to BOLD contrast (3.3-mm in-plane resolution and slice thickness with a 0.3-mm gap, TE 30 ms, TR 1,920 ms, resulting in 280 images per participant). During the fMRI scan, participants were not engaged in any overt task, and the scan was treated as eyes-open resting-state fMRI (rs-fMRI).
Data Pre-processing
Initial signal processing of all MPRAGE, DSI, and rs-fMRI data was performed using the Connectome Mapper pipeline (Daducci et al. 2012). Grey and white matter were segmented from the MPRAGE volume using FreeSurfer (Desikan et al. 2006) and parcellated into 83 cortical and subcortical areas. The parcels were then further subdivided into 129, 234, 463 and 1015 approximately equally sized parcels according to the Lausanne anatomical atlas, following the method proposed by Cammoun et al. (2012). DSI data were reconstructed following the protocol described by Wedeen et al. (2005), allowing us to estimate multiple diffusion directions per voxel. The diffusion probability density function was reconstructed as the discrete 3D Fourier transform of the signal modulus. The orientation distribution function (ODF) was calculated as the radial summation of the normalized 3D probability distribution function. Thus, the ODF is defined on a discrete sphere and captures the diffusion intensity in every direction.
Structural Connectivity
Structural connectivity matrices were estimated for individual participants using deterministic streamline tractography on reconstructed DSI data, initiating 32 streamline propagations per diffusion direction, per white matter voxel (Wedeen et al. 2008). Structural connectivity between pairs of regions was measured in terms of fiber density, defined as the number of streamlines between the two regions, normalized by the average length of the streamlines and average surface area of the two regions (Hagmann et al. 2008). The goal of this normalization was to compensate for the bias toward longer fibers inherent in the tractography procedure, as well as differences in region size. The number of fibers and fiber length were also included in the dataset. For the quantitative measure of structural connectivity, the generalised fractional anisotropy (gFA, Tuch et al. 2004) and average apparent diffusion coefficient (ADC, Sener et al. 2001) were also computed for each tract.
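A small sketch of the fiber-density normalization as described above (the function and variable names are ours, and the inputs are illustrative):

def fiber_density(n_streamlines, streamline_lengths_mm, area_a_mm2, area_b_mm2):
    """Streamline count normalized by mean streamline length and the mean
    surface area of the two connected regions (per the description above)."""
    mean_length = sum(streamline_lengths_mm) / len(streamline_lengths_mm)
    mean_area = 0.5 * (area_a_mm2 + area_b_mm2)
    return n_streamlines / (mean_length * mean_area)

# Example: 120 streamlines of ~45 mm between regions of 800 and 1200 mm^2 surface area
print(fiber_density(120, [45.0] * 120, 800.0, 1200.0))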
Functional Connectivity
Functional data were pre-processed using routines designed to facilitate subsequent network exploration (Murphy et al. 2009, Power et al. 2012). The first four time points were excluded from subsequent analysis to allow the time series to stabilize. The signal was linearly detrended, and physiological (white-matter and cerebrospinal-fluid regressors) and motion (three translational and three rotational regressors) confounds were regressed out. Then, the signal was spatially smoothed and bandpass-filtered between 0.01 and 0.1 Hz with a Hamming-windowed sinc FIR filter. To obtain region-averaged signals for the different atlas scales, the functional data were linearly registered to the MPRAGE image and averaged within each region (Jenkinson et al. 2012). Functional matrices were obtained by computing Pearson's correlation between the individual pairs of regions. All of the above was carried out in each subject's native space (Daducci et al. 2012, Griffa et al. 2017).
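A minimal numpy sketch of the final step (illustrative only; the detrending, nuisance regression, smoothing, and filtering described above are omitted):

import numpy as np

def functional_connectivity(ts):
    """ts: array of shape (n_timepoints, n_regions) of region-averaged BOLD signals.
    Returns the (n_regions, n_regions) matrix of Pearson correlations."""
    ts = ts[4:]                               # drop the first four time points, as above
    return np.corrcoef(ts, rowvar=False)

fc = functional_connectivity(np.random.randn(280, 83))   # e.g. 280 volumes, 83 regions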
FreeSurfer cortical surface renderings (the “bert” subject) for the 5 scales of the Lausanne2008 atlas are available at https://github.com/jvohryzek/bert4lausanne2008.
Standard fMRI data during performance of 12 tasks in the scanner, resulting in activation and functional connectivity maps.
Structural data were processed with FreeSurfer.
All details of acquisition, pre-processing, and data analysis are given in the original PLOS article.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This repository contains raw MRI data of 127 subjects with varying language backgrounds and proficiencies. Below is a detailed outline of the file structure used:
Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier.
For more information on the subdirectories, see BIDS information at https://bids-specification.readthedocs.io/en/stable/appendices/entity-table.html
This directory contains outputs of common processing pipelines run on the raw MRI data from "data/sub-EBE****".
These are the results of the CAT12 (Computational Anatomy Toolbox) toolbox, which is used to calculate brain region volumes using voxel-based morphometry (VBM). A few dependencies need to be downloaded for this process.
CONN is used to generate data on functional connectivity from brain fMRI sequences. A few dependencies need to be downloaded for this process.
We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion weighted images. To use FDT, you need to download the following modules through CLI:
For more information on the toolbox, visit https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/index.
fMRIprep is a preprocessing pipeline for task-based and resting-state functional MRI. We use it to generate data for connectivity analyses.
We used fMRIprep v23.0.2. For more information, visit https://fmriprep.org/en/stable/index.html.
FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes through surface-based morphometry (SBM).
We used FreeSurfer v7.4.1. For more information, visit https://surfer.nmr.mgh.harvard.edu/fswiki.
This directory contains data and code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted).
This directory contains python and R code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted), with each python notebook corresponding to a different part of the paper's analysis. For more details on each file and subdirectories, see "analysis/code/README.md".
This directory contains language data on each subject, including a composite multilingualism score from Chen & Blanco-Elorrieta (submitted), information on language knowledge, exposure, mixing, use in education, and family members’ language ability in the participants’ known languages from early childhood to the present day. For more information on the files and their fields, see "analysis/participant_data/metadata.xlsx".
This directory contains MRI data, both anatomical and functional, that is the final result of processing raw MRI data. This includes brain volumes, cortical thickness, fractional anisotropy values, and connectivity measures. For more information on the files within this directory, see "analysis/processed_mri_data/metadata.xlsx".
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The Audio Cartography project investigated the influence of temporal arrangement on the interpretation of information from a simple spatial data set. I designed and implemented three auditory map types (audio types), and evaluated differences in the responses to those audio types.
The three audio types represented simplified raster data (eight rows x eight columns). First, a "sequential" representation read values one at a time from each cell of the raster, following an English reading order, and encoded the data value as the loudness of a single fixed-duration, fixed-frequency note. Second, an augmented-sequential ("augmented") representation used the same reading order, but encoded the data value as volume, the row as frequency, and the column as the rate at which notes played (constant total cell duration). Third, a "concurrent" representation used the same encoding as the augmented type, but allowed the notes to overlap in time.
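The sketch below illustrates the augmented-sequential mapping described above (data value to loudness, row to frequency, column to note rate, English reading order); the specific frequency, loudness, and rate values are illustrative assumptions, not the parameters used in the study:

import numpy as np

raster = np.random.rand(8, 8)            # stand-in for the 8 x 8 data set
cell_duration = 1.0                      # seconds per cell (constant total cell duration)
for row in range(8):
    for col in range(8):
        loudness = raster[row, col]              # data value -> amplitude (0..1)
        frequency = 220.0 * 2 ** (row / 4.0)     # row -> pitch (illustrative mapping)
        n_notes = col + 1                        # column -> note rate
        note_length = cell_duration / n_notes
        print(f"cell ({row},{col}): {n_notes} notes of {note_length:.2f} s "
              f"at {frequency:.0f} Hz, amplitude {loudness:.2f}")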
Participants completed a training session in a computer-lab setting, where they were introduced to the audio types and practiced making a comparison between data values at two locations within the display based on what they heard. The training sessions, including associated paperwork, lasted up to one hour. In a second study session, participants listened to the auditory maps and made decisions about the data they represented while the fMRI scanner recorded digital brain images.
The task consisted of listening to an auditory representation of geospatial data ("map"), and then making a decision about the relative values of data at two specified locations. After listening to the map ("listen"), a graphic depicted two locations within a square (white background). Each location was marked with a small square (size: 2x2 grid cells); one square had a black solid outline and transparent black fill, the other had a red dashed outline and transparent red fill. The decision ("response") was made under one of two conditions. Under the active listening condition ("active") the map was played a second time while participants made their decision; in the memory condition ("memory"), a decision was made in relative quiet (general scanner noises and intermittent acquisition noise persisted). During the initial map listening, participants were aware of neither the locations of the response options within the map extent, nor the response conditions under which they would make their decision. Participants could respond any time after the graphic was displayed; once a response was entered, the playback stopped (active response condition only) and the presentation continued to the next trial.
Data was collected in accordance with a protocol approved by the Institutional Review Board at the University of Oregon.
Additional details about the specific maps used in this study are available through the University of Oregon's ScholarsBank (DOI 10.7264/3b49-tr85).
Details of the design process and evaluation are provided in the associated dissertation, which is available from ProQuest and University of Oregon's ScholarsBank.
Scripts that created the experimental stimuli and automated processing are available through University of Oregon's ScholarsBank (DOI 10.7264/3b49-tr85).
Conversion of the DICOM files produced by the scanner to NIfTI format was performed with MRIConvert (LCNI). Orientation to standard axes was performed and recorded in the NIfTI header (FMRIB, fslreorient2std). Excess slices in the anatomical images that represented tissue in the neck were trimmed (FMRIB, robustfov). Participant identity was protected through automated defacing of the anatomical data (FreeSurfer, mri_deface), with additional post-processing to ensure that no brain voxels were erroneously removed from the image (FMRIB, BET; brain mask dilated with three iterations of "fslmaths -dilM").
The dcm2niix tool (Rorden) was used to create draft JSON sidecar files with metadata extracted from the DICOM headers. The draft sidecar files were revised to augment the JSON elements with additional tags (e.g., "Orientation" and "TaskDescription") and to make a more human-friendly version of tag contents (e.g., "InstitutionAddress" and "DepartmentName"). The device serial number was constant throughout the data collection (i.e., all data collection was conducted on the same scanner), and the respective metadata values were replaced with an anonymous identifier: "Scanner1".
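A short Python sketch of this kind of sidecar post-processing (the glob pattern and any tag names beyond those quoted above are illustrative assumptions):

import glob
import json

for path in glob.glob("sub-*/**/*.json", recursive=True):
    with open(path) as f:
        sidecar = json.load(f)
    sidecar["DeviceSerialNumber"] = "Scanner1"    # replace the scanner identifier
    sidecar.setdefault("TaskDescription", "")     # add study-specific tags as needed
    with open(path, "w") as f:
        json.dump(sidecar, f, indent=2, sort_keys=True)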
The stimuli consisted of eighteen auditory maps. Spatial data were generated with the rgeos, sp, and spatstat libraries in R; auditory maps were rendered with the Pyo (Belanger) library for Python and prepared for presentation in Audacity. Stimuli were presented using PsychoPy (Peirce, 2007), which produced log files from which event details were extracted. The log files included timestamped entries for stimulus timing and trigger pulses from the scanner.
Audacity® software is copyright © 1999-2018 Audacity Team. Web site: https://audacityteam.org/. The name Audacity® is a registered trademark of Dominic Mazzoni.
FMRIB (Functional Magnetic Resonance Imaging of the Brain). FMRIB Software Library (FSL; fslreorient2std, robustfov, BET). Oxford, v5.0.9, Available: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/
FreeSurfer (mri_deface). Harvard, v1.22, Available: https://surfer.nmr.mgh.harvard.edu/fswiki/AutomatedDefacingTools
LCNI (Lewis Center for Neuroimaging). MRIConvert (mcverter), v2.1.0 build 440, Available: https://lcni.uoregon.edu/downloads/mriconvert/mriconvert-and-mcverter
Peirce, JW. PsychoPy–psychophysics software in Python. Journal of Neuroscience Methods, 162(1–2):8 – 13, 2007. Software Available: http://www.psychopy.org/
Python software is copyright © 2001-2015 Python Software Foundation. Web site: https://www.python.org
Pyo software is copyright © 2009-2015 Olivier Belanger. Web site: http://ajaxsoundstudio.com/software/pyo/.
R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Available: https://www.R-project.org/.
rgeos software is copyright © 2016 Bivand and Rundel. Web site: https://CRAN.R-project.org/package=rgeos
Rorden, C. dcm2niix, v1.0.20171215, Available: https://github.com/rordenlab/dcm2niix
spatstat software is copyright © 2016 Baddeley, Rubak, and Turner. Web site: https://CRAN.R-project.org/package=spatstat
sp software is copyright © 2016 Pebesma and Bivand. Web site: https://CRAN.R-project.org/package=sp
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This contains the raw and pre-processed fMRI data and structural images (T1) used in the article, "Reading reshapes stimulus selectivity in the visual word form area." The preprint is available here, and the article is in press at eNeuro.
Additional processed data and analysis code are available in an OSF repository.
Details about the study are included here.
We recruited 17 participants (age range 19 to 38 years, mean 21.12 ± 4.44; 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Institutional Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.
To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in less than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
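The exclusion rules above amount to the following logic (a sketch with our own function and field names, not the authors' code):

def apply_exclusions(runs):
    """runs: list of dicts with 'max_motion_mm' and 'response_rate' for one participant.
    Returns (kept_runs, participant_excluded)."""
    bad = [r for r in runs if r["max_motion_mm"] > 4.0 or r["response_rate"] < 0.5]
    participant_excluded = len(bad) >= len(runs) / 2
    kept = [r for r in runs if r not in bad]
    return kept, participant_excluded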
We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan with 1 mm isotropic voxels. We acquired functional data with a T2* echo planar imaging sequence with multiband acceleration (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.
Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.
Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.
On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank with only the fixation mark present for 3850 ms, during which the participant had the opportunity to respond with a button press. After every five trials, there was a rest period (no task except to fixate on the dot). The rest period was 4, 6, or 8 s in duration (randomly and uniformly selected).
Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.
The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed "Jabberwocky" phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).
This repository contains three main folders, complying with BIDS specifications.
- Inputs contain the BIDS-compliant raw data, with the only change being that the anatomicals were defaced using pydeface. Data were converted to BIDS format using heudiconv.
- Outputs contain preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the FreeSurfer reconstructions obtained using fMRIPrep, with defaced anatomicals. Subject-specific ROIs are also included in the label folder for each subject in the freesurfer directory.
- Derivatives contain all additional whole brain analyses performed on this dataset.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Normalization
# Generate a resting state (rs) timeseries (ts)
# Install / load package to make fake fMRI ts
# install.packages("neuRosim")
library(neuRosim)
# Generate a ts
ts.rs <- simTSrestingstate(nscan=2000, TR=1, SNR=1)
# 3dDetrend -normalize
# R command version for 3dDetrend -normalize -polort 0 which normalizes by making "the sum-of-squares equal to 1"
# Do for the full timeseries
ts.normalised.long <- (ts.rs-mean(ts.rs))/sqrt(sum((ts.rs-mean(ts.rs))^2));
# Do this again for a shorter version of the same timeseries
ts.shorter.length <- length(ts.normalised.long)/4
ts.normalised.short <- (ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))/sqrt(sum((ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))^2));
# By looking at the summaries, it can be seen that the median values become larger
summary(ts.normalised.long)
summary(ts.normalised.short)
# Plot results for the long and short ts
# Truncate the longer ts for plotting only
ts.normalised.long.made.shorter <- ts.normalised.long[1:ts.shorter.length]
# Give the plot a title
title <- "3dDetrend -normalize for long (blue) and short (red) timeseries";
plot(x=0, y=0, main=title, xlab="", ylab="", xaxs='i', xlim=c(1,length(ts.normalised.short)), ylim=c(min(ts.normalised.short),max(ts.normalised.short)));
# Add zero line
lines(x=c(-1,ts.shorter.length), y=rep(0,2), col='grey');
# 3dDetrend -normalize -polort 0 for long timeseries
lines(ts.normalised.long.made.shorter, col='blue');
# 3dDetrend -normalize -polort 0 for short timeseries
lines(ts.normalised.short, col='red');
Standardization/modernization
New afni_proc.py command line
afni_proc.py \
-subj_id "$sub_id_name_1" \
-blocks despike tshift align tlrc volreg mask blur scale regress \
-radial_correlate_blocks tcat volreg \
-copy_anat anatomical_warped/anatSS.1.nii.gz \
-anat_has_skull no \
-anat_follower anat_w_skull anat anatomical_warped/anatU.1.nii.gz \
-anat_follower_ROI aaseg anat freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
-anat_follower_ROI aeseg epi freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
-anat_follower_ROI fsvent epi freesurfer/SUMA/fs_ap_latvent.nii.gz \
-anat_follower_ROI fswm epi freesurfer/SUMA/fs_ap_wm.nii.gz \
-anat_follower_ROI fsgm epi freesurfer/SUMA/fs_ap_gm.nii.gz \
-anat_follower_erode fsvent fswm \
-dsets media_?.nii.gz \
-tcat_remove_first_trs 8 \
-tshift_opts_ts -tpattern alt+z2 \
-align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
-tlrc_base "$basedset" \
-tlrc_NL_warp \
-tlrc_NL_warped_dsets \
anatomical_warped/anatQQ.1.nii.gz \
anatomical_warped/anatQQ.1.aff12.1D \
anatomical_warped/anatQQ.1_WARP.nii.gz \
-volreg_align_to MIN_OUTLIER \
-volreg_post_vr_allin yes \
-volreg_pvra_base_index MIN_OUTLIER \
-volreg_align_e2a \
-volreg_tlrc_warp \
-mask_opts_automask -clfrac 0.10 \
-mask_epi_anat yes \
-blur_to_fwhm -blur_size $blur \
-regress_motion_per_run \
-regress_ROI_PC fsvent 3 \
-regress_ROI_PC_per_run fsvent \
-regress_make_corr_vols aeseg fsvent \
-regress_anaticor_fast \
-regress_anaticor_label fswm \
-regress_censor_motion 0.3 \
-regress_censor_outliers 0.1 \
-regress_apply_mot_types demean deriv \
-regress_est_blur_epits \
-regress_est_blur_errts \
-regress_run_clustsim no \
-regress_polort 2 \
-regress_bandpass 0.01 1 \
-html_review_style pythonic
We used similar command lines to generate the 'blurred and not censored' and the 'not blurred and not censored' timeseries files (described more fully below). We will make the code used to create all derivative files available on our github site (https://github.com/lab-lab/nndb). We made one choice above that is different enough from our original pipeline that it is worth mentioning here. Specifically, we have quite long runs, with the average being ~40 minutes, but this number can be variable (thus leading to the above issue with 3dDetrend's -normalize). A discussion on the AFNI message board with one of our team (starting here: https://afni.nimh.nih.gov/afni/community/board/read.php?1,165243,165256#msg-165256) led to the suggestion that '-regress_polort 2' with '-regress_bandpass 0.01 1' be used for long runs. We had previously used only a variable polort with the suggested 1 + int(D/150) approach. Our new polort 2 + bandpass approach has the added benefit of working well with afni_proc.py.
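For comparison, the two run-length handling approaches mentioned above (a quick Python illustration; D is the run duration in seconds):

D = 40 * 60                              # a ~40-minute run
polort_variable = 1 + int(D / 150)       # the duration-based rule we used previously
polort_fixed = 2                         # fixed low order, paired with -regress_bandpass 0.01 1
print(polort_variable, polort_fixed)     # 17 vs 2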
Which timeseries file you use is up to you, but I have been encouraged by Rick and Paul to include a sort of PSA about this. In Paul's own words:
* Blurred data should not be used for ROI-based analyses (and potentially not for ICA? I am not certain about standard practice).
* Unblurred data for ISC might be pretty noisy for voxelwise analyses, since blurring should effectively boost the SNR of active regions (and even good alignment won't be perfect everywhere).
* For uncensored data, one should be concerned about motion effects being left in the data (e.g., spikes in the data).
* For censored data:
  * Performing ISC requires the users to unionize the censoring patterns during the correlation calculation.
  * If wanting to calculate power spectra or spectral parameters like ALFF/fALFF/RSFA etc. (which some people might do for naturalistic tasks still), then standard FT-based methods can't be used because sampling is no longer uniform. Instead, people could use something like 3dLombScargle+3dAmpToRSFC, which calculates power spectra (and RSFC params) based on a generalization of the FT that can handle non-uniform sampling, as long as the censoring pattern is mostly random and, say, only up to about 10-15% of the data.
In sum, think very carefully about which files you use. If you find you need a file we have not provided, we can happily generate different versions of the timeseries upon request and can generally do so in a week or less.
Effect on results
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
The dataset is designed to support research into brain function during both naturalistic and controlled experimental conditions. Data were acquired at 3T with multiple imaging modalities, complemented by physiological and behavioral measures. Before their visit, participants (N=40) filled out a questionnaire about demographic information, language background, musical experience, and knowledge of movies. Then the participant completed a set of cognitive tests and two scanning sessions were scheduled. During Session 1, the participant watched the entirety of 'Back To The Future' (backtothefuture), divided into three parts. Eye-tracker calibration was performed before each part. The entire process for Session 1 took about three hours. The participant completed several different tasks inside the scanner during Session 2: somatotopic mapping, where the participant performed movements inside the scanner; retinotopic mapping, where different checkerboard patterns were presented while the participant was instructed to fixate on the dot in the middle of the screen and respond every time the dot changed colour; tonotopic mapping, during which a sequence of beeps was played and the participant was instructed to press a button when they noticed a difference in tone. All tasks together took approximately 2.5 hours.
The dataset follows the BIDS specification:
- sub-<participant_id> directories contain session-level raw data.
- sourcedata/ includes raw files of eye-tracker (in asc and edf) and pulse oximetry (in tsv) data.
- derivatives/ includes preprocessing outputs.
All preprocessing scripts and analysis code are available on GitHub: https://github.com/levchenkoegor/movieproject2
The dataset is released under the CC-BY 4.0 License
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
dataset for Ritz & Shenhav 'Orthogonal neural encoding of targets and distractors supports multivariate cognitive control'.
Undergraduate participants performed a color-motion decision-making task (random dot kinematogram) during fMRI (N=29, 90min scan). Alternating runs of color-response (hard, longer runs, data of primary interest) and motion-response (easy, shorter runs). At the end of the session, participants performed color/motion localizers (see paper). For event timing, download code & behavioral files here: https://github.com/shenhavlab/PACT_fMRI_public/tree/main/0_behavior
sub-80xx_ses-01_task-RDMmotion_run-xx_bold.nii.gz - Participants respond to the majority motion direction, ignoring color (easy)
sub-80xx_ses-01_task-RDMcolor_run-xx_bold.nii.gz - Participants respond to the majority color, ignoring motion (hard)
sub-80xx_ses-01_task-RDMlocalizer_run-xx_bold.nii.gz - Color and motion localizers (for order, see behavioral files)
See manuscript for full details (available at https://harrisonritz.github.io; publication forthcoming). Relevant methods section:
Twenty-nine individuals (17 females, Age: M = 21.2, SD = 3.4) provided informed consent and participated in this experiment for compensation ($40 USD; IRB approval code: 1606001539). All participants had self-reported normal color vision and no history of neurological disorders. Two participants missed one Attend-Color block (see below) due to a scanner removal, and one participant missed a motion localizer due to a technical failure, but all participants were retained for analysis. This study was approved by Brown University’s institutional review board.
The main task closely followed our previously reported behavioral experiment 10. On each trial, participants saw a random dot kinematogram (RDK) against a black background. This RDK consisted of colored dots that moved left or right, and participants responded to the stimulus with button presses using their left or right thumbs.
In Attend-Color blocks (six blocks of 150 trials), participants responded depending on which color was in the majority. Two colors were mapped to each response (four colors total), and dots were a mixture of one color from each possible response. Dot colors were approximately isoluminant (uncalibrated RGB: [239, 143, 143], [191, 239, 143], [143, 239, 239], [191, 143, 239]), and we counterbalanced their assignment to responses across participants.
In Attend-Motion blocks (six blocks of 45 trials), participants responded based on the dot motion instead of the dot color. Dot motion consisted of a mixture between dots moving coherently (either left or right) and dots moving in a random direction. Attend-Motion blocks were shorter because they acted to reinforce motion sensitivity and provide a test of stimulus-dependent effects.
Critically, dots always had color and motion, and we varied the strength of color coherence (percentage of dots in the majority) and motion coherence (percentage of dots moving coherently) across trials. Our previous experiments have found that in Attend-Color blocks, participants are still influenced by motion information, introducing a response conflict when color and motion are associated with different responses 10. Target coherence (e.g., color coherence during Attend-Color) was linearly spaced between 65% and 95% with 5 levels, and distractor congruence (signed coherence relative to the target response) was linearly spaced between -95% and 95% with 5 levels. In order to increase the salience of the motion dimension relative to the color dimension, the display was large (~10 degrees of visual angle) and dots moved quickly (~10 degrees of visual angle per second).
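For reference, the coherence grids described above, as a quick numpy illustration:

import numpy as np

target_coherence = np.linspace(65, 95, 5)          # 65, 72.5, 80, 87.5, 95 (% of dots)
distractor_congruence = np.linspace(-95, 95, 5)    # -95, -47.5, 0, 47.5, 95 (signed %)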
Participants had 1.5 seconds from the onset of the stimulus to make their response, and the RDK stayed on the screen for this full duration to avoid confounding reaction time and visual stimulation (the fixation cross changed from white to gray to register the response). The inter-trial interval was uniformly sampled from 1.0, 1.5, or 2.0 seconds. This ITI was relatively short in order to maximize the behavioral effect, and because efficiency simulations showed that it increased power to detect parametric effects of target and distractor coherence (e.g., relative to a more standard 5-second ITI). The fixation cross changed from gray to white for the last 0.5 seconds before the stimulus to provide an alerting cue.
Before the scanning session, participants provided consent and practiced the task in a mock MRI scanner. First, participants learned to associate four colors with two button presses (two colors for each response). After being instructed on the color-button mappings, participants practiced the task with feedback (correct, error, or 1.5 second time-out). Errors or time-out feedback were accompanied with a diagram of the color-button mappings. Participants performed 50 trials with full color coherence, and then 50 trials with variable color coherence, all with 0% motion coherence. Next, participants practiced the motion task. After being shown the motion mappings, participants performed 50 trials with full motion coherence, and then 50 trials with variable motion coherence, all with 0% color coherence. Finally, participants practiced 20 trials of the Attend-Color task and 20 trials of Attend-Motion tasks with variable color and motion coherence (same as scanner task).
Following the twelve blocks of the scanner task, participants underwent localizers for color and motion, based on the tasks used in our previous experiments 30. Both localizers were block designs, alternating between 16 seconds of feature present and 16 seconds of feature absent for seven cycles. For the color localizer, participants saw an aperture the same size as the task, either filled with colored squares that were resampled every second during stimulus-on (‘Mondrian stimulus’), or luminance-matched gray squares that were similarly resampled during stimulus-off. For the motion localizer, participants saw white dots that were moving with full coherence in a different direction every second during stimulus-on, or still dots for stimulus-off. No responses were required during the localizers.
We scanned participants with a Siemens Prisma 3T MR system. We used the following sequence parameters for our functional runs: field of view (FOV) = 211 mm × 211 mm (60 slices), voxel size = 2.4 mm³, repetition time (TR) = 1.2 s with interleaved multiband acquisitions (acceleration factor 4), echo time (TE) = 33 ms, and flip angle (FA) = 62°. Slices were acquired anterior to posterior, with an auto-aligned slice orientation tilted 15° relative to the AC/PC plane. At the start of the imaging session, we collected a high-resolution structural MPRAGE with the following sequence parameters: FOV = 205 mm × 205 mm (192 slices), voxel size = 0.8 mm³, TR = 2.4 s, TE1 = 1.86 ms, TE2 = 3.78 ms, TE3 = 5.7 ms, TE4 = 7.62 ms, and FA = 7°. At the end of the scan, we collected a field map for susceptibility distortion correction (TR = 588 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, FA = 60°).
We preprocessed our structural and functional data using fMRIPrep (v20.2.6), based on the Nipype platform. We used FreeSurfer and ANTs to nonlinearly register structural T1w images to the MNI152NLin6Asym template (resampling to 2 mm). To preprocess functional T2w images, we applied susceptibility distortion correction using fMRIPrep, co-registered our functional images to our T1w images using FreeSurfer, and slice-time corrected to the midpoint of the acquisition using AFNI. We then registered our images into MNI152NLin6Asym space using the transformation that ANTs computed for the T1w images, resampling our functional images in a single step. For univariate analyses, we smoothed our functional images using a Gaussian kernel (8 mm FWHM, as dACC responses often have a large spatial extent). For multivariate analyses, we worked in the unsmoothed template space (see below).
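A hedged nilearn sketch of the univariate smoothing step (the file names are placeholders and the use of nilearn is our assumption; the authors' exact implementation may differ):

from nilearn import image

func = image.load_img("sub-8001_task-RDMcolor_run-01_space-MNI152NLin6Asym_desc-preproc_bold.nii.gz")
func_smooth = image.smooth_img(func, fwhm=8)       # 8 mm FWHM Gaussian kernel
func_smooth.to_filename("sub-8001_task-RDMcolor_run-01_desc-smoothed_bold.nii.gz")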