14 datasets found
  1. THINGS-data: fMRI cortical surface flat maps

    • plus.figshare.com
    zip
    Updated Jun 1, 2023
    Cite
    Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker (2023). THINGS-data: fMRI cortical surface flat maps [Dataset]. http://doi.org/10.25452/figshare.plus.20496702.v1
    Available download formats: zip
    Dataset updated: Jun 1, 2023
    Dataset provided by: Figshare+
    Authors: Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Cortical flat maps for three subjects derived from the anatomical MRI images. Cortical surfaces were reconstructed from T1-weighted and T2-weighted anatomical images with FreeSurfer's recon-all procedure. Relaxation cuts were placed manually to allow for flattening of each hemisphere's surface. Results of any analysis of the fMRI data can be viewed on these flat maps with pycortex.
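    For orientation only, viewing an analysis result on these flat maps with pycortex might look like the sketch below; the subject name ("sub-01"), transform name ("align_auto"), and data shape are placeholders, not values shipped with this dataset.

      # Minimal pycortex sketch: overlay a voxelwise map on a flattened cortical surface.
      # Assumes the flat maps have been imported into the pycortex filestore as subject
      # "sub-01" with a functional-to-anatomical transform "align_auto" (both placeholders).
      import numpy as np
      import matplotlib.pyplot as plt
      import cortex

      data = np.random.randn(60, 80, 80)  # fake statistic map on the functional grid
      vol = cortex.Volume(data, "sub-01", "align_auto", vmin=-3, vmax=3, cmap="RdBu_r")
      cortex.quickshow(vol, with_curvature=True)  # render the flat map with data overlaid
      plt.show()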

    Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior

    See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151

  2. MIND: Multilingual Imaging Neuro Dataset

    • openneuro.org
    Updated Aug 6, 2025
    Cite
    Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta (2025). MIND: Multilingual Imaging Neuro Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006391.v2.0.0
    Dataset updated: Aug 6, 2025
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    MIND: Multilingual Imaging Neuro Dataset

    This repository contains structural and functional MRI data of 126 monolingual and bilingual participants with varying language backgrounds and proficiencies.

    This README is organized into two sections:

    1. Usage describes how one can go about recreating data derivatives and brain measures from start to finish.
    2. Directories gives information on the file structure of the dataset.

    If you just want access to the processed brain and language data, go to Quick Start.

    Usage

    There are two ways to approach this dataset. If you want to jump immediately into analyzing participants and their language profiles, then go to Quick Start. If instead you are looking to go from low-level MRI data to cleaned CSVs with various brain measure types, either to learn the process or to double-check our work, then go to Data Replication.

    Quick Start

    If you just want access to cleaned brain measure and language history data of 126 participants, they can be found in the language_background and processing_output folders (see Directories below).

    Each folder has a metadata.xlsx file that gives more information on the files and their fields. Have fun, go nuts.
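    As a purely illustrative sketch (not one of the dataset's own scripts), the cleaned files listed under Directories can be loaded and joined with pandas; the participant identifier column name is an assumption here, so check metadata.xlsx for the authoritative field names.

      # Illustrative only: load and join the cleaned language and brain measure CSVs.
      # File names come from the Directories section; the "participant_id" column name is assumed.
      import pandas as pd

      lang = pd.read_csv("language_background/language_background.csv")    # language history
      multi = pd.read_csv("language_background/multilingual_measure.csv")  # composite multilingualism score
      thick = pd.read_csv("processing_output/sbm_thickness.csv")           # cortical thickness (SBM)

      df = lang.merge(multi, on="participant_id").merge(thick, on="participant_id")
      print(df.shape)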

    Data Replication

    If you are looking to go through the steps required to create the data from start to finish, we first start with the raw structural and functional MRI data, which can be found in ./sub-EBE{XXXX}. Information on the data in this folder, which follows BIDS, can be found here.

    The data in ./sub-EBE{XXXX} is then inputted into various processing pipelines, the versions for which can be found at Dependency versions. The following processing pipelines are used:

    • fMRIprep

      fMRIprep is a neuroimaging processing tool used for task-based and resting-state fMRI data. fMRIprep is not used directly to create brain measure CSVs used in analysis, but instead creates processed fMRI data used in the CONN toolbox. For more information on fMRIprep and how to use it, click here.

    • CAT12

      We use the CAT12 toolbox, which stands for Computational Anatomy Toolbox, to calculate brain region volumes using voxel-based morphometry (VBM). CAT12 works through SPM12 and Matlab, and requires that both be installed. We have included the Matlab scripts used to create the files in ./derivatives/CAT12 in preprocessing_scripts/cat12. To use it, install the necessary dependencies (CAT12, SPM12, and Matlab) and run preprocessing_scripts/cat12/CAT12_segmentation_n2.m in Matlab. You will also need to update the local path to Matlab on lines 12, 24, and 41. For more information on CAT12 and how to use it to calculate brain region volumes using VBM, click here.

    • CONN

      CONN is a functional connectivity toolbox, which we used to generate participant brain connectivity measures. CONN requires first that you run the fMRIprep pipeline, as it uses some of fMRIprep's outputs as input. Like CAT12, CONN works through SPM12 and Matlab and requires that both be installed. For more information on CONN and how to use it, click here.

    • FDT

      We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion weighted images. For more information on FDT and how to use it, click here.

    • Freesurfer

      FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes and cortical thickness through surface-based morphometry (SBM). For more information on Freesurfer and how to use it, click here.


    The results from these pipelines, which use the data in ./sub-EBE{XXXX} as input, are then outputted into folders in ./derivatives. For information on which folder stores each pipeline result, see Directories.

    After running these pipelines, we can take their outputs and convert them into CSVs for analysis. To do this, we use preprocessing_scripts/brain_data_preprocessing.ipynb. This Python notebook takes the data in ./derivatives as input and outputs CSVs to processing_output. Outputted from this notebook are CSVs containing brain volumes, cortical thicknesses, fractional anisotropy values, and connectivity measures. Information on the outputted CSVs can be found at processing_output/metadata.xlsx.

    Dependency versions

    1. MATLAB v. R2023a
    2. SPM12
    3. CAT12 v8.2
    4. CONN v22a
    5. FSL v6.0.2
    6. Freesurfer v7.4.1
    7. fMRIprep v23.0.2

    Chen, Salvadore, & Blanco-Elorrieta Paper Replication

    Also included in this dataset is code used in the analyses of Chen, Salvadore, & Blanco-Elorrieta (submitted). If you are interested in running analyses used in that paper, see the README in chen_salvadore_elorrieta/code.


    Directories

    • participants.tsv: Subject demographic information.
    • participants.json: Describes participants.tsv.

    • sub-EBE

      Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier. For more information on the subfolders, see BIDS information here.

    • derivatives

      This directory contains outputs of common processing pipelines run on the raw MRI data from ./sub-EBE{XXXX}.

      • CAT12

        Results of the CAT12 toolbox, which stands for Computational Anatomy Toolbox, and is used to calculate brain region volumes using voxel-based morphometry (VBM).

      • conn

        Results of the CONN toolbox, used to generate data on functional connectivity from brain fMRI sequences.

      • fdt

        Results of the FMRIB's Diffusion Toolbox (FDT), used for extracting values from diffusion weighted images.

      • fMRIprep

        Results from fMRIprep, a preprocessing pipeline for task-based and resting-state functional MRI data.

      • freesurfer

        Results from FreeSurfer, a software package for the analysis and visualization of structural and functional neuroimaging data.

    • language_background

      Participant information is kept on the first level of the dataset and includes information on language history, demographics, and their composite multilingualism score. Below is a list of all participant information files.

      • language_background.csv: Full subject language information and history.

      • metadata.xlsx: Metadata on each file in this directory.

      • multilingual_measure.csv: Each participant’s composite multilingualism score specified in Chen & Blanco-Elorrieta (in review).

    • processing_output

      This directory contains processed brain measure data for brain volumes, cortical thickness, FA, and connectivity. The CSVs are created from scripts in the directory processing_scripts using files in the derivatives directory as input. Descriptions of each file can be found below.

      • connectivity_network.csv: Contains 36 Network-to-Network connectivity values for each participant.

      • connectivity_roi.csv: Contains 13,336 ROI-to-ROI connectivity values for each participant.

      • dti.csv: Contains averaged white matter FA values for 76 brain regions for each participant based on Diffusion tensor imaging.

      • metadata.xlsx: Metadata on each file in this directory.

      • sbm_thickness.csv: Contains cortical thickness values for 68 brain regions for each participant based on Surface-based morphometry.

      • sbm_volume.csv: Contains volume values for 165 brain regions for each participant based on Surface-based morphometry.

      • tiv.csv: Contains two total intracranial volumes for each subject, calculated using SBM and VBM, respectively.

      • vbm_volume.csv: Contains volume values for 153 brain regions for each participant based on Voxel-based morphometry.

    • preprocessing_scripts

      Code involved in processing raw MRI data.

      • brain_data_preprocessing.ipynb: Python notebook used to create CSVs with brain measure values used in analyses. For more information on the code and how to use it, read Data Replication.
      • raw_mri_preprocessing: Scripts used to create some files in the ./derivatives folder from raw MRI data in ./sub-EBE{XXXX}. For more information on the scripts, read Data Replication.
      • toolbox_outputs: Intermediary files created and used by analysis/processing_scripts/brain_data_preprocessing.ipynb.

  3. Data from: An fMRI Dataset on Occluded Image Interpretation for Human Amodal Completion Research

    • openneuro.org
    Updated Feb 14, 2025
    Cite
    Bao Li; Li Tong; Chi Zhang; Panpan Chen; Long Cao; Hui Gao; Ziya Yu; Linyuan Wang; Bin Yan (2025). An fMRI Dataset on Occluded Image Interpretation for Human Amodal Completion Research [Dataset]. http://doi.org/10.18112/openneuro.ds005226.v1.0.5
    Dataset updated: Feb 14, 2025
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Bao Li; Li Tong; Chi Zhang; Panpan Chen; Long Cao; Hui Gao; Ziya Yu; Linyuan Wang; Bin Yan
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Summary

    In everyday environments, partially occluded objects are more common than fully visible ones. Despite their visual incompleteness, the human brain can reconstruct these objects to form coherent perceptual representations, a phenomenon referred to as amodal completion. However, current computer vision systems still struggle to accurately infer the hidden portions of occluded objects. While the neural mechanisms underlying amodal completion have been partially explored, existing findings often lack consistency, likely due to limited sample sizes and varied stimulus materials. To address these gaps, we introduce a novel fMRI dataset, the Occluded Image Interpretation Dataset (OIID), which captures human perception during image interpretation under different levels of occlusion. This dataset includes fMRI responses and behavioral data from 65 participants. The OIID enables researchers to identify the brain regions involved in processing occluded images and to examine individual differences in functional responses. Our work contributes to a deeper understanding of how the human brain interprets incomplete visual information and offers valuable insights for advancing both theoretical research and related practical applications in amodal completion fields.

    Data record

    The data were organized according to the Brain-Imaging-Data-Structure (BIDS) Specification version 1.0.2 and can be accessed from the OpenNeuro public repository (accession number: XXX). In short, raw data of each subject were stored in “sub-

    Stimulus The stimuli for the different fMRI experiments are stored as “stimuli”.

    Raw MRI data Each participant's folder is comprised of 2 session folders: “sub-

    Freesurfer recon-all The results of reconstructing the Cortical Surface were saved as “derivatives/recon-all-FreeSurfer/sub-

    Pre-processed volume data from pre-processing The pre-processed volume-based fMRI data were saved as“pre-processed_volume_data/sub-

    Preprocessed surface-based data The pre-processed surface-based data were saved as “derivatives/volumetosurface/sub-

    The technical validation of the NFED The results of the technical validation were saved as “derivatives/validation/results” for each participant. The code of the technical validation was saved as “derivatives/validation/code”.

  4. Data from: An fMRI dataset in response to large-scale short natural dynamic facial expression videos

    • openneuro.org
    Updated Oct 15, 2024
    Cite
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan (2024). An fMRI dataset in response to large-scale short natural dynamic facial expression videos [Dataset]. http://doi.org/10.18112/openneuro.ds005047.v1.0.6
    Dataset updated: Oct 15, 2024
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Summary

    Facial expression is among the most natural methods for human beings to convey their emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied employing lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, this type of data specifically on large-scale natural facial expression videos is currently missing. We describe here the natural Facial Expressions Dataset (NFED), an fMRI dataset including responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimuli features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.

    Data Records

    The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-

    Stimulus. Distinct folders store the stimuli for distinct fMRI experiments: "stimuli/face-video", "stimuli/floc", and "stimuli/prf" (Fig. 2b). The category labels and metadata corresponding to video stimuli are stored in the "videos-stimuli_category_metadata.tsv”. The “videos-stimuli_description.json” file describes category and metadata information of video stimuli (Fig. 2b).

    Raw MRI data. Each participant's folder is comprised of 11 session folders: “sub-

    Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-

    Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-

    FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-

    Surface-based GLM analysis data. We have conducted GLMsingle on the data of the main experiment. There is a file named “sub--

    Validation. The code of technical validation was saved in the “derivatives/validation/code” folder. The results of technical validation were saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” describes the detailed information of code and results.

  5. Data_Sheet_1_A Library for fMRI Real-Time Processing Systems in Python...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated Jun 1, 2023
    Cite
    Masaya Misaki; Jerzy Bodurka; Martin P. Paulus (2023). Data_Sheet_1_A Library for fMRI Real-Time Processing Systems in Python (RTPSpy) With Comprehensive Online Noise Reduction, Fast and Accurate Anatomical Image Processing, and Online Processing Simulation.pdf [Dataset]. http://doi.org/10.3389/fnins.2022.834827.s001
    Available download formats: pdf
    Dataset updated: Jun 1, 2023
    Dataset provided by: Frontiers Media (http://www.frontiersin.org/)
    Authors: Masaya Misaki; Jerzy Bodurka; Martin P. Paulus
    License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Real-time fMRI (rtfMRI) has enormous potential for both mechanistic brain imaging studies and treatment-oriented neuromodulation. However, the adoption of rtfMRI has been limited due to technical difficulties in implementing an efficient computational framework. Here, we introduce a Python library for real-time fMRI (rtfMRI) data processing systems, Real-Time Processing System in Python (RTPSpy), to provide building blocks for a custom rtfMRI application with extensive and advanced functionalities. RTPSpy is a library package including (1) fast, comprehensive, and flexible online fMRI image processing modules comparable to offline denoising, (2) utilities for fast and accurate anatomical image processing to define an anatomical target region, (3) a simulation system of online fMRI processing to optimize a pipeline and target signal calculation, (4) a simple interface to an external application for feedback presentation, and (5) a boilerplate graphical user interface (GUI) integrating operations with the RTPSpy library. The fast and accurate anatomical image processing utility wraps external tools, including FastSurfer, ANTs, and AFNI, to make tissue segmentation and region of interest masks. We confirmed that the quality of the output masks was comparable with FreeSurfer, and the anatomical image processing could complete in a few minutes. The modular nature of RTPSpy provides the ability to use it for a simulation analysis to optimize a processing pipeline and target signal calculation. We present a sample script for building a real-time processing pipeline and running a simulation using RTPSpy. The library also offers a simple signal exchange mechanism with an external application using a TCP/IP socket. While the main components of RTPSpy are the library modules, we also provide a GUI class for easy access to the RTPSpy functions. The boilerplate GUI application provided with the package allows users to develop a customized rtfMRI application with minimum scripting labor. The limitations of the package as they relate to environment-specific implementations are discussed. These library components can be customized and can be used in parts. Taken together, RTPSpy is an efficient and adaptable option for developing rtfMRI applications. Code available at: https://github.com/mamisaki/RTPSpy
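    The TCP/IP feedback exchange mentioned above can be pictured with a generic sketch like the following; this is not RTPSpy's actual API or wire protocol, just an illustration of how an external presentation program might receive one feedback value per line over a socket (host, port, and message format are assumptions).

      # Illustrative only: a minimal TCP listener an external feedback application might run.
      # The port number and the "one float per newline-terminated line" format are assumptions.
      import socket

      HOST, PORT = "127.0.0.1", 55555  # placeholder address

      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
          srv.bind((HOST, PORT))
          srv.listen(1)
          conn, _ = srv.accept()
          with conn:
              buffer = b""
              while True:
                  chunk = conn.recv(1024)
                  if not chunk:
                      break
                  buffer += chunk
                  while b"\n" in buffer:
                      line, buffer = buffer.split(b"\n", 1)
                      value = float(line)              # one feedback value per line
                      print(f"feedback: {value:.3f}")  # update the display here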

  6. BOLD5000 Additional ROIs and RDMs for neural network research

    • search.dataone.org
    • data.niaid.nih.gov
    • +3 more
    Updated Jun 22, 2024
    Cite
    William Pickard; Kelsey Sikes; Huma Jamil; Nicholas Chaffee; Nathaniel Blanchard; Michael Kirby; Chris Peterson (2024). BOLD5000 Additional ROIs and RDMs for neural network research [Dataset]. http://doi.org/10.5061/dryad.wpzgmsbtr
    Dataset updated: Jun 22, 2024
    Dataset provided by: Dryad Digital Repository
    Authors: William Pickard; Kelsey Sikes; Huma Jamil; Nicholas Chaffee; Nathaniel Blanchard; Michael Kirby; Chris Peterson
    Time period covered: Jan 1, 2023
    Description

    Artificial neural networks (ANNs) are sensitive to perturbations and adversarial attacks. One hypothesized solution to adversarial robustness is to align manifolds in the embedded space of neural networks with biologically grounded manifolds. Recent state-of-the-art works that emphasize learning robust neural representations, rather than optimizing for a specific target task like classification, support the idea that researchers should investigate this hypothesis. While works have shown that fine-tuning ANNs to coincide with biological vision does increase robustness to both perturbations and adversarial attacks, these works have relied on proprietary datasets; the lack of publicly available biological benchmarks makes it difficult to evaluate the efficacy of these claims. Here, we deliver a curated dataset consisting of biological representations of images taken from two commonly used computer vision datasets, ImageNet and COCO, that can be easily integrated into model training and eva…

    BOLD5000 Additional ROIs and RDMs for Neural Network Research

    This dataset is made available as part of the publication of the following journal article:

    Pickard W, Sikes K, Jamil H, Chaffee N, Blanchard N, Kirby M and Peterson C (2023) Exploring fMRI RDMs: enhancing model robustness through neurobiological data. Front. Comput. Sci. 5:1275026. doi: 10.3389/fcomp.2023.1275026

    Description of the data and file structure

    This dataset is a derivative of the BOLD5000 Release 2.0. Additional post-processing steps were performed to make the data more accessible in machine learning (ML) research using representational similarity analysis (RSA).
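    For readers new to RSA, the kind of RDM distributed here can be sketched from an ROI's stimulus-by-voxel response matrix as below; the correlation-distance metric and the array shapes are illustrative assumptions, not necessarily the exact choices used for this dataset.

      # Sketch: build a representational dissimilarity matrix (RDM) from ROI responses.
      # `responses` is a hypothetical (n_stimuli x n_voxels) array of response estimates.
      import numpy as np
      from scipy.spatial.distance import pdist, squareform

      rng = np.random.default_rng(0)
      responses = rng.standard_normal((50, 300))  # placeholder data

      # Correlation distance (1 - Pearson r) between every pair of stimulus patterns
      rdm = squareform(pdist(responses, metric="correlation"))
      print(rdm.shape)  # (50, 50), symmetric, zeros on the diagonal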

    As a general overview, the following additional post-processing steps were performed with the results made available here:

    1. New cortical regions of interest (ROIs) were defined for each subject using vcAtlas and visfAtlas.

    Freesurfer was used to create the new ROIs. Freesurfer derivatives for...

  7. Structural and functional connectomes from 27 schizophrenic patients and 27 matched healthy adults

    • zenodo.org
    bin, xls
    Updated Apr 24, 2020
    Cite
    Jakub Vohryzek; Yasser Aleman-Gomez; Alessandra Griffa; Jeni Raoul; Martine Cleusix; Philipp S. Baumann; Philippe Conus; Kim Do Cuenod; Patric Hagmann (2020). Structural and functional connectomes from 27 schizophrenic patients and 27 matched healthy adults [Dataset]. http://doi.org/10.5281/zenodo.3758534
    Available download formats: bin, xls
    Dataset updated: Apr 24, 2020
    Dataset provided by: Zenodo (http://zenodo.org/)
    Authors: Jakub Vohryzek; Yasser Aleman-Gomez; Alessandra Griffa; Jeni Raoul; Martine Cleusix; Philipp S. Baumann; Philippe Conus; Kim Do Cuenod; Patric Hagmann
    License: Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Data Acquisition

    The cohort consists of a total of 27 healthy participants (age 35 ± 6.8 years) and 27 schizophrenic patients (age 41 ± 9.6), scanned in a 3-Tesla MRI scanner (Trio, Siemens Medical, Germany) using a 32-channel head coil. The schizophrenic patients are from the Service of General Psychiatry at the Lausanne University Hospital (CHUV). All of them were diagnosed with schizophrenic and schizoaffective disorders after meeting the DSM-IV criteria (American Psychiatric Association (2000): Diagnostic and Statistical Manual of Mental Disorders, 4th ed. DSM-IV-TR. American Psychiatric Pub, Arlington, VA22209, USA). The Diagnostic Interview for Genetic Studies assessment was used to recruit the healthy controls (Preisig et al. 1999). 24 out of the 27 patients were under medication, with a mean chlorpromazine equivalent dose (CPZ) of 431 ± 288 mg. Written consent was obtained from all subjects in accordance with institutional guidelines of the Ethics Committee of Clinical Research of the Faculty of Biology and Medicine, University of Lausanne, Switzerland (#82/14, #382/11, #26.4.2005). All subjects were fully anonymised.

    The session protocol consisted of (1) a magnetization-prepared rapid acquisition gradient echo (MPRAGE) sequence sensitive to white/gray matter contrast (1-mm in-plane resolution, 1.2-mm slice thickness), (2) a Diffusion Spectrum Imaging (DSI) sequence (128 diffusion-weighted volumes and a single b0 volume, maximum b-value 8,000 s/mm2, 2.2x2.2x3.0 mm voxel size), and (3) a gradient echo EPI sequence sensitive to BOLD contrast (3.3-mm in-plane resolution and slice thickness with a 0.3-mm gap, TE 30 ms, TR 1,920 ms, resulting in 280 images per participant). During the fMRI scan, participants were not engaged in any overt task, and the scan was treated as eyes-open resting-state fMRI (rs-fMRI).

    Data Pre-processing

    Initial signal processing of all MPRAGE, DSI, and rs-fMRI data was performed using the Connectome Mapper pipeline (Daducci et al. 2012). Grey and white matter were segmented from the MPRAGE volume using freesurfer (Desikan et al. 2006) and parcellated into 83 cortical and subcortical areas. The parcels were then further subdivided into 129, 234, 463 and 1015 approximately equally sized parcels according to the Lausanne anatomical atlas following the method proposed by (Cammoun et al. 2012). DSI data were reconstructed following the protocol described by (Wedeen et al. 2005), allowing us to estimate multiple diffusion directions per voxel. The diffusion probability density function was reconstructed as the discrete 3D Fourier transform of the signal modulus. The orientation distribution function (ODF) was calculated as the radial summation of the normalized 3D probability distribution function. Thus, the ODF is defined on a discrete sphere and captures the diffusion intensity in every direction.

    Structural Connectivity

    Structural connectivity matrices were estimated for individual participants using deterministic streamline tractography on reconstructed DSI data, initiating 32 streamline propagations per diffusion direction, per white matter voxel (Wedeen et al. 2008). Structural connectivity between pairs of regions was measured in terms of fiber density, defined as the number of streamlines between the two regions, normalized by the average length of the streamlines and average surface area of the two regions (Hagmann et al. 2008). The goal of this normalization was to compensate for the bias toward longer fibers inherent in the tractography procedure, as well as differences in region size. The number of fibers and fiber length were also included in the dataset. For the quantitative measure of structural connectivity, the generalised fractional anisotropy (gFA, Tuch et al. 2004) and average apparent diffusion coefficient (ADC, Sener et al. 2001) were also computed for each tract.
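    As a rough illustration of the fiber-density definition above (not the Connectome Mapper code), the normalization can be written as follows; all input values are hypothetical.

      # Sketch of the fiber-density normalization described in the text: streamline count
      # normalized by mean streamline length and the average surface area of the two regions.
      def fiber_density(n_streamlines, mean_length_mm, area_a_mm2, area_b_mm2):
          return n_streamlines / (mean_length_mm * 0.5 * (area_a_mm2 + area_b_mm2))

      # Example with made-up values: 120 streamlines, mean length 45 mm, areas 800 and 600 mm^2
      print(fiber_density(120, 45.0, 800.0, 600.0))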

    Functional Connectivity

    Functional data were pre-processed using routines designed to facilitate subsequent network exploration (Murphy et al. 2009, Power et al. 2012). The first four time points were excluded from subsequent analysis to allow the time series to stabilize. The signal was linearly detrended, and physiological (white-matter and cerebrospinal fluid regressors) and motion (three translational and three rotational regressors) confounds were regressed out. Then, the signal was spatially smoothed and bandpass-filtered between 0.01 and 0.1 Hz with a Hamming-windowed sinc FIR filter. To obtain the brain regions for the different atlas scales, the signal was linearly registered to the MPRAGE image and averaged within a given region (Jenkinson et al. 2012). Functional matrices were obtained by computing Pearson’s correlation between the individual pairs of regions. All of the above was carried out in the subject’s native space (Daducci et al. 2012, Griffa et al. 2017).
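    The final correlation step can be pictured with a small sketch; the array sizes follow the acquisition described above (280 volumes minus the first 4, 83 regions at the coarsest scale), but the data are random placeholders.

      # Sketch of the functional-connectivity step: Pearson correlation between
      # region-averaged BOLD time series (random placeholder data).
      import numpy as np

      rng = np.random.default_rng(1)
      ts = rng.standard_normal((276, 83))  # time points x regions

      fc = np.corrcoef(ts.T)  # 83 x 83 functional connectivity matrix
      print(fc.shape)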

    A FreeSurfer "bert" cortical rendering for the 5 scales of the Lausanne2008 atlas is available at https://github.com/jvohryzek/bert4lausanne2008.

  8. Predictive utility of task-related functional connectivity vs. voxel activation - Data and code archive

    • datadryad.org
    zip
    Updated Dec 9, 2021
    Cite
    Christian Habeck; Qolamreza Razlighi; Yaakov Stern (2021). Predictive utility of task-related functional connectivity vs. voxel activation - Data and code archive [Dataset]. http://doi.org/10.5061/dryad.k3j9kd56k
    Available download formats: zip
    Dataset updated: Dec 9, 2021
    Dataset provided by: Dryad
    Authors: Christian Habeck; Qolamreza Razlighi; Yaakov Stern
    Time period covered: Nov 26, 2021
    Description

    Standard fMRI data during performance of 12 tasks in the scanner, resulting in activation and functional connectivity maps.

    Structural data were processed with FreeSurfer.

    All details of acquisition, pre-processing, and data analysis are given in the original PLOS article.

  9. BAD: Bilingual Adaptations Dataset

    • openneuro.org
    Updated Jul 15, 2025
    Cite
    Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta (2025). BAD: Bilingual Adaptations Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds006391.v1.0.0
    Dataset updated: Jul 15, 2025
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Xuanyi Jessica Chen; Maxwell Salvadore; Esti Blanco-Elorrieta
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    README

    This repository contains raw MRI data of 127 subjects with varying language backgrounds and proficiencies. Below is a detailed outline of the file structure used:


    sub-EBE****

    Each of these directories contains the BIDS-formatted anatomical and functional MRI data, with the name of the directory corresponding to the subject's unique identifier.

    For more information on the subdirectories, see BIDS information at https://bids-specification.readthedocs.io/en/stable/appendices/entity-table.html


    derivatives

    This directory contains outputs of common processing pipelines run on the raw MRI data from "data/sub-EBE****".

    derivatives/CAT12

    These are the results of the CAT12 toolbox, which stands for Computational Anatomy Toolbox, and is used to calculate brain region volumes using voxel-based morphometry (VBM). The following downloads are required for this process:

    1. MATLAB v. R2023a (https://www.mathworks.com/products/new_products/release2023a.html)
    2. SPM (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/)
    3. CAT12 (https://neuro-jena.github.io/cat/index.html#DOWNLOAD)

    derivatives/conn

    CONN is used to generate data on functional connectivity from brain fMRI sequences. The following downloads are required for this process:

    1. MATLAB v. R2023a (https://www.mathworks.com/products/new_products/release2023a.html)
    2. SPM (https://www.fil.ion.ucl.ac.uk/spm/software/spm12/)
    3. CAT12 (https://neuro-jena.github.io/cat/index.html#DOWNLOAD)
    4. Conn – MATLAB toolbox for functional connectivity (https://web.conn-toolbox.org/)

    derivatives/fdt

    We used FMRIB's Diffusion Toolbox (FDT) for extracting values from diffusion weighted images. To use FDT, you need to download the following modules through CLI:

    1. module load fsl/6.0.2
    2. module load freesurfer/7.4.1

    For more information on the toolbox, visit https://fsl.fmrib.ox.ac.uk/fsl/docs/#/diffusion/index.

    derivatives/fMRIprep

    fMRIprep is a preprocessing pipeline for task-based and resting-state functional MRI. We use it to generate data for the connectivity analysis.

    We used fMRIprep v23.0.2. For more information, visit https://fmriprep.org/en/stable/index.html.

    derivatives/freesurfer

    FreeSurfer is a software package for the analysis and visualization of structural and functional neuroimaging data, which we use to extract region volumes through surface-based morphometry (SBM).

    We used freesurfer v7.4.1. For more information, visit https://surfer.nmr.mgh.harvard.edu/fswiki.



    analysis/

    This directory contains data and code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted).

    analysis/code

    This directory contains python and R code used in the analysis of Chen, Salvadore, Blanco-Elorrieta (submitted), with each python notebook corresponding to a different part of the paper's analysis. For more details on each file and subdirectories, see "analysis/code/README.md".

    analysis/participant_data

    This directory contains language data on each subject, including a composite multilingualism score from Chen & Blanco-Elorrieta (submitted), information on language knowledge, exposure, mixing, use in education, and family members’ language ability in the participants’ known languages from early childhood to the present day. For more information on the files and their fields, see "analysis/participant_data/metadata.xlsx".

    analysis/processed_mri_data

    This directory contains MRI data, both anatomical and functional, that is the final result of processing raw MRI data. This includes brain volumes, cortical thickness, fractional anisotropy values, and connectivity measures. For more information on the files within this directory, see "analysis/processed_mri_data/metadata.xlsx".

  10. Audio Cartography

    • openneuro.org
    Updated Aug 8, 2020
    Cite
    Megen Brittell (2020). Audio Cartography [Dataset]. http://doi.org/10.18112/openneuro.ds001415.v1.0.0
    Dataset updated: Aug 8, 2020
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Megen Brittell
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The Audio Cartography project investigated the influence of temporal arrangement on the interpretation of information from a simple spatial data set. I designed and implemented three auditory map types (audio types), and evaluated differences in the responses to those audio types.

    The three audio types represented simplified raster data (eight rows x eight columns). First, a "sequential" representation read values one at a time from each cell of the raster, following an English reading order, and encoded the data value as the loudness of a single fixed-duration and fixed-frequency note. Second, an augmented-sequential ("augmented") representation used the same reading order, but encoded the data value as volume, the row as frequency, and the column as the rate at which the notes played (constant total cell duration). Third, a "concurrent" representation used the same encoding as the augmented type, but allowed the notes to overlap in time.

    Participants completed a training session in a computer-lab setting, where they were introduced to the audio types and practiced making a comparison between data values at two locations within the display based on what they heard. The training sessions, including associated paperwork, lasted up to one hour. In a second study session, participants listened to the auditory maps and made decisions about the data they represented while the fMRI scanner recorded digital brain images.

    The task consisted of listening to an auditory representation of geospatial data ("map"), and then making a decision about the relative values of data at two specified locations. After listening to the map ("listen"), a graphic depicted two locations within a square (white background). Each location was marked with a small square (size: 2x2 grid cells); one square had a black solid outline and transparent black fill, the other had a red dashed outline and transparent red fill. The decision ("response") was made under one of two conditions. Under the active listening condition ("active") the map was played a second time while participants made their decision; in the memory condition ("memory"), a decision was made in relative quiet (general scanner noises and intermittent acquisition noise persisted). During the initial map listening, participants were aware of neither the locations of the response options within the map extent, nor the response conditions under which they would make their decision. Participants could respond any time after the graphic was displayed; once a response was entered, the playback stopped (active response condition only) and the presentation continued to the next trial.

    Data was collected in accordance with a protocol approved by the Institutional Review Board at the University of Oregon.

    • Additional details about the specific maps used in this study are available through University of Oregon's ScholarsBank (DOI 10.7264/3b49-tr85).

    • Details of the design process and evaluation are provided in the associated dissertation, which is available from ProQuest and University of Oregon's ScholarsBank.

    • Scripts that created the experimental stimuli and automated processing are available through University of Oregon's ScholarsBank (DOI 10.7264/3b49-tr85).

    Preparation of fMRI Data

    Conversion of the DICOM files produced by the scanner to NIfTI format was performed by MRIConvert (LCNI). Orientation to standard axes was performed and recorded in the NIfTI header (FMRIB, fslreorient2std). The excess slices in the anatomical images that represented tissue in the neck were trimmed (FMRIB, robustfov). Participant identity was protected through automated defacing of the anatomical data (FreeSurfer, mri_deface), with additional post-processing to ensure that no brain voxels were erroneously removed from the image (FMRIB, BET; brain mask dilated with three iterations of "fslmaths -dilM").
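    A rough sketch of this anonymization sequence, expressed as shell calls from Python, is shown below; the file names are placeholders and only the options explicitly named above are reproduced, so the exact invocations used for the dataset may differ.

      # Rough sketch of the anatomical anonymization steps described above.
      # File names are placeholders; the mri_deface templates are the standard ones it ships with.
      import subprocess

      def run(cmd):
          print(" ".join(cmd))
          subprocess.run(cmd, check=True)

      anat = "sub-01_T1w.nii.gz"  # placeholder input

      run(["fslreorient2std", anat, "anat_std.nii.gz"])                      # orient to standard axes
      run(["robustfov", "-i", "anat_std.nii.gz", "-r", "anat_fov.nii.gz"])   # trim neck slices
      run(["mri_deface", "anat_fov.nii.gz", "talairach_mixed_with_skull.gca",
           "face.gca", "anat_defaced.nii.gz"])                               # automated defacing
      run(["bet", "anat_fov.nii.gz", "anat_brain.nii.gz", "-m"])             # brain mask (BET)
      run(["fslmaths", "anat_brain_mask.nii.gz", "-dilM", "-dilM", "-dilM",
           "mask_dil.nii.gz"])                                               # dilate mask, 3 iterations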

    Preparation of Metadata

    The dcm2niix tool (Rorden) was used to create draft JSON sidecar files with metadata extracted from the DICOM headers. The draft sidecar files were revised to augment the JSON elements with additional tags (e.g., "Orientation" and "TaskDescription") and to make a more human-friendly version of tag contents (e.g., "InstitutionAddress" and "DepartmentName"). The device serial number was constant throughout the data collection (i.e., all data collection was conducted on the same scanner), and the respective metadata values were replaced with an anonymous identifier: "Scanner1".
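    Purely as an illustration of that kind of sidecar edit (the file name and tag values below are invented; only the tag names come from the text above):

      # Illustration of augmenting a dcm2niix JSON sidecar with extra metadata tags.
      import json

      path = "sub-NN_task-map_bold.json"  # placeholder sidecar file

      with open(path) as f:
          sidecar = json.load(f)

      sidecar["TaskDescription"] = "Listen to an auditory map, then compare two locations."  # invented value
      sidecar["Orientation"] = "Reoriented to standard axes (fslreorient2std)"               # invented value
      sidecar["DeviceSerialNumber"] = "Scanner1"  # anonymized identifier, as described above

      with open(path, "w") as f:
          json.dump(sidecar, f, indent=2)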

    Preparation of Behavioral Data

    The stimuli consisted of eighteen auditory maps. Spatial data were generated with the rgeos, sp, and spatstat libraries in R; auditory maps were rendered with the Pyo (Belanger) library for Python and prepared for presentation in Audacity. Stimuli were presented using PsychoPy (Peirce, 2007), which produced log files from which event details were extracted. The log files included timestamped entries for stimulus timing and trigger pulses from the scanner.

    • Log files are available in "sourcedata/behavioral".
    • Extracted event details accompany BOLD images in "sub-NN/func/*events.tsv".
    • Three column explanatory variable files are in "derivatives/ev/sub-NN".

    References

    Audacity® software is copyright © 1999-2018 Audacity Team. Web site: https://audacityteam.org/. The name Audacity® is a registered trademark of Dominic Mazzoni.

    FMRIB (Functional Magnetic Resonance Imaging of the Brain). FMRIB Software Library (FSL; fslreorient2std, robustfov, BET). Oxford, v5.0.9, Available: https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/

    FreeSurfer (mri_deface). Harvard, v1.22, Available: https://surfer.nmr.mgh.harvard.edu/fswiki/AutomatedDefacingTools

    LCNI (Lewis Center for Neuroimaging). MRIConvert (mcverter), v2.1.0 build 440, Available: https://lcni.uoregon.edu/downloads/mriconvert/mriconvert-and-mcverter

    Peirce, JW. PsychoPy–psychophysics software in Python. Journal of Neuroscience Methods, 162(1–2):8 – 13, 2007. Software Available: http://www.psychopy.org/

    Python software is copyright © 2001-2015 Python Software Foundation. Web site: https://www.python.org

    Pyo software is copyright © 2009-2015 Olivier Belanger. Web site: http://ajaxsoundstudio.com/software/pyo/.

    R Core Team (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Available: https://www.R-project.org/.

    rgeos software is copyright © 2016 Bivand and Rundel. Web site: https://CRAN.R-project.org/package=rgeos

    Rorden, C. dcm2niix, v1.0.20171215, Available: https://github.com/rordenlab/dcm2niix

    spatstat software is copyright © 2016 Baddeley, Rubak, and Turner. Web site: https://CRAN.R-project.org/package=spatstat

    sp software is copyright © 2016 Pebesma and Bivand. Web site: https://CRAN.R-project.org/package=sp

  11. TODO: name of the dataset

    • openneuro.org
    Updated Sep 25, 2024
    Cite
    TODO:; First1 Last1; First2 Last2; ... (2024). TODO: name of the dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005295.v1.0.1
    Dataset updated: Sep 25, 2024
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: TODO:; First1 Last1; First2 Last2; ...
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Data for Reading reshapes stimulus selectivity in the visual word form area

    This contains the raw and pre-processed fMRI data and structural images (T1) used in the article, "Reading reshapes stimulus selectivity in the visual word form area." The preprint is available here, and the article will be in press at eNeuro.

    Additional processed data and analysis code are available in an OSF repository.

    Details about the study are included here.

    Participants

    We recruited 17 participants (Age range 19 to 38, 21.12 ± 4.44, 4 self-identified as male, 1 left-handed) from the Barnard College and Columbia University student body. The study was approved by the Internal Review Board at Barnard College, Columbia University. All participants provided written informed consent, acquired digitally, and were monetarily compensated for their participation. All participants had learned English before the age of five.

    To ensure high data quality, we used the following criteria for excluding functional runs and participants. If the participant moved by a distance greater than 2 voxels (4 mm) within a single run, that run was excluded from analysis. Additionally, if the participant responded in less than 50% of the trials in the main experiment, that run was removed. Finally, if half or more of the runs met any of these criteria for a single participant, that participant was dropped from the dataset. Using these constraints, the analysis reported here is based on data from 16 participants. They ranged in age from 19 to 38 years (mean = 21.12 ± 4.58). 4 participants self-identified as male, and 1 was left-handed. A total of 6 runs were removed from three of the remaining participants due to excessive head motion.
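    These criteria are straightforward to express in code; a hypothetical sketch (data structures and field names invented for illustration) is:

      # Hypothetical sketch of the run/participant exclusion rules described above.
      def exclude_runs(runs, voxel_size_mm=2.0):
          """runs: list of dicts with 'max_motion_mm' and 'response_rate' per run."""
          kept, dropped = [], []
          for run in runs:
              too_much_motion = run["max_motion_mm"] > 2 * voxel_size_mm  # > 2 voxels (4 mm)
              too_few_responses = run["response_rate"] < 0.5              # responded in < 50% of trials
              (dropped if (too_much_motion or too_few_responses) else kept).append(run)
          return kept, dropped

      def exclude_participant(runs):
          """Drop the participant if half or more of their runs were excluded."""
          kept, dropped = exclude_runs(runs)
          return len(dropped) >= len(runs) / 2

      # Example: one clean run and one high-motion run -> participant is dropped
      example = [{"max_motion_mm": 0.8, "response_rate": 0.95},
                 {"max_motion_mm": 5.1, "response_rate": 0.90}]
      print(exclude_participant(example))  # True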

    Equipment

    We collected MRI data at the Zuckerman Institute, Columbia University, using a 3T Siemens Prisma scanner and a 64-channel head coil. In each MR session, we acquired a T1-weighted structural scan, with voxels measuring 1 mm isometrically. We acquired functional data with a T2* echo planar imaging sequence with multiband echo sequencing (SMS3) for whole-brain coverage. The TR was 1.5 s, the TE was 30 ms, and the flip angle was 62°. The voxel size was 2 mm isotropic.

    Stimuli were presented on an LCD screen that the participants viewed through a mirror with a viewing distance of 142 cm. The display had a resolution of 1920 by 1080 pixels, and a refresh rate of 60 Hz. We presented the stimuli using custom code written in MATLAB and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Throughout the scan, we recorded monocular gaze position using an SR Research Eyelink 1000 tracker. Participants responded with their right hand via three buttons on an MR-safe response pad.

    Tasks

    Main Task

    Participants performed three different tasks during different runs, two of which required attending to the character strings, and one that encouraged participants to ignore them. In the lexical decision task, participants reported whether the character string on each trial was a real word or not. In the stimulus color task, participants reported whether the color of the character string was red or gray. In the fixation color task, participants reported whether or not the fixation dot turned red.

    On each trial, a single character string flashed for 150 ms at one of three locations: centered at fixation, 3 dva left, or 3 dva right. The stimulus was followed by a blank with only the fixation mark present for 3850 ms, during which the participant had the opportunity to respond with a button press. After every five trials, there was a rest period (no task except to fixate on the dot). The rest period was either 4, 6, or 8 s in duration (randomly and uniformly selected).

    Localizer for visual category-selective ventral temporal regions

    Participants viewed sequences of images, each of which contained 3 items of one category: words, pseudowords, false fonts, faces, and limbs. Participants performed a one-back repetition detection task. On 33% of the trials, the exact same images flashed twice in a row. The participant’s task was to push a button with their right index finger whenever they detected such a repetition. Each participant performed 4 runs of the localizer task. Each run consisted of 77 four-second trials, lasting for approximately 6 minutes. Each category was presented 56 times through the course of the experiment.

    Localizer for language processing regions

    The stimuli on each trial were a sequence of 12 written words or pronounceable pseudowords, presented one at a time. The words were presented as meaningful sentences, while pseudowords formed “Jabberwocky” phrases that served as a control condition. Participants were instructed to read the stimuli silently to themselves, and also to push a button upon seeing the icon of a hand that appeared between trials. Participants performed three runs of the language localizer. Each run included 16 trials and lasted for 6 minutes. Each trial lasted for 6 s, beginning with a blank screen for 100 ms, followed by the presentation of 12 words or pseudowords for 450 ms each (5400 ms total), followed by a response prompt for 400 ms and a final blank screen for 100 ms. Each run also included 5 blank trials (6 seconds each).

    Data organization

    This repository contains three main folders, complying with BIDS specifications.

    • Inputs contain BIDS-compliant raw data, with the only change being defacing of the anatomicals using pydeface. Data were converted to BIDS format using heudiconv.
    • Outputs contain preprocessed data obtained using fMRIPrep. In addition to subject-specific folders, we also provide the freesurfer reconstructions obtained using fMRIPrep, with defaced anatomicals. Subject-specific ROIs are also included in the label folder for each subject in the freesurfer directory.
    • Derivatives contain all additional whole-brain analyses performed on this dataset.

  12. Naturalistic Neuroimaging Database

    • openneuro.org
    Updated Apr 20, 2021
    Cite
    Sarah Aliko; Jiawen Huang; Florin Gheorghiu; Stefanie Meliss; Jeremy I Skipper (2021). Naturalistic Neuroimaging Database [Dataset]. http://doi.org/10.18112/openneuro.ds002837.v1.1.3
    Dataset updated: Apr 20, 2021
    Dataset provided by: OpenNeuro (https://openneuro.org/)
    Authors: Sarah Aliko; Jiawen Huang; Florin Gheorghiu; Stefanie Meliss; Jeremy I Skipper
    License: CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Overview

    • The Naturalistic Neuroimaging Database (NNDb v2.0) contains datasets from 86 human participants doing the NIH Toolbox and then watching one of 10 full-length movies during functional magnetic resonance imaging (fMRI). The participants were all right-handed, native English speakers, with no history of neurological/psychiatric illnesses, with no hearing impairments, unimpaired or corrected vision and taking no medication. Each movie was stopped in 40-50 minute intervals or when participants asked for a break, resulting in 2-6 runs of BOLD-fMRI. A 10 minute high-resolution defaced T1-weighted anatomical MRI scan (MPRAGE) is also provided.
    • The NNDb V2.0 is now on Neuroscout, a platform for fast and flexible re-analysis of (naturalistic) fMRI studies. See: https://neuroscout.org/

    v2.0 Changes

    • Overview
      • We have replaced our own preprocessing pipeline with that implemented in AFNI’s afni_proc.py, thus changing only the derivative files. This introduces a fix for an issue with our normalization (i.e., scaling) step and modernizes and standardizes the preprocessing applied to the NNDb derivative files. We have done a bit of testing and have found that results in both pipelines are quite similar in terms of the resulting spatial patterns of activity but with the benefit that the afni_proc.py results are 'cleaner' and statistically more robust.
    • Normalization

      • Emily Finn and Clare Grall at Dartmouth and Rick Reynolds and Paul Taylor at AFNI, discovered and showed us that the normalization procedure we used for the derivative files was less than ideal for timeseries runs of varying lengths. Specifically, the 3dDetrend flag -normalize makes 'the sum-of-squares equal to 1'. We had not thought through that an implication of this is that the resulting normalized timeseries amplitudes will be affected by run length, increasing as run length decreases (and maybe this should go in 3dDetrend’s help text). To demonstrate this, I wrote a version of 3dDetrend’s -normalize for R so you can see for yourselves by running the following code:
      # Generate a resting state (rs) timeseries (ts)
      # Install / load package to make fake fMRI ts
      # install.packages("neuRosim")
      library(neuRosim)
      # Generate a ts
      ts.rs <- simTSrestingstate(nscan=2000, TR=1, SNR=1)
      # 3dDetrend -normalize
      # R command version for 3dDetrend -normalize -polort 0 which normalizes by making "the sum-of-squares equal to 1"
      # Do for the full timeseries
      ts.normalised.long <- (ts.rs-mean(ts.rs))/sqrt(sum((ts.rs-mean(ts.rs))^2));
      # Do this again for a shorter version of the same timeseries
      ts.shorter.length <- length(ts.normalised.long)/4
      ts.normalised.short <- (ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))/sqrt(sum((ts.rs[1:ts.shorter.length]- mean(ts.rs[1:ts.shorter.length]))^2));
      # By looking at the summaries, it can be seen that the median values become  larger
      summary(ts.normalised.long)
      summary(ts.normalised.short)
      # Plot results for the long and short ts
      # Truncate the longer ts for plotting only
      ts.normalised.long.made.shorter <- ts.normalised.long[1:ts.shorter.length]
      # Give the plot a title
      title <- "3dDetrend -normalize for long (blue) and short (red) timeseries";
      plot(x=0, y=0, main=title, xlab="", ylab="", xaxs='i', xlim=c(1,length(ts.normalised.short)), ylim=c(min(ts.normalised.short),max(ts.normalised.short)));
      # Add zero line
      lines(x=c(-1,ts.shorter.length), y=rep(0,2), col='grey');
      # 3dDetrend -normalize -polort 0 for long timeseries
      lines(ts.normalised.long.made.shorter, col='blue');
      # 3dDetrend -normalize -polort 0 for short timeseries
      lines(ts.normalised.short, col='red');
      
    • Standardization/modernization

      • The above individuals also encouraged us to implement the afni_proc.py script over our own pipeline. It introduces at least three additional improvements: First, we now use Bob’s @SSwarper to align our anatomical files with an MNI template (now MNI152_2009_template_SSW.nii.gz) and this, in turn, integrates nicely into the afni_proc.py pipeline. This seems to result in a generally better or more consistent alignment, though this is only a qualitative observation. Second, all the transformations / interpolations and detrending are now done in fewer steps compared to our pipeline. This is preferable because, e.g., there is less chance of inadvertently reintroducing noise back into the timeseries (see Lindquist, Geuter, Wager, & Caffo 2019). Finally, many groups are advocating using tools like fMRIPrep or afni_proc.py to increase standardization of analysis practices in our neuroimaging community. This presumably results in less error, less heterogeneity and more interpretability of results across studies. Along these lines, the quality control (‘QC’) html pages generated by afni_proc.py are a real help in assessing data quality and almost a joy to use.
    • New afni_proc.py command line

      • The following is the afni_proc.py command line that we used to generate blurred and censored timeseries files. The afni_proc.py tool comes with extensive help and examples. As such, you can quickly understand our preprocessing decisions by scrutinising the below. Specifically, the following command is most similar to Example 11 for ‘Resting state analysis’ in the help file (see https://afni.nimh.nih.gov/pub/dist/doc/program_help/afni_proc.py.html):

        afni_proc.py \
            -subj_id "$sub_id_name_1" \
            -blocks despike tshift align tlrc volreg mask blur scale regress \
            -radial_correlate_blocks tcat volreg \
            -copy_anat anatomical_warped/anatSS.1.nii.gz \
            -anat_has_skull no \
            -anat_follower anat_w_skull anat anatomical_warped/anatU.1.nii.gz \
            -anat_follower_ROI aaseg anat freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
            -anat_follower_ROI aeseg epi freesurfer/SUMA/aparc.a2009s+aseg.nii.gz \
            -anat_follower_ROI fsvent epi freesurfer/SUMA/fs_ap_latvent.nii.gz \
            -anat_follower_ROI fswm epi freesurfer/SUMA/fs_ap_wm.nii.gz \
            -anat_follower_ROI fsgm epi freesurfer/SUMA/fs_ap_gm.nii.gz \
            -anat_follower_erode fsvent fswm \
            -dsets media_?.nii.gz \
            -tcat_remove_first_trs 8 \
            -tshift_opts_ts -tpattern alt+z2 \
            -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
            -tlrc_base "$basedset" \
            -tlrc_NL_warp \
            -tlrc_NL_warped_dsets \
                anatomical_warped/anatQQ.1.nii.gz \
                anatomical_warped/anatQQ.1.aff12.1D \
                anatomical_warped/anatQQ.1_WARP.nii.gz \
            -volreg_align_to MIN_OUTLIER \
            -volreg_post_vr_allin yes \
            -volreg_pvra_base_index MIN_OUTLIER \
            -volreg_align_e2a \
            -volreg_tlrc_warp \
            -mask_opts_automask -clfrac 0.10 \
            -mask_epi_anat yes \
            -blur_to_fwhm -blur_size $blur \
            -regress_motion_per_run \
            -regress_ROI_PC fsvent 3 \
            -regress_ROI_PC_per_run fsvent \
            -regress_make_corr_vols aeseg fsvent \
            -regress_anaticor_fast \
            -regress_anaticor_label fswm \
            -regress_censor_motion 0.3 \
            -regress_censor_outliers 0.1 \
            -regress_apply_mot_types demean deriv \
            -regress_est_blur_epits \
            -regress_est_blur_errts \
            -regress_run_clustsim no \
            -regress_polort 2 \
            -regress_bandpass 0.01 1 \
            -html_review_style pythonic

        We used similar command lines to generate ‘blurred and not censored’ and the ‘not blurred and not censored’ timeseries files (described more fully below). We will provide the code used to make all derivative files available on our github site (https://github.com/lab-lab/nndb).

      We made one choice above that is different enough from our original pipeline to be worth mentioning here. Specifically, our runs are quite long, averaging ~40 minutes, and their lengths vary (hence the issue with 3dDetrend’s -normalise noted above). A discussion on the AFNI message board with one of our team (starting here: https://afni.nimh.nih.gov/afni/community/board/read.php?1,165243,165256#msg-165256) led to the suggestion that '-regress_polort 2' be used together with '-regress_bandpass 0.01 1' for long runs. We had previously used only a variable polort chosen with the suggested 1 + int(D/150) approach, where D is the run duration in seconds. The new polort 2 + bandpass approach has the added benefit of working well with afni_proc.py.
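      As a concrete illustration of the difference (a minimal sketch; the run length and the code are ours, not part of the pipeline), the duration-based heuristic implies a very high polynomial order for runs of this length, whereas the command above fixes the order at 2 and lets the bandpass handle the remaining slow drift:

        def polort_from_duration(duration_s: float) -> int:
            """AFNI's rule of thumb: 1 + int(D / 150), with D in seconds."""
            return 1 + int(duration_s / 150)

        duration_s = 40 * 60                      # a ~40-minute run (illustrative)
        print(polort_from_duration(duration_s))   # -> 17, a high-order polynomial
        # The command above instead fixes -regress_polort 2 and relies on
        # -regress_bandpass 0.01 1 to remove the remaining slow drifts.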

      Which timeseries file you use is up to you, but Rick and Paul have encouraged me to include a sort of PSA about this. In Paul’s own words:

        • Blurred data should not be used for ROI-based analyses (and potentially not for ICA? I am not certain about standard practice).
        • Unblurred data for ISC might be pretty noisy for voxelwise analyses, since blurring should effectively boost the SNR of active regions (and even good alignment won't be perfect everywhere).
        • For uncensored data, one should be concerned about motion effects being left in the data (e.g., spikes in the data).
        • For censored data:
          • Performing ISC requires users to take the union of the censoring patterns during the correlation calculation (see the sketch below).
          • If you want to calculate power spectra or spectral parameters like ALFF/fALFF/RSFA (which some people might still do for naturalistic tasks), standard FT-based methods can't be used because sampling is no longer uniform. Instead, people could use something like 3dLombScargle+3dAmpToRSFC, which calculates power spectra (and RSFC parameters) based on a generalization of the FT that can handle non-uniform sampling, as long as the censoring pattern is mostly random and, say, only about 10-15% of the data is censored.

      In sum, think very carefully about which files you use. If you find you need a file we have not provided, we can happily generate different versions of the timeseries upon request and can generally do so in a week or less.
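      To make the censoring point concrete, here is a minimal sketch (ours, not NNDb code) of an ISC calculation that keeps only the TRs retained in both subjects; AFNI-style censor files code each TR as 1 = keep and 0 = censored, so taking the union of the censored TRs is the same as intersecting the kept TRs:

        import numpy as np

        def isc_with_censoring(ts_a, ts_b, censor_a, censor_b):
            """Correlate two timeseries using only TRs retained in BOTH subjects."""
            keep = (np.asarray(censor_a) == 1) & (np.asarray(censor_b) == 1)
            if keep.sum() < 3:
                return np.nan      # too few shared TRs to estimate a correlation
            return np.corrcoef(np.asarray(ts_a)[keep], np.asarray(ts_b)[keep])[0, 1]

        # Toy data: 10 TRs with a different censoring pattern per subject.
        rng = np.random.default_rng(0)
        ts_a, ts_b = rng.standard_normal(10), rng.standard_normal(10)
        censor_a = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
        censor_b = [1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
        print(isc_with_censoring(ts_a, ts_b, censor_a, censor_b))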

    • Effect on results

      • From numerous tests on our own analyses, we have qualitatively found that results using our old vs the new afni_proc.py preprocessing pipeline do not change all that much in terms of general spatial patterns. There is, however, an
  13. Naturalistic Neuroimaging Database 3T+

    • openneuro.org
    Updated Oct 1, 2025
    Egor Levchenko; Hugo Chow-Wing-Bom; Fred Dick; Adam Tierney; Jeremy I Skipper (2025). Naturalistic Neuroimaging Database 3T+ [Dataset]. http://doi.org/10.18112/openneuro.ds006642.v1.0.1
    Explore at:
    Dataset updated
    Oct 1, 2025
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Egor Levchenko; Hugo Chow-Wing-Bom; Fred Dick; Adam Tierney; Jeremy I Skipper
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    NNDb-3T+: A neuroimaging database combining movie-watching, eye-tracking, sensorimotor mapping, and cognitive tasks

    Overview

    The dataset is designed to support research into brain function under both naturalistic and controlled experimental conditions. Data were acquired at 3T with multiple imaging modalities, complemented by physiological and behavioral measures. Before their visit, participants (N=40) filled out a questionnaire about demographic information, language background, musical experience, and knowledge of movies. Participants then completed a set of cognitive tests, and two scanning sessions were scheduled. During Session 1, participants watched the entirety of 'Back To The Future' (backtothefuture), divided into three parts, with eye-tracker calibration performed before each part; the entire session took about three hours. During Session 2, participants completed several tasks inside the scanner: somatotopic mapping, in which they performed movements inside the scanner; retinotopic mapping, in which different checkerboard patterns were presented while they fixated on a dot in the middle of the screen and responded every time the dot changed colour; and tonotopic mapping, in which a sequence of beeps was played and they pressed a button whenever they noticed a difference in tone. All Session 2 tasks together took approximately 2.5 hours.

    Data Organization

    The dataset follows the BIDS specification:

    • sub-<participant_id> directories contain session-level raw data.
    • sourcedata/ includes the raw eye-tracker files (in asc and edf format) and pulse oximetry data (in tsv format).
    • derivatives/ includes preprocessing outputs.
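    As a minimal sketch of walking this layout (the local path and filename entities, such as the task-backtothefuture label for the movie runs, are assumptions based on the description rather than verified against the dataset):

      from pathlib import Path

      bids_root = Path("NNDb-3T+")   # hypothetical local path to the dataset

      # List subject directories at the top level of the BIDS tree.
      subjects = sorted(p.name for p in bids_root.glob("sub-*") if p.is_dir())
      print(f"{len(subjects)} subjects found")

      # For a few subjects, collect the movie-watching BOLD runs.
      for sub in subjects[:3]:
          movie_runs = sorted((bids_root / sub).rglob("*task-backtothefuture*_bold.nii.gz"))
          print(sub, [run.name for run in movie_runs])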

    Contents

    • fMRI data: Preprocessed and raw fMRI from movie-watching and mapping tasks.
    • Eye-tracking: Calibrated gaze and pupil data synchronized with fMRI during the movie-watching and retinotopy tasks.
    • Mapping tasks: Retinotopy, tonotopy, and somatotopy tasks.
    • Physiological recordings: Pulse oximetry.
    • Behavioral and cognitive assessments: Demographic and psychometric data.
    • Derivatives: Preprocessed files, including FreeSurfer and AFNI outputs.

    All preprocessing scripts and analysis code are available on GitHub: https://github.com/levchenkoegor/movieproject2

    Access

    The dataset is released under the CC-BY 4.0 License.

  14. PACT_fMRI

    • openneuro.org
    Updated Jan 4, 2024
    Harrison Ritz; Amitai Shenhav (2024). PACT_fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds004909.v1.0.0
    Explore at:
    Dataset updated
    Jan 4, 2024
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Harrison Ritz; Amitai Shenhav
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Dataset for Ritz & Shenhav, 'Orthogonal neural encoding of targets and distractors supports multivariate cognitive control'.

    Undergraduate participants (N=29) performed a color-motion decision-making task (random dot kinematogram) during a 90-minute fMRI scan. Runs alternated between color-response (hard, longer runs; the data of primary interest) and motion-response (easy, shorter runs). At the end of the session, participants performed color and motion localizers (see paper). For event timing, download the code and behavioral files here: https://github.com/shenhavlab/PACT_fMRI_public/tree/main/0_behavior

    Run types

    sub-80xx_ses-01_task-RDMmotion_run-xx_bold.nii.gz - Participants respond to the majority motion direction, ignoring color (easy)

    sub-80xx_ses-01_task-RDMcolor_run-xx_bold.nii.gz - Participants respond to the majority color, ignoring motion (hard)

    sub-80xx_ses-01_task-RDMlocalizer_run-xx_bold.nii.gz - Color and motion localizers (for order, see behavioral files)
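    The following minimal sketch (the local dataset path is hypothetical) groups the BOLD runs by the task entity named above:

      import re
      from collections import defaultdict
      from pathlib import Path

      dataset_root = Path("ds004909")        # hypothetical local copy of the dataset
      runs_by_task = defaultdict(list)

      # Group BOLD files by the BIDS task entity in the filename.
      for bold in dataset_root.rglob("sub-*_ses-01_task-*_bold.nii.gz"):
          match = re.search(r"task-(RDMmotion|RDMcolor|RDMlocalizer)", bold.name)
          if match:
              runs_by_task[match.group(1)].append(bold.name)

      for task, runs in sorted(runs_by_task.items()):
          print(task, len(runs), "runs")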

    See manuscript for full details (available at https://harrisonritz.github.io; publication forthcoming). Relevant methods section:

    Participants

    Twenty-nine individuals (17 females, Age: M = 21.2, SD = 3.4) provided informed consent and participated in this experiment for compensation ($40 USD; IRB approval code: 1606001539). All participants had self-reported normal color vision and no history of neurological disorders. Two participants missed one Attend-Color block (see below) due to a scanner removal, and one participant missed a motion localizer due to a technical failure, but all participants were retained for analysis. This study was approved by Brown University’s institutional review board.

    Task

    The main task closely followed our previously reported behavioral experiment [10]. On each trial, participants saw a random dot kinematogram (RDK) against a black background. This RDK consisted of colored dots that moved left or right, and participants responded to the stimulus with button presses using their left or right thumbs.

    In Attend-Color blocks (six blocks of 150 trials), participants responded depending on which color was in the majority. Two colors were mapped to each response (four colors total), and dots were a mixture of one color from each possible response. Dot colors were approximately isoluminant (uncalibrated RGB: [239, 143, 143], [191, 239, 143], [143, 239, 239], [191, 143, 239]), and we counterbalanced their assignment to responses across participants.

    In Attend-Motion blocks (six blocks of 45 trials), participants responded based on the dot motion instead of the dot color. Dot motion consisted of a mixture between dots moving coherently (either left or right) and dots moving in a random direction. Attend-Motion blocks were shorter because they acted to reinforce motion sensitivity and provide a test of stimulus-dependent effects.

    Critically, dots always had color and motion, and we varied the strength of color coherence (percentage of dots in the majority) and motion coherence (percentage of dots moving coherently) across trials. Our previous experiments have found that in Attend-Color blocks, participants are still influenced by motion information, introducing a response conflict when color and motion are associated with different responses [10]. Target coherence (e.g., color coherence during Attend-Color) was linearly spaced between 65% and 95% with 5 levels, and distractor congruence (signed coherence relative to the target response) was linearly spaced between -95% and 95% with 5 levels. In order to increase the salience of the motion dimension relative to the color dimension, the display was large (~10 degrees of visual angle) and dots moved quickly (~10 degrees of visual angle per second).

    Participants had 1.5 seconds from the onset of the stimulus to make their response, and the RDK stayed on the screen for this full duration to avoid confusing reaction time and visual stimulation (the fixation cross changed from white to gray to register the response). The inter-trial interval was uniformly sampled from 1.0, 1.5, or 2.0 seconds. This ITI was relatively short in order to maximize the behavioral effect, and because efficiency simulations showed that it increased power to detect parametric effects of target and distractor coherence (e.g., relative to a more standard 5 second ITI). The fixation cross changed from gray to white for the last 0.5 seconds before the stimulus to provide an alerting cue.
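    A minimal sketch of the design parameters stated above (the level spacing and ITI values come from the text; trial ordering and the fully crossed grid are illustrative only, since the exact trial counts per cell are not given here):

      import numpy as np

      # 5 target-coherence levels and 5 signed distractor-congruence levels.
      target_coherence = np.linspace(0.65, 0.95, 5)        # [0.65, 0.725, 0.8, 0.875, 0.95]
      distractor_congruence = np.linspace(-0.95, 0.95, 5)  # [-0.95, -0.475, 0.0, 0.475, 0.95]

      # Crossing the two factors gives 25 possible trial conditions.
      conditions = [(t, d) for t in target_coherence for d in distractor_congruence]
      print(len(conditions), "conditions")

      # Inter-trial intervals uniformly sampled from {1.0, 1.5, 2.0} seconds.
      rng = np.random.default_rng(1)
      iti = rng.choice([1.0, 1.5, 2.0], size=10)
      print(iti)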

    Procedure

    Before the scanning session, participants provided consent and practiced the task in a mock MRI scanner. First, participants learned to associate four colors with two button presses (two colors for each response). After being instructed on the color-button mappings, participants practiced the task with feedback (correct, error, or 1.5 second time-out). Errors or time-out feedback were accompanied by a diagram of the color-button mappings. Participants performed 50 trials with full color coherence, and then 50 trials with variable color coherence, all with 0% motion coherence. Next, participants practiced the motion task. After being shown the motion mappings, participants performed 50 trials with full motion coherence, and then 50 trials with variable motion coherence, all with 0% color coherence. Finally, participants practiced 20 trials of the Attend-Color task and 20 trials of the Attend-Motion task with variable color and motion coherence (same as the scanner task).

    Following the twelve blocks of the scanner task, participants underwent localizers for color and motion, based on the tasks used in our previous experiments [30]. Both localizers were block designs, alternating between 16 seconds of feature present and 16 seconds of feature absent for seven cycles. For the color localizer, participants saw an aperture the same size as the task, either filled with colored squares that were resampled every second during stimulus-on ('Mondrian stimulus'), or luminance-matched gray squares that were similarly resampled during stimulus-off. For the motion localizer, participants saw white dots that were moving with full coherence in a different direction every second during stimulus-on, or still dots for stimulus-off. No responses were required during the localizers.
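    A minimal sketch of the localizer block timing implied by these parameters (onsets are derived only from the stated 16 s on/off blocks and seven cycles):

      # 16 s feature-on alternating with 16 s feature-off, for seven cycles.
      block_s, n_cycles = 16.0, 7
      on_onsets = [i * 2 * block_s for i in range(n_cycles)]   # 0, 32, ..., 192 s
      off_onsets = [onset + block_s for onset in on_onsets]    # 16, 48, ..., 208 s
      total_s = n_cycles * 2 * block_s
      print(on_onsets)
      print(off_onsets)
      print(f"localizer duration: {total_s:.0f} s (~{total_s / 60:.1f} min)")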

    MRI sequence

    We scanned participants with a Siemens Prisma 3T MR system. We used the following sequence parameters for our functional runs: field of view (FOV) = 211 mm × 211 mm (60 slices), voxel size = 2.4 mm³, repetition time (TR) = 1.2 sec with interleaved multiband acquisitions (acceleration factor 4), echo time (TE) = 33 ms, and flip angle (FA) = 62°. Slices were acquired anterior to posterior, with an auto-aligned slice orientation tilted 15° relative to the AC/PC plane. At the start of the imaging session, we collected a high-resolution structural MPRAGE with the following sequence parameters: FOV = 205 mm × 205 mm (192 slices), voxel size = 0.8 mm³, TR = 2.4 sec, TE1 = 1.86 ms, TE2 = 3.78 ms, TE3 = 5.7 ms, TE4 = 7.62 ms, and FA = 7°. At the end of the scan, we collected a field map for susceptibility distortion correction (TR = 588 ms, TE1 = 4.92 ms, TE2 = 7.38 ms, FA = 60°).

    fMRI preprocessing

    We preprocessed our structural and functional data using fMRIPrep (v20.2.6) [122], based on the Nipype platform [123]. We used FreeSurfer and ANTs to nonlinearly register structural T1w images to the MNI152NLin6Asym template (resampling to 2 mm). To preprocess functional T2w images, we applied susceptibility distortion correction using fMRIPrep, co-registered our functional images to our T1w images using FreeSurfer, and slice-time corrected to the midpoint of the acquisition using AFNI. We then registered our images into MNI152NLin6Asym space using the transformation that ANTs computed for the T1w images, resampling our functional images in a single step. For univariate analyses, we smoothed our functional images using a Gaussian kernel (8 mm FWHM, as dACC responses often have a large spatial extent). For multivariate analyses, we worked in the unsmoothed template space (see below).
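    As a minimal sketch of the smoothing step (assuming nilearn, which is not necessarily the tooling the authors used; the filename is hypothetical and follows fMRIPrep's derivative naming style):

      from nilearn import image

      bold_file = (
          "derivatives/fmriprep/sub-8001/ses-01/func/"
          "sub-8001_ses-01_task-RDMcolor_run-01_space-MNI152NLin6Asym_"
          "desc-preproc_bold.nii.gz"
      )

      # Apply the 8 mm FWHM Gaussian kernel used for the univariate analyses.
      smoothed = image.smooth_img(bold_file, fwhm=8)
      smoothed.to_filename("sub-8001_task-RDMcolor_run-01_desc-smoothed_bold.nii.gz")
      # Multivariate analyses instead used the unsmoothed images in template space.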
