100+ datasets found
  1. THINGS-fMRI

    • openneuro.org
    Updated Sep 9, 2024
    + more versions
    Cite
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker (2024). THINGS-fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds004192.v1.0.7
    Dataset updated
    Sep 9, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    THINGS-fMRI

    Understanding the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing countless novel hypotheses at scale while assessing the reproducibility of previous findings. The multimodal data allows for studying both the temporal and spatial dynamics of object representations and their relationship to behavior and additionally provides the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative for bridging the gap between disciplines and the advancement of cognitive neuroscience.

    Dataset overview

    We collected extensively sampled object representations using functional MRI (fMRI). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly-annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually-curated naturalistic object images.

    During the fMRI experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=3, 8740 unique images of 720 objects). Images were shown in fast succession (4.5 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially-generated images. A subset of images (n=100) was shown repeatedly in each session.

    Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.

    Besides raw data, this dataset holds:

    • brain masks (fmriprep)
    • cortical flat maps (pycoretx_filestore)
    • single-trial response estimates (ICA-betas)

    More derivatives can be found on figshare.
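
    As an illustration, a derivative volume can be inspected with standard Python neuroimaging tools. This is a minimal sketch: the file path below is hypothetical, so check the derivatives folders (or the figshare release) for the actual naming scheme.

    import nibabel as nib

    # Hypothetical path -- consult the derivatives folders (fmriprep,
    # pycoretx_filestore, ICA-betas) for the actual file layout.
    img = nib.load("derivatives/ICA-betas/sub-01_betas.nii.gz")
    data = img.get_fdata()
    print("volume shape:", data.shape)
    print("voxel size (mm):", img.header.get_zooms())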

    Provenance

    Provenance information is given in 'dataset_description.json' as well as in the paper; preprocessing and analysis code is shared on GitHub.

  2. BACE: fMRI data and MRI data

    • researchdata.ntu.edu.sg
    tsv, zip
    Updated Feb 19, 2024
    Cite
    DR-NTU (Data) (2024). BACE: fMRI data and MRI data [Dataset]. http://doi.org/10.21979/N9/BFZ26X
    Available download formats: zip, tsv
    Dataset updated
    Feb 19, 2024
    Dataset provided by
    DR-NTU (Data)
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    Ministry of Education (MOE)
    Description

    Specific facial features in infants automatically elicit attention, affection, and nurturing behaviour of adults, known as the baby schema effect. There is also an innate tendency to categorize people into in-group and out-group members based on salient features such as ethnicity. Societies are becoming increasingly multi-cultural and multi-ethnic, and there are limited investigations into the underlying neural mechanism of the baby schema effect in a multi-ethnic context. Functional magnetic resonance imaging (fMRI) was used to examine parents’ (N = 27) neural responses to a) non-own ethnic in-group and out-group infants, b) non-own in-group and own infants, and c) non-own out-group and own infants. Parents showed similar brain activations that may be considered a baby schema response network, regardless of ethnicity and kinship, in regions associated with attention, reward processing, empathy, goal-directed action planning, and social cognition. The same regions were activated to a higher degree when viewing the parents’ own infant. These regions have overlaps with the empathy, reward and motor networks, suggesting the evolutionary significance of parenting. These findings contribute further understanding to the dynamics of baby schema effect in an increasingly interconnected social world.

  3. fMRI Data Center

    • rrid.site
    • dknet.org
    Updated Sep 22, 2025
    Cite
    (2025). fMRI Data Center [Dataset]. http://identifiers.org/RRID:SCR_007278
    Dataset updated
    Sep 22, 2025
    Description

    THIS RESOURCE IS NO LONGER IN SERVICE, documented August 25, 2013.

    Public curated repository of peer-reviewed fMRI studies and their underlying data. This Web-accessible database has data mining capabilities and the means to deliver requested data to the user (via Web, CD, or digital tape). Datasets available: 107. NOTE: The fMRIDC is down temporarily while it moves to a new home at UCLA. Check back again in late Jan 2013!

    The goal of the Center is to help speed the progress and the understanding of cognitive processes and the neural substrates that underlie them by:

    • Providing a publicly accessible repository of peer-reviewed fMRI studies.
    • Providing all data necessary to interpret, analyze, and replicate these fMRI studies.
    • Providing training for both the academic and professional communities.

    The Center will accept data from researchers who are publishing fMRI imaging articles in peer-reviewed journals. The goal is to serve the entire fMRI community.

  4. Magic, Memory, and Curiosity (MMC) fMRI Dataset

    • openneuro.org
    Updated May 1, 2023
    + more versions
    Cite
    Stefanie Meliss; Cristina Pascua-Martin; Jeremy Skipper; Kou Murayama (2023). Magic, Memory, and Curiosity (MMC) fMRI Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds004182.v1.0.1
    Dataset updated
    May 1, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Stefanie Meliss; Cristina Pascua-Martin; Jeremy Skipper; Kou Murayama
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Overview

    • The Magic, Memory, Curiosity (MMC) dataset contains data from 50 healthy human adults incidentally encoding 36 videos of magic tricks inside the MRI scanner across three runs.
    • Before and after incidental learning, a 10-min resting-state scan was acquired.
    • The MMC dataset includes contextual incentive manipulation, curiosity ratings for the magic tricks, as well as incidental memory performance tested a week later using a surprise cued recall and recognition test.
    • Working memory and constructs potentially relevant in the context of motivated learning (e.g., need for cognition, fear of failure) were additionally assessed.

    Stimuli

    The stimuli used here were short videos of magic tricks taken from a validated stimulus set (MagicCATs, Ozono et al., 2021) specifically created for use in fMRI studies. All final stimuli are available upon request. The request procedure is outlined in the Open Science Framework repository associated with the MagicCATs stimulus set (https://osf.io/ad6uc/).

    Participant responses

    Participants’ responses to demographic questions, questionnaires, and performance in the working memory assessment as well as both tasks are available in comma-separated value (CSV) files. Demographic (MMC_demographics.csv), raw questionnaire (MMC_raw_quest_data.csv) and other score data (MMC_scores.csv) as well as other information (MMC_other_information.csv) are structured as one line per participant with questions and/or scores as columns. Explicit wordings and naming of variables can be found in the supplementary information. Participant scan summaries (MMC_scan_subj_sum.csv) contain descriptives of brain coverage, TSNR, and framewise displacement (one row per participant) averaged first within acquisitions and then within participants. Participants’ responses and reaction times in the magic trick watching and memory task (MMC_experimental_data.csv) are stored as one row per trial per participant.
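
    As a minimal sketch of working with these tables (using the file names listed above; the join behavior is an assumption, so consult the supplementary information for the actual participant ID variable):

    import pandas as pd

    scores = pd.read_csv("MMC_scores.csv")             # one row per participant
    trials = pd.read_csv("MMC_experimental_data.csv")  # one row per trial per participant

    # With no explicit key, pandas merges on the columns the two tables
    # share -- assumed here to include a participant identifier.
    merged = trials.merge(scores, how="left")
    print(merged.head())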

    Preprocessing

    Data was preprocessed using the AFNI (version 21.2.03) software suite. As a first step, the EPI timeseries were distortion-corrected along the encoding axis (P>>A) using the phase difference map (‘epi_b0_correct.py’). The resulting distortion-corrected EPIs were then processed separately for each task, but scans from the same task were processed together. The same blocks were applied to both task and resting-state distortion-corrected EPI data using afni_proc.py (see below): despiking, slice-timing and head-motion correction, intrasubject alignment between anatomy and EPI, intersubject registration to MNI, masking, smoothing, scaling, and denoising. For more details, please refer to the data descriptor (LINK) or the Github repository (https://github.com/stefaniemeliss/MMC_dataset).

    afni_proc.py -subj_id "${subjstr}" \
      -blocks despike tshift align tlrc volreg mask blur scale regress \
      -radial_correlate_blocks tcat volreg \
      -copy_anat $derivindir/$anatSS \
      -anat_has_skull no \
      -anat_follower anat_w_skull anat $derivindir/$anatUAC \
      -anat_follower_ROI aaseg anat $sswindir/$fsparc \
      -anat_follower_ROI aeseg epi $sswindir/$fsparc \
      -anat_follower_ROI FSvent epi $sswindir/$fsvent \
      -anat_follower_ROI FSWMe epi $sswindir/$fswm \
      -anat_follower_ROI FSGMe epi $sswindir/$fsgm \
      -anat_follower_erode FSvent FSWMe \
      -dsets $epi_dpattern \
      -outlier_polort $POLORT \
      -tcat_remove_first_trs 0 \
      -tshift_opts_ts -tpattern altplus \
      -align_opts_aea -cost lpc+ZZ -giant_move -check_flip \
      -align_epi_strip_method 3dSkullStrip \
      -tlrc_base MNI152_2009_template_SSW.nii.gz \
      -tlrc_NL_warp \
      -tlrc_NL_warped_dsets $sswindir/$anatQQ $sswindir/$matrix $sswindir/$warp \
      -volreg_base_ind 1 $min_out_first_run \
      -volreg_post_vr_allin yes \
      -volreg_pvra_base_index MIN_OUTLIER \
      -volreg_align_e2a \
      -volreg_tlrc_warp \
      -volreg_no_extent_mask \
      -mask_dilate 8 \
      -mask_epi_anat yes \
      -blur_to_fwhm -blur_size 8 \
      -regress_motion_per_run \
      -regress_ROI_PC FSvent 3 \
      -regress_ROI_PC_per_run FSvent \
      -regress_make_corr_vols aeseg FSvent \
      -regress_anaticor_fast \
      -regress_anaticor_label FSWMe \
      -regress_censor_motion 0.3 \
      -regress_censor_outliers 0.1 \
      -regress_apply_mot_types demean deriv \
      -regress_est_blur_epits \
      -regress_est_blur_errts \
      -regress_run_clustsim no \
      -regress_polort 2 \
      -regress_bandpass 0.01 1 \
      -html_review_style pythonic
    

    Derivatives

    The anat folder contains derivatives associated with the anatomical scan. The skull-stripped image created using @SSwarper is available in original and ICBM 2009c Nonlinear Asymmetric Template space as sub-[group][ID]_space-[space]_desc-skullstripped_T1w.nii.gz together with the corresponding affine matrix (sub-[group][ID]_aff12.1D) and incremental warp (sub-[group][ID]_warp.nii.gz). Output generated using @SUMA_Make_Spec_FS (defaced anatomical image, whole brain and tissue masks, as well as FreeSurfer discrete segmentations based on the Desikan-Killiany cortical atlas and the Destrieux cortical atlas) are also available as sub-[group][ID]_space-orig_desc-surfvol_T1w.nii.gz, sub-[group][ID]_space-orig_label-[label]_mask.nii.gz, and sub-[group][ID]_space-orig_desc-[atlas]_dseg.nii.gz, respectively.

    The func folder contains derivatives associated with the functional scans. To enhance re-usability, the fully preprocessed and denoised files are shared as sub-[group][ID]_task-[task]_desc-fullpreproc_bold.nii.gz. Additionally, partially preprocessed files (distortion corrected, despiked, slice-timing/head-motion corrected, aligned to anatomy and template space) are uploaded as sub-[group][ID]_task-[task]_run-[1-3]_desc-MNIaligned_bold.nii.gz together with slightly dilated brain mask in EPI resolution and template space where white matter and lateral ventricle were removed (sub-[group][ID]_task-[task]_space-MNI152NLin2009cAsym_label-dilatedGM_mask.nii.gz) as well as tissue masks in EPI resolution and template space (sub-[group][ID]_task-[task]_space-MNI152NLin2009cAsym_label-[tissue]_mask.nii.gz).

    The regressors folder contains nuisance regressors stemming from the output of the full afni_proc.py preprocessing pipeline. They are provided as space-delimited text values where each row represents one volume concatenated across all runs for each task separately. Those estimates that are provided per run contain the data for the volumes of one run and zeros for the volumes of other runs. This allows them to be regressed out separately for each run. The motion estimates show rotation (degree counterclockwise) in roll, pitch, and yaw and displacement (mm) in superior, left, and posterior direction. In addition to the motion parameters with respect to the base volume (sub-[group][ID]_task-[task]_label-mot_regressor.1D), motion derivatives (sub-[group][ID]_task-[task]_run[1-3]_label-motderiv_regressor.1D) and demeaned motion parameters (sub-[group][ID]_task-[task]_run[1-3]_label-motdemean_regressor.1D) are also available for each run separately. The sub-[group][ID]_task-[task]_run[1-3]_label-ventriclePC_regressor.1D files contain time course of the first three PCs of the lateral ventricle per run. Additionally, outlier fractions for each volume are provided (sub-[group][ID]_task-[task]_label-outlierfrac_regressor.1D) and sub-[group][ID]_task-[task]_label-censorTRs_regressor.1D shows which volumes were censored because motion or outlier fraction exceeded the limits specified. The voxelwise time course of local WM regressors created using fast ANATICOR is shared as sub-[group][ID]_task-[task]_label-localWM_regressor.nii.gz.
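
    A minimal sketch of loading these text regressors and removing them from a time series with numpy; the file names are placeholders following the patterns above, and the voxel time series here is synthetic stand-in data.

    import numpy as np

    # Placeholder file names following the naming patterns described above.
    motion = np.loadtxt("sub-XXX_task-XXX_label-mot_regressor.1D")            # 6 columns
    outliers = np.loadtxt("sub-XXX_task-XXX_label-outlierfrac_regressor.1D")  # 1 column

    # Stack nuisance regressors into a design matrix with an intercept.
    X = np.column_stack([np.ones(len(motion)), motion, outliers])

    # Regress the nuisance design out of one voxel's time series
    # (ordinary least squares); y is a stand-in for real data.
    y = np.random.randn(len(motion))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_clean = y - X @ beta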

  5. Paris & HCP brain connectivity data

    • figshare.com
    bin
    Updated Oct 27, 2016
    Cite
    Arnaud Messé; Guillaume Marrelec; Alain Giron; David Rudrauf (2016). Paris & HCP brain connectivity data [Dataset]. http://doi.org/10.6084/m9.figshare.3749595.v1
    Available download formats: bin
    Dataset updated
    Oct 27, 2016
    Dataset provided by
    figshare
    Figshare (http://figshare.com/)
    Authors
    Arnaud Messé; Guillaume Marrelec; Alain Giron; David Rudrauf
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Paris
    Description

    This database contains the Paris and HCP datasets used in Marrelec et al. (2016). It includes the following files:

    • empirical_Paris.mat: preprocessed resting-state fMRI time series (TS) and associated diffusion MRI structural connectivity matrices (MAP) for 21 subjects from Paris using the Freesurfer parcellation. The healthy volunteers (right-handed) were recruited within the Paris local community. All participants gave written informed consent and the protocol was approved by the local ethics committee. Data were acquired using a 3T Siemens Trio TIM MRI scanner (CENIR, Paris, France). Resting-state fMRI series were recorded during ~11 minutes with a repetition time (TR) of 3.29 s.
    • empirical_HCP: preprocessed resting-state fMRI time series (TS) and associated diffusion MRI structural connectivity matrices (MAP) for 40 subjects from the Human Connectome Project (HCP) using the Freesurfer parcellation. Data from healthy, unrelated adults were obtained from the second quarter release (Q2, June 2013) of the HCP database (http://www.humanconnectome.org/documentation/Q2/). Data were collected on a custom 3T Siemens Skyra MRI scanner (Washington University, Saint Louis, United States). Resting-state fMRI data were acquired in four runs of approximately 15 minutes each with a TR of 0.72 s. The four runs were concatenated in time.
    • freesurferlabels.txt: Freesurfer labels of the 160 regions used for the parcellation.
    • simulations_individuals_Paris.mat: simulated functional connectivity (FC) matrices generated using an abstract model of brain activity (the SAR model) and simulated resting-state fMRI time series (TS) generated using 6 mainstream computational models of brain activity (models), all using as input the structural connectivity of each individual subject belonging to the Paris dataset. Simulated resting-state fMRI data were simulated during ~8 minutes at a sampling frequency of 2 Hz.
    • simulations_average_Paris.mat: simulated functional connectivity (FC) matrices generated using an abstract model of brain activity (the SAR model) and simulated resting-state fMRI time series (TS) generated using 6 mainstream computational models of brain activity (models), all using as input the average structural connectivity of all subjects belonging to the Paris dataset. Simulated resting-state fMRI data were simulated during ~8 minutes at a sampling frequency of 2 Hz.
    • simulations_average_homotopic.mat: simulated functional connectivity (FC) matrices generated using an abstract model of brain activity (the SAR model) and simulated resting-state fMRI time series (TS) generated using 6 mainstream computational models of brain activity (models), all using as input the average structural connectivity of all subjects belonging to the Paris dataset and an artificial addition of homotopic structural connections. Simulated resting-state fMRI data were simulated during ~8 minutes at a sampling frequency of 2 Hz.

    Reference: Marrelec G, Messé A, Giron A, Rudrauf D (2016) Functional Connectivity’s Degenerate View of Brain Computation. PLoS Comput Biol 12(10): e1005031. doi:10.1371/journal.pcbi.1005031
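
    As a minimal sketch (assuming the variable names TS and MAP noted above), the time series can be loaded with scipy and turned into a functional connectivity matrix by correlation:

    from scipy.io import loadmat
    import numpy as np

    # Variable names TS and MAP are taken from the description above; the
    # exact array layout (e.g., one cell per subject) should be checked.
    data = loadmat("empirical_Paris.mat")
    ts, sc = data["TS"], data["MAP"]

    # Functional connectivity for one subject: Pearson correlation between
    # regional time series (time points in rows, regions in columns assumed).
    subj_ts = np.asarray(ts[0, 0]) if ts.dtype == object else np.asarray(ts)
    fc = np.corrcoef(subj_ts, rowvar=False)  # regions x regions
    print("FC matrix shape:", fc.shape)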

  6. Data from: An fMRI dataset in response to large-scale short natural dynamic...

    • openneuro.org
    Updated Oct 15, 2024
    + more versions
    Cite
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan (2024). An fMRI dataset in response to large-scale short natural dynamic facial expression videos [Dataset]. http://doi.org/10.18112/openneuro.ds005047.v1.0.7
    Dataset updated
    Oct 15, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Panpan Chen; Chi Zhang; Bao Li; Li Tong; Linyuan Wang; Shuxiao Ma; Long Cao; Ziya Yu; Bin Yan
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary

    Facial expression is among the most natural ways for human beings to convey emotional information in daily life. Although the neural mechanisms of facial expression have been extensively studied using lab-controlled images and a small number of lab-controlled video stimuli, how the human brain processes natural dynamic facial expression videos still needs to be investigated. To our knowledge, this type of data, specifically for large-scale natural facial expression videos, is currently missing. We describe here the natural Facial Expressions Dataset (NFED), an fMRI dataset including responses to 1,320 short (3-second) natural facial expression video clips. These video clips are annotated with three types of labels: emotion, gender, and ethnicity, along with accompanying metadata. We validate that the dataset has good quality within and across participants and, notably, can capture temporal and spatial stimuli features. NFED provides researchers with fMRI data for understanding the visual processing of a large number of natural facial expression videos.

    Data Records

    The data, which are structured following the BIDS format, are accessible at https://openneuro.org/datasets/ds005047. The “sub-

    Stimulus. Distinct folders store the stimuli for the distinct fMRI experiments: "stimuli/face-video", "stimuli/floc", and "stimuli/prf" (Fig. 2b). The category labels and metadata corresponding to the video stimuli are stored in "videos-stimuli_category_metadata.tsv". The "videos-stimuli_description.json" file describes the category and metadata information of the video stimuli (Fig. 2b).
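
    For instance, the stimulus annotations can be read directly with pandas (a minimal sketch; column names should be checked against the json description):

    import pandas as pd

    # Emotion, gender, and ethnicity labels plus metadata for each clip.
    meta = pd.read_csv("videos-stimuli_category_metadata.tsv", sep="\t")
    print(meta.columns.tolist())
    print(meta.head())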

    Raw MRI data. Each participant's folder comprises 11 session folders: “sub-

    Volume data from pre-processing. The pre-processed volume-based fMRI data were in the folder named “pre-processed_volume_data/sub-

    Surface data from pre-processing. The pre-processed surface-based data were stored in a file named “volumetosurface/sub-

    FreeSurfer recon-all. The results of reconstructing the cortical surface are saved as “recon-all-FreeSurfer/sub-

    Surface-based GLM analysis data. We conducted GLMsingle analyses on the data from the main experiment. There is a file named “sub--

    Validation. The code of technical validation was saved in the “derivatives/validation/code” folder. The results of technical validation were saved in the “derivatives/validation/results” folder (Fig. 2h). The “README.md” describes the detailed information of code and results.

  7. RMP Rumination fMRI Dataset

    • scidb.cn
    Updated Apr 29, 2022
    Cite
    Xiao Chen; Chao-Gan Yan (2022). RMP Rumination fMRI Dataset [Dataset]. http://doi.org/10.57760/sciencedb.o00115.00002
    Available download formats: Croissant
    Dataset updated
    Apr 29, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Xiao Chen; Chao-Gan Yan
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset was used to investigate the brain mechanism underlying the rumination state (Chen et al., 2020, NeuroImage). The data was shared through the R-fMRI Maps Project (RMP) and Psychological Science Data Bank.

    Investigators and Affiliations

    Xiao Chen, Ph.D. (1, 2, 3, 4); Chao-Gan Yan, Ph.D. (1, 2, 3, 4)
    1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing 100101, China; 2. International Big-Data Center for Depression Research, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; 3. Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; 4. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.

    Acknowledgments

    We would like to thank the National Center for Protein Sciences at Peking University in Beijing, China, for assistance with data acquisition at PKU, and Dr. Men Weiwei for his technical support during data collection.

    Funding

    National Key R&D Program of China (2017YFC1309902); National Natural Science Foundation of China (81671774 and 81630031); 13th Five-year Informatization Plan of Chinese Academy of Sciences (XXH13505); Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006); Beijing Nova Program of Science and Technology (Z191100001119104); Scientific Foundation of Institute of Psychology, Chinese Academy of Sciences (Y9CX422005); China Postdoctoral Science Foundation (2019M660847).

    Publication Related to This Dataset

    The following publication includes the data shared in this data collection: Chen, X., Chen, N.X., Shen, Y.Q., Li, H.X., Li, L., Lu, B., Zhu, Z.C., Fan, Z., Yan, C.G. (2020). The subsystem mechanism of default mode network underlying rumination: A reproducible neuroimaging study. NeuroImage, 221, 117185, doi:10.1016/j.neuroimage.2020.117185.

    Sample Size

    Total: 41 (22 females; mean age = 22.7 ± 4.1 years). Exclusion criteria: any MRI contraindications, current psychiatric or neurological disorders, clinical diagnosis of neurologic trauma, use of psychotropic medication, and any history of substance or alcohol abuse.

    Scan Procedures and Parameters

    Several days prior to scanning, participants were interviewed and briefed on the purpose of the study and the mental states to be induced in the scanner. Subjects also generated keywords for 4 individual negative autobiographical events as the stimuli for the sad memory phase. We measured participants’ rumination tendency with the Ruminative Response Scale (RRS) (Nolen-Hoeksema and Morrow, 1991), which can be further divided into a more unconstructive subtype, brooding, and a more adaptive subtype, reflection (Treynor, 2003). All participants completed identical fMRI tasks on 3 different MRI scanners (order was counterbalanced across participants). Time elapsed between 2 sequential visits was 22.0 ± 14.6 days. The fMRI session included 4 runs: resting state, sad memory, rumination state, and distraction state. An 8-minute resting state came first as a baseline. Participants were prompted to look at a fixation cross on the screen, not to think of anything in particular, and to stay awake. Then participants recalled negative autobiographical events prompted by individualized keywords from the prior interview. Participants were asked to recall as vividly as they could and to imagine they were re-experiencing those negative events.

    In the rumination state, questions such as “Think: Analyze your personality to understand why you feel so depressed in the events you just remembered” were presented to help participants think about themselves, while in the distraction state, prompts like “Think: The layout of a typical classroom” were presented to help participants focus on an objective and concrete scene. All mental states (sad memory, rumination, and distraction) except for the resting state contained four randomly sequentially presented stimuli (keywords or prompts). Each stimulus lasted for 2 minutes and then was switched to the next without any inter-stimulus intervals (ISI), forming an 8-minute continuous mental state. The resting state and negative autobiographical events recall were sequenced first and second, while the order of the rumination and distraction states was counterbalanced across participants. Before the resting state and after each mental state, we assessed participants’ subjective affect with a scale (item scores ranged from 1 = very unhappy to 9 = very happy). Thinking contents and the phenomenology during each mental state were assessed with a series of items derived from a factor analysis (Gorgolewski et al., 2014) regarding self-generated thoughts (item scores ranged from 1 = not at all to 9 = almost all).

    Image Acquisition

    Images were acquired on 3 Tesla GE MR750 scanners at the Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences (henceforth IPCAS) and Peking University (henceforth PKUGE) with 8-channel head coils. Another 3 Tesla SIEMENS PRISMA scanner (henceforth PKUSIEMENS) with an 8-channel head coil at Peking University was also used. Before functional image acquisition, all participants underwent a 3D T1-weighted scan (IPCAS/PKUGE: 192 sagittal slices, TR = 6.7 ms, TE = 2.90 ms, slice thickness/gap = 1/0 mm, in-plane resolution = 256 × 256, inversion time (TI) = 450 ms, FOV = 256 × 256 mm, flip angle = 7º, average = 1; PKUSIEMENS: 192 sagittal slices, TR = 2530 ms, TE = 2.98 ms, slice thickness/gap = 1/0 mm, in-plane resolution = 256 × 224, inversion time (TI) = 1100 ms, FOV = 256 × 224 mm, flip angle = 7º, average = 1). After T1 image acquisition, functional images were obtained for the resting state and all three mental states (sad memory, rumination, and distraction) (IPCAS/PKUGE: 33 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90º, thickness/gap = 3.5/0.6 mm, FOV = 220 × 220 mm, matrix = 64 × 64; PKUSIEMENS: 62 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90º, thickness = 2 mm, multiband factor = 2, FOV = 224 × 224 mm).

    Code Availability

    Analysis codes and other behavioral data are openly shared at https://github.com/Chaogan-Yan/PaperScripts/tree/master/Chen_2020_NeuroImage.

    References

    Gorgolewski, K.J., Lurie, D., Urchs, S., Kipping, J.A., Craddock, R.C., Milham, M.P., Margulies, D.S., Smallwood, J., 2014. A correspondence between individual differences in the brain's intrinsic functional architecture and the content and form of self-generated thoughts. PLoS One 9, e97176.
    Nolen-Hoeksema, S., Morrow, J., 1991. A prospective study of depression and posttraumatic stress symptoms after a natural disaster: The 1989 Loma Prieta earthquake.
    Treynor, W., 2003. Rumination reconsidered: A psychometric analysis.

    (Note: Part of the content of this post was adapted from the original NeuroImage paper.)

  8. 7T fMRI Resting-state Dataset

    • openneuro.org
    Updated Apr 2, 2025
    Cite
    Jiahe Zhang; Danlei Chen; Philip Deming; Tara Srirangarajan; Jordan Theriault; Philip A. Kragel; Ludger Hartley; Kent M. Lee; Kieran McVeigh; Tor D. Wager; Lawrence L. Wald; Ajay B. Satpute; Karen S. Quigley; Susan Whitfield-Gabrieli; Lisa Feldman Barrett; Marta Bianciardi (2025). 7T fMRI Resting-state Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005747.v1.2.1
    Dataset updated
    Apr 2, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Jiahe Zhang; Danlei Chen; Philip Deming; Tara Srirangarajan; Jordan Theriault; Philip A. Kragel; Ludger Hartley; Kent M. Lee; Kieran McVeigh; Tor D. Wager; Lawrence L. Wald; Ajay B. Satpute; Karen S. Quigley; Susan Whitfield-Gabrieli; Lisa Feldman Barrett; Marta Bianciardi
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset was derived from a 7T fMRI study of N=90 healthy human adults (18-40 years old, M=26.9 years old, SD=6.2 years; 40 females). Participants completed one T1-weighted structural scan and three ten-minute resting-state functional scans. Raw structural and functional data and preprocessed functional data are available here.

    Please cite: Zhang et al. (2025). "Cortical and subcortical mapping of the allostatic-interoceptive system in the human brain using 7 Tesla fMRI." bioRxiv.

  9. Data from: Gallant Lab Natural Short Clips 3T fMRI Data

    • doi.gin.g-node.org
    Updated May 3, 2022
    Cite
    Alexander G. Huth; Shinji Nishimoto; An T. Vu; Tom Dupre la Tour; Jack L. Gallant (2022). Gallant Lab Natural Short Clips 3T fMRI Data [Dataset]. http://doi.org/10.12751/g-node.vy1zjd
    Dataset updated
    May 3, 2022
    Dataset provided by
    University of California, Berkeley
    Authors
    Alexander G. Huth; Shinji Nishimoto; An T. Vu; Tom Dupre la Tour; Jack L. Gallant
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Dataset funded by
    CSoI
    NEI
    Description

    This data set contains BOLD fMRI responses in human subjects viewing a set of natural short clips. The functional data were collected for five subjects, in three sessions over three separate days for each subject. Details of the experiment are described in the original publication.

  10. raw fMRI data set

    • figshare.com
    application/gzip
    Updated Jan 19, 2016
    Cite
    Shuntaro Sasai (2016). raw fMRI data set [Dataset]. http://doi.org/10.6084/m9.figshare.1309742.v18
    Available download formats: application/gzip
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    figshare
    Figshare (http://figshare.com/)
    Authors
    Shuntaro Sasai
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An fMRI data set used in "Boly et al. Stimulus set meaningfulness and neurophysiological differentiation: a functional magnetic resonance imaging study"

  11. Data from: A test-retest fMRI dataset for motor, language and spatial...

    • neurovault.org
    zip
    Updated Jun 30, 2018
    + more versions
    Cite
    (2018). A test-retest fMRI dataset for motor, language and spatial attention functions [Dataset]. http://identifiers.org/neurovault.collection:63
    Available download formats: zip
    Dataset updated
    Jun 30, 2018
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    A collection of 11 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.

    Collection description

    OpenfMRI ds000114

  12. fMRI data.

    • plos.figshare.com
    zip
    Updated Jun 1, 2023
    Cite
    Andrew Salch; Adam Regalski; Hassan Abdallah; Raviteja Suryadevara; Michael J. Catanzaro; Vaibhav A. Diwadkar (2023). fMRI data. [Dataset]. http://doi.org/10.1371/journal.pone.0255859.s004
    Available download formats: zip
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Andrew Salch; Adam Regalski; Hassan Abdallah; Raviteja Suryadevara; Michael J. Catanzaro; Vaibhav A. Diwadkar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The included archive of .csv files has one .csv file for each time index. Each .csv file contains a single matrix, which has one column for each of the spatial coordinates x, y, z and one column for the fMRI signal amplitude at voxel (x, y, z) at that time index. This is normalized scan data for a single patient from our study, described in the paper, with an ACC (anterior cingulate cortex) mask applied. This is the data we used, together with our software implementation of our workflow (available at https://github.com/regalski/Wayne-State-TDA), to produce the persistence diagrams, vineyards, and loop locations pictured in the figures and described in the Workflow section of our paper. (ZIP)
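
    A minimal sketch of reading one of these per-timepoint files back into a dense voxel array (the file name is hypothetical, and a header row, if present, would need to be skipped):

    import csv
    import numpy as np

    # Each .csv holds one matrix: columns x, y, z and the fMRI signal
    # amplitude at voxel (x, y, z) for a single time index.
    with open("time_0001.csv") as f:  # hypothetical name inside the archive
        rows = np.array([[float(v) for v in line] for line in csv.reader(f)])

    coords = rows[:, :3].astype(int)  # voxel coordinates within the ACC mask
    amplitudes = rows[:, 3]

    # Scatter the masked values into a dense volume for inspection.
    vol = np.zeros(coords.max(axis=0) + 1)
    vol[tuple(coords.T)] = amplitudes
    print("non-zero voxels:", np.count_nonzero(vol))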

  13. IAPS fMRI dataset - block design with three emotional valence conditions,...

    • neurovault.org
    nifti
    Updated Jan 27, 2024
    + more versions
    Cite
    (2024). IAPS fMRI dataset - block design with three emotional valence conditions, Hsiao et al., 2024, Brain Imaging and Behavior: sub007 positive [Dataset]. http://identifiers.org/neurovault.image:839687
    Available download formats: nifti
    Dataset updated
    Jan 27, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    sub007_positive.nii.gz (old ext: .nii.gz)

    Collection description

    This dataset is from Hsiao et al., 2024, Brain Imaging and Behavior. Please cite this paper if you use this dataset in your research. This dataset contains neuroimaging data from 56 participants. Among these 56 participants, only 53 (sub001-sub053) completed the STAI questionnaires. Each participant was instructed to complete an emotional reactivity task in the MRI scanner. A total of 90 emotional scenes were selected from the International Affective Picture System to be used as stimuli in the emotional reactivity task. Within this task, participants were instructed to judge each scene as either indoor or outdoor in a block design paradigm. Each block consisted of six scenes sharing the same valence (i.e., positive, negative, or neutral), with each scene displayed on the screen for 2.5 seconds, resulting in a total block duration of 15 seconds. Each emotional scene block was then alternated with a fixation block lasting 15 seconds. Five positive, five neutral, and five negative emotional blocks were presented in a counterbalanced order across participants. The data were preprocessed using SPM8. Each participant has a beta image for the positive (e.g., sub001_positive.nii.gz), negative (e.g., sub001_negative.nii.gz), and neutral (e.g., sub001_neutral.nii.gz) conditions. Paper doi: https://doi.org/10.1101/2023.07.29.551128
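
    As a sketch, one participant's condition betas can be loaded and contrasted voxelwise with nibabel (the simple emotional-vs-neutral contrast below is illustrative, not an analysis from the paper):

    import nibabel as nib

    # One beta image per condition for each participant, as described above.
    pos = nib.load("sub001_positive.nii.gz").get_fdata()
    neg = nib.load("sub001_negative.nii.gz").get_fdata()
    neu = nib.load("sub001_neutral.nii.gz").get_fdata()

    # Voxelwise contrast: mean of the emotional conditions minus neutral.
    contrast = (pos + neg) / 2.0 - neu
    print("contrast volume shape:", contrast.shape)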

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Cognitive paradigm (task)

    Positive and Negative Images Task

    Map type

    U

  14. Group results of the fMRI data.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Oct 24, 2018
    Cite
    Madsen, Kristoffer Hougaard; Svolgaard, Olivia; Andersen, Kasper Winther; Bauer, Christian; Blinkenberg, Morten; Siebner, Hartwig Roman; Selleberg, Finn (2018). Group results of the fMRI data. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000694066
    Dataset updated
    Oct 24, 2018
    Authors
    Madsen, Kristoffer Hougaard; Svolgaard, Olivia; Andersen, Kasper Winther; Bauer, Christian; Blinkenberg, Morten; Siebner, Hartwig Roman; Selleberg, Finn
    Description

    Group results of the fMRI data.

  15. Chinese Human Connectome Project

    • scidb.cn
    Updated Dec 3, 2022
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Guoyuan Yang; Jianqiao Ge; Jia-Hong Gao (2022). Chinese Human Connectome Project [Dataset]. http://doi.org/10.11922/sciencedb.01374
    Available download formats: Croissant
    Dataset updated
    Dec 3, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Guoyuan Yang; Jianqiao Ge; Jia-Hong Gao
    Description

    CHCP Overview:

    The human behavior and brain are shaped by genetic, environmental, and cultural interactions. Recent advances in neuroimaging integrate multimodal imaging data from large populations and have started to explore the large-scale structural and functional connectomic architectures of the human brain. One of the major pioneers is the Human Connectome Project (HCP), which developed sophisticated imaging protocols and has built a collection of high-quality multimodal neuroimaging, behavioral, and genetic data from a US population. A large-scale neuroimaging project parallel to the HCP, but with a focus on the East Asian population, will allow comparisons of brain-behavior associations across different ethnicities and cultures. The Chinese Human Connectome Project (CHCP) was launched in 2017 and is led by Professor Jia-Hong Gao at Peking University, Beijing, China. CHCP aims to provide large sets of multimodal neuroimaging, behavioral, and genetic data on the Chinese population that are comparable to the data of the HCP. The CHCP protocols were almost identical to those of the HCP, including the procedure for 3T MRI scanning, the data acquisition parameters, and the task paradigms for functional brain imaging. The CHCP also collected behavioral and genetic data that were compatible with the HCP dataset. The first public release of the CHCP dataset was in 2022. The CHCP dataset includes high-resolution structural MR images (T1W and T2W), resting-state fMRI (rfMRI), task fMRI (tfMRI), and high angular resolution diffusion MR images (dMRI) of the human brain, as well as behavioral data, based on a Chinese population. The unprocessed "raw" images of the CHCP dataset (about 1.85 TB) have been released on the platform and can be downloaded. Considering our current cloud-storage service, sharing the fully preprocessed images (up to 70 TB) requires further construction. We will actively cooperate with researchers who contact us with academic requests, offering case-by-case solutions to access the preprocessed data in a timely manner, such as by mailing hard disks or via a third-party trusted cloud-storage service.

    V2 Release (Date: January 16, 2023):

    Here, we released the task fMRI EVs files for the seven major domains: 1) visual, motion, somatosensory, and motor systems; 2) category-specific representations; 3) working memory/cognitive control systems; 4) language processing (semantic and phonological processing); 5) social cognition (Theory of Mind); 6) relational processing; and 7) emotion processing.

    V3 Release (Date: January 12, 2024):

    This version of the data release primarily discloses the CHCP raw MRI dataset that underwent the "HCP minimal preprocessing pipeline", located in the CHCP_ScienceDB_preproc folder (about 6.90 TB). In this folder, the preprocessed MRI data include the T1W, T2W, rfMRI, tfMRI, and dMRI modalities for all young adulthood participants, as well as partial results for middle-aged and older adulthood participants in the CHCP dataset. Following the data sharing strategy of the HCP, we have eliminated some redundant preprocessed data, resulting in a final total size of about 6.90 TB in zip files.

    V4 Release (Date: December 4, 2024):

    In this update, we have fixed the issue with the corrupted compressed file of preprocessed data for subject 3011 and removed the incorrect preprocessed results for subject 3090. Additionally, we have updated the subject file information list. This release also includes an update of the unprocessed "raw" images of the CHCP dataset in the CHCP_ScienceDB_unpreproc folder (about 1.85 TB), addressing the previously insufficient anonymization of T1W and T2W modality data for some older adulthood participants in versions V1 and V2. For more detailed information, please refer to the data descriptions in versions V1 and V2.

    CHCP Summary:

    Subjects: 366 healthy adults (Chinese Han)
    Imaging Scanner: 3T MR (Siemens Prisma)
    Institution: Peking University, Beijing, China
    Funding Agencies: Beijing Municipal Science & Technology Commission; Chinese Institute for Brain Research (Beijing); National Natural Science Foundation of China; Ministry of Science and Technology of China

    CHCP Citations:

    Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from CHCP data should contain the following wording in the acknowledgments section: "Data were provided [in part] by the Chinese Human Connectome Project (CHCP, PI: Jia-Hong Gao) funded by the Beijing Municipal Science & Technology Commission, Chinese Institute for Brain Research (Beijing), National Natural Science Foundation of China, and the Ministry of Science and Technology of China."

  16. Music Genre fMRI Dataset - Derivatives

    • zenodo.org
    bin, text/x-python
    Updated Aug 23, 2023
    Cite
    Tomoya Nakai; Naoko Koide-Majima; Shinji Nishimoto (2023). Music Genre fMRI Dataset - Derivatives [Dataset]. http://doi.org/10.5281/zenodo.8275363
    Available download formats: bin, text/x-python
    Dataset updated
    Aug 23, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tomoya Nakai; Naoko Koide-Majima; Shinji Nishimoto
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains preprocessed data from the Music Genre fMRI Dataset (https://openneuro.org/datasets/ds003720/versions/1.0.0). Experimental stimuli can be generated using GTZAN_Preprocess.py.

    References:

    1. Nakai, Koide-Majima, and Nishimoto (2021). Correspondence of categorical and feature-based representations of music in the human brain. Brain and Behavior. 11(1), e01936. https://doi.org/10.1002/brb3.1936

    2. Nakai, Koide-Majima, and Nishimoto (2022). Music genre neuroimaging dataset. Data in Brief. 40, 107675. https://doi.org/10.1016/j.dib.2021.107675

  17. COBRE preprocessed with NIAK 0.17 - lightweight release

    • figshare.com
    application/gzip
    Updated Nov 3, 2016
    Cite
    Pierre Bellec (2016). COBRE preprocessed with NIAK 0.17 - lightweight release [Dataset]. http://doi.org/10.6084/m9.figshare.4197885.v1
    Available download formats: application/gzip
    Dataset updated
    Nov 3, 2016
    Dataset provided by
    figshare
    Authors
    Pierre Bellec
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Content

    This work is a derivative from the COBRE sample found in the International Neuroimaging Data-sharing Initiative (INDI), originally released under Creative Commons -- Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 72 patients diagnosed with schizophrenia (58 males, age range = 18-65 yrs) and 74 healthy controls (51 males, age range = 18-65 yrs). The fMRI dataset for each subject is a single nifti file (.nii.gz), featuring 150 EPI blood-oxygenation level dependent (BOLD) volumes obtained in 5 min (TR = 2 s, TE = 29 ms, FA = 75°, 32 slices, voxel size = 3x3x4 mm3, matrix size = 64x64, FOV = mm2). The data processing as well as packaging was implemented by Pierre Bellec, CRIUGM, Department of Computer Science and Operations Research, University of Montreal, 2016.

    The COBRE preprocessed fMRI release more specifically contains the following files:

    • README.md: a markdown (text) description of the release.
    • phenotypic_data.tsv.gz: a gzipped tab-separated value file, with each column representing a phenotypic variable as well as measures of data quality (related to motion). Each row corresponds to one participant, except the first row, which contains the names of the variables (see the json file below for a description).
    • keys_phenotypic_data.json: a json file describing each variable found in phenotypic_data.tsv.gz.
    • fmri_XXXXXXX.tsv.gz: a gzipped tab-separated value file, with each column representing a confounding variable for the time series of participant XXXXXXX (the same participant ID found in phenotypic_data.tsv.gz). Each row corresponds to a time frame, except for the first row, which contains the names of the variables (see the json file below for a definition).
    • keys_confounds.json: a json file describing each variable found in the files fmri_XXXXXXX.tsv.gz.
    • fmri_XXXXXXX.nii.gz: a 3D+t nifti volume at 6 mm isotropic resolution, stored as short (16 bits) integers, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Each fMRI dataset features 150 volumes.

    Usage recommendations

    Individual analyses: You may want to remove some time frames with excessive motion for each subject; see the confounding variable called scrub in fmri_XXXXXXX.tsv.gz. After removing these time frames there may not be enough usable data; we recommend a minimum of 60 time frames. A fairly large number of confounds have been made available as part of the release (slow time drifts, motion parameters, frame displacement, scrubbing, average WM/Vent signal, COMPCOR, global signal). We strongly recommend regression of slow time drifts. Everything else is optional.

    Group analyses: There will also be some residual effects of motion, which you may want to regress out from connectivity measures at the group level. The number of acceptable time frames, as well as a measure of residual motion (called frame displacement, as described by Power et al., NeuroImage 2012), can be found in the variables Frames OK and FD scrubbed in phenotypic_data.tsv.gz. Finally, the simplest use case with these data is to predict the overall presence of a diagnosis of schizophrenia (values Control or Patient in the phenotypic variable Subject Type). You may want to try to match the control and patient samples in terms of amount of motion, as well as age and sex. Note that more detailed diagnostic categories are available in the variable Diagnosis.

    Preprocessing

    The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.17, under CentOS version 6.3 with Octave (http://gnu.octave.org) version 4.0.2 and the Minc toolkit (http://www.bic.mni.mcgill.ca/ServicesSoftware/ServicesSoftwareMincToolKit) version 0.3.18. Each fMRI dataset was corrected for inter-slice differences in acquisition time, and the parameters of a rigid-body motion were estimated for each time frame. Rigid-body motion was estimated within as well as between runs, using the median volume of the first run as a target. The median volume of one selected fMRI run for each subject was coregistered with a T1 individual scan using Minctracc (Collins and Evans, 1998), which was itself non-linearly transformed to the Montreal Neurological Institute (MNI) template (Fonov et al., 2011) using the CIVET pipeline (Ad-Dab'bagh et al., 2006). The MNI symmetric template was generated from the ICBM152 sample of 152 young adults, after 40 iterations of non-linear coregistration. The rigid-body transform, fMRI-to-T1 transform, and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 6 mm isotropic resolution.

    Note that a number of confounding variables were estimated and are made available as part of the release. WARNING: no confounds were actually regressed from the data, so this can be done interactively by the user, who will be able to explore different analytical paths easily. The "scrubbing" method of Power et al. (2012) was used to identify the volumes with excessive motion (frame displacement greater than 0.5 mm). A minimum number of 60 unscrubbed volumes per run, corresponding to ~120 s of acquisition, is recommended for further analysis. The following nuisance parameters were estimated: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off), average signals in conservative masks of the white matter and the lateral ventricles as well as the six rigid-body motion parameters (Giove et al., 2009), anatomical COMPCOR signal in the ventricles and white matter (Chai et al., 2012), and a PCA-based estimator of the global signal (Carbonell et al., 2011). The fMRI volumes were not spatially smoothed.

    References

    Ad-Dab'bagh, Y., Einarson, D., Lyttelton, O., Muehlboeck, J. S., Mok, K., Ivanov, O., Vincent, R. D., Lepage, C., Lerch, J., Fombonne, E., Evans, A. C., 2006. The CIVET Image-Processing Environment: A Fully Automated Comprehensive Pipeline for Anatomical Neuroimaging Research. In: Corbetta, M. (Ed.), Proceedings of the 12th Annual Meeting of the Human Brain Mapping Organization. NeuroImage, Florence, Italy.
    Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., Jul. 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126-1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082
    Carbonell, F., Bellec, P., Shmuel, A., 2011. Validation of a superposition model of global and system-specific resting state activity reveals anti-correlated networks. Brain Connectivity 1 (6), 496-510. doi:10.1089/brain.2011.0065
    Chai, X. J., Castañón, A. N. N., Öngür, D., Whitfield-Gabrieli, S., Jan. 2012. Anticorrelations in resting state networks without global signal regression. NeuroImage 59 (2), 1420-1428. http://dx.doi.org/10.1016/j.neuroimage.2011.08.048
    Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271-1294.
    Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., Jan. 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313-327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033
    Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., Oct. 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058-1064. http://dx.doi.org/10.1016/j.mri.2009.06.004
    Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., Feb. 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142-2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
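
    Following the usage recommendations above (slow time drifts should be regressed; everything else is optional), a minimal sketch with pandas and nilearn might look like this. XXXXXXX is the participant ID placeholder from the release, and the drift column selection is an assumption to be checked against keys_confounds.json.

    import pandas as pd
    from nilearn import image

    # Confounds for one participant (column names are documented in
    # keys_confounds.json).
    confounds = pd.read_csv("fmri_XXXXXXX.tsv.gz", sep="\t")

    # Assumed naming: keep the slow-time-drift regressors only.
    drift = confounds[[c for c in confounds.columns if "drift" in c.lower()]]

    cleaned = image.clean_img(
        "fmri_XXXXXXX.nii.gz",
        confounds=drift.to_numpy(),
        detrend=False, standardize=False, t_r=2.0,
    )
    cleaned.to_filename("fmri_XXXXXXX_cleaned.nii.gz")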

  18. ADHD-200 Preprocessed Data

    • scicrunch.org
    • dknet.org
    • +2more
    Updated Apr 18, 2012
    Cite
    (2012). ADHD-200 Preprocessed Data [Dataset]. http://identifiers.org/RRID:SCR_000576
    Dataset updated
    Apr 18, 2012
    Description

    Preprocessed versions of the ADHD-200 Global Competition data, including both preprocessed versions of the structural and functional datasets previously made available by the ADHD-200 consortium, as well as initial standard subject-level analyses. The ADHD-200 Sample is pleased to announce the unrestricted public release of 776 resting-state fMRI and anatomical datasets aggregated across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages 7-21 years old). Accompanying phenotypic information includes diagnostic status, dimensional ADHD symptom measures, age, sex, intelligence quotient (IQ), and lifetime medication status. Preliminary quality control assessments (usable vs. questionable) based upon visual timeseries inspection are included for all resting-state fMRI scans. In accordance with HIPAA guidelines and 1000 Functional Connectomes Project protocols, all datasets are anonymous, with no protected health information included. They hope this release will open collaborative possibilities and invite contributions from researchers not traditionally working with brain data; for those whose specialties lie outside of MRI and fMRI data processing, the competition is now one step easier to join. The preprocessed data are being made freely available through the efforts of The Neuro Bureau as well as the ADHD-200 consortium. They ask that you acknowledge both of these organizations in any publications (conference, journal, etc.) that make use of the data. None of the preprocessing would be possible without the freely available imaging analysis packages, so please also acknowledge the relevant packages and resources, as well as any other specific release-related acknowledgements. You must be logged into NITRC to download the ADHD-200 datasets: http://www.nitrc.org/projects/neurobureau

  19. Data from: A high resolution 7-Tesla resting-state fMRI test-retest dataset...

    • openneuro.org
    Updated Jan 7, 2019
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Chris Gorgolewski; Natacha Mendes; Domenica Wilfling; Elisabeth Wladimirow; Claudine J. Gauthier; Tyler Bonnen; Florence J.M. Ruby; Robert Trampel; Pierre-Louis Bazin; Roberto Cozatl; Jonathan Smallwood; Daniel S. Margulies (2019). A high resolution 7-Tesla resting-state fMRI test-retest dataset with cognitive and physiological measures [Dataset]. http://doi.org/10.18112/openneuro.ds001168.v1.0.0
    Explore at:
    Dataset updated
    Jan 7, 2019
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Chris Gorgolewski; Natacha Mendes; Domenica Wilfling; Elisabeth Wladimirow; Claudine J. Gauthier; Tyler Bonnen; Florence J.M. Ruby; Robert Trampel; Pierre-Louis Bazin; Roberto Cozatl; Jonathan Smallwood; Daniel S. Margulies
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Here we present a test-retest dataset of functional magnetic resonance imaging (fMRI) data acquired at rest. 22 participants were scanned during two sessions spaced one week apart. Each session includes two 1.5 mm isotropic whole-brain scans and one 0.75 mm isotropic scan of the prefrontal cortex, giving a total of six time-points. Additionally, the dataset includes measures of mood, sustained attention, blood pressure, respiration, pulse, and the content of self-generated thoughts (mind wandering). These data enable the investigation of sources of both intra- and inter-session variability, not limited to physiological changes but also including alterations in cognitive and affective states, at high spatial resolution. The dataset is accompanied by a detailed experimental protocol and the source code of all stimuli used.

    Structural scan

    For structural images, a 3D MP2RAGE sequence was used: 3D acquisition with field of view 224×224×168 mm3 (H-F; A-P; R-L), imaging matrix 320×320×240, 0.7 mm isotropic voxel size, Time of Repetition (TR)=5.0 s, Time of Echo (TE)=2.45 ms, Time of Inversion (TI) 1/2=0.9 s/2.75 s, Flip Angle (FA) 1/2=5°/3°, Bandwidth (BW)=250 Hz/Px, Partial Fourier 6/8, and GRAPPA acceleration with iPAT factor of 2 (24 reference lines).

    Field map

    For estimating B0 inhomogeneities, a 2D gradient echo sequence was used. It was acquired in axial orientation with field of view 192×192 mm2 (R-L; A-P), imaging matrix 64×64, 35 slices with 3.0 mm thickness, 3.0 mm isotropic voxel size, TR=1.5 s, TE1/2=6.00 ms/7.02 ms (which gives delta TE=1.02 ms), FA=72°, and BW=256 Hz/Px.
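
    As a worked example of how a dual-echo acquisition like this yields a B0 map: the field offset at each voxel equals the phase accrued between the two echoes divided by 2π·ΔTE. Below is a minimal sketch under the assumption that the two phase images are already unwrapped and in radians; the variable names are hypothetical.

    ```python
    import numpy as np

    DELTA_TE = 7.02e-3 - 6.00e-3  # seconds; the 1.02 ms delta TE quoted above

    def fieldmap_hz(phase_te1, phase_te2, delta_te=DELTA_TE):
        """B0 offset in Hz from dual-echo phase images (already unwrapped):
        delta_phase / (2 * pi * delta_TE)."""
        return (phase_te2 - phase_te1) / (2.0 * np.pi * delta_te)

    # With delta TE = 1.02 ms, one full 2*pi phase wrap corresponds to
    # 1 / 0.00102 s ≈ 980 Hz of B0 offset.
    ```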

    Whole-brain rs-fMRI

    Whole-brain rs-fMRI scans were acquired using a 2D sequence. It used axial orientation, field of view 192×192 mm2 (R-L; A-P), imaging matrix 128×128, 70 slices with 1.5 mm thickness, 1.5 mm isotropic voxel size, TR=3.0 s, TE=17 ms, FA=70°, BW=1,116 Hz/Px, Partial Fourier 6/8, GRAPPA acceleration with iPAT factor of 3 (36 reference lines), and 300 repetitions, resulting in 15 min of scanning time. Before the scan, subjects were instructed to stay awake, keep their eyes open, and focus on a cross. In order to avoid a pronounced g-factor penalty when using a 24-channel receive coil, the acceleration factor was kept at a maximum of 3, preventing the acquisition of whole-brain datasets at submillimeter resolution. However, as 7 T provides the necessary SNR for such high spatial resolutions, a second experiment was performed with only partial brain coverage but at a 0.75 mm isotropic resolution.

    Prefrontal cortex rs-fMRI

    The submillimeter rs-fMRI scan was acquired with a zoomed EPI 2D acquisition sequence. It was acquired in axial orientation with a skewed saturation pulse suppressing signal from the posterior part of the brain (see Figure 2 of the accompanying paper). The position of the field of view was motivated by the involvement of the medial prefrontal cortex in the default mode network and mind wandering. This location can also improve our understanding of the functional anatomy of the prefrontal cortex, which is understudied in comparison to primary sensory cortices. Field of view was 150×45 mm2 (R-L; A-P), imaging matrix=200×60, 40 slices with 0.75 mm thickness, 0.75 mm isotropic voxel size, TR=4.0 s, TE=26 ms, FA=70°, BW=1,042 Hz/Px, Partial Fourier 6/8. A total of 150 repetitions were acquired, resulting in 10 min of scanning time. Before the scan, subjects were instructed to stay awake, keep their eyes open, and focus on a cross.

    Known issues

    • sub-07 Session 1 & 2: Shimming window was offset, causing minor signal deterioration. The same shimming window was used for both sessions.
    • sub-08 Session 1: The lightbulb in the projector died during the first resting-state scan. A projector from another scanner was used as a replacement. The replacement took approximately 30 min; the participant remained in the scanner during this time.
    • sub-11 Session 1 & 2: All whole-brain scans were accidentally acquired with a voxel size of 3 mm instead of 1.5 mm.
    • sub-12 Session 1 & 2: The second fieldmap was run after the second whole-brain scan instead of before it; the same order was used for the second session.
    • sub-13 Session 1: The second magnitude image of the first fieldmap was damaged during transfer from the scanner (one slice is missing). The phase image is intact, so the fieldmap can still be reconstructed.
    • sub-19 Session 1: The second physiological recording (corresponding to the second whole-brain scan) was stopped before the scan finished; the third physiological recording (corresponding to the prefrontal scan) was started after the scan had started.
  20. Combined fMRI-fMRS dataset in an inference task in humans

    • data.mrc.ox.ac.uk
    • ora.ox.ac.uk
    Updated 2021
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Renée S Koolschijn; Anna Shpektor; U E Emir; H C Barron (2021). Combined fMRI-fMRS dataset in an inference task in humans [Dataset]. http://doi.org/10.5287/bodleian:vmJOOm7KD
    Explore at:
    Dataset updated
    2021
    Authors
    Renée S Koolschijn; Anna Shpektor; U E Emir; H C Barron
    Time period covered
    2021
    Dataset funded by
    Biotechnology and Biological Sciences Research Council, UKRI
    John Fell Oxford University Press Research Fund
    Royal Society Dorothy Hodgkin Research Fellowship
    EPSRC/MRC, UKRI
    Wellcome Centre for Integrative Neuroimaging
    Medical Research Council, UKRI
    Junior Research Fellowship from Merton College
    Wellcome Trust
    Description

    This dataset consists of the following components:

    • fMRI data showing group maps for contrasts of interest (nifti)
    • raw fMRS data of 19 subjects (dicom)
    • fMRS data of 19 subjects, preprocessed in MRspa (mat)
    • behavioural data from the inference task during the MRI scan (mat)
    • behavioural data from the associative test post MRI scan (mat)

    Participants performed a three-stage inference task across three days. On day 1 participants learned up to 80 auditory-visual associations. On day 2, each visual cue was paired with either a rewarding (set 1, monetary reward) or neutral outcome (set 2, woodchip). On day 3, auditory cues were presented in isolation (‘inference test’), without visual cues or outcomes, and we measured evidence for inference from the auditory cues to the appropriate outcome. Participants performed the inference test in an MRI scanner where fMRI-fMRS data was acquired. After the MRI scan, participants completed a surprise associative test for auditory-visual associations learned on day 1.

    fMRI data:

    SPM group maps in MNI space showing:

    1. BOLD signal on inference test trials, with a contrast between auditory cues where the associated visual cue was 'remembered' versus 'forgotten'
    2. Correlation between the contrast described in (1) and V1 fMRS measures of glu/GABA ratio for 'remembered' versus 'forgotten' trials in the inference test
    3. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using a 5 mm kernel prior to second-level analysis
    4. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using a 5 mm kernel at the first-level analysis
    5. BOLD signal on inference test trials contrasted with the BOLD signal on conditioning trials, smoothed using an 8 mm kernel at the first-level analysis
    6. BOLD signal on inference test trials, with a contrast between auditory cues where the associated visual cue was 'remembered' versus 'forgotten', smoothed using an 8 mm kernel at the first level

    Regions of interest (ROI) in MNI space:

    • Hippocampal ROI
    • Parietal-occipital cortex ROI
    • Brainstem ROI
    • Cumulative map of MRS voxel position across participants
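    A quick way to work with these releases is to average a group map within one of the ROIs. The following is a minimal sketch using nilearn; both filenames are hypothetical stand-ins for the released NIfTIs.

    ```python
    from nilearn import image, masking

    # Hypothetical filenames; substitute the NIfTI files from this release.
    roi = image.load_img("hippocampus_roi_mni.nii.gz")             # binary ROI mask
    zmap = image.load_img("group_remembered_vs_forgotten.nii.gz")  # SPM group map

    # Resample the map onto the ROI grid if needed, then average within the mask.
    zmap_rs = image.resample_to_img(zmap, roi, interpolation="continuous")
    print(masking.apply_mask(zmap_rs, roi).mean())
    ```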

    fMRS data:

    The raw fMRS data is included in DICOM format. Preprocessed data is included as a MATLAB structure for each subject, containing the following fields (a loading sketch follows this list):

    • Arrayedmetab: preprocessed spectra
    • ws: water signal
    • procsteps: preprocessing information
    • ntmetab: total number of spectra
    • params: acquisition parameters
    • TR of each acquisition
    • Block length: number of spectra acquired in each block
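    As a minimal sketch of reading one of these per-subject structures in Python (the filename and the top-level variable name are assumptions; list the keys of the loaded dictionary to find the actual name):

    ```python
    from scipy.io import loadmat

    # Hypothetical filename; the release contains one .mat structure per subject.
    mat = loadmat("sub-01_fmrs_preprocessed.mat",
                  squeeze_me=True, struct_as_record=False)

    # Skip MATLAB's private keys and take the first data variable.
    name = next(k for k in mat if not k.startswith("__"))
    fmrs = mat[name]

    print(fmrs.ntmetab)          # total number of spectra
    spectra = fmrs.Arrayedmetab  # preprocessed spectra
    water = fmrs.ws              # water signal
    ```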

    Behavioural data from the inference task performed in the MRI scanner:

    On each trial of the inference task, participants were presented with an auditory cue before being asked whether they would like to look in the wooden box ('yes' or 'no') where they had previously found the outcomes. The behavioural data from the inference test includes columns containing the following information (see the loading sketch after this list):

    1. Auditory stimulus: 0 (none) for conditioning trials, 1-80 for inference test trials
    2. Visual stimulus associated with the presented auditory stimulus (1-4)
    3. Migrating visual stimulus (1: no, 4: yes)
    4. Rewarded visual stimulus (0: no, 1: yes)
    5. Set during learning (1-8)
    6. Video number for inference test trials (1-32)
    7. Video number for conditioning trials (1-16)
    8. Trial type (2: conditioning, 3: inference)
    9. Trial start time
    10. Auditory stimulus/video play start time
    11. Inference trials: display question time; conditioning trials: outcome presentation time
    12. Trial end time
    13. Reaction time for inference test trials
    14. 0: incorrect response, 1: correct response
    15. Wall on which visual stimulus was presented for conditioning trials
    16. Inter-trial interval
    17. Button pressed (0: no, 1: yes)
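    A minimal sketch of loading this matrix into a labelled table (the filename, the variable name, and the short column labels are hypothetical; only the column order follows the listing above):

    ```python
    import pandas as pd
    from scipy.io import loadmat

    # Short labels for the 17 columns, in the order listed above.
    COLS = ["auditory_stim", "visual_stim", "migrating", "rewarded",
            "learning_set", "video_inference", "video_conditioning",
            "trial_type", "t_trial_start", "t_stim_onset",
            "t_question_or_outcome", "t_trial_end", "reaction_time",
            "correct", "wall", "iti", "button_pressed"]

    # Hypothetical filename and variable name.
    mat = loadmat("sub-01_inference_task.mat", squeeze_me=True)
    trials = pd.DataFrame(mat["behaviour"], columns=COLS)

    # e.g. response accuracy on inference trials (trial_type == 3)
    print(trials.loc[trials.trial_type == 3, "correct"].mean())
    ```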

    Behavioural data from post MRI-scan associative test:

    On each trial of the associative test, participants were presented with an auditory cue and then asked which of the 4 visual stimuli was associated with it. The columns contain the following information:

    1. Auditory stimulus number (1-80, 3 repeats)
    2. Visual stimulus associated with the presented auditory stimulus (1-4)
    3. Migrating visual stimulus (1: no, 4: yes)
    4. Rewarded visual stimulus (0: no, 1: yes)
    5-8. Visual stimulus positions (top left/right, bottom left/right; 1-4)
    9-12. Wall visual stimulus is presented on (1-4)
    13-16. Angle of visual stimulus still image (2-30)
    17. Background image presented during auditory stimulus (2-57)
    18. Chosen visual stimulus (1-4)
    19. Reaction time
    20. 0: incorrect response, 1: correct response
    21. Overall performance on presented visual stimulus
    22. Overall performance on presented auditory stimulus (3 presentations)
    23. Set during learning (1-8)

    For a more detailed description of the scanning sequence and behavioural tasks, see the paper.

THINGS-fMRI

More derivatives can be found on figshare.

Provenance

Provenance information is given in 'dataset_description.json' as well as in the paper; preprocessing and analysis code is shared on GitHub.
