100+ datasets found
  1. THINGS-fMRI

    • openneuro.org
    Updated Sep 9, 2024
    + more versions
    Cite
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker (2024). THINGS-fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds004192.v1.0.7
    Explore at:
    Dataset updated
    Sep 9, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    THINGS-fMRI

    Understanding object representations and the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless novel hypotheses at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative for bridging the gap between disciplines and the advancement of cognitive neuroscience.

    Dataset overview

    We collected extensively sampled object representations using functional MRI (fMRI). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually curated naturalistic object images.

    During the fMRI experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=3; 8,740 unique images of 720 objects). Images were shown in fast succession (4.5 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially generated images. A subset of images (n=100) was shown repeatedly in each session.

    Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.

    Besides the raw data, this dataset holds:

    • brainmasks (fmriprep)
    • cortical flat maps (pycoretx_filestore)
    • single trial response estimations (ICA-betas)

    More derivatives can be found on figshare.
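
    A minimal sketch of working with these derivatives in Python, assuming hypothetical file paths (the actual BIDS file names differ by subject and session):

```python
# Load a single-trial beta image and restrict it to brain voxels using an
# fMRIPrep brain mask. Paths are illustrative placeholders, not the
# dataset's exact file names.
import nibabel as nib

betas_img = nib.load("derivatives/betas/sub-01_betas.nii.gz")         # hypothetical path
mask_img = nib.load("derivatives/fmriprep/sub-01_brain_mask.nii.gz")  # hypothetical path

betas = betas_img.get_fdata()             # 4D: x, y, z, trials
mask = mask_img.get_fdata().astype(bool)  # 3D brain mask

trialwise = betas[mask]                   # (n_brain_voxels, n_trials)
print(trialwise.shape)
```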

    Provenance

    Provenance information is given in 'dataset_description.json' as well as in the paper; the preprocessing and analysis code is shared on GitHub.

  2. Data from: A large-scale fMRI dataset for human action recognition

    • openneuro.org
    Updated Jun 21, 2023
    Cite
    Ming Zhou; Zhengxin Gong; Yuxuan Dai; Yushan Wen; Youyi Liu; Zonglei Zhen (2023). A large-scale fMRI dataset for human action recognition [Dataset]. http://doi.org/10.18112/openneuro.ds004488.v1.1.1
    Explore at:
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Ming Zhou; Zhengxin Gong; Yuxuan Dai; Yushan Wen; Youyi Liu; Zonglei Zhen
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Summary

    Human action recognition is one of our critical living abilities, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few categories of actions from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still need to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer a comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.

    Data record

    The data were organized according to the Brain Imaging Data Structure (BIDS) specification, version 1.7.0, and can be accessed from the OpenNeuro public repository (accession number: ds004488). The raw data of each subject are stored in "sub-<ID>" directories. The preprocessed volume data and the derived surface-based data are stored in the "derivatives/fmriprep" and "derivatives/ciftify" directories, respectively. The video clip stimuli are stored in the "stimuli" directory.

    Video clip stimuli

    The video clip stimuli selected from HACS are deposited in the "stimuli" folder. Each of the 180 action categories has its own folder, in which 120 unique video clips are stored.

    Raw data

    The data for each participant are distributed across three sub-folders: the "anat" folder for the T1 MRI data, the "fmap" folder for the field map data, and the "func" folder for the functional MRI data. The events file in each "func" folder contains the onset, duration, and trial type (category index) for each scanning run.
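
    A minimal sketch of reading one of these BIDS events files with pandas (the file name below is hypothetical; any sub-*/func/*_events.tsv in the dataset carries the columns described above):

```python
# Tabulate trials per action category from a BIDS events file.
import pandas as pd

events = pd.read_csv("sub-01/func/sub-01_task-action_run-01_events.tsv", sep="\t")  # hypothetical path
print(events[["onset", "duration", "trial_type"]].head())
print(events["trial_type"].value_counts().head())  # trials per category index
```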

    Preprocessed volume data from fMRIprep

    The preprocessed volume-based fMRI data are in each subject's native space, saved as “sub-

    Preprocessed surface data from ciftify

    Under the “results” folder, the preprocessed surface-based data are saved in standard fsLR space, named as “sub-

  3. THINGS-data: fMRI Regions of Interest

    • plus.figshare.com
    zip
    Updated May 30, 2023
    Cite
    Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker (2023). THINGS-data: fMRI Regions of Interest [Dataset]. http://doi.org/10.25452/figshare.plus.20492769.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare+
    Authors
    Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Subject-specific category-selective and retinotopic regions of interest for the fMRI data.

    Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior.

    See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151
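
    A minimal sketch of how such subject-specific ROI masks can be applied to the THINGS-fMRI single-trial betas with nilearn; both file names are hypothetical placeholders:

```python
# Extract trial-by-voxel responses within one category-selective ROI.
from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img="sub-01_roi-FFA_mask.nii.gz")     # hypothetical ROI file
trials_by_voxels = masker.fit_transform("sub-01_betas.nii.gz")  # hypothetical 4D betas
print(trials_by_voxels.shape)  # (n_trials, n_roi_voxels)
```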

  4. raw fMRI data set

    • figshare.com
    application/gzip
    Updated Jan 19, 2016
    Cite
    Shuntaro Sasai (2016). raw fMRI data set [Dataset]. http://doi.org/10.6084/m9.figshare.1309742.v18
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Jan 19, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Shuntaro Sasai
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An fMRI data set used in "Boly et al. Stimulus set meaningfulness and neurophysiological differentiation: a functional magnetic resonance imaging study"

  5. IAPS fMRI dataset - block design with three emotional valence conditions,...

    • neurovault.org
    nifti
    Updated Jan 27, 2024
    + more versions
    Cite
    (2024). IAPS fMRI dataset - block design with three emotional valence conditions, Hsiao et al., 2024, Brain Imaging and Behavior: sub005 positive [Dataset]. http://identifiers.org/neurovault.image:839685
    Explore at:
    Available download formats: nifti
    Dataset updated
    Jan 27, 2024
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    sub005_positive.nii.gz (old ext: .nii.gz)

    Collection description

    This dataset is from Hsiao et al., 2024, Brain Imaging and Behavior; please cite this paper if you use the dataset in your research. The dataset contains neuroimaging data from 56 participants. Among these 56 participants, only 53 (sub001-sub053) completed the STAI questionnaires. Each participant completed an emotional reactivity task in the MRI scanner. A total of 90 emotional scenes were selected from the International Affective Picture System as stimuli for the emotional reactivity task. Within this task, participants judged each scene as either indoor or outdoor in a block-design paradigm. Each block consisted of six scenes sharing the same valence (i.e., positive, negative, or neutral), with each scene displayed for 2.5 seconds, resulting in a total block duration of 15 seconds. Each emotional scene block alternated with a 15-second fixation block. Five positive, five neutral, and five negative emotional blocks were presented in a counterbalanced order across participants. The data were preprocessed with SPM8. Each participant has a beta image for the positive (e.g., sub001_positive.nii.gz), negative (e.g., sub001_negative.nii.gz), and neutral (e.g., sub001_neutral.nii.gz) conditions. Paper doi: https://doi.org/10.1101/2023.07.29.551128
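
    Given the per-subject beta images named as above, a simple group-level positive-minus-negative contrast can be sketched as follows (a conceptual illustration, not the authors' analysis):

```python
# Average condition betas across participants and form a contrast map.
import nibabel as nib
import numpy as np

def mean_beta(condition, n_subjects=56):
    imgs = (nib.load(f"sub{i:03d}_{condition}.nii.gz") for i in range(1, n_subjects + 1))
    return np.mean([img.get_fdata() for img in imgs], axis=0)

contrast = mean_beta("positive") - mean_beta("negative")  # voxelwise group contrast
print(contrast.shape)
```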

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Cognitive paradigm (task)

    Positive and Negative Images Task

    Map type

    U

  6. fMRI data

    • data.sfb1451.de
    Updated May 6, 2024
    Cite
    Stephan Bender; Kerstin Konrad; Julia Schmidgen; Theresa Heinen (2024). fMRI data [Dataset]. https://data.sfb1451.de/dataset/04bd9151-96b1-5912-9ecc-dc8c79be5fa8/0.1.0
    Explore at:
    Dataset updated
    May 6, 2024
    Authors
    Stephan Bender; Kerstin Konrad; Julia Schmidgen; Theresa Heinen
    Dataset funded by
    German Research Foundation (http://www.dfg.de/)
    Description

    fMRI dataset of healthy subjects and subjects with tic disorder aged five to sixteen years. Anatomical and functional images were acquired. Subjects performed a movement task (free movement, non-informative cue, informative cue) and a suppression/release task (blinking, yawning, tics).

  7. Brain/MINDS Marmoset Brain MRI NA216 (In Vivo) and eNA91 (Ex Vivo) datasets

    • dataportal.brainminds.jp
    nifti-1
    Updated Jan 30, 2024
    Cite
    Junichi Hata; Ken Nakae; Daisuke Yoshimaru; Hideyuki Okano (2024). Brain/MINDS Marmoset Brain MRI NA216 (In Vivo) and eNA91 (Ex Vivo) datasets [Dataset]. http://doi.org/10.24475/bminds.mri.thj.4624
    Explore at:
    Available download formats: nifti-1 (102 GB)
    Dataset updated
    Jan 30, 2024
    Dataset provided by
    Brain/MINDS — Brain Mapping by Integrated Neurotechnologies for Disease Studies
    RIKEN Center for Brain Science
    Authors
    Junichi Hata; Ken Nakae; Daisuke Yoshimaru; Hideyuki Okano
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Japan Agency for Medical Research and Development (AMED)
    Description

    The Brain/MINDS Marmoset MRI NA216 and eNA91 datasets currently constitute the largest public marmoset brain MRI resource (483 individuals), and include in vivo and ex vivo data for a large variety of image modalities covering a wide age range of marmoset subjects.
    * The in vivo part corresponds to a total of 455 individuals, ranging in age from 0.6 to 12.7 years (mean age: 3.86 ± 2.63), and standard brain data (NA216) from 216 of these individuals (mean age: 4.46 ± 2.62).
    T1WI, T2WI, T1WI/T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, rs-fMRI in awake and anesthetized states, NIfTI files (.nii.gz) of label data, individual brain and population average connectome matrix (structural and functional) csv files are included.
    * The ex vivo part comprises data mainly from a subset of 91 individuals with a mean age of 5.27 ± 2.39 years.
    It includes NIfTI files (.nii.gz) of standard brain, T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, and label data, and csv files of individual brain and population average structural connectome matrices.
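
    A minimal sketch of loading one of the connectome matrix CSV files with pandas; the file name is a placeholder standing in for whichever individual or population-average matrix is downloaded:

```python
# Load a structural connectome matrix and rank regions by connection strength.
import pandas as pd

conn = pd.read_csv("population_average_structural_connectome.csv", index_col=0)  # hypothetical name
strength = conn.sum(axis=1)  # per-region sum of connection weights
print(strength.sort_values(ascending=False).head())
```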

  8. Data from: A large single-participant fMRI dataset for probing brain...

    • data.ru.nl
    00082_134_v2
    Updated Mar 30, 2021
    Cite
    K. Seeliger; R. P. Sommers; U. Güçlü; S. E. Bosch; M. A. J. van Gerven (2021). A large single-participant fMRI dataset for probing brain responses to naturalistic stimuli in space and time [Dataset]. http://doi.org/10.34973/j05g-fr58
    Explore at:
    Available download formats: 00082_134_v2 (201882405002 bytes)
    Dataset updated
    Mar 30, 2021
    Dataset provided by
    Radboud University
    Authors
    K. Seeliger; R. P. Sommers; U. Güçlü; S. E. Bosch; M. A. J. van Gerven
    Description

    Representations from convolutional neural networks have been used as explanatory models for hierarchical sensory brain activations. Visual and auditory representations in the human brain have been studied with encoding models, RSA, decoding and reconstruction. However, none of the functional MRI data sets that are currently available has adequate amounts of data for sufficiently sampling their representations, or for training complex neural network hierarchies end-to-end on brain data for uncovering such representations. We recorded a densely sampled large fMRI dataset (TR=700 ms) in a single individual exposed to spatio-temporal visual and auditory naturalistic stimuli (30 episodes of BBC’s Doctor Who). The data consists of approximately 118,000 whole-brain volumes (approx. 23 h) of single-presentation data (full episodes, training set) and approximately 500 volumes (5 min) of repeated short clips (test set, 22 repetitions), recorded with fixation over a period of six months. This rich dataset can be used widely to study the way the brain represents audiovisual and language input across its sensory hierarchies.
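
    The train/test structure described above is designed for encoding models; a toy ridge-regression sketch with placeholder arrays (a real analysis would use actual stimulus features and BOLD responses):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
# Placeholder sizes; the real training set has ~118,000 volumes and the
# repeated test clips span ~500 volumes.
X_train = rng.standard_normal((10_000, 256))  # stimulus features per volume
Y_train = rng.standard_normal((10_000, 100))  # voxel responses
X_test = rng.standard_normal((500, 256))

model = Ridge(alpha=1_000.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)  # compare to the mean over the 22 test repetitions
print(Y_pred.shape)
```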

  9. THINGS-data: fMRI Single Trial Responses (nifti format)

    • plus.figshare.com
    bin
    Updated Jun 3, 2023
    Cite
    Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker (2023). THINGS-data: fMRI Single Trial Responses (nifti format) [Dataset]. http://doi.org/10.25452/figshare.plus.20590140.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    Figshare+
    Authors
    Martin Hebart; Oliver Contier; Lina Teichmann; Adam Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris Baker
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Whole-brain single trial beta estimates of the THINGS-fMRI data.

    Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior.

    See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151

  10. The Alice Dataset: fMRI Dataset to Study Natural Language Comprehension in...

    • openneuro.org
    Updated Jun 22, 2020
    Cite
    Shohini Bhattasali; Jonathan R. Brennan; Wen-Ming Luh; Berta Franzluebbers; John T. Hale (2020). The Alice Dataset: fMRI Dataset to Study Natural Language Comprehension in the Brain [Dataset]. http://doi.org/10.18112/openneuro.ds002322.v1.0.4
    Explore at:
    Dataset updated
    Jun 22, 2020
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Shohini Bhattasali; Jonathan R. Brennan; Wen-Ming Luh; Berta Franzluebbers; John T. Hale
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Participants

    Twenty-nine college-age volunteers participated for pay (17 women and 12 men, 18–24 years old). All qualified as right-handed on the Edinburgh handedness inventory (Oldfield, 1971). They self-identified as native English speakers and gave their informed consent. Data from one participant were excluded due to excessive head movement, and data from two participants were excluded due to poor behavioral performance, leaving data from twenty-six participants in the dataset (15 women, 11 men).

    Study Design

    The audio stimulus was Kristen McQuillan’s reading of the first chapter of Lewis Carroll’s Alice in Wonderland from librivox.org, available in the stimuli folder. To improve comprehensibility in the noisy scanner, the audio was normalized to 70 dB and slowed by 20% with the pitch-preserving PSOLA algorithm implemented in Praat software. This moderate amount of time-dilation did not introduce recognizable distortion and was judged by an independent rater to sound natural and to be easier to comprehend than the raw audio recording. The audio presentation lasted 12.4 min. After giving their informed consent, participants were familiarized with the MRI facility and assumed a supine position on the scanner gurney. Auditory stimuli were delivered through MRI-safe, high-fidelity headphones (Confon HP-VS01, MR Confon, Magdeburg, Germany) inside the head coil. The headphones were secured against the plastic frame of the coil using foam blocks. Using a spoken recitation of the US Constitution, an experimenter increased the volume stepwise until participants reported that they could hear clearly. Participants then listened passively to the audio storybook. Upon emerging from the scanner, participants completed a twelve-question multiple-choice questionnaire concerning events and situations described in the story. The entire session lasted less than an hour.

    Data collection and analysis

    Imaging was performed using a 3T MRI scanner (Discovery MR750, GE Healthcare, Milwaukee, WI) with a 32-channel head coil at the Cornell MRI Facility. Blood Oxygen Level Dependent (BOLD) signals were collected from twenty-nine participants. Thirteen participants were scanned using a T2-weighted echo planar imaging (EPI) sequence with: a repetition time of 2000 ms, echo time of 27 ms, flip angle of 77°, image acceleration of 2X, field of view of 216 × 216 mm, and a matrix size of 72 × 72. Under these parameters we obtained 44 oblique slices with 3 mm isotropic voxels. Sixteen participants were scanned with a three-echo EPI sequence where the field of view was 240 × 240 mm, resulting in 33 slices with an in-plane resolution of 3.75 × 3.75 mm and a thickness of 3.8 mm. The data provided from this second group are images from the second EPI echo, where the echo time was 27.5 ms. All other parameters were exactly the same. This selection of the second-echo images renders the two sets of functional images as comparable as possible.

    Preprocessing

    Preprocessing was done with SPM8. Data were spatially realigned based on a 6-parameter rigid body transformation using the 2nd degree B-spline method. Functional (EPI) and structural (MP-RAGE) images were co-registered via mutual information, and functional images were smoothed with a 3 mm isotropic Gaussian filter. We used the ICBM template provided with SPM8 to put our data into MNI stereotaxic coordinates. The data were high-pass filtered at 1/128 Hz, and we discarded the first 10 functional volumes. These processed data are available in the derivatives directory, and preprocessing.mat files are included to document the input parameters used.
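
    The volume-discarding, smoothing, and high-pass steps above can be sketched with nilearn on the raw functional images (the file path is hypothetical; TR = 2 s as stated above):

```python
import nibabel as nib
from nilearn.image import clean_img, smooth_img

func = nib.load("sub-18/func/sub-18_task-alice_bold.nii.gz")  # hypothetical path
func = func.slicer[:, :, :, 10:]                  # discard the first 10 volumes
func = smooth_img(func, fwhm=3)                   # 3 mm isotropic Gaussian kernel
func = clean_img(func, detrend=False, standardize=False,
                 high_pass=1/128, t_r=2.0)        # 1/128 Hz high-pass filter
```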

  11. Data of the REST-meta-MDD Project from DIRECT Consortium

    • scidb.cn
    Updated Jun 20, 2022
    Cite
    Chao-Gan Yan; Xiao Chen; Le Li; Francisco Xavier Castellanos; Tong-Jian Bai; Qi-Jing Bo; Jun Cao; Guan-Mao Chen; Ning-Xuan Chen; Wei Chen; Chang Cheng; Yu-Qi Cheng; Xi-Long Cui; Jia Duan; Yi-Ru Fang; Qi-Yong Gong; Wen-Bin Guo; Zheng-Hua Hou; Lan Hu; Li Kuang; Feng Li; Tao Li; Yan-Song Liu; Zhe-Ning Liu; Yi-Cheng Long; Qing-Hua Luo; Hua-Qing Meng; Dai-Hui Peng; Hai-Tang Qiu; Jiang Qiu; Yue-Di Shen; Yu-Shu Shi; Yan-Qing Tang; Chuan-Yue Wang; Fei Wang; Kai Wang; Li Wang; Xiang Wang; Ying Wang; Xiao-Ping Wu; Xin-Ran Wu; Chun-Ming Xie; Guang-Rong Xie; Hai-Yan Xie; Peng Xie; Xiu-Feng Xu; Hong Yang; Jian Yang; Jia-Shu Yao; Shu-Qiao Yao; Ying-Ying Yin; Yong-Gui Yuan; Ai-Xia Zhang; Hong Zhang; Ke-Rang Zhang; Lei Zhang; Zhi-Jun Zhang; Ru-Bai Zhou; Yi-Ting Zhou; Jun-Juan Zhu; Chao-Jie Zou; Tian-Mei Si; Xi-Nian Zuo; Jing-Ping Zhao; Yu-Feng Zang (2022). Data of the REST-meta-MDD Project from DIRECT Consortium [Dataset]. http://doi.org/10.57760/sciencedb.o00115.00013
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Jun 20, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Chao-Gan Yan; Xiao Chen; Le Li; Francisco Xavier Castellanos; Tong-Jian Bai; Qi-Jing Bo; Jun Cao; Guan-Mao Chen; Ning-Xuan Chen; Wei Chen; Chang Cheng; Yu-Qi Cheng; Xi-Long Cui; Jia Duan; Yi-Ru Fang; Qi-Yong Gong; Wen-Bin Guo; Zheng-Hua Hou; Lan Hu; Li Kuang; Feng Li; Tao Li; Yan-Song Liu; Zhe-Ning Liu; Yi-Cheng Long; Qing-Hua Luo; Hua-Qing Meng; Dai-Hui Peng; Hai-Tang Qiu; Jiang Qiu; Yue-Di Shen; Yu-Shu Shi; Yan-Qing Tang; Chuan-Yue Wang; Fei Wang; Kai Wang; Li Wang; Xiang Wang; Ying Wang; Xiao-Ping Wu; Xin-Ran Wu; Chun-Ming Xie; Guang-Rong Xie; Hai-Yan Xie; Peng Xie; Xiu-Feng Xu; Hong Yang; Jian Yang; Jia-Shu Yao; Shu-Qiao Yao; Ying-Ying Yin; Yong-Gui Yuan; Ai-Xia Zhang; Hong Zhang; Ke-Rang Zhang; Lei Zhang; Zhi-Jun Zhang; Ru-Bai Zhou; Yi-Ting Zhou; Jun-Juan Zhu; Chao-Jie Zou; Tian-Mei Si; Xi-Nian Zuo; Jing-Ping Zhao; Yu-Feng Zang
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    (Note: Part of the content of this post was adapted from the original DIRECT Psychoradiology paper (https://academic.oup.com/psyrad/article/2/1/32/6604754) and the REST-meta-MDD PNAS paper (http://www.pnas.org/cgi/doi/10.1073/pnas.1900390116) under a CC BY-NC-ND license.)

    Major Depressive Disorder (MDD) is the second leading cause of health burden worldwide (1). Unfortunately, objective biomarkers to assist in diagnosis are still lacking, and current first-line treatments are only modestly effective (2, 3), reflecting our incomplete understanding of the pathophysiology of MDD. Characterizing the neurobiological basis of MDD promises to support the development of more effective diagnostic approaches and treatments.

    An increasingly used approach to reveal the neurobiological substrates of clinical conditions is resting-state functional magnetic resonance imaging (R-fMRI) (4). Despite intensive efforts to characterize the pathophysiology of MDD with R-fMRI, clinical imaging markers of diagnosis and predictors of treatment outcomes have yet to be identified. Previous reports have been inconsistent, sometimes contradictory, impeding the endeavor to translate them into clinical practice (5). One reason for inconsistent results is low statistical power from small-sample studies (6). Low-powered studies are more prone to produce false positive results, reducing the reproducibility of findings in a given field (7, 8). Of note, one recent study demonstrated that sample sizes of thousands of subjects may be needed to identify reproducible brain-wide association findings (9), calling for larger datasets to boost effect size. Another reason could be high analytic flexibility (10). Recently, Botvinik-Nezer and colleagues (11) demonstrated the divergence in results when independent research teams applied different workflows to process an identical fMRI dataset, highlighting the effects of “researcher degrees of freedom” (i.e., heterogeneity in (pre-)processing methods) in producing disparate fMRI findings.

    To address these critical issues, we initiated the Depression Imaging REsearch ConsorTium (DIRECT) in 2017. Through a series of meetings, a group of 17 participating hospitals in China agreed to establish the first project of the DIRECT consortium, the REST-meta-MDD Project, and share 25 study cohorts, including R-fMRI data from 1300 MDD patients and 1128 normal controls. Based on prior work, a standardized preprocessing pipeline adapted from the Data Processing Assistant for Resting-State fMRI (DPARSF) (12, 13) was implemented at each local participating site to minimize heterogeneity in preprocessing methods. R-fMRI metrics can be vulnerable to physiological confounds such as head motion (14, 15). Based on our previous examination of the impact of head motion on R-fMRI functional connectomes (16) and other recent benchmarking studies (15, 17), DPARSF implements a participant-level regression model (the Friston-24 model) and group-level correction for mean frame displacement (FD) as the default setting.

    In the REST-meta-MDD Project of the DIRECT consortium, 25 research groups from 17 hospitals in China agreed to share final R-fMRI indices from patients with MDD and matched normal controls (see Supplementary Table; henceforth “site” refers to each cohort for convenience) from studies approved by local Institutional Review Boards. The consortium contributed 2428 previously collected datasets (1300 MDDs and 1128 NCs). On average, each site contributed 52.0±52.4 patients with MDD (range 13-282) and 45.1±46.9 NCs (range 6-251). Most MDD patients were female (826 vs. 474 males), as expected. The 562 patients with first-episode MDD included 318 first-episode drug-naïve (FEDN) patients and 160 scanned while receiving antidepressants (medication status unavailable for 84). Of the 282 patients with recurrent MDD, 121 were scanned while receiving antidepressants and 76 were not being treated with medication (medication status unavailable for 85). Episodicity (first or recurrent) and medication status were unavailable for 456 patients.

    To improve transparency and reproducibility, our analysis code has been openly shared at https://github.com/Chaogan-Yan/PaperScripts/tree/master/Yan_2019_PNAS. In addition, we share the R-fMRI indices of the 1300 MDD patients and 1128 NCs through the R-fMRI Maps Project (http://rfmri.org/REST-meta-MDD). These data derivatives allow replication, secondary analyses and discovery efforts while protecting participant privacy and confidentiality.

    According to the agreement of the REST-meta-MDD consortium, there were two phases for sharing the brain imaging and phenotypic data of the 1300 MDD patients and 1128 NCs. 1) Phase 1: coordinated sharing, before January 1, 2020. To reduce conflicts between researchers, the consortium reviewed and coordinated the proposals submitted by interested researchers. Interested researchers first sent a letter of intent to rfmrilab@gmail.com; the consortium then sent all approved proposals to the applicant, who was to submit a new, innovative proposal avoiding conflict with the approved ones. If there was no conflict, the proposal was approved, entered the pool of approved proposals, and prevented future conflicts. 2) Phase 2: unrestricted sharing, after January 1, 2020, during which researchers can perform any analyses of interest while not violating ethics.

    The REST-meta-MDD data entered the unrestricted sharing phase on January 1, 2020. Researchers can perform any analyses of interest while not violating ethics. Please visit the Psychological Science Data Bank to download the data, then sign the Data Use Agreement and email the scanned signed copy to rfmrilab@gmail.com to receive the unzip password and phenotypic information.

    Acknowledgements

    This work was supported by the National Key R&D Program of China (2017YFC1309902), the National Natural Science Foundation of China (81671774, 81630031, 81471740 and 81371488), the Hundred Talents Program and the 13th Five-year Informatization Plan (XXH13505) of the Chinese Academy of Sciences, the Beijing Municipal Science & Technology Commission (Z161100000216152, Z171100000117016, Z161100002616023 and Z171100000117012), the Department of Science and Technology, Zhejiang Province (2015C03037), and the National Basic Research (973) Program (2015CB351702).

    References

    1. A. J. Ferrari et al., Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease Study 2010. PLOS Medicine 10, e1001547 (2013).
    2. L. M. Williams et al., International Study to Predict Optimized Treatment for Depression (iSPOT-D), a randomized clinical trial: rationale and protocol. Trials 12, 4 (2011).
    3. S. J. Borowsky et al., Who is at risk of nondetection of mental health problems in primary care? J Gen Intern Med 15, 381-388 (2000).
    4. B. B. Biswal, Resting state fMRI: a personal history. Neuroimage 62, 938-944 (2012).
    5. C. G. Yan et al., Reduced default mode network functional connectivity in patients with recurrent major depressive disorder. Proc Natl Acad Sci U S A 116, 9078-9083 (2019).
    6. K. S. Button et al., Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 14, 365-376 (2013).
    7. J. P. A. Ioannidis, Why Most Published Research Findings Are False. PLOS Medicine 2, e124 (2005).
    8. R. A. Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci 10.1038/nrn.2016.167 (2017).
    9. S. Marek et al., Reproducible brain-wide association studies require thousands of individuals. Nature 603, 654-660 (2022).
    10. J. Carp, On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments. Frontiers in Neuroscience 6, 149 (2012).
    11. R. Botvinik-Nezer et al., Variability in the analysis of a single neuroimaging dataset by many teams. Nature 10.1038/s41586-020-2314-9 (2020).
    12. C.-G. Yan, X.-D. Wang, X.-N. Zuo, Y.-F. Zang, DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging. Neuroinformatics 14, 339-351 (2016).
    13. C.-G. Yan, Y.-F. Zang, DPARSF: A MATLAB Toolbox for "Pipeline" Data Analysis of Resting-State fMRI. Frontiers in Systems Neuroscience 4, 13 (2010).
    14. R. Ciric et al., Mitigating head motion artifact in functional connectivity MRI. Nature Protocols 13, 2801-2826 (2018).
    15. R. Ciric et al., Benchmarking of participant-level confound regression strategies for the control of motion artifact in studies of functional connectivity. NeuroImage 154, 174-187 (2017).
    16. C.-G. Yan et al., A comprehensive assessment of regional variation in the impact of head micromovements on functional connectomics. NeuroImage 76, 183-201 (2013).
    17. L. Parkes, B. Fulcher, M. Yücel, A. Fornito, An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI. NeuroImage 171, 415-436 (2018).
    18. L. Wang et al., Interhemispheric functional connectivity and its relationships with clinical characteristics in major depressive disorder: a resting state fMRI study. PLoS One 8, e60191 (2013).
    19. L. Wang et al., The effects of antidepressant treatment on resting-state functional brain networks in patients with major depressive disorder. Hum Brain Mapp 36, 768-778 (2015).
    20. Y. Liu et al., Regional homogeneity associated with overgeneral autobiographical memory of first-episode treatment-naive patients with major depressive disorder in the orbitofrontal cortex: A resting-state fMRI study. J Affect Disord 209, 163-168 (2017).
    21. X. Zhu et al., Evidence of a dissociation pattern in resting-state default mode network connectivity in first-episode, treatment-naive major depression patients. Biological Psychiatry 71, 611-617 (2012).
    22. W. Guo et al., Abnormal default-mode
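
    For reference, the Friston 24-parameter motion model mentioned above can be constructed from the 6 realignment parameters in a few lines (a generic sketch, not DPARSF's own code):

```python
import numpy as np

def friston24(rp):
    """rp: (n_timepoints, 6) realignment parameters -> (n_timepoints, 24) regressors."""
    rp_prev = np.vstack([np.zeros((1, 6)), rp[:-1]])    # parameters at t-1
    return np.hstack([rp, rp_prev, rp**2, rp_prev**2])  # 6 + 6 + 6 + 6 = 24

regressors = friston24(np.random.default_rng(0).standard_normal((200, 6)))
print(regressors.shape)  # (200, 24)
```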

  12. fMRI Data Center

    • dknet.org
    Updated Sep 12, 2024
    Cite
    (2024). fMRI Data Center [Dataset]. http://identifiers.org/RRID:SCR_007278/resolver/mentions
    Explore at:
    Dataset updated
    Sep 12, 2024
    Description

    THIS RESOURCE IS NO LONGER IN SERVICE, documented August 25, 2013.

    A public curated repository of peer-reviewed fMRI studies and their underlying data. This web-accessible database has data mining capabilities and the means to deliver requested data to the user (via web, CD, or digital tape). Datasets available: 107.

    NOTE: The fMRIDC is down temporarily while it moves to a new home at UCLA. Check back again in late Jan 2013!

    The goal of the Center is to help speed the progress and the understanding of cognitive processes and the neural substrates that underlie them by:

    • Providing a publicly accessible repository of peer-reviewed fMRI studies.
    • Providing all data necessary to interpret, analyze, and replicate these fMRI studies.
    • Providing training for both the academic and professional communities.

    The Center will accept data from researchers who are publishing fMRI imaging articles in peer-reviewed journals. The goal is to serve the entire fMRI community.

  13. CWL EEG/fMRI Dataset

    • paperswithcode.com
    Cite
    Johan van der Meer; André Pampel; Eus van Someren, CWL EEG/fMRI Dataset [Dataset]. https://paperswithcode.com/dataset/cwl-eeg-fmri-data-set
    Explore at:
    Authors
    Johan van der Meer; André Pampel; Eus van Someren
    Description

    EEG/fMRI data from 8 subjects performing a simple eyes-open/eyes-closed task are provided on this webpage.

    The EEG/fMRI data comprise six files for each subject, crossing two basic factors: recording with the helium pump on versus off, and recording during MRI scanning versus without scanning. In addition, 'outside' EEG data are provided from before and after the MRI session.

    There are 30 EEG channels, 1 EOG channel, 1 ECG channel, as well as 6 CWL signals.
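
    The core idea of CWL correction is to regress the reference-loop signals out of each EEG channel; a conceptual least-squares sketch on placeholder data (not the authors' exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
eeg = rng.standard_normal((30, 100_000))  # 30 EEG channels (placeholder data)
cwl = rng.standard_normal((6, 100_000))   # 6 carbon-wire loop reference signals

beta, *_ = np.linalg.lstsq(cwl.T, eeg.T, rcond=None)  # (6, 30) fit coefficients
eeg_clean = eeg - (cwl.T @ beta).T                    # subtract the fitted artifact
```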

  14. Data from: A test-retest fMRI dataset for motor, language and spatial...

    • neurovault.org
    zip
    Updated Jun 30, 2018
    + more versions
    Cite
    (2018). A test-retest fMRI dataset for motor, language and spatial attention functions [Dataset]. http://identifiers.org/neurovault.collection:63
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 30, 2018
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    A collection of 11 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.

    Collection description

    OpenfMRI ds000114
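
    Since the collection id (63) is part of the identifier above, the maps can be fetched programmatically, for example with nilearn:

```python
from nilearn.datasets import fetch_neurovault_ids

data = fetch_neurovault_ids(collection_ids=[63])  # downloads the 11 maps locally
print(len(data.images), data.images[0])
```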

  15. EEG-fMRI Dataset for A Whole-Brain EEG-Informed fMRI Analysis Across...

    • kilthub.cmu.edu
    zip
    Updated Jun 12, 2025
    Cite
    Elena Bondi; Yidan Ding; Bin He (2025). EEG-fMRI Dataset for A Whole-Brain EEG-Informed fMRI Analysis Across Multiple Motor Conditions [Dataset]. http://doi.org/10.1184/R1/29264621.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 12, 2025
    Dataset provided by
    Carnegie Mellon University
    Authors
    Elena Bondi; Yidan Ding; Bin He
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    In this study, subjects performed motor execution or motor imagery of their left hand, right hand, or right foot while EEG and fMRI were recorded simultaneously. Seventeen participants completed a single EEG-fMRI session. The dataset includes, for each subject, the preprocessed fMRI recordings and the preprocessed EEG recordings after MR-induced artifact removal, including gradient artifact (GA) and ballistocardiogram (BCG) artifact correction.

    A detailed description of the study can be found in the following publication: Bondi, E., Ding, Y., Zhang, Y., Maggioni, E., & He, B. (2025). Investigating the Neurovascular Coupling Across Multiple Motor Execution and Imagery Conditions: A Whole-Brain EEG-Informed fMRI Analysis. NeuroImage, 121311. https://doi.org/10.1016/j.neuroimage.2025.121311. If you use part of this dataset in your work, please cite the above publication.

    This dataset was collected under support from the National Institutes of Health via grants NS124564, NS131069, NS127849, and NS096761 to Dr. Bin He.

    Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu
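
    For orientation, gradient-artifact correction of this kind is typically built on average artifact subtraction (AAS); a toy sketch of the idea on placeholder data (not the pipeline used for this dataset):

```python
import numpy as np

def aas(channel, tr_onsets, tr_len, window=15):
    """Subtract a sliding-window average artifact template, epoch by epoch."""
    epochs = np.array([channel[t:t + tr_len] for t in tr_onsets])
    clean = epochs.copy()
    for i in range(len(epochs)):
        lo, hi = max(0, i - window // 2), min(len(epochs), i + window // 2 + 1)
        clean[i] -= epochs[lo:hi].mean(axis=0)  # local artifact template
    return clean

channel = np.random.default_rng(0).standard_normal(50_000)  # placeholder EEG channel
onsets = np.arange(0, 45_000, 500)                          # hypothetical TR markers
cleaned = aas(channel, onsets, tr_len=500)
```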

  16. Data from: Gallant Lab Natural Short Clips 3T fMRI Data

    • doi.gin.g-node.org
    Updated May 3, 2022
    Cite
    Alexander G. Huth; Shinji Nishimoto; An T. Vu; Tom Dupre la Tour; Jack L. Gallant (2022). Gallant Lab Natural Short Clips 3T fMRI Data [Dataset]. http://doi.org/10.12751/g-node.vy1zjd
    Explore at:
    Dataset updated
    May 3, 2022
    Dataset provided by
    University of California, Berkeley
    Authors
    Alexander G. Huth; Shinji Nishimoto; An T. Vu; Tom Dupre la Tour; Jack L. Gallant
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Dataset funded by
    NEI
    CSoI
    Description

    This data set contains BOLD fMRI responses in human subjects viewing a set of natural short clips. The functional data were collected for five subjects, in three sessions over three separate days for each subject. Details of the experiment are described in the original publication.

  17. fMRI-Shape

    • huggingface.co
    Updated Mar 23, 2024
    Cite
    Fudan-fMRI-yanwei (2024). fMRI-Shape [Dataset]. https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Mar 23, 2024
    Dataset authored and provided by
    Fudan-fMRI-yanwei
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    [ECCV 2024] MinD-3D: Reconstruct High-quality 3D objects in Human Brain

      Overview
    

    MinD-3D aims to reconstruct high-quality 3D objects based on fMRI data.

      Repository Structure
    

    • annotations: metadata and annotations related to the fMRI data for each subject.
    • sub-00xx: each folder corresponds to a specific subject and includes their respective raw and processed fMRI data.
    • stimuli.zip: a ZIP archive of all videos shown to subjects during the fMRI scans.

    … See the full description on the dataset page: https://huggingface.co/datasets/Fudan-fMRI/fMRI-Shape.
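
    A minimal sketch for pulling the repository locally with huggingface_hub, using the repo id given above:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Fudan-fMRI/fMRI-Shape", repo_type="dataset")
print(local_dir)  # contains annotations/, sub-00xx/, stimuli.zip
```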

  18. RMP Rumination fMRI Dataset

    • scidb.cn
    Updated Apr 29, 2022
    Cite
    Xiao Chen; Chao-Gan Yan (2022). RMP Rumination fMRI Dataset [Dataset]. http://doi.org/10.57760/sciencedb.o00115.00002
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Apr 29, 2022
    Dataset provided by
    Science Data Bank
    Authors
    Xiao Chen; Chao-Gan Yan
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    This dataset was used to investigate the brain mechanism underlying the rumination state (Chen et al., 2020, NeuroImage). The data were shared through the R-fMRI Maps Project (RMP) and the Psychological Science Data Bank.

    Investigators and Affiliations

    Xiao Chen, Ph.D. 1, 2, 3, 4; Chao-Gan Yan, Ph.D. 1, 2, 3, 4
    1. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing 100101, China;
    2. International Big-Data Center for Depression Research, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China;
    3. Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China;
    4. Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.

    Acknowledgments

    We would like to thank the National Center for Protein Sciences at Peking University in Beijing, China, for assistance with data acquisition at PKU, and Dr. Men Weiwei for his technical support during data collection.

    Funding

    National Key R&D Program of China (2017YFC1309902); National Natural Science Foundation of China (81671774 and 81630031); 13th Five-year Informatization Plan of Chinese Academy of Sciences (XXH13505); Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006); Beijing Nova Program of Science and Technology (Z191100001119104); Scientific Foundation of Institute of Psychology, Chinese Academy of Sciences (Y9CX422005); China Postdoctoral Science Foundation (2019M660847).

    Publication Related to This Dataset

    The following publication includes the data shared in this data collection: Chen, X., Chen, N.X., Shen, Y.Q., Li, H.X., Li, L., Lu, B., Zhu, Z.C., Fan, Z., Yan, C.G. (2020). The subsystem mechanism of default mode network underlying rumination: A reproducible neuroimaging study. NeuroImage, 221, 117185, doi:10.1016/j.neuroimage.2020.117185.

    Sample Size

    Total: 41 (22 females; mean age = 22.7 ± 4.1 years). Exclusion criteria: any MRI contraindications, current psychiatric or neurological disorders, clinical diagnosis of neurologic trauma, use of psychotropic medication, and any history of substance or alcohol abuse.

    Scan Procedures and Parameters

    MRI scanning: Several days prior to scanning, participants were interviewed and briefed on the purpose of the study and the mental states to be induced in the scanner. Subjects also generated keywords for 4 individual negative autobiographical events as the stimuli for the sad memory phase. We measured participants’ rumination tendency with the Ruminative Response Scale (RRS) (Nolen-Hoeksema and Morrow, 1991), which can be further divided into a more unconstructive subtype, brooding, and a more adaptive subtype, reflection (Treynor, 2003). All participants completed identical fMRI tasks on 3 different MRI scanners (order counterbalanced across participants). Time elapsed between 2 sequential visits was 22.0 ± 14.6 days. The fMRI session included 4 runs: resting state, sad memory, rumination state and distraction state. An 8-minute resting state came first as a baseline; participants were prompted to look at a fixation cross on the screen, not to think of anything in particular, and to stay awake. Participants then recalled negative autobiographical events prompted by individualized keywords from the prior interview, recalling as vividly as they could and imagining they were re-experiencing those negative events.

    In the rumination state, questions such as “Think: Analyze your personality to understand why you feel so depressed in the events you just remembered” were presented to help participants think about themselves, while in the distraction state, prompts like “Think: The layout of a typical classroom” were presented to help participants focus on an objective and concrete scene. All mental states (sad memory, rumination and distraction) except the resting state contained four randomly sequenced stimuli (keywords or prompts). Each stimulus lasted for 2 minutes and was then switched to the next without any inter-stimulus intervals (ISI), forming an 8-minute continuous mental state. The resting state and negative autobiographical events recall were sequenced first and second, while the order of the rumination and distraction states was counterbalanced across participants. Before the resting state and after each mental state, we assessed participants’ subjective affect with a scale (item scores ranged from 1 = very unhappy to 9 = very happy). Thinking contents and the phenomenology during each mental state were assessed with a series of items derived from a factor analysis (Gorgolewski et al., 2014) regarding self-generated thoughts (item scores ranged from 1 = not at all to 9 = almost all).

    Image Acquisition: Images were acquired on 3 Tesla GE MR750 scanners at the Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences (henceforth IPCAS) and Peking University (henceforth PKUGE), each with an 8-channel head coil. Another 3 Tesla SIEMENS PRISMA scanner (henceforth PKUSIEMENS) with an 8-channel head coil at Peking University was also used. Before functional image acquisition, all participants underwent a 3D T1-weighted scan (IPCAS/PKUGE: 192 sagittal slices, TR = 6.7 ms, TE = 2.90 ms, slice thickness/gap = 1/0 mm, in-plane resolution = 256 × 256, inversion time (TI) = 450 ms, FOV = 256 × 256 mm, flip angle = 7°, average = 1; PKUSIEMENS: 192 sagittal slices, TR = 2530 ms, TE = 2.98 ms, slice thickness/gap = 1/0 mm, in-plane resolution = 256 × 224, inversion time (TI) = 1100 ms, FOV = 256 × 224 mm, flip angle = 7°, average = 1). After T1 image acquisition, functional images were obtained for the resting state and all three mental states (IPCAS/PKUGE: 33 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90°, thickness/gap = 3.5/0.6 mm, FOV = 220 × 220 mm, matrix = 64 × 64; PKUSIEMENS: 62 axial slices, TR = 2000 ms, TE = 30 ms, FA = 90°, thickness = 2 mm, multiband factor = 2, FOV = 224 × 224 mm).

    Code Availability: Analysis code and other behavioral data are openly shared at https://github.com/Chaogan-Yan/PaperScripts/tree/master/Chen_2020_NeuroImage.

    References

    Gorgolewski, K.J., Lurie, D., Urchs, S., Kipping, J.A., Craddock, R.C., Milham, M.P., Margulies, D.S., Smallwood, J., 2014. A correspondence between individual differences in the brain's intrinsic functional architecture and the content and form of self-generated thoughts. PLoS One 9, e97176.
    Nolen-Hoeksema, S., Morrow, J., 1991. A Prospective Study of Depression and Posttraumatic Stress Symptoms After a Natural Disaster: The 1989 Loma Prieta Earthquake.
    Treynor, W., 2003. Rumination Reconsidered: A Psychometric Analysis.

    (Note: Part of the content of this post was adapted from the original NeuroImage paper.)

  19. Music Genre fMRI Dataset - Derivatives

    • zenodo.org
    bin, text/x-python
    Updated Aug 23, 2023
    + more versions
    Cite
    Tomoya Nakai; Naoko Koide-Majima; Shinji Nishimoto (2023). Music Genre fMRI Dataset - Derivatives [Dataset]. http://doi.org/10.5281/zenodo.8275363
    Explore at:
    Available download formats: bin, text/x-python
    Dataset updated
    Aug 23, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tomoya Nakai; Naoko Koide-Majima; Shinji Nishimoto
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains preprocessed data from the Music Genre fMRI Dataset (https://openneuro.org/datasets/ds003720/versions/1.0.0). Experimental stimuli can be generated using GTZAN_Preprocess.py.

    References:

    1. Nakai, Koide-Majima, and Nishimoto (2021). Correspondence of categorical and feature-based representations of music in the human brain. Brain and Behavior. 11(1), e01936. https://doi.org/10.1002/brb3.1936

    2. Nakai, Koide-Majima, and Nishimoto (2022). Music genre neuroimaging dataset. Data in Brief. 40, 107675. https://doi.org/10.1016/j.dib.2021.107675

  20. ADHD-200 Preprocessed Data

    • neuinfo.org
    • dknet.org
    • +2more
    Updated Apr 18, 2012
    Cite
    (2012). ADHD-200 Preprocessed Data [Dataset]. http://identifiers.org/RRID:SCR_000576
    Explore at:
    Dataset updated
    Apr 18, 2012
    Description

    Preprocessed versions of the ADHD-200 Global Competition data, including preprocessed versions of the structural and functional datasets previously made available by the ADHD-200 consortium, as well as initial standard subject-level analyses. The ADHD-200 Sample is pleased to announce the unrestricted public release of 776 resting-state fMRI and anatomical datasets aggregated across 8 independent imaging sites, 491 of which were obtained from typically developing individuals and 285 from children and adolescents with ADHD (ages 7-21 years). Accompanying phenotypic information includes diagnostic status, dimensional ADHD symptom measures, age, sex, intelligence quotient (IQ) and lifetime medication status. Preliminary quality control assessments (usable vs. questionable) based upon visual timeseries inspection are included for all resting-state fMRI scans.

    In accordance with HIPAA guidelines and 1000 Functional Connectomes Project protocols, all datasets are anonymous, with no protected health information included. They hope this release will open collaborative possibilities and invite contributions from researchers not traditionally working with brain data; for those whose specialties lie outside of MRI and fMRI data processing, the competition is now one step easier to join.

    The preprocessed data are being made freely available through the efforts of The Neuro Bureau as well as the ADHD-200 consortium. Please acknowledge both of these organizations in any publications (conference, journal, etc.) that make use of the data. None of the preprocessing would be possible without the freely available imaging analysis packages, so please also acknowledge the relevant packages and resources, as well as any other release-specific acknowledgements. You must be logged into NITRC to download the ADHD-200 datasets: http://www.nitrc.org/projects/neurobureau
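
    A small preprocessed subset of this release is also available through nilearn's dataset fetchers, which is a convenient way to sample the data before committing to the full NITRC download:

```python
from nilearn.datasets import fetch_adhd

data = fetch_adhd(n_subjects=2)  # downloads a small preprocessed subset
print(data.func[0])              # path to a 4D resting-state NIfTI
print(data.confounds[0])         # path to the matching confound regressors
```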
