CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Understanding the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing countless novel hypotheses at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative for bridging the gap between disciplines and the advancement of cognitive neuroscience.
We collected extensively sampled object representations using functional MRI (fMRI). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly-annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually curated naturalistic object images.
During the fMRI experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=3; 8,740 unique images of 720 objects). Images were shown in fast succession (4.5 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially generated images. A subset of images (n=100) was shown repeatedly in each session.
Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.
Besides the raw data, this dataset holds several derivatives.
More derivatives can be found on figshare.
Provenance information is given in 'dataset_description.json' as well as in the paper; preprocessing and analysis code is shared on GitHub.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Summary
Human action recognition is one of our essential abilities, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few categories of actions from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representational information about the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
Data record
The data were organized according to the Brain-Imaging-Data-Structure (BIDS) Specification version 1.7.0 and can be accessed from the OpenNeuro public repository (accession number: ds004488). The raw data of each subject were stored in "sub-<ID>" directories. The preprocessed volume data and the derived surface-based data were stored in the "derivatives/fmriprep" and "derivatives/ciftify" directories, respectively. The video clip stimuli were stored in the "stimuli" directory.
Video clip stimuli The video clip stimuli selected from HACS are deposited in the "stimuli" folder. Each of the 180 action categories has a folder in which 120 unique video clips are stored.
Raw data The data for each participant are distributed across three sub-folders: the "anat" folder for the T1 MRI data, the "fmap" folder for the field map data, and the "func" folder for the functional MRI data. The events file in the "func" folder contains the onset, duration, and trial type (category index) for each scanning run.
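As a minimal sketch, a BIDS events file like the one described above is a tab-separated table that can be read with the standard library alone. The column values below are invented for illustration; only the column names (onset, duration, trial_type) come from the description.

```python
import csv
import io

# Hypothetical excerpt of a BIDS events.tsv file, with the columns described
# above: onset, duration, and trial type (category index). Values are made up.
events_tsv = "onset\tduration\ttrial_type\n6.0\t2.0\t17\n14.0\t2.0\t92\n"

rows = list(csv.DictReader(io.StringIO(events_tsv), delimiter="\t"))
onsets = [float(r["onset"]) for r in rows]          # stimulus onset times (s)
categories = [int(r["trial_type"]) for r in rows]   # action category indices
print(onsets, categories)
```

In a real analysis one would open the per-run `*_events.tsv` file from the "func" folder instead of the in-memory string.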
Preprocessed volume data from fMRIprep The preprocessed volume-based fMRI data are in subject's native space, saved as “sub-
Preprocessed surface data from ciftify Under the “results” folder, the preprocessed surface-based data are saved in standard fsLR space, named as “sub-
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Subject specific category-selective and retinotopic regions of interest for the fMRI data.
Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior.
See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An fMRI data set used in "Boly et al. Stimulus set meaningfulness and neurophysiological differentiation: a functional magnetic resonance imaging study"
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
sub005_positive.nii.gz
This dataset is from Hsiao et al., 2024, Brain Imaging and Behavior. Please cite this paper if you use this dataset in your studies. This dataset contains neuroimaging data from 56 participants. Among these 56 participants, only 53 (sub001-sub053) completed STAI questionnaires. Each participant was instructed to complete an emotional reactivity task in the MRI scanner. A total of 90 emotional scenes were selected from the International Affective Picture System to be used as stimuli in the emotional reactivity task. Within this task, participants were instructed to judge each scene as either indoor or outdoor in a block design paradigm. Each block consisted of six scenes sharing the same valence (i.e., positive, negative, or neutral), with each scene displayed on the screen for 2.5 seconds, resulting in a total block duration of 15 seconds. Each emotional scene block alternated with a fixation block lasting 15 seconds. Five positive, five neutral, and five negative blocks were presented in a counterbalanced order across participants. The data were preprocessed with SPM8. Each participant has a beta image for the positive (e.g., sub001_positive.nii.gz), negative (e.g., sub001_negative.nii.gz), and neutral condition (e.g., sub001_neutral.nii.gz). Paper doi: https://doi.org/10.1101/2023.07.29.551128
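The beta images described above are NIfTI volumes (typically read with nibabel's `nib.load(...).get_fdata()`). The sketch below stands in synthetic 3-D beta arrays, under that assumption, to show the kind of voxelwise condition contrast (e.g., positive minus neutral) these files support; the array shapes and values are invented.

```python
import numpy as np

# Toy stand-ins for the per-condition beta volumes (e.g. sub001_positive.nii.gz
# and sub001_neutral.nii.gz). Real betas are whole-brain; these are 4x4x4.
rng = np.random.default_rng(0)
shape = (4, 4, 4)
beta_positive = rng.normal(0.5, 1.0, shape)   # hypothetical positive-condition betas
beta_neutral = rng.normal(0.0, 1.0, shape)    # hypothetical neutral-condition betas

# Voxelwise positive-vs-neutral contrast for this subject.
contrast = beta_positive - beta_neutral
print(contrast.shape)
```

The same subtraction, run per subject, would yield contrast maps suitable for a group-level analysis.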
homo sapiens
fMRI-BOLD
single-subject
Positive and Negative Images Task
U
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Brain/MINDS Marmoset MRI NA216 and eNA91 datasets currently constitute the largest public marmoset brain MRI resource (483 individuals), and include in vivo and ex vivo data for a large variety of image modalities covering a wide age range of marmoset subjects.
* The in vivo part corresponds to a total of 455 individuals, ranging in age from 0.6 to 12.7 years (mean age: 3.86 ± 2.63), and standard brain data (NA216) from 216 of these individuals (mean age: 4.46 ± 2.62).
Included are T1WI, T2WI, T1WI/T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, rs-fMRI in awake and anesthetized states, NIfTI files (.nii.gz) of label data, and csv files of individual brain and population-average connectome matrices (structural and functional).
* The ex vivo part comprises data from a subset of 91 individuals with a mean age of 5.27 ± 2.39 years.
It includes NIfTI files (.nii.gz) of standard brain, T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, and label data, and csv files of individual brain and population average structural connectome matrices.
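The connectome matrices above are shared as csv files. As a hedged sketch (the exact csv layout, e.g. presence of a header, is an assumption), a region-by-region matrix can be loaded and sanity-checked like this, here with a tiny invented 3-region example:

```python
import io

import numpy as np

# Invented 3x3 stand-in for a structural or functional connectome csv file;
# real matrices cover the full marmoset brain parcellation.
connectome_csv = "0.0,0.8,0.1\n0.8,0.0,0.3\n0.1,0.3,0.0\n"

# Assumes a plain comma-separated matrix with no header row.
mat = np.loadtxt(io.StringIO(connectome_csv), delimiter=",")
print(mat.shape)
```

For an undirected connectome one would expect `mat` to be square and symmetric, which is easy to verify with `np.allclose(mat, mat.T)`.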
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
In this study, subjects performed motor execution or motor imagery of their left hand, right hand, or right foot, while EEG and fMRI were recorded simultaneously. Seventeen participants completed a single EEG-fMRI session. The dataset includes the preprocessed fMRI recordings and the preprocessed EEG recordings after MR-induced artifact removal, including gradient artifact (GA) and ballistocardiogram (BCG) artifact correction, for each subject. A detailed description of the study can be found in the following publication: Bondi, E., Ding, Y., Zhang, Y., Maggioni, E., & He, B. (2025). Investigating the Neurovascular Coupling Across Multiple Motor Execution and Imagery Conditions: A Whole-Brain EEG-Informed fMRI Analysis. NeuroImage, 121311. https://doi.org/10.1016/j.neuroimage.2025.121311
If you use a part of this dataset in your work, please cite the above publication. This dataset was collected under support from the National Institutes of Health via grants NS124564, NS131069, NS127849, and NS096761 to Dr. Bin He. Correspondence about the dataset: Dr. Bin He, Carnegie Mellon University, Department of Biomedical Engineering, Pittsburgh, PA 15213. E-mail: bhe1@andrew.cmu.edu
fMRI dataset of healthy subjects and subjects with tic disorder aged five to sixteen years. Anatomical and functional images were acquired. Subjects performed a movement task (free movement, non-informative cue, informative cue) and a suppression/release task (blinking, yawning, tics).
Representations from convolutional neural networks have been used as explanatory models for hierarchical sensory brain activations. Visual and auditory representations in the human brain have been studied with encoding models, RSA, decoding and reconstruction. However, none of the functional MRI data sets that are currently available has adequate amounts of data for sufficiently sampling their representations, or for training complex neural network hierarchies end-to-end on brain data for uncovering such representations. We recorded a densely sampled large fMRI dataset (TR=700 ms) in a single individual exposed to spatio-temporal visual and auditory naturalistic stimuli (30 episodes of BBC’s Doctor Who). The data consists of approximately 118,000 whole-brain volumes (approx. 23 h) of single-presentation data (full episodes, training set) and approximately 500 volumes (5 min) of repeated short clips (test set, 22 repetitions), recorded with fixation over a period of six months. This rich dataset can be used widely to study the way the brain represents audiovisual and language input across its sensory hierarchies.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This work is a derivative of the Atlanta sample (Liu et al., 2009) from the 1000 Functional Connectomes Project (Biswal et al., 2010), originally released under Creative Commons Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 19 healthy subjects. Time series are packaged in a series of .mat matlab/octave (HDF5) files, one per subject. For each subject, an array featuring about 200 time points for 116 brain regions from the AAL template is available. The Atlanta AAL preprocessed time series release contains the following files:
* README.md: a markdown (text) description of the release.
* brain_rois.nii.gz: a 3D nifti volume of the AAL template at 3 mm isotropic resolution, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Region number i is filled with the value i (the background is filled with 0s).
* labels_aal.mat: a .mat file with two variables: rois_aal(i) is the numerical ID of the i-th region in the AAL template (e.g. 2001, 2002, 2101, etc.); labels_all{i} is a string label for the i-th region (e.g. 'Precentral_L', 'Precentral_R', etc.).
* tseries_rois_SUBJECT_session1_run1.mat: a matlab/octave file for each subject. Each tseries file contains the following variables:
  * confounds: a TxK array. Each row corresponds to a time sample, and each column to one confound that was regressed out from the time series during preprocessing.
  * labels_confounds: cell of strings. Each entry is the label of a confound that was regressed out from the time series.
  * mask_suppressed: a T2x1 vector, where T2 is the number of time samples in the raw time series (before preprocessing), T2=205. Each entry corresponds to a time sample and is 1 if the corresponding sample was removed due to excessive motion (or to wait for magnetic equilibrium at the beginning of the series). Samples that were kept are tagged with 0s.
  * time_frames: a Tx1 vector. Each entry is the time of acquisition (in s) of the corresponding volume.
  * tseries: a TxN array, where each row is a time sample and each column a region (N=483, numbered as in brain_rois.nii.gz). Note that the number of time samples may vary, as some samples have been removed if tagged with excessive motion.
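The relationship between `mask_suppressed` (length T2, over raw frames) and `tseries` (T rows, kept frames only) can be sketched with toy numpy arrays; the sizes below are invented, and in the real files the .mat contents would be read with an HDF5-capable loader such as h5py.

```python
import numpy as np

# Toy stand-ins for the variables described above (real files have T2=205).
# 1 marks a frame removed by scrubbing; 0 marks a kept frame.
mask_suppressed = np.array([1, 1, 0, 0, 1, 0, 0, 0, 0, 0])

T = int(np.sum(mask_suppressed == 0))     # number of kept samples = rows of tseries
tseries = np.zeros((T, 3))                # TxN array (here N=3 toy regions)

# Indices into the raw acquisition that survive scrubbing; these line up
# one-to-one with the rows of tseries and the entries of time_frames.
kept_raw_indices = np.flatnonzero(mask_suppressed == 0)
print(T, kept_raw_indices)
```

This mapping is what lets a user relate each row of the preprocessed time series back to its position in the original scan.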
The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.6.5c. The parameters of rigid-body motion were first estimated at each time frame of the fMRI dataset (no correction of inter-slice difference in acquisition time was applied). The median volume of the fMRI time series was coregistered with a T1 individual scan using Minctracc (Collins et al., 1994), which was itself transformed to the Montreal Neurological Institute (MNI) non-linear template (Fonov et al., 2011) using the CIVET pipeline (Zijdenbos et al., 2002). The rigid-body transform, fMRI-to-T1 transform and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 3 mm isotropic resolution. The "scrubbing" method of Power et al. (2012) was used to remove the volumes with excessive motion (frame displacement greater than 0.5). The following nuisance parameters were regressed out from the time series at each voxel: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off), average signals in conservative masks of the white matter and the lateral ventricles, as well as the first principal components (95% energy) of the six rigid-body motion parameters and their squares (Giove et al., 2009). The fMRI volumes were then spatially smoothed with a 6 mm isotropic Gaussian blurring kernel. The fMRI time series were spatially averaged on each of the areas of the AAL brain template (Tzourio-Mazoyer et al., 2002). To further reduce the spatial dimension, only the 81 cortical AAL areas were included in the analysis (excluding the cerebellum, the basal ganglia and the thalamus). The clustering methods were applied to these regional time series.
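The "regressed out" step above (removing nuisance parameters from each voxel's time series) amounts to ordinary least-squares residualization. The actual pipeline is NIAK; this is only a minimal numpy illustration of that one step, with synthetic data:

```python
import numpy as np

# Synthetic voxel time series contaminated by 4 nuisance regressors
# (standing in for drifts, motion components, WM/ventricle signals).
rng = np.random.default_rng(1)
T = 100
confounds = rng.normal(size=(T, 4))                      # TxK confound matrix
true_weights = np.array([0.5, -0.2, 0.1, 0.3])
signal = rng.normal(size=T) + confounds @ true_weights   # contaminated voxel signal

# Fit the confounds by least squares and keep the residual ("cleaned") series.
beta, *_ = np.linalg.lstsq(confounds, signal, rcond=None)
residual = signal - confounds @ beta
print(residual.shape)
```

The residual is, by construction, orthogonal to the confound regressors, which is exactly the property nuisance regression aims for.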
Note that 8 subjects were excluded because not enough time points were left after scrubbing (a minimum of 190 volumes was deemed acceptable), and one additional subject had to be excluded because the quality of the T1-fMRI coregistration was substandard (by visual inspection). A total of 19 subjects were thus actually released.
Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126–1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082
Biswal, B. B. et al., 2010. Toward discovery science of human brain function. Proceedings of the National Academy of Sciences of the U.S.A. 107 (10), 4734–4739.
Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271–1294.
Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313–327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033
Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058–1064. http://dx.doi.org/10.1016/j.mri.2009.06.004
Liu, H., Stufflebeam, S. M., Sepulcre, J., Hedden, T., Buckner, R. L., 2009. Evidence from intrinsic activity that asymmetry of the human brain is controlled by multiple factors. Proceedings of the National Academy of Sciences of the U.S.A. 106 (48), 20499–20503.
Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142–2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., Mazoyer, B., Joliot, M., 2002. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage 15, 273–289.
Zijdenbos, A. P., Forghani, R., Evans, A. C., 2002. Automatic "pipeline" analysis of 3-D MRI data for clinical trials: application to multiple sclerosis. IEEE Transactions on Medical Imaging 21 (10), 1280–1291.
This dataset was used in a publication: http://arxiv.org/abs/1501.05194
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Whole-brain single trial beta estimates of the THINGS-fMRI data.
Part of THINGS-data: A multimodal collection of large-scale datasets for investigating object representations in brain and behavior.
See related materials in Collection at: https://doi.org/10.25452/figshare.plus.c.6161151
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This database contains the Paris and HCP datasets used in Marrelec et al. (2016). It includes the following files:
* empirical_Paris.mat: preprocessed resting-state fMRI time series (TS) and associated diffusion MRI structural connectivity matrices (MAP) for 21 subjects from Paris using the Freesurfer parcellation. The healthy volunteers (right-handed) were recruited within the Paris local community. All participants gave written informed consent, and the protocol was approved by the local ethics committee. Data were acquired using a 3T Siemens Trio TIM MRI scanner (CENIR, Paris, France). Resting-state fMRI series were recorded during ~11 minutes with a repetition time (TR) of 3.29 s.
* empirical_HCP: preprocessed resting-state fMRI time series (TS) and associated diffusion MRI structural connectivity matrices (MAP) for 40 subjects from the Human Connectome Project (HCP) using the Freesurfer parcellation. Data from healthy, unrelated adults were obtained from the second quarter release (Q2, June 2013) of the HCP database (http://www.humanconnectome.org/documentation/Q2/). Data were collected on a custom 3T Siemens Skyra MRI scanner (Washington University, Saint Louis, United States). Resting-state fMRI data were acquired in four runs of approximately 15 minutes each with a TR of 0.72 s. The four runs were concatenated in time.
* freesurferlabels.txt: Freesurfer labels of the 160 regions used for the parcellation.
* simulations_individuals_Paris.mat: simulated functional connectivity (FC) matrices generated using an abstract model of brain activity (the SAR model) and simulated resting-state fMRI time series (TS) generated using 6 mainstream computational models of brain activity (models), all using as input the structural connectivity of each individual subject in the Paris dataset. Simulated resting-state fMRI data were generated during ~8 minutes at a sampling frequency of 2 Hz.
* simulations_average_Paris.mat: the same simulated FC matrices and time series, using as input the average structural connectivity of all subjects in the Paris dataset.
* simulations_average_homotopic.mat: the same simulated FC matrices and time series, using as input the average structural connectivity of all subjects in the Paris dataset with an artificial addition of homotopic structural connections.
Reference: Marrelec G, Messé A, Giron A, Rudrauf D (2016) Functional Connectivity's Degenerate View of Brain Computation. PLoS Comput Biol 12(10): e1005031. doi:10.1371/journal.pcbi.1005031
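Functional connectivity (FC) matrices like those shared here are commonly the region-by-region correlation of the time series (TS); whether this dataset used plain Pearson correlation is an assumption, and the data below are synthetic. A minimal numpy sketch:

```python
import numpy as np

# Synthetic stand-in for a TS array: 160 Freesurfer regions x 300 time points
# (the 160-region count matches the parcellation described above).
rng = np.random.default_rng(2)
TS = rng.normal(size=(160, 300))

# FC as the pairwise Pearson correlation between regional time series.
FC = np.corrcoef(TS)
print(FC.shape)
```

The resulting matrix is symmetric with a unit diagonal, the usual form compared against the simulated FC matrices in the files above.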
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The included archive of .csv files has one .csv file for each time index. Each .csv file contains a single matrix, which has one column for each of the spatial coordinates x, y, z and one column for the fMRI signal amplitude at voxel (x, y, z) at that time index. This is normalized scan data for a single patient from our study, described in the paper, with an ACC (anterior cingulate cortex) mask applied. This is the data we used, together with our software implementation of our workflow (available at https://github.com/regalski/Wayne-State-TDA), to produce the persistence diagrams, vineyards, and loop locations pictured in the figures and described in the Workflow section of our paper. (ZIP)
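As a hedged sketch of the per-timepoint csv layout described above (one row per voxel with x, y, z coordinates and a signal amplitude; the header row and column order here are assumptions), one file can be parsed into a voxel-indexed mapping with the standard library:

```python
import csv
import io

# Invented two-voxel excerpt of one per-timepoint csv file from the archive.
one_timepoint_csv = "x,y,z,amplitude\n10,12,8,0.73\n10,13,8,-0.12\n"

signal = {}
for row in csv.DictReader(io.StringIO(one_timepoint_csv)):
    voxel = (int(row["x"]), int(row["y"]), int(row["z"]))
    signal[voxel] = float(row["amplitude"])   # amplitude at (x, y, z)

print(signal[(10, 12, 8)])
```

Repeating this over all time indices yields, per voxel, the time series that the persistence-diagram workflow operates on.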
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
(Note: Part of the content of this post was adapted from the original DIRECT Psychoradiology paper (https://academic.oup.com/psyrad/article/2/1/32/6604754) and the REST-meta-MDD PNAS paper (http://www.pnas.org/cgi/doi/10.1073/pnas.1900390116) under a CC BY-NC-ND license.)
Major Depressive Disorder (MDD) is the second leading cause of health burden worldwide (1). Unfortunately, objective biomarkers to assist in diagnosis are still lacking, and current first-line treatments are only modestly effective (2, 3), reflecting our incomplete understanding of the pathophysiology of MDD. Characterizing the neurobiological basis of MDD promises to support the development of more effective diagnostic approaches and treatments.
An increasingly used approach to reveal the neurobiological substrates of clinical conditions is resting-state functional magnetic resonance imaging (R-fMRI) (4). Despite intensive efforts to characterize the pathophysiology of MDD with R-fMRI, clinical imaging markers of diagnosis and predictors of treatment outcomes have yet to be identified. Previous reports have been inconsistent, sometimes contradictory, impeding the endeavor to translate them into clinical practice (5). One reason for inconsistent results is low statistical power from small-sample studies (6). Low-powered studies are more prone to produce false positive results, reducing the reproducibility of findings in a given field (7, 8). Of note, one recent study demonstrated that sample sizes of thousands of subjects may be needed to identify reproducible brain-wide association findings (9), calling for larger datasets to boost effect size. Another reason could be high analytic flexibility (10).
Recently, Botvinik-Nezer and colleagues (11) demonstrated the divergence in results when independent research teams applied different workflows to process an identical fMRI dataset, highlighting the effects of "researcher degrees of freedom" (i.e., heterogeneity in (pre-)processing methods) in producing disparate fMRI findings.
To address these critical issues, we initiated the Depression Imaging REsearch ConsorTium (DIRECT) in 2017. Through a series of meetings, a group of 17 participating hospitals in China agreed to establish the first project of the DIRECT consortium, the REST-meta-MDD Project, and share 25 study cohorts, including R-fMRI data from 1300 MDD patients and 1128 normal controls. Based on prior work, a standardized preprocessing pipeline adapted from the Data Processing Assistant for Resting-State fMRI (DPARSF) (12, 13) was implemented at each local participating site to minimize heterogeneity in preprocessing methods. R-fMRI metrics can be vulnerable to physiological confounds such as head motion (14, 15). Based on our previous examination of the impact of head motion on R-fMRI functional connectomes (16) and other recent benchmarking studies (15, 17), DPARSF implements a regression model (Friston-24 model) at the participant level and group-level correction for mean frame displacement (FD) as the default setting.
In the REST-meta-MDD Project of the DIRECT consortium, 25 research groups from 17 hospitals in China agreed to share final R-fMRI indices from patients with MDD and matched normal controls (see Supplementary Table; henceforth "site" refers to each cohort for convenience) from studies approved by local Institutional Review Boards. The consortium contributed 2428 previously collected datasets (1300 MDDs and 1128 NCs). On average, each site contributed 52.0±52.4 patients with MDD (range 13-282) and 45.1±46.9 NCs (range 6-251). Most MDD patients were female (826 vs. 474 males), as expected.
The 562 patients with first-episode MDD included 318 first-episode drug-naïve (FEDN) patients and 160 scanned while receiving antidepressants (medication status unavailable for 84). Of 282 patients with recurrent MDD, 121 were scanned while receiving antidepressants and 76 were not being treated with medication (medication status unavailable for 85). Episodicity (first or recurrent) and medication status were unavailable for 456 patients.
To improve transparency and reproducibility, our analysis code has been openly shared at https://github.com/Chaogan-Yan/PaperScripts/tree/master/Yan_2019_PNAS. In addition, we share the R-fMRI indices of the 1300 MDD patients and 1128 NCs through the R-fMRI Maps Project (http://rfmri.org/REST-meta-MDD). These data derivatives will allow replication, secondary analyses, and discovery efforts while protecting participant privacy and confidentiality.
According to the agreement of the REST-meta-MDD consortium, there were 2 phases for sharing the brain imaging data and phenotypic data of the 1300 MDD patients and 1128 NCs. 1) Phase 1: coordinated sharing, before January 1, 2020. To reduce conflict among researchers, the consortium reviewed and coordinated the proposals submitted by interested researchers. An interested researcher first sent a letter of intent to rfmrilab@gmail.com, and the consortium then sent all approved proposals to the applicant. The applicant was to submit a new, innovative proposal avoiding conflict with approved proposals; if there was no conflict, the proposal was reviewed and approved by the consortium, entered the pool of approved proposals, and prevented future conflict. 2) Phase 2: unrestricted sharing, after January 1, 2020. The REST-meta-MDD data entered the unrestricted sharing phase on January 1, 2020; researchers can now perform any analyses of interest while not violating ethics.
Please visit the Psychological Science Data Bank to download the data, then sign the Data Use Agreement and email the scanned signed copy to rfmrilab@gmail.com to obtain the unzip password and phenotypic information.

ACKNOWLEDGEMENTS
This work was supported by the National Key R&D Program of China (2017YFC1309902), the National Natural Science Foundation of China (81671774, 81630031, 81471740 and 81371488), the Hundred Talents Program and the 13th Five-year Informatization Plan (XXH13505) of the Chinese Academy of Sciences, the Beijing Municipal Science & Technology Commission (Z161100000216152, Z171100000117016, Z161100002616023 and Z171100000117012), the Department of Science and Technology, Zhejiang Province (2015C03037), and the National Basic Research (973) Program (2015CB351702).

REFERENCES
1. A. J. Ferrari et al., Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease Study 2010. PLOS Medicine 10, e1001547 (2013).
2. L. M. Williams et al., International Study to Predict Optimized Treatment for Depression (iSPOT-D), a randomized clinical trial: rationale and protocol. Trials 12, 4 (2011).
3. S. J. Borowsky et al., Who is at risk of nondetection of mental health problems in primary care? J Gen Intern Med 15, 381-388 (2000).
4. B. B. Biswal, Resting state fMRI: a personal history. Neuroimage 62, 938-944 (2012).
5. C. G. Yan et al., Reduced default mode network functional connectivity in patients with recurrent major depressive disorder. Proc Natl Acad Sci U S A 116, 9078-9083 (2019).
6. K. S. Button et al., Power failure: why small sample size undermines the reliability of neuroscience. Nat Rev Neurosci 14, 365-376 (2013).
7. J. P. A. Ioannidis, Why Most Published Research Findings Are False. PLOS Medicine 2, e124 (2005).
8. R. A. Poldrack et al., Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat Rev Neurosci 10.1038/nrn.2016.167 (2017).
9. S. Marek et al., Reproducible brain-wide association studies require thousands of individuals. Nature 603, 654-660 (2022).
10. J. Carp, On the Plurality of (Methodological) Worlds: Estimating the Analytic Flexibility of fMRI Experiments. Frontiers in Neuroscience 6, 149 (2012).
11. R. Botvinik-Nezer et al., Variability in the analysis of a single neuroimaging dataset by many teams. Nature 10.1038/s41586-020-2314-9 (2020).
12. C.-G. Yan, X.-D. Wang, X.-N. Zuo, Y.-F. Zang, DPABI: Data Processing & Analysis for (Resting-State) Brain Imaging. Neuroinformatics 14, 339-351 (2016).
13. C.-G. Yan, Y.-F. Zang, DPARSF: A MATLAB Toolbox for "Pipeline" Data Analysis of Resting-State fMRI. Frontiers in Systems Neuroscience 4, 13 (2010).
14. R. Ciric et al., Mitigating head motion artifact in functional connectivity MRI. Nature Protocols 13, 2801-2826 (2018).
15. R. Ciric et al., Benchmarking of participant-level confound regression strategies for the control of motion artifact in studies of functional connectivity. NeuroImage 154, 174-187 (2017).
16. C.-G. Yan et al., A comprehensive assessment of regional variation in the impact of head micromovements on functional connectomics. NeuroImage 76, 183-201 (2013).
17. L. Parkes, B. Fulcher, M. Yücel, A. Fornito, An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI. NeuroImage 171, 415-436 (2018).
18. L. Wang et al., Interhemispheric functional connectivity and its relationships with clinical characteristics in major depressive disorder: a resting state fMRI study. PLoS One 8, e60191 (2013).
19. L. Wang et al., The effects of antidepressant treatment on resting-state functional brain networks in patients with major depressive disorder. Hum Brain Mapp 36, 768-778 (2015).
20. Y. Liu et al., Regional homogeneity associated with overgeneral autobiographical memory of first-episode treatment-naive patients with major depressive disorder in the orbitofrontal cortex: A resting-state fMRI study. J Affect Disord 209, 163-168 (2017).
21. X. Zhu et al., Evidence of a dissociation pattern in resting-state default mode network connectivity in first-episode, treatment-naive major depression patients. Biological Psychiatry 71, 611-617 (2012).
22. W. Guo et al., Abnormal default-mode
THIS RESOURCE IS NO LONGER IN SERVICE, documented August 25, 2013. Public curated repository of peer-reviewed fMRI studies and their underlying data. This Web-accessible database has data-mining capabilities and the means to deliver requested data to the user (via Web, CD, or digital tape). Datasets available: 107. NOTE: The fMRIDC is down temporarily while it moves to a new home at UCLA. Check back again in late Jan 2013! The goal of the Center is to help speed the progress and the understanding of cognitive processes and the neural substrates that underlie them by:
* Providing a publicly accessible repository of peer-reviewed fMRI studies.
* Providing all data necessary to interpret, analyze, and replicate these fMRI studies.
* Providing training for both the academic and professional communities.
The Center will accept data from researchers who are publishing fMRI imaging articles in peer-reviewed journals. The goal is to serve the entire fMRI community.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Participants Twenty-nine college-age volunteers participated for pay (17 women and 12 men, 18–24 years old). All qualified as right-handed on the Edinburgh handedness inventory (Oldfield, 1971). They self-identified as native English speakers and gave their informed consent. Data from one participant were excluded due to excessive head movement, and data from two participants were excluded due to poor behavioral performance, leaving data from twenty-six participants in the dataset (15 women, 11 men).
Study Design The audio stimulus was Kristen McQuillan's reading of the first chapter of Lewis Carroll's Alice in Wonderland from librivox.org, available in the stimuli folder. To improve comprehensibility in the noisy scanner, the audio was normalized to 70 dB and slowed by 20% with the pitch-preserving PSOLA algorithm implemented in Praat software. This moderate amount of time-dilation did not introduce recognizable distortion and was judged by an independent rater to sound natural and to be easier to comprehend than the raw audio recording. The audio presentation lasted 12.4 min. After giving their informed consent, participants were familiarized with the MRI facility and assumed a supine position on the scanner gurney. Auditory stimuli were delivered through MRI-safe, high-fidelity headphones (Confon HP-VS01, MR Confon, Magdeburg, Germany) inside the head coil. The headphones were secured against the plastic frame of the coil using foam blocks. Using a spoken recitation of the US Constitution, an experimenter increased the volume stepwise until participants reported that they could hear clearly. Participants then listened passively to the audio storybook. Upon emerging from the scanner, participants completed a twelve-question multiple-choice questionnaire concerning events and situations described in the story. The entire session lasted less than an hour.
Data collection and analysis Imaging was performed using a 3T MRI scanner (Discovery MR750, GE Healthcare, Milwaukee, WI) with a 32-channel head coil at the Cornell MRI Facility. Blood Oxygen Level Dependent (BOLD) signals were collected from twenty-nine participants. Thirteen participants were scanned using a T2-weighted echo planar imaging (EPI) sequence with: a repetition time of 2000 ms, echo time of 27 ms, flip angle of 77°, image acceleration factor of 2, field of view of 216 × 216 mm, and a matrix size of 72 × 72. Under these parameters we obtained 44 oblique slices with 3 mm isotropic voxels. Sixteen participants were scanned with a three-echo EPI sequence in which the field of view was 240 × 240 mm, yielding 33 slices with an in-plane resolution of 3.75 × 3.75 mm and a slice thickness of 3.8 mm. The data provided from this second group are the images from the second EPI echo, for which the echo time was 27.5 ms. All other parameters were identical. Selecting the second-echo images renders the two sets of functional images as comparable as possible.
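As a quick consistency check on the acquisition parameters above, the voxel geometries of the two protocols can be compared. All numbers come from the text; the 64 × 64 matrix for the three-echo scan is inferred from its field of view and in-plane resolution, not stated directly.

```python
# Compare the voxel geometries of the two EPI protocols described above.
# All dimensions are taken from the text; nothing here is measured.

single_echo_voxel = (3.0, 3.0, 3.0)    # mm, isotropic; 44 oblique slices
multi_echo_voxel = (3.75, 3.75, 3.8)   # mm; in-plane 3.75 mm, 3.8 mm thick

def voxel_volume(dims):
    """Volume of a single voxel in cubic millimetres."""
    x, y, z = dims
    return x * y * z

vol_single = voxel_volume(single_echo_voxel)   # 27.0 mm^3
vol_multi = voxel_volume(multi_echo_voxel)     # ~53.4 mm^3

# In-plane matrix size = field of view / in-plane resolution
assert 216 / 3.0 == 72     # matches the stated 72 x 72 matrix
assert 240 / 3.75 == 64    # implied 64 x 64 matrix for the three-echo scan
```

The roughly twofold difference in voxel volume is one reason the text emphasizes that only the echo-time-matched second echo is provided for the three-echo group, keeping the two sets of images as comparable as possible.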
Preprocessing Preprocessing was done with SPM8. Data were spatially realigned using a 6-parameter rigid-body transformation with 2nd-degree B-spline interpolation. Functional (EPI) and structural (MP-RAGE) images were co-registered via mutual information, and functional images were smoothed with a 3 mm isotropic Gaussian filter. We used the ICBM template provided with SPM8 to transform the data into MNI stereotaxic coordinates. The data were high-pass filtered at 1/128 Hz, and the first 10 functional volumes were discarded. These processed data are available in the derivatives directory, and preprocessing.mat files are included to document the input parameters used.
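The volume-discarding and high-pass filtering steps described above can be sketched as follows. This is an illustrative NumPy reimplementation, not the original SPM8 pipeline: the array shape, TR, and number of volumes are assumptions for the example, and the discrete cosine basis mirrors SPM's drift-removal convention only in outline.

```python
import numpy as np

TR = 2.0        # repetition time in seconds (single-echo protocol above)
CUTOFF = 128.0  # high-pass cutoff period in seconds (1/128 Hz)

def discard_initial_volumes(data, n=10):
    """Drop the first n volumes along the time axis (last axis)."""
    return data[..., n:]

def dct_highpass(data, tr=TR, cutoff=CUTOFF):
    """Remove drifts slower than 1/cutoff Hz by regressing out a
    low-frequency discrete cosine basis, SPM-style."""
    n_scans = data.shape[-1]
    # Number of DCT regressors at or below the cutoff frequency
    order = int(np.floor(2 * n_scans * tr / cutoff)) + 1
    t = np.arange(n_scans)
    # DCT-II basis, including the constant (k = 0) term
    basis = np.array([np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans))
                      for k in range(order)]).T       # (n_scans, order)
    flat = data.reshape(-1, n_scans).T                # (n_scans, n_voxels)
    beta, *_ = np.linalg.lstsq(basis, flat, rcond=None)
    residuals = flat - basis @ beta                   # high-passed signal
    return residuals.T.reshape(data.shape)

# Synthetic example: a tiny 2 x 2 x 1 "volume" with 372 time points
# (roughly 12.4 min at TR = 2 s)
data = np.random.randn(2, 2, 1, 372)
data = discard_initial_volumes(data)   # 362 volumes remain
clean = dct_highpass(data)
```

Because the constant term is part of the regressed-out basis, the filtered time series are also mean-centred, which is the usual side effect of SPM-style drift removal.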
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This dataset contains files produced by fMRIPrep that allow the fMRI data to be transformed between different spaces. For instance, results obtained in a subject's individual anatomical space can be transformed into MNI standard space, allowing comparison of results between subjects or even with other datasets. Part of THINGS-data: a multimodal collection of large-scale datasets for investigating object representations in brain and behavior. See related materials in the Collection at: https://doi.org/10.25452/figshare.plus.c.6161151
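As a hedged illustration of how such transform files are commonly used: fMRIPrep writes composite transforms (HDF5 `_xfm.h5` files) that are typically applied with ANTs' antsApplyTransforms. The file names below follow fMRIPrep's output naming convention but are placeholders, not actual files from this dataset.

```shell
# Resample a subject-space image into MNI space using an fMRIPrep
# transform file (placeholder file names; adjust to the actual dataset).
antsApplyTransforms -d 3 \
  -i sub-01_desc-example_stat.nii.gz \
  -r tpl-MNI152NLin2009cAsym_res-02_T1w.nii.gz \
  -o sub-01_space-MNI152NLin2009cAsym_stat.nii.gz \
  -t sub-01_from-T1w_to-MNI152NLin2009cAsym_mode-image_xfm.h5
```

Here `-i` is the input image in the subject's anatomical space, `-r` is a reference image defining the target MNI grid, and `-t` is the fMRIPrep-generated transform; the inverse transform file can be used analogously to bring MNI-space results into subject space.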
EEG/fMRI data from 8 subjects performing a simple eyes-open/eyes-closed task are provided on this webpage.
The EEG/fMRI data comprise six files for each subject, crossing two basic factors: recording with the helium pump on versus off, and recording during MRI scanning versus without MRI scanning. In addition, 'outside' EEG data are provided, recorded both before and after the MRI session.
There are 30 EEG channels, 1 EOG channel, 1 ECG channel, and 6 CWL signals.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
A collection of 11 brain maps. Each brain map is a 3D array of values representing properties of the brain at different locations.
OpenfMRI ds000114
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Smoothed normalised images, condition onsets and motion parameters for 15 subjects
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.
Besides the raw data, this dataset holds derivatives.
More derivatives can be found on figshare.
Provenance information is given in 'dataset_description.json' as well as in the paper; preprocessing and analysis code is shared on GitHub.