CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Understanding the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, together with dense measurements of brain activity and behavior. This densely sampled MEG dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing countless novel hypotheses to be tested at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative for bridging the gap between disciplines and advancing cognitive neuroscience.
We collected extensively sampled object representations using magnetoencephalography (MEG). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly-annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually-curated naturalistic object images.
During the MEG experiment, participants were shown a representative subset of THINGS images spread across 12 separate sessions (N=4; 22,448 unique images of 1,854 objects). Images were shown in fast succession (1.5±0.2 s per trial), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially-generated images. A subset of images (n=200) was shown repeatedly in each session.
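To illustrate this kind of rapid design, the sketch below generates a hypothetical trial sequence with a jittered stimulus onset asynchrony of 1.5±0.2 s and occasional oddball catch trials. The trial count, jitter distribution, and oddball rate are illustrative assumptions, not the published protocol.

```python
# Illustrative sketch only: trial-sequence generation for a rapid MEG design.
# The trial count, uniform SOA jitter, and oddball rate are assumptions, not
# the parameters of the published THINGS-MEG protocol.
import numpy as np

rng = np.random.default_rng(0)

n_trials = 500                      # hypothetical number of trials per run
oddball_rate = 0.1                  # hypothetical fraction of catch trials

# Jittered stimulus onset asynchrony: 1.5 s +/- 0.2 s (uniform jitter assumed)
soa = 1.5 + rng.uniform(-0.2, 0.2, size=n_trials)
onsets = np.concatenate([[0.0], np.cumsum(soa[:-1])])

# Mark a random subset of trials as artificially-generated oddball images
is_oddball = rng.random(n_trials) < oddball_rate

for onset, odd in zip(onsets[:5], is_oddball[:5]):
    print(f"t = {onset:6.2f} s  {'oddball' if odd else 'object image'}")
```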
Beyond the core functional imaging data in response to THINGS images, we acquired T1-weighted MRI scans to allow for cortical source localization. Eye movements were monitored in the MEG to ensure participants maintained central fixation.
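Because the T1-weighted scans are included to support cortical source localization, here is a minimal sketch of how a forward model could be set up with MNE-Python. It assumes a FreeSurfer reconstruction of the T1, a head-to-MRI coregistration file, and placeholder file names; none of these choices are prescribed by the dataset itself.

```python
# Hedged sketch: forward-model setup for MEG source localization with
# MNE-Python, assuming the T1 has already been processed with FreeSurfer.
# Paths, subject labels, file names, and spacing choices are placeholders.
import mne

subjects_dir = "/path/to/freesurfer/subjects"    # placeholder FreeSurfer directory
subject = "sub-01"                               # placeholder subject label

# Cortical source space on the FreeSurfer white-matter surface
src = mne.setup_source_space(subject, spacing="oct6", subjects_dir=subjects_dir)

# A single-shell BEM is usually sufficient for MEG
bem_model = mne.make_bem_model(subject, ico=4, conductivity=(0.3,),
                               subjects_dir=subjects_dir)
bem = mne.make_bem_solution(bem_model)

# Forward solution linking cortical sources to MEG sensors; the raw recording
# and the head<->MRI coregistration ("trans") file are assumed to exist.
raw = mne.io.read_raw_fif("sub-01_task-things_meg.fif")     # placeholder file
fwd = mne.make_forward_solution(raw.info, trans="sub-01-trans.fif",
                                src=src, bem=bem, meg=True, eeg=False)
```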
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
Understanding the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, together with dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing countless novel hypotheses to be tested at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means for combining these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative for bridging the gap between disciplines and advancing cognitive neuroscience.
We collected extensively sampled object representations using functional MRI (fMRI). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly-annotated database of 1,854 object concepts representative of the American English language, which contains 26,107 manually-curated naturalistic object images.
During the fMRI experiment, participants were shown a representative subset of THINGS images spread across 12 separate sessions (N=3; 8,740 unique images of 720 objects). Images were shown in fast succession (one trial every 4.5 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially-generated images. A subset of images (n=100) was shown repeatedly in each session.
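A rough sketch of how such a design could be laid out is given below: the 8,740 unique images are split across the 12 sessions, the repeated test images (n=100) are added to every session, and a small number of synthetic oddball images are mixed in. The split logic and the per-session oddball count are illustrative assumptions, not the published randomization scheme.

```python
# Illustrative design sketch: distribute unique images over sessions and mix in
# repeated test images and oddball catch trials. The overall counts follow the
# dataset description; the randomization and oddball count are assumptions.
import random

rng = random.Random(0)

n_unique, n_sessions, n_test, n_oddball = 8740, 12, 100, 20   # oddball count assumed

unique_imgs = [f"img_{i:05d}" for i in range(n_unique)]
rng.shuffle(unique_imgs)
test_imgs = [f"test_{i:03d}" for i in range(n_test)]          # repeated in every session

# Split the unique images as evenly as possible across the 12 sessions
per_session = -(-n_unique // n_sessions)                      # ceiling division
sessions = []
for s in range(n_sessions):
    chunk = unique_imgs[s * per_session:(s + 1) * per_session]
    trials = chunk + test_imgs + ["oddball"] * n_oddball
    rng.shuffle(trials)
    sessions.append(trials)

print(f"{len(sessions)} sessions, {len(sessions[0])} trials in session 1")
```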
Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.
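For intuition on what the retinotopic localizer data support, here is a toy sketch of a population receptive field (pRF) forward model: a 2D Gaussian over visual space whose overlap with a binary stimulus aperture, convolved with a hemodynamic response function, predicts a voxel's time course. The Gaussian parameterization, sweeping-bar stimulus, and simple gamma-shaped HRF are generic textbook choices, not the analysis pipeline used for this dataset.

```python
# Toy pRF forward model (generic illustration, not this dataset's pipeline):
# predicted BOLD = (stimulus aperture overlap with 2D Gaussian pRF) * HRF
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """2D Gaussian receptive field evaluated on a visual-field grid."""
    xx, yy = grid
    return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

# Visual-field grid in degrees of visual angle (assumed extent)
xs = np.linspace(-10, 10, 101)
grid = np.meshgrid(xs, xs)

# Fake stimulus: a bar aperture sweeping left to right over 20 time points
n_t = 20
aperture = np.zeros((n_t, 101, 101))
for t in range(n_t):
    aperture[t, :, t * 5:t * 5 + 10] = 1.0

prf = gaussian_prf(x0=2.0, y0=-1.0, sigma=1.5, grid=grid)
neural = aperture.reshape(n_t, -1) @ prf.ravel()           # overlap per time point

# Very crude gamma-shaped HRF (illustrative only)
t = np.arange(0, 15)
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()
predicted_bold = np.convolve(neural, hrf)[:n_t]
print(predicted_bold.round(2))
```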
Besides the raw data, this dataset also holds a set of derivatives; more derivatives can be found on figshare.
Provenance information is given in 'dataset_description.json' as well as in the paper, and the preprocessing and analysis code is shared on GitHub.
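Since provenance is recorded in the BIDS 'dataset_description.json', it can be inspected programmatically. The sketch below only assumes the standard file name given above; the printed keys are common BIDS fields and may not all be present in this particular file.

```python
# Minimal sketch: inspect provenance fields from the BIDS dataset_description.json.
# Only the file name comes from the dataset description; the listed keys are
# standard BIDS fields and may or may not all be present.
import json
from pathlib import Path

desc_path = Path("dataset_description.json")       # located at the dataset root
desc = json.loads(desc_path.read_text())

for key in ("Name", "BIDSVersion", "DatasetDOI", "ReferencesAndLinks"):
    print(f"{key}: {desc.get(key, '<not present>')}")
```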