2 datasets found
  1. THINGS-MEG

    • openneuro.org
    Updated May 29, 2025
    Cite
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker (2025). THINGS-MEG [Dataset]. http://doi.org/10.18112/openneuro.ds004212.v3.0.0
    Explore at:
    308 scholarly articles cite this dataset (View in Google Scholar)
    Dataset updated
    May 29, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    THINGS-MEG

    Understanding object representations and the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. This densely sampled MEG dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing countless novel hypotheses to be tested at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means to combine these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative, bridging the gap between disciplines and advancing cognitive neuroscience.

    Dataset overview

    We collected extensively sampled object representations using magnetoencephalography (MEG). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly annotated database of 1,854 object concepts representative of the American English language, containing 26,107 manually curated naturalistic object images.

    During the MEG experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=4; 22,448 unique images of 1,854 objects). Images were shown in fast succession (one image every 1.5±0.2 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially generated images. A subset of images (n=200) was shown repeatedly in each session.

    Beyond the core functional imaging data in response to THINGS images, we acquired T1-weighted MRI scans to allow for cortical source localization. Eye movements were monitored in the MEG to ensure participants maintained central fixation.
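    The listing does not say how to fetch or read the files, so here is a minimal sketch assuming a standard BIDS layout under the OpenNeuro accession ds004212. The task label "main" and the zero-padded subject/session labels are assumptions, not confirmed by this listing; the example relies on the openneuro-py, mne-bids, and MNE-Python packages:

    import mne
    import openneuro
    from mne_bids import BIDSPath, read_raw_bids

    root = "ds004212"
    # Pull one subject's first MEG session from OpenNeuro.
    openneuro.download(dataset="ds004212", target_dir=root,
                       include=["sub-01/ses-01/meg/*"])

    # Entity labels below are assumptions, not confirmed by this listing.
    bids_path = BIDSPath(root=root, subject="01", session="01",
                         task="main", datatype="meg")
    raw = read_raw_bids(bids_path=bids_path)

    # Images appeared every 1.5±0.2 s, so epochs up to 1.0 s post-onset
    # stay clear of the following trial.
    events, event_id = mne.events_from_annotations(raw)
    epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=1.0, baseline=(None, 0))
    print(epochs)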

  2. THINGS-fMRI

    • openneuro.org
    Updated Sep 9, 2024
    Cite
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker (2024). THINGS-fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds004192.v1.0.7
    Explore at:
    Dataset updated
    Sep 9, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Martin N. Hebart; Oliver Contier; Lina Teichmann; Adam H. Rockter; Charles Zheng; Alexis Kidder; Anna Corriveau; Maryam Vaziri-Pashkam; Chris I. Baker
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    THINGS-fMRI

    Understanding object representations and the visual and semantic processing of objects requires a broad, comprehensive sampling of the objects in our visual world, with dense measurements of brain activity and behavior. This densely sampled fMRI dataset is part of THINGS-data, a multimodal collection of large-scale datasets comprising functional MRI, magnetoencephalographic recordings, and 4.70 million behavioral judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing countless novel hypotheses to be tested at scale while assessing the reproducibility of previous findings. The multimodal data allow for studying both the temporal and spatial dynamics of object representations and their relationship to behavior, and additionally provide the means to combine these datasets for novel insights into object processing. THINGS-data constitutes the core release of the THINGS initiative, bridging the gap between disciplines and advancing cognitive neuroscience.

    Dataset overview

    We collected extensively sampled object representations using functional MRI (fMRI). To this end, we drew on the THINGS database (Hebart et al., 2019), a richly annotated database of 1,854 object concepts representative of the American English language, containing 26,107 manually curated naturalistic object images.

    During the fMRI experiment, participants were shown a representative subset of THINGS images, spread across 12 separate sessions (N=3; 8,740 unique images of 720 objects). Images were shown in fast succession (one image every 4.5 s), and participants were instructed to maintain central fixation. To ensure engagement, participants performed an oddball detection task, responding to occasional artificially generated images. A subset of images (n=100) was shown repeatedly in each session.

    Beyond the core functional imaging data in response to THINGS images, additional structural and functional imaging data were gathered. We collected high-resolution anatomical images (T1- and T2-weighted), measures of brain vasculature (Time-of-Flight angiography, T2*-weighted) and gradient-echo field maps. In addition, we ran a functional localizer to identify numerous functionally specific brain regions, a retinotopic localizer for estimating population receptive fields, and an additional run without external stimulation for estimating resting-state functional connectivity.

    Besides the raw data, this dataset holds:

    • brain masks (fmriprep)
    • cortical flat maps (pycortex_filestore)
    • single-trial response estimates (ICA betas)

    More derivatives can be found on figshare.
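    As a quick illustration of how these derivatives might be combined, here is a minimal sketch using nibabel. Both file paths are hypothetical placeholders, since this listing does not give the actual directory layout:

    import nibabel as nib

    # Hypothetical paths; check the dataset's derivatives folders for real names.
    mask_img = nib.load("derivatives/fmriprep/sub-01/sub-01_brainmask.nii.gz")
    betas_img = nib.load("derivatives/betas/sub-01/sub-01_ses-01_betas.nii.gz")

    mask = mask_img.get_fdata().astype(bool)  # boolean brain mask, X x Y x Z
    betas = betas_img.get_fdata()             # single-trial betas, X x Y x Z x n_trials
    trials_by_voxels = betas[mask].T          # n_trials x n_brain_voxels
    print(trials_by_voxels.shape)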

    Provenance

    Provenance information is given in 'dataset_description.json' as well as in the paper. Preprocessing and analysis code is shared on GitHub.
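    Because 'dataset_description.json' is a required BIDS file, its provenance fields can be inspected with standard Python. A minimal sketch follows; Name and DatasetDOI are standard BIDS keys, while any dataset-specific provenance keys would need to be checked against the actual file:

    import json

    # Read the required BIDS metadata file at the dataset root.
    with open("ds004192/dataset_description.json") as f:
        meta = json.load(f)

    # Name and DatasetDOI are standard BIDS fields; other keys vary by dataset.
    print(meta.get("Name"), meta.get("DatasetDOI"))
    print(sorted(meta))  # list whatever provenance fields the authors included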

