26 datasets found
  1. ds000234

    • openneuro.org
    Updated Jul 17, 2018
    + more versions
    Cite
    Marta Vidorreta; Ze Wang; Yulin V. Chang; David A Wolk; Maria A. Fernandez-Seara; John A. Detre (2018). ds000234 [Dataset]. https://openneuro.org/datasets/ds000234/versions/00001
    Dataset updated
    Jul 17, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Marta Vidorreta; Ze Wang; Yulin V. Chang; David A Wolk; Maria A. Fernandez-Seara; John A. Detre
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Description of the ASL sequence

    A sequence with pseudo-continuous labeling, background suppression, and a 3D RARE stack-of-spirals readout with optional through-plane acceleration was implemented for this study. At the beginning of the sequence, gradients were rapidly played with alternating polarity to correct for their delay in the spiral trajectories, followed by two preparation TRs to allow the signal to reach the steady state. A non-accelerated readout was played during the preparation TRs in order to obtain a fully sampled k-space dataset, used to calibrate the parallel-imaging reconstruction kernel needed to reconstruct the skipped kz partitions in the accelerated images.

    Description of the study

    Non-accelerated and accelerated versions of the sequence were compared during the execution of a functional activation paradigm. For each participant, a high-resolution anatomical T1-weighted image was first acquired with a magnetization-prepared rapid gradient echo (MPRAGE) sequence. Subjects then underwent two perfusion runs, in which functional data were acquired with the non-accelerated and the accelerated versions of the sequence, in pseudo-randomized order, during a visual-motor activation paradigm. During each run, 3 resting blocks alternated with 3 task blocks, each block comprising 8 label-control pairs (72 s and 64 s for the non-accelerated and accelerated sequence versions, respectively). During the resting blocks, subjects were instructed to remain still while looking at a fixation cross. During the task blocks, a flashing checkerboard was displayed and subjects were asked to tap their right-hand fingers while looking at the center of the board. The labeling duration and post-labeling delay (PLD) were both 1.5 s. In addition, four M0 images with long TR and no magnetization preparation were acquired per perfusion run for CBF quantification purposes.
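    The description above contains the ingredients of standard single-compartment PCASL CBF quantification: the label-control difference ΔM, the M0 image, the labeling duration (τ = 1.5 s), and the post-labeling delay (PLD = 1.5 s). A minimal sketch, assuming the widely used single-compartment model with typical constants (λ = 0.9 ml/g, α = 0.85, T1,blood = 1.65 s); the function name and the toy voxel values are illustrative, not part of this dataset:

```python
import numpy as np

def quantify_cbf(delta_m, m0, tau=1.5, pld=1.5,
                 lam=0.9, alpha=0.85, t1b=1.65):
    """Single-compartment PCASL CBF estimate in ml/100g/min.

    delta_m : control - label difference image (same units as m0)
    m0      : equilibrium magnetization image
    tau     : labeling duration (s); pld : post-labeling delay (s)
    """
    num = 6000.0 * lam * delta_m * np.exp(pld / t1b)
    den = 2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))
    return num / den

# Toy voxel: a perfusion signal of 1% of M0
cbf = quantify_cbf(delta_m=np.array([0.01]), m0=np.array([1.0]))
```

    With these constants, a 1% signal change yields roughly 80 ml/100g/min, in the normal gray-matter range.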

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000234/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000234.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    Known Issues

    N/A

    Bids-validator Output

  2. Route Learning

    • openneuro.org
    Updated Sep 24, 2020
    + more versions
    Cite
    Avi J H Chanales; Ashima Oza; Serra E Favila; Brice A Kuhl (2020). Route Learning [Dataset]. http://doi.org/10.18112/openneuro.ds000217.v1.0.0
    Dataset updated
    Sep 24, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Avi J H Chanales; Ashima Oza; Serra E Favila; Brice A Kuhl
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Comments added by Openfmri Curators

    ===========================================

    Defacing

    Defacing was performed by the submitter.

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000217/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000217.
    3) Send an email to submissions@openfmri.org. Please include the ds000217 accession number in your email.

    Bids-validator Output

      Summary: 3095 Files, 102.19GB; 41 Subjects; 1 Session
      Available Tasks: localizer, route learning
      Available Modalities: T1w, inplaneT1, inplaneT2, bold, fieldmap
    
  3. MPI-Leipzig_Mind-Brain-Body

    • openneuro.org
    • search.kg.ebrains.eu
    Updated Jul 22, 2020
    + more versions
    Cite
    Anahit Babayan; Blazeij Baczkowski; Roberto Cozatl; Maria Dreyer; Haakon Engen; Miray Erbey; Marcel Falkiewicz; Nicolas Farrugia; Michael Gaebler; Johannes Golchert; Laura Golz; Krzysztof Gorgolewski; Philipp Haueis; Julia Huntenburg; Rebecca Jost; Yelyzaveta Kramarenko; Sarah Krause; Deniz Kumral; Mark Lauckner; Daniel S. Margulies; Natacha Mendes; Katharina Ohrnberger; Sabine Oligschläger; Anastasia Osoianu; Jared Pool; Janis Reichelt; Andrea Reiter; Josefin Röbbig; Lina Schaare; Jonathan Smallwood; Arno Villringer (2020). MPI-Leipzig_Mind-Brain-Body [Dataset]. http://doi.org/10.18112/openneuro.ds000221.v1.0.0
    Dataset updated
    Jul 22, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Anahit Babayan; Blazeij Baczkowski; Roberto Cozatl; Maria Dreyer; Haakon Engen; Miray Erbey; Marcel Falkiewicz; Nicolas Farrugia; Michael Gaebler; Johannes Golchert; Laura Golz; Krzysztof Gorgolewski; Philipp Haueis; Julia Huntenburg; Rebecca Jost; Yelyzaveta Kramarenko; Sarah Krause; Deniz Kumral; Mark Lauckner; Daniel S. Margulies; Natacha Mendes; Katharina Ohrnberger; Sabine Oligschläger; Anastasia Osoianu; Jared Pool; Janis Reichelt; Andrea Reiter; Josefin Röbbig; Lina Schaare; Jonathan Smallwood; Arno Villringer
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Area covered
    Leipzig
    Description

    The MPI-Leipzig Mind-Brain-Body dataset contains MRI and behavioral data from 318 participants. Datasets for all participants include at least a structural quantitative T1-weighted image and a single 15-minute eyes-open resting-state fMRI session.

    The participants took part in one or two extended protocols: Leipzig Mind-Body-Brain Interactions (LEMON) and the Neuroanatomy & Connectivity Protocol (N&C). Data from the LEMON protocol are included in the ‘ses-01’ subfolder; data from the N&C protocol in the ‘ses-02’ subfolder.

    LEMON focuses on structural imaging; 228 participants were scanned. In addition to the quantitative T1-weighted image, the participants also have a structural T2-weighted image (226 participants), a diffusion-weighted image with 64 directions (228), and a 15-minute eyes-open resting-state session (228). New imaging sequences were introduced into the LEMON protocol after data acquisition for approximately 110 participants. Before the change, a low-resolution 2D FLAIR image was acquired for clinical purposes (110); after the change, 2D FLAIR was replaced with high-resolution 3D FLAIR (117). The second addition was the acquisition of gradient-echo images (112) that can be used for Susceptibility-Weighted Imaging (SWI) and Quantitative Susceptibility Mapping (QSM).

    The N&C protocol focuses on resting-state fMRI data. 199 participants were scanned with this protocol; 109 of them also took part in the LEMON protocol. Structural data were not acquired again for the overlapping LEMON participants. For the participants unique to N&C, only a T1-weighted and a low-resolution FLAIR image were acquired. Four 15-minute runs of eyes-open resting-state fMRI are the main component of N&C; they are complete for 194 participants, three participants have 3 runs, one participant has 2 runs, and one participant has a single run. Due to a bug in the multiband sequence used in this protocol, the echo time for N&C resting-state is longer than in LEMON (39.4 ms vs. 30 ms).

    Forty-five participants have complete imaging data: quantitative T1-weighted, T2-weighted, high-resolution 3D FLAIR, DWI, GRE and 75 minutes of resting-state. Both gradient-echo and spin-echo field maps are available in both datasets for all EPI-based sequences (rsfMRI and DWI).

    Extensive behavioral data were acquired in both protocols, including trait and state questionnaires as well as behavioral tasks. Here we only list the tasks; more extensive descriptions are available in the manuscripts.

    LEMON QUESTIONNAIRES/TASKS [not yet released]

    California Verbal Learning Test (CVLT); Testbatterie zur Aufmerksamkeitsprüfung (TAP: Alertness, Incompatibility, Working Memory); Trail Making Test (TMT); Wortschatztest (WST); Leistungsprüfungssystem 2 (LPS-2); Regensburger Wortflüssigkeitstest (RWT)

    NEO Five-Factor Inventory (NEO-FFI); Impulsive Behavior Scale (UPPS); Behavioral Inhibition and Approach System (BISBAS); Cognitive Emotion Regulation Questionnaire (CERQ); Measure of Affective Style (MARS); Fragebogen zur Sozialen Unterstützung (F-SozU K); Multidimensional Scale of Perceived Social Support (MSPSS); Coping Orientations to Problems Experienced (COPE); Life Orientation Test-Revised (LOT-R); Perceived Stress Questionnaire (PSQ); Trier Inventory of Chronic Stress (TICS); Three-Factor Eating Questionnaire (TFEQ); Yale Food Addiction Scale (YFAS); Trait Emotional Intelligence Questionnaire (TEIQue-SF); Trait Scale of the State-Trait Anxiety Inventory (STAI); State-Trait Anger Expression Inventory (STAXI); Toronto Alexithymia Scale (TAS); Multidimensional Mood Questionnaire (MDMQ); New York Cognition Questionnaire (NYC-Q)

    N&C QUESTIONNAIRES

    Adult Self Report (ASR); Goldsmiths Musical Sophistication Index (Gold-MSI); Internet Addiction Test (IAT); Involuntary Musical Imagery Scale (IMIS); Multi-Gender Identity Questionnaire (MGIQ); Brief Self-Control Scale (SCS); Short Dark Triad (SD3); Social Desirability Scale-17 (SDS); Self-Esteem Scale (SE); Tuckman Procrastination Scale (TPS); Varieties of Inner Speech (VISQ); UPPS-P Impulsive Behavior Scale (UPPS-P); Attention Control Scale (ACS); Beck's Depression Inventory-II (BDI); Boredom Proneness Scale (BP); Epworth Sleepiness Scale (ESS); Hospital Anxiety and Depression Scale (HADS); Multimedia Multitasking Index (MMI); Mobile Phone Usage (MPU); Personality Style and Disorder Inventory (PSSI); Spontaneous and Deliberate Mind-Wandering (S-D-MW); Short New York Cognition Scale (Short-NYC-Q); New York Cognition Scale (NYC-Q); Abbreviated Math Anxiety Scale (AMAS); Behavioral Inhibition and Approach System (BIS/BAS); NEO Personality Inventory Revised (NEO-PI-R); Body Consciousness Questionnaire (BCQ); Creative Achievement Questionnaire (CAQ); Five Facets of Mindfulness (FFMQ); Metacognition (MCQ-30)

    N&C TASKS

    Conjunctive continuous performance task (CCPT); Emotional task switching (ETS); Adaptive visual and auditory oddball target detection task (Oddball); Alternative uses task (AUT); Remote associates test (RAT); Synesthesia color picker test (SYN); Test of creative imagery abilities (TCIA)

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000221/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000221.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    Known Issues

    N/A

    Bids-validator Output

    A verbose bids-validator output is under '/derivatives/bidsvalidatorOutput_long'. A short version of the output follows:

    1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in /derivatives folder. (code: 1 - NOT_INCLUDED)
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.json
        Evidence: sub-010001_ses-02_inv-1_mp2rage.json
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.nii.gz
        Evidence: sub-010001_ses-02_inv-1_mp2rage.nii.gz
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.json
        Evidence: sub-010001_ses-02_inv-2_mp2rage.json
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.nii.gz
        Evidence: sub-010001_ses-02_inv-2_mp2rage.nii.gz
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.json
        Evidence: sub-010002_ses-01_inv-1_mp2rage.json
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.nii.gz
        Evidence: sub-010002_ses-01_inv-1_mp2rage.nii.gz
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.json
        Evidence: sub-010002_ses-01_inv-2_mp2rage.json
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.nii.gz
        Evidence: sub-010002_ses-01_inv-2_mp2rage.nii.gz
      /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.json
        Evidence: sub-010003_ses-01_inv-1_mp2rage.json
      /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.nii.gz
        Evidence: sub-010003_ses-01_inv-1_mp2rage.nii.gz
      ... and 1710 more files having this issue (Use --verbose to see them all).
    
    2: Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
      /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1map.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1w.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_defacemask.nii.gz
      /sub-010001/ses-01/dwi/sub-010001_ses-01_dwi.bval
      ... and 8624 more files having this issue (Use --verbose to see them all).
    
    3: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1map.nii.gz
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1w.nii.gz
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_defacemask.nii.gz
      /sub-010045/ses-01/dwi/sub-010045_ses-01_dwi.nii.gz
      /sub-010087/ses-02/func/sub-010087_ses-02_task-rest_acq-PA_run-01_bold.nii.gz
      /sub-010189/ses-02/anat/sub-010189_ses-02_acq-lowres_FLAIR.nii.gz
      /sub-010201/ses-02/func/sub-010201_ses-02_task-rest_acq-PA_run-02_bold.nii.gz
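    Issue 38 above (INCONSISTENT_SUBJECTS) is easy to reproduce locally by comparing each subject's file set after substituting the subject label out of the filenames. A minimal sketch using only the Python standard library; the function name and the "{sub}" placeholder are assumptions, not part of the validator:

```python
from pathlib import Path

def inconsistent_subjects(bids_root):
    """Return, per subject, the files not shared by every subject.

    BIDS filenames embed the subject label (e.g. sub-010001_ses-01_T2w.nii.gz),
    so the label is replaced with a placeholder before comparing sets.
    """
    root = Path(bids_root)
    per_subject = {}
    for sub in sorted(root.glob("sub-*")):
        if not sub.is_dir():
            continue
        rel = {p.relative_to(sub).as_posix().replace(sub.name, "{sub}")
               for p in sub.rglob("*") if p.is_file()}
        per_subject[sub.name] = rel
    if not per_subject:
        return {}
    common = set.intersection(*per_subject.values())
    return {name: sorted(files - common)
            for name, files in per_subject.items() if files - common}
```

    Each returned entry lists the files that subject has beyond the core shared by all subjects.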
    
      Summary: 14714 Files, 390.74GB; 318 Subjects; 2 Sessions
      Available Tasks: Rest
      Available Modalities: FLAIR, T1map, T1w, defacemask, bold, T2w, dwi, fieldmap
    
  4. ds000213_R1.0.2

    • openneuro.org
    Updated Jul 18, 2018
    Cite
    Xiao Gao; Xiao Deng; Xin Wen; Ying She; Petra Corianne Vinke; Hong Chen (2018). ds000213_R1.0.2 [Dataset]. https://openneuro.org/datasets/ds000213/versions/00001
    Dataset updated
    Jul 18, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Xiao Gao; Xiao Deng; Xin Wen; Ying She; Petra Corianne Vinke; Hong Chen
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Comments added by Openfmri Curators

    ===========================================

    Defacing

    Quality Control

    MRIQC was not run on this dataset due to issues we are having with the software. It will be included in the next revision.

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and the dsXXXXXX accession number.
    3) Send an email to submissions@openfmri.org. Please include the dsXXXXXX accession number in your email.

    Bids-validator Output

    1: You should define 'SliceTiming' for this file. If you don't provide this information, slice time correction will not be possible. For each file below, 'SliceTiming' can be included in /task-socialcomparison_bold.json or in the corresponding subject-level sidecar. (code: 13 - SLICE_TIMING_NOT_DEFINED)
      /sub-01/func/sub-01_task-socialcomparison_bold.nii.gz
      /sub-02/func/sub-02_task-socialcomparison_bold.nii.gz
      /sub-03/func/sub-03_task-socialcomparison_bold.nii.gz
      /sub-04/func/sub-04_task-socialcomparison_bold.nii.gz
      /sub-05/func/sub-05_task-socialcomparison_bold.nii.gz
      /sub-06/func/sub-06_task-socialcomparison_bold.nii.gz
      /sub-07/func/sub-07_task-socialcomparison_bold.nii.gz
      /sub-08/func/sub-08_task-socialcomparison_bold.nii.gz
      /sub-09/func/sub-09_task-socialcomparison_bold.nii.gz
      /sub-10/func/sub-10_task-socialcomparison_bold.nii.gz
      /sub-11/func/sub-11_task-socialcomparison_bold.nii.gz
      /sub-12/func/sub-12_task-socialcomparison_bold.nii.gz
      /sub-13/func/sub-13_task-socialcomparison_bold.nii.gz
      /sub-14/func/sub-14_task-socialcomparison_bold.nii.gz
      /sub-15/func/sub-15_task-socialcomparison_bold.nii.gz
      /sub-16/func/sub-16_task-socialcomparison_bold.nii.gz
      /sub-19/func/sub-19_task-socialcomparison_bold.nii.gz
      /sub-20/func/sub-20_task-socialcomparison_bold.nii.gz
      /sub-21/func/sub-21_task-socialcomparison_bold.nii.gz
      /sub-22/func/sub-22_task-socialcomparison_bold.nii.gz
      /sub-23/func/sub-23_task-socialcomparison_bold.nii.gz
      /sub-24/func/sub-24_task-socialcomparison_bold.nii.gz
      /sub-25/func/sub-25_task-socialcomparison_bold.nii.gz
      /sub-26/func/sub-26_task-socialcomparison_bold.nii.gz
      (output truncated in the original listing)
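    The missing metadata can be supplied once for all runs with a task-level JSON sidecar at the dataset root, per the BIDS inheritance principle. A sketch of writing such a file; the slice count, slice order, and TR below are hypothetical, not taken from this dataset:

```python
import json

# Hypothetical acquisition: 30 slices, TR = 2.0 s, ascending order.
n_slices, tr = 30, 2.0
slice_timing = [i * tr / n_slices for i in range(n_slices)]

sidecar = {
    "RepetitionTime": tr,
    "SliceTiming": slice_timing,  # onset of each slice within one TR, in seconds
}

# Placed at the dataset root, this applies to every matching BOLD run.
with open("task-socialcomparison_bold.json", "w") as f:
    json.dump(sidecar, f, indent=2)
```

    A subject-level sidecar such as /sub-01/func/sub-01_task-socialcomparison_bold.json would override this file for that run only.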
    
  5. ds000249_R1.0.0

    • openneuro.org
    Updated Jul 17, 2018
    + more versions
    Cite
    Agnes Norbury; Ben Seymour (2018). ds000249_R1.0.0 [Dataset]. https://openneuro.org/datasets/ds000249/versions/00001
    Dataset updated
    Jul 17, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Agnes Norbury; Ben Seymour
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/stable/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000249/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000249.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    BIDS validator output:

    1: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
      ./sub-02/func/sub-02_task-genInstrAv_run-01_bold.nii.gz
      ./sub-22/anat/sub-22_T1w.nii.gz

      Summary: 577 Files, 3.45GB; 26 Subjects; 1 Session
      Available Tasks: genInstrAv
      Available Modalities: T1w, bold, fieldmap
    

    Known Issues:

  6. Data from: Prediction Error

    • openneuro.org
    Updated Apr 9, 2025
    Cite
    Lukas Gehrke; Sezen Akman; Albert Chen; Pedro Lopes; Klaus Gramann (2025). Prediction Error [Dataset]. http://doi.org/10.18112/openneuro.ds003846.v2.0.0
    Dataset updated
    Apr 9, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Lukas Gehrke; Sezen Akman; Albert Chen; Pedro Lopes; Klaus Gramann
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Readme

    In case of any questions, please contact: Lukas Gehrke, lukas.gehrke@tu-berlin.de, orcid: 0000-0003-3661-1973

    Overview

    Cyber-Physical Systems: Prediction Error

    These data were collected at https://www.tu.berlin/bpn. Data collection occurred either between 10:00 and 12:00 or between 14:00 and 18:00.

    To learn about the task, independent-, dependent-, and control variables, please consult the methods sections of the following two publications:

    https://dl.acm.org/doi/abs/10.1145/3290605.3300657 https://iopscience.iop.org/article/10.1088/1741-2552/ac69bc/meta

    • Contents of the dataset: Output from BIDS-validator

    Summary: 324 Files, 9.76GB; 19 Subjects; 5 Sessions
    Available Tasks: PredictionError
    Available Modalities: EEG

    • [ ] Quality assessment of the data: Link to data paper, once done

    Methods

    Subjects

    The study sample consists of 19 participants (participant_id 1 to 19) with ages ranging from 18 to 34 years and varying cap sizes from 54 to 60. Stimulation is delivered in three blocks: Block_1, Block_2, and Block_3, utilizing different combinations of Visual, Vibro, and EMS.

    Participant Information:

    • Age: ranges from 18 to 34 years.
    • Cap size: varies from 54 to 60.
    • Stimulation blocks: Block_1 and Block_2 include Visual, Visual + Vibro, and Visual + Vibro + EMS; Block_3 primarily involves Visual + Vibro + EMS.

    Usage of stimulation blocks: most participants experience Visual stimulation in all blocks; Visual + Vibro is common in Block_1 and Block_2; Visual + Vibro + EMS is prevalent in Block_3. Some participants did not experience certain blocks (indicated by "0").

    Other observations: cap size variation doesn't show a clear pattern in relation to stimulation blocks, and participants exhibit diverse, individualized stimulation patterns.

    Task, Environment and Variables

    This set of variables outlines key parameters in a neuroscience experiment involving a haptic task. Here's a summary:

    • box: the target object to be touched following its spawn. Units: string.
    • normal_or_conflict: the behavior of the target object in the current trial, distinguishing oddball from non-oddball conditions. Units: string.
    • condition: the level of haptic realism in the experiment. Units: string.
    • cube: the position of the target object (left, right, or center). Units: string.
    • trial_nr: the number of the current trial in the experiment. Units: integer.
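    In a BIDS layout, trial variables like these typically live in per-run _events.tsv files. A minimal parsing sketch using only the standard library; the row values below are hypothetical, not taken from this dataset:

```python
import csv, io

# Hypothetical excerpt of a *_events.tsv with the columns described above.
tsv = """box\tnormal_or_conflict\tcondition\tcube\ttrial_nr
box_spawned\tnormal\tVisual\tcenter\t1
box_spawned\tconflict\tVisual + Vibro\tleft\t2
"""

trials = []
for row in csv.DictReader(io.StringIO(tsv), delimiter="\t"):
    row["trial_nr"] = int(row["trial_nr"])  # integer-typed per the description
    trials.append(row)

# Oddball (prediction-error) trials are marked "conflict".
oddballs = [t for t in trials if t["normal_or_conflict"] == "conflict"]
```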

    Apparatus

    Here's a summary of the recording environment:

    • EEG Stream Name: BrainVision
    • EEG Reference and Ground: FCz and AFz, respectively
    • EEG Channel Locations: 63 channels with specific names (e.g., Fp1, Fz, Pz) and types (EEG)
    • Additional Channels: 1 EOG (Electrooculogram)
    • Power Line Frequency: 50 Hz
    • Manufacturer: Brain Products
    • Manufacturer's Model Name: BrainAmp DC
    • Cap Manufacturer: EasyCap
    • Cap Model Name: actiCap 64ch CACS-64
    • EEG Placement Scheme: Positions chosen from a 10% system
    • Channel Counts:
      • EEG Channels: 63
      • EOG Channels: 1
      • ECG Channels: 0
      • EMG Channels: 0
      • Miscellaneous Channels: 0
      • Trigger Channels: 0

    This configuration indicates a high-density EEG setup with specific electrode placements, utilizing Brain Products' BrainAmp DC model. The electrode cap is manufactured by EasyCap, with the specific model name actiCap 64ch CACS-64. The EEG data is sampled at an unspecified frequency, and the system is designed to capture electrical brain activity across a comprehensive set of channels. The recording includes an additional channel for recording eye movements (EOG). Overall, the setup appears suitable for detailed EEG investigations in neurophysiological research.

    The motion capture recording environment uses two devices: "rigid_head" and "rigid_handr," which correspond to "HTCViveHead" and "HTCViveRightHand" in the BIDS (Brain Imaging Data Structure) naming convention. The tracked points include "Head" and "handR." The motion data is captured using quaternions with channels named "quat_X," "quat_Y," "quat_Z," and "quat_W." Positional data includes channels "_X," "_Y," and "_Z." The system is manufactured by HTC, with the model name "Vive," and the recording has a sampling frequency of 90 Hz. Additional information such as software versions is not provided.
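As an illustration of working with the quaternion channels described above (quat_X, quat_Y, quat_Z, quat_W), the helpers below normalize a quaternion and extract a heading angle. The function names are hypothetical, and which axis counts as "vertical" depends on the coordinate convention of the tracking system.

```python
import math

def quat_normalize(x, y, z, w):
    """Return a unit quaternion; channel order matches quat_X, quat_Y, quat_Z, quat_W."""
    n = math.sqrt(x * x + y * y + z * z + w * w)
    return (x / n, y / n, z / n, w / n)

def quat_to_yaw(x, y, z, w):
    """Rotation about the z-axis (radians) from a unit quaternion.
    Assumes z is the vertical axis; adjust for the tracker's convention."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

# A 90-degree rotation about the z-axis:
x, y, z, w = quat_normalize(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
yaw_deg = math.degrees(quat_to_yaw(x, y, z, w))
```

Normalizing before any angle extraction matters in practice, since streamed quaternion samples can drift slightly from unit length.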

  7. Data from: A longitudinal neuroimaging dataset on language processing in...

    • openneuro.org
    Updated Apr 26, 2021
    Cite
    Jin Wang; Marisa N. Lytle; Yael Weiss; Brianna L. Yamasaki; James R. Booth (2021). A longitudinal neuroimaging dataset on language processing in children ages 5, 7, and 9 years old [Dataset]. http://doi.org/10.18112/openneuro.ds003604.v1.0.1
    Explore at:
    Dataset updated
    Apr 26, 2021
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Jin Wang; Marisa N. Lytle; Yael Weiss; Brianna L. Yamasaki; James R. Booth
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Known Issues

    • The BIDS validator warns that some stimuli files are not included in events.tsv. This is due to the two extra circle figures, which were used to keep participants' attention and remind them to respond in a timely manner.
    • The BIDS validator warns about inconsistent subjects and missing sessions, because not all subjects could attend all tasks and all sessions.
    • The BIDS validator warns about missing magnitude files. We did not collect magnitude images during the fieldmap scan; we only collected phasediff images.
    • The BIDS validator warns that not all subjects/sessions/runs have the same scanning parameters. This is mainly because we added 6 volumes to the end of the functional runs after a small subset of initial participants. There are a few T1-weighted images with varied resolution, but all within ±0.05 mm. A few diffusion-weighted images have 69 or 65 volumes instead of the standard 70. These differences are likely due to occasional scanner updates at the scanning center.
    • We manually created json files for 29 functional runs and 4 anatomical runs because we converted them using spm8 and the original DICOMs were later lost:
      ./sub-5085/ses-5/anat/sub-5085_ses-5_acq-D1S1_T1w.json
      ./sub-5085/ses-5/anat/sub-5085_ses-5_acq-D1S3_T1w.json
      ./sub-5085/ses-5/anat/sub-5085_ses-5_acq-D1S2_T1w.json
      ./sub-5347/ses-5/anat/sub-5347_ses-5_acq-D1S1_T1w.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Phon_acq-D1S4_run-01_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Phon_acq-D1S5_run-02_bold.json
      ./sub-5032/ses-7/func/sub-5032_ses-7_task-Phon_acq-D2S5_run-01_bold.json
      ./sub-5365/ses-9/func/sub-5365_ses-9_task-Phon_acq-D1S4_run-01_bold.json
      ./sub-5365/ses-9/func/sub-5365_ses-9_task-Phon_acq-D1S3_run-02_bold.json
      ./sub-5211/ses-9/func/sub-5211_ses-9_task-Phon_acq-D1S5_run-02_bold.json
      ./sub-5211/ses-9/func/sub-5211_ses-9_task-Phon_acq-D1S6_run-01_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Sem_acq-D3S4_run-02_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Sem_acq-D3S3_run-01_bold.json
      ./sub-5061/ses-5/func/sub-5061_ses-5_task-Sem_acq-D3S4_run-02_bold.json
      ./sub-5061/ses-5/func/sub-5061_ses-5_task-Sem_acq-D3S3_run-01_bold.json
      ./sub-5061/ses-5/func/sub-5061_ses-5_task-Sem_acq-D3S7_run-02_bold.json
      ./sub-5347/ses-5/func/sub-5347_ses-5_task-Sem_acq-D1S6_run-01_bold.json
      ./sub-5347/ses-5/func/sub-5347_ses-5_task-Sem_acq-D1S5_run-02_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Gram_acq-D1S7_run-02_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Gram_acq-D1S6_run-01_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Gram_acq-D3S5_run-02_bold.json
      ./sub-5032/ses-7/func/sub-5032_ses-7_task-Gram_acq-D2S4_run-02_bold.json
      ./sub-5032/ses-7/func/sub-5032_ses-7_task-Gram_acq-D2S3_run-01_bold.json
      ./sub-5211/ses-9/func/sub-5211_ses-9_task-Gram_acq-D1S3_run-02_bold.json
      ./sub-5211/ses-9/func/sub-5211_ses-9_task-Gram_acq-D1S4_run-01_bold.json
      ./sub-5365/ses-9/func/sub-5365_ses-9_task-Gram_acq-D1S6_run-01_bold.json
      ./sub-5365/ses-9/func/sub-5365_ses-9_task-Gram_acq-D1S5_run-02_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Plaus_acq-D2S3_run-01_bold.json
      ./sub-5085/ses-5/func/sub-5085_ses-5_task-Plaus_acq-D2S4_run-02_bold.json
      ./sub-5061/ses-5/func/sub-5061_ses-5_task-Plaus_acq-D3S5_run-01_bold.json
      ./sub-5061/ses-5/func/sub-5061_ses-5_task-Plaus_acq-D3S6_run-02_bold.json
      ./sub-5347/ses-5/func/sub-5347_ses-5_task-Plaus_acq-D1S4_run-01_bold.json
      ./sub-5347/ses-5/func/sub-5347_ses-5_task-Plaus_acq-D1S3_run-02_bold.json

    • The calculation of reaction time (rt) and accuracy (acc) for each condition within each run for each participant is documented in ./derivatives/func_mv_acc_rt/Acc_RT_Calculation.doc
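The documented calculation lives in the .doc file above. As a generic sketch of per-condition accuracy and mean reaction time (condition names and values below are invented for illustration, not taken from the dataset):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical trial records; condition names and values are invented.
trials = [
    {"condition": "congruent", "rt": 0.42, "correct": True},
    {"condition": "congruent", "rt": 0.51, "correct": True},
    {"condition": "incongruent", "rt": 0.63, "correct": False},
    {"condition": "incongruent", "rt": 0.58, "correct": True},
]

# Group trials by condition.
by_cond = defaultdict(list)
for t in trials:
    by_cond[t["condition"]].append(t)

# Per-condition accuracy, and mean RT computed over correct trials only
# (a common convention; the dataset's .doc documents the actual method).
results = {}
for cond, ts in by_cond.items():
    acc = mean(1.0 if t["correct"] else 0.0 for t in ts)
    rt = mean(t["rt"] for t in ts if t["correct"])
    results[cond] = (acc, rt)
```

Restricting RT to correct trials is one common convention, noted here as an assumption rather than the dataset's documented choice.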

  8. Value generalization in human avoidance learning

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Agnes Norbury; Ben Seymour (2018). Value generalization in human avoidance learning [Dataset]. https://openneuro.org/datasets/ds000249/versions/00002
    Explore at:
    Dataset updated
    Jul 17, 2018
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Agnes Norbury; Ben Seymour
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/stable/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000249/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000249.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    BIDS validator output:

    1: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS) ./sub-02/func/sub-02_task-genInstrAv_run-01_bold.nii.gz ./sub-22/anat/sub-22_T1w.nii.gz

      Summary:              Available Tasks:    Available Modalities:
      577 Files, 3.45GB     genInstrAv          T1w
      26 - Subjects                             bold
      1 - Session                               fieldmap
    

    Known Issues:

  9. Chisco

    • openneuro.org
    Updated Dec 12, 2024
    Cite
    Zihan Zhang; Yi Zhao; Yu Bao; Xiao Ding (2024). Chisco [Dataset]. http://doi.org/10.18112/openneuro.ds005170.v1.1.2
    Explore at:
    Dataset updated
    Dec 12, 2024
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Zihan Zhang; Yi Zhao; Yu Bao; Xiao Ding
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Chisco Dataset

    This is a Chinese imagined speech dataset with five participants, identified as sub-01 to sub-05. It includes raw data and preprocessed data in both fif and pkl formats. More information can be found at https://github.com/zhangzihan-is-good/Chisco

    Supplementary Information

    The initial dataset release encompassed data from three participants (sub-01 to sub-03) as detailed in related Chisco publications. Subsequently, data from two additional subjects (sub-04 and sub-05) were incorporated. During the interval between the original dataset release and the addition of the new data, the BIDS protocol underwent updates. To preserve the integrity of the data processing code presented in our publications, the supplementary data continue to adhere to the previous version of the BIDS protocol. Consequently, the BIDS validator on our website may report errors; however, these do not compromise the usability of the dataset.

    Future releases will include data from sub-06 and sub-07, who participated under a new experimental paradigm. These will be published as part of a new dataset, Chisco 2.0. We invite you to stay tuned for further updates.

    Dataset Structure

    Root Directory

    • dataset_description.json
    • participants.tsv
    • README
    • derivatives/
    • sub-01/ to sub-05/
    • textdataset/
    • json/

    Raw Data

    The root directory contains folders sub-01 to sub-05 with raw data. Each participant's folder contains 5-6 session folders, corresponding to data collected over 5-6 days.

    Preprocessed Data

    Preprocessed data is stored in the derivatives folder in both fif and pkl formats.

    Text Data

    The textdataset folder and json folder contain text data used to stimulate the participants.

    File Structure

    /Chisco
      /sub-01
        /ses-01
          /eeg
            sub-01_ses-01_task-imagine_eeg.edf
        ...
      /sub-02
        ...
      /sub-03
        ...
      /derivatives
        /fif
          /sub-01
            ...
          /sub-02
            ...
          /sub-03
            ...
        /pkl
          /sub-01
            ...
          /sub-02
            ...
          /sub-03
            ...
      /textdataset
        ...
      /json
        ...
      dataset_description.json
      README
      participants.tsv
    
    
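A small sketch for enumerating raw recordings under the layout shown above. The helper name is hypothetical; the glob pattern follows the tree (sub-*/ses-*/eeg/*_task-imagine_eeg.edf):

```python
from pathlib import Path

def list_raw_eeg(root):
    """List raw EDF recordings under the Chisco-style layout:
    <root>/sub-*/ses-*/eeg/*_task-imagine_eeg.edf
    (helper name is illustrative, not part of the dataset)."""
    root = Path(root)
    return sorted(str(p.relative_to(root))
                  for p in root.glob("sub-*/ses-*/eeg/*_task-imagine_eeg.edf"))
```

The same pattern with derivatives/fif/sub-*/... would enumerate the preprocessed fif files instead.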

    License

    This dataset is licensed under the CC0 license. You are free to use the dataset for non-commercial purposes, but the original authors should be properly credited.

    Citation

    If you use this dataset in your research, please cite the following link:

    https://github.com/zhangzihan-is-good/Chisco

    Contact Information

    For any questions, please contact the dataset authors. Thank you for using Chisco!

  10. Whole-brain background-suppressed pCASL MRI with 1D-accelerated 3D RARE...

    • openneuro.org
    Updated Dec 4, 2019
    Cite
    Marta Vidorreta; Ze Wang; Yulin V. Chang; Maria A. Fernandez-Seara; John A. Detre (2019). Whole-brain background-suppressed pCASL MRI with 1D-accelerated 3D RARE Stack-Of-Spirals Readout- Dataset 3 [Dataset]. http://doi.org/10.18112/openneuro.ds000236.v2.0.1
    Explore at:
    Dataset updated
    Dec 4, 2019
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Marta Vidorreta; Ze Wang; Yulin V. Chang; Maria A. Fernandez-Seara; John A. Detre
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Description of the ASL sequence A sequence with pseudo-continuous labeling, background suppression and 3D RARE Stack-Of-Spirals readout with optional through-plane acceleration was implemented for this study. At the beginning of the sequence, gradients were rapidly played with alternating polarity to correct for their delay in the spiral trajectories, followed by two preparation TRs, to allow the signal to reach the steady state. A non-accelerated readout was played during the preparation TRs, in order to obtain a fully sampled k-space dataset, used for calibration of the parallel imaging reconstruction kernel, needed to reconstruct the skipped kz partitions in the accelerated images.

    Description of study Perfusion data were acquired on an elderly cohort using the single-shot, accelerated sequence. For each participant, first a high-resolution anatomical T1-weighted image was acquired with a magnetization prepared rapid gradient echo (MPRAGE) sequence. Resting perfusion data were acquired with a 1-shot 1D-accelerated readout for a total scan duration of 5 min, with labeling and PLD times of 1.5 and 1.5 s. Two M0 images with long TR and no magnetization preparation were acquired per run for CBF quantification purposes.
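The M0 images collected here are the normalizing term in the standard single-compartment CBF quantification (the ASL consensus "white paper" model). A minimal sketch follows, using this dataset's labeling duration and PLD (1.5 s each); the blood T1, labeling efficiency, and partition coefficient values are typical assumed defaults, not parameters reported by this dataset:

```python
import math

def cbf_single_pld(delta_m, m0, tau=1.5, pld=1.5,
                   t1_blood=1.65, alpha=0.85, lambda_bp=0.9):
    """Single-compartment pCASL CBF estimate in mL/100g/min.

    delta_m   : label-control difference signal
    m0        : equilibrium magnetization from the M0 image
    tau, pld  : labeling duration and post-labeling delay (s), per this dataset
    t1_blood, alpha, lambda_bp : assumed blood T1 (s), labeling efficiency,
                                 and blood-brain partition coefficient
    """
    num = 6000.0 * lambda_bp * delta_m * math.exp(pld / t1_blood)
    den = 2.0 * alpha * t1_blood * m0 * (1.0 - math.exp(-tau / t1_blood))
    return num / den
```

With delta_m around 1% of M0, this yields values in the usual gray-matter range of several tens of mL/100g/min; in practice the formula is applied voxel-wise to the difference and M0 volumes.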

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000236/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000236.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    Known Issues

    N/A

    Bids-validator Output

    1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in /derivatives folder. (code: 1 - NOT_INCLUDED)
      /sub-01/func/sub-01_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-02/func/sub-02_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-03/func/sub-03_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-04/func/sub-04_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-05/func/sub-05_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-06/func/sub-06_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-07/func/sub-07_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-08/func/sub-08_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-09/func/sub-09_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-10/func/sub-10_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-11/func/sub-11_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-12/func/sub-12_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-13/func/sub-13_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-14/func/sub-14_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-15/func/sub-15_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-16/func/sub-16_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-17/func/sub-17_task-rest_acq-1Daccel1shot_asl.nii.gz
      /sub-18/func/sub-18_task-rest_acq-1Daccel1shot_asl.nii.gz
      /task-rest_asl.json

      Summary:              Available Tasks:    Available Modalities:
      61 Files, 915.87MB                        T1w
      18 - Subjects
      1 - Session
    
  11. Data from: Collaborations and deceptions in strategic interactions revealed...

    • openneuro.org
    Updated Apr 20, 2022
    Cite
    Siao-Shan Shen; Jen-Tang Cheng; I-Jeng Hsu; Der-Yow Chen; Ming-Hung Weng; Chun-Chia Kung (2022). Collaborations and deceptions in strategic interactions revealed by hyperscanning fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds004103.v1.0.0
    Explore at:
    Dataset updated
    Apr 20, 2022
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Siao-Shan Shen; Jen-Tang Cheng; I-Jeng Hsu; Der-Yow Chen; Ming-Hung Weng; Chun-Chia Kung
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collaborations and deceptions in strategic interactions revealed by hyperscanning fMRI

    Aims:

    The current study aims to investigate the neural mechanisms of interpersonal collaborations and deceptions, with an Opening Treasure Chest (OTC) game under the fMRI hyperscanning setup.

    Methods

    fMRI: In this hyperscanning fMRI study, the participant pairs (n=33) from Taipei and Tainan played an opening-treasure-chest (OTC) game, in which the dyads took alternating turns as senders (to inform) and receivers (to decide) in guessing the right chest. In the cooperation condition, a successful guess split the $200NTD trial reward between the pair, thereby promoting mutual trust. In the competition condition, by contrast, the receiver took the entire $150NTD reward upon winning, thereby encouraging strategic interactions.

    General findings and importance:

    For fMRI, the GLM contrasts reaffirmed the three documented sub-networks related to social deception: theory-of-mind (ToM), executive control, and reward processing. Another key finding was the negative correlation between senders' lying rates and the connectivity of the right temporo-parietal junction (rTPJ, known as a ToM region) with emotion-related regions, including the amygdala, parahippocampal gyrus, and rostral anterior cingulate (rACC). Furthermore, Multi-Voxel Pattern Analysis (MVPA) over multiple searchlight-identified Regions Of Interest (ROIs) achieved 61% accuracy in classifying "truth-telling vs. lying in $150" and 84.5% in classifying "truthful in $200 vs. truthful in $150". Lastly, principal component analysis (PCA) reduced these high-dimensional fMRI data to the same levels of accuracy with fewer than 200 and fewer than 10 components, respectively, suggesting that individual differences largely explain the suboptimal results. To sum up, these results reveal the neural substrates underpinning idiosyncratic social deception in dyadic interactions.
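As a toy illustration of the dimensionality-reduction-plus-classification idea described above, the sketch below runs PCA (via SVD) followed by leave-one-out nearest-class-mean classification on synthetic "voxel patterns". This is not the study's pipeline; the data, shift size, and classifier are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for ROI voxel patterns: 60 trials x 200 voxels,
# with a mean shift between the two conditions (labels y).
X = rng.standard_normal((60, 200))
y = np.repeat([0, 1], 30)
X[y == 1] += 0.6

def pca_reduce(X, k):
    """Project onto the top-k principal components (via SVD of centered data)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def nearest_mean_acc(Z, y):
    """Leave-one-out nearest-class-mean classification accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        m0 = Z[mask & (y == 0)].mean(axis=0)
        m1 = Z[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(Z[i] - m1) < np.linalg.norm(Z[i] - m0))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_mean_acc(pca_reduce(X, 10), y)
```

Here the class-mean difference is large enough that the top components capture it; with subtler effects (as in real fMRI), accuracy depends heavily on how many components are retained, which is the trade-off the paragraph above describes.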

    Sample Size

    Sixty-six (33 pairs) participants, between 20 and 30 years of age (M=23.4, SD=2.9), participated in the study.

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.

    Known Issues

    Bids-validator Output

  12. Female action video game players.

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Diana Gorbet; Lauren Sergio (2018). Female action video game players. [Dataset]. https://openneuro.org/datasets/ds000253/versions/00002
    Explore at:
    Dataset updated
    Jul 17, 2018
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Diana Gorbet; Lauren Sergio
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/stable/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and dsXXXXXX.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    BIDS-Validator output:

    1: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS) ./sub-01/func/sub-01_task-localizer_bold.nii.gz The most common set of dimensions is: 96,96,39,316 (voxels), This file has the dimensions: 96,96,39,260 (voxels). ./sub-02/func/sub-02_task-localizer_bold.nii.gz The most common set of dimensions is: 96,96,39,316 (voxels), This file has the dimensions: 96,96,39,388 (voxels).
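The validator's consistency check (code 39) can be mimicked on recorded image dimensions. A sketch using the dimensions reported above; the most common set (96, 96, 39, 316) and the two deviants are from the validator output, while the other filenames are placeholders:

```python
from collections import Counter

def flag_inconsistent(dims_by_file):
    """Report files whose image dimensions deviate from the most common set,
    mirroring BIDS-validator check 39 (INCONSISTENT_PARAMETERS)."""
    most_common, _ = Counter(dims_by_file.values()).most_common(1)[0]
    return {f: d for f, d in dims_by_file.items() if d != most_common}

# Dimensions as (x, y, z, volumes); sub-03 through sub-05 are placeholder
# entries assumed to carry the most common set.
dims = {
    "sub-01_task-localizer_bold.nii.gz": (96, 96, 39, 260),
    "sub-02_task-localizer_bold.nii.gz": (96, 96, 39, 388),
    "sub-03_task-localizer_bold.nii.gz": (96, 96, 39, 316),
    "sub-04_task-localizer_bold.nii.gz": (96, 96, 39, 316),
    "sub-05_task-localizer_bold.nii.gz": (96, 96, 39, 316),
}
flagged = flag_inconsistent(dims)
```

In practice the dimension tuples would come from reading each NIfTI header (e.g. with nibabel) rather than being typed in.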

      Summary:         Available Tasks:     Available Modalities:
      327 Files, 10.44GB    Experimental Run 1    T1w
      20 - Subjects       Experimental Run 2    bold
      1 - Session        Experimental Run 3
                   Experimental Run 4
                   Localizer
    

    Known Issues

  13. Data from: Adjudicating between face-coding models with individual-face fMRI...

    • openneuro.org
    Updated Jul 16, 2018
    Cite
    Johan D Carlin; Nikolaus Kriegeskorte (2018). Adjudicating between face-coding models with individual-face fMRI responses [Dataset]. https://openneuro.org/datasets/ds000232/versions/00001
    Explore at:
    Dataset updated
    Jul 16, 2018
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Johan D Carlin; Nikolaus Kriegeskorte
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is a human fMRI dataset that investigates coding of individual faces in the visual cortex of healthy human volunteers. We also include behavioral data from a similarity judgment task, and computational models that can be used to fit both data modalities.

    See the associated reference (Carlin & Kriegeskorte, in press, PLOS CB) for details about the experimental protocol. In this README we focus on technical details that may be useful for re-analysing the data.

    The behavioral and fMRI distance matrices as well as the computational modeling efforts have been shared previously at OSF (https://osf.io/5g9rv) and zenodo (https://doi.org/10.5281/zenodo.242666), so this may be an easier way to go if you don't want to re-run the entire fMRI preprocessing pipeline.

    DICOM TO BIDS CONVERSION DCM files were converted to nifti using dcm2niix v1.0.20170130 (https://github.com/rordenlab/dcm2niix), and dcm2bids (https://github.com/jooh/Dcm2Bids). Anatomical images were de-faced using pydeface (https://github.com/poldracklab/pydeface).

    DATA ANALYSIS SETUP We analysed data using Matlab R2013A, SPM, FSL, and various custom software developed in Matlab. The following packages (and their associated dependencies) are necessary to get the included analysis code to run:

    In general, the AA pipeline generates all fMRI results and figures (Figs 1-2, S3-S4 Figs in the manuscript). We then extracted fMRI distance matrices from cortical regions of interests for further computational modeling (remaining figures in the manuscript).

    KEY FILES

    • facedist_aa_frombids.m: master function for running the AA fMRI analysis pipeline.
    • facedist_aa_frombids_tasklist.xml: specifies which AA modules to run. Note that the roiroot flag specifies an absolute path that will need updating for your file system.
    • facedist_doit_facepairs: master function for running the behavioral similarity judgment analysis.
    • facedist_doit_modeling: master function for running the computational model fits (you will need to run through facedist_aa_frombids and facedist_doit_facepairs first to generate intermediate results).
    • derivatives/aa/aap_prov.png: nice visualisation of fMRI result provenance.
    • derivatives/rois: ROI masks for the fROI analysis (if you want to re-define ROIs from the localiser data you can do so using https://github.com/jooh/roitools/blob/master/spm2roi.m).
    • misc/data_perceptual_judgment_task.mat: data from the behavioral similarity judgment task.
    • misc/stimuli_mri.mat: video stimuli used during MRI scanning.
    • misc/stimuli_perceptual_judgments.mat: video stimuli used during the behavioral task.

    A NOTE ON REPRODUCIBILITY If you run the above pipeline you will obtain results that are very similar to those in the manuscript (which, again, are publicly available on OSF/Zenodo), but not identical. This is because of the following differences with regard to the analysis in the paper:

    • The paper analysis used SPM8, not 12
    • The paper analysis used SPM dicom conversion, not dcm2niix
    • The paper analysis included an extraneous conversion of the floating-point precision of the niftis during preprocessing, which, it turns out, did nothing but blow up file size.
    • The paper analysis used a bastardised version of AA4, not 5, which probably introduces lots of subtle differences in the preprocessing parameters (for this AA version, see https://github.com/jooh/automaticanalysis/tree/v4-master)
    • Lots of other dependencies (including pilab) continued to be developed and improved.

    Note that in particular, the ROI masks were generated using the old analysis, so the results could definitely be improved by re-running ROI definition, if someone has a few days to spare... But again, discrepancies are very small and do not qualitatively change any conclusions made in the paper. Exact reproducibility in neuroimaging is hard. If you want to inspect the AA analysis that is reported in the paper, please get in touch and we will see if there is a way to convince the MRC to let you have access to non-anonymous data.

    REFERENCE

    Carlin, J.D & Kriegeskorte, N. (in press). Adjudicating between face-coding models with individual-face fMRI responses. PLOS Computational Biology. See BioRXiv for a preprint (2017, original version 2015): https://doi.org/10.1101/029603

    CONTACT

    Johan Carlin, MRC CBU, Cambridge, UK. johan.carlin@gmail.com

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000232/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000232.
    3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

    Known Issues

    N/A

    Bids-validator Output

      1: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
        /sub-02/ses-01/anat/sub-02_ses-01_T1w.nii.gz
        /sub-02/ses-01/func/sub-02_ses-01_task-localizer_run-01_bold.nii.gz
        /sub-02/ses-01/func/sub-02_ses-01_task-localizer_run-02_bold.nii.gz
        /sub-02/ses-01/func/sub-02_ses-01_task-main_run-01_bold.nii.gz
        /sub-02/ses-01/func/sub-02_ses-01_task-main_run-02_bold.nii.gz
        /sub-10/ses-01/anat/sub-10_ses-01_T1w.nii.gz
    
      Summary:          Available Tasks:    Available Modalities:
      1120 Files, 34.38GB    localizer        T1w
      10 - Subjects       main          bold
      4 - Sessions
    
  14. EUPD Cyberball

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Stephen Giles; Jeremy Hall; Merrick Pope; Katie Nicol; Liana Romaniuk (2018). EUPD Cyberball [Dataset]. https://openneuro.org/datasets/ds000214/versions/00001
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Participants Twenty people with borderline personality disorder were recruited from outpatient and support services around Edinburgh, Scotland. Diagnoses were confirmed using the Structured Clinical Interview for DSM-IV (SCID-II). Current symptoms were assessed using the Zanarini Rating Scale for Borderline Personality Disorder (ZAN-BPD [1]). Adverse childhood events were assessed using the Childhood Trauma Questionnaire (CTQ [2]). Fifteen BPD participants were receiving antidepressant medication and twelve were taking antipsychotic medication. Twenty age- and sex-matched controls were recruited from the community; however, four were excluded due to technical issues during scanning, leaving sixteen controls. Exclusion criteria for all participants included pregnancy, MRI contraindications, diagnosis of a psychotic disorder, previous head injury or current illicit substance dependence. Controls met the additional criterion of no personal or familial history of major mental illness. Ethical approval was obtained from the Lothian National Health Service Research Ethics Committee, and all participants provided written informed consent before taking part.

    Experimental task Participants performed the Cyberball social exclusion task [3] during functional magnetic resonance imaging (fMRI), adapted from a previous implementation by Kumar et al 2009 [4]. The task involves playing “catch” with two computer-controlled players, during which the participant can be systematically included in or excluded from the game. We used this task because it assesses neural responses to social exclusion, is known to activate a range of social brain regions [5] and is amenable to reinforcement learning modelling [4]. The task was modified such that inclusion was varied parametrically over four levels: 0%, 33%, 66% and 100%, achieved by arranging the task into blocks of nine throws, respectively involving zero, one, two or three throws to the participant. Here, 100% inclusion means the participant was included to the same degree as the other two players, with each receiving three throws per nine-throw block. Participants were asked to imagine that the other players were real, as exclusion by either human or simulated players has previously been reported to be similarly distressing [6-8]. When the participant received the ball, they indicated with a button press which computer player they wished to throw it to. There were four repetitions of each inclusion level, providing 16 experimental blocks in total, with the first block being 100% inclusion and all subsequent blocks randomised. Each throwing event had a mean duration of 2700ms, with each preceded by randomised jitter that was in part adjusted to accommodate the participant’s reaction time from the previous trial, when applicable. This was achieved by comparing the total duration of the previous trial, including reaction time, with the ideal trial time of 2700ms: if this value was exceeded, a random jitter between 0 and 1000ms was subtracted from the mean jitter time of 1500ms; otherwise, the random jitter was added to 1500ms. Jitter therefore varied between 500ms and 2500ms. Mean block duration was 24s, with onsets denoted by the appearance of the cartoon figures following rest, and offsets by the conclusion of the final throw animation. Blocks were randomised and interleaved with 13s rest blocks. Within blocks, throwing events were jittered to permit event disambiguation for reinforcement learning analysis.
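    The adaptive jitter rule described above can be sketched in a few lines (a reconstruction from this description, not the original presentation code; the function and constant names are ours):

```python
import random

MEAN_JITTER_MS = 1500   # mean jitter time from the task description
IDEAL_TRIAL_MS = 2700   # ideal trial time, including reaction time

def next_jitter_ms(prev_trial_ms, rng=random):
    """Jitter preceding the next throw: subtract a random 0-1000 ms from the
    1500 ms mean if the previous trial overran 2700 ms, otherwise add it."""
    delta = rng.uniform(0, 1000)
    if prev_trial_ms > IDEAL_TRIAL_MS:
        return MEAN_JITTER_MS - delta
    return MEAN_JITTER_MS + delta
```

    Either branch keeps the jitter within the stated 500-2500 ms range: a slow previous trial pulls the next onset earlier, a fast one pushes it later.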

    Neuroimaging Scanning took place at the Clinical Research Imaging Centre in Edinburgh, using a 3T Siemens Magnetom Verio scanner. Echo-planar blood oxygen level dependent images were acquired axially with TR 1560ms, TE 26ms, flip angle 66°, field of view 220 x 220mm, in-plane resolution 64 x 64, 26 interleaved slices, 347 volumes, resolution 3.4 x 3.4 x 5mm. A high-resolution T1 MPRAGE structural image was acquired with TR 2300ms, TE 2.98ms, flip angle 90°, field of view 256 x 256mm, in-plane resolution 256 x 256, 160 interleaved slices, resolution 1 x 1 x 1mm.

    References

    1. Zanarini MC, Vujanovic AA, Parachini EA, Boulanger JL, Frankenburg FR, Hennen J. Zanarini Rating Scale for Borderline Personality Disorder (ZAN-BPD): a continuous measure of DSM-IV borderline psychopathology. J Pers Disord 2003; 17: 233–242.
    2. Bernstein DP, Fink L. Childhood trauma questionnaire: A retrospective self-report: Manual. Psychological Corporation, 1998.
    3. Williams KD, Cheung CK, Choi W. Cyberostracism: effects of being ignored over the Internet. J Pers Soc Psychol 2000; 79: 748–762.
    4. Kumar P, Waiter G, Ahearn TS, Milders M, Reid I, Steele JD. Frontal operculum temporal difference signals and social motor response learning. Hum Brain Mapp 2009; 30: 1421–1430.
    5. Eisenberger NI, Lieberman MD, Williams KD. Does rejection hurt? An FMRI study of social exclusion. Science 2003; 302: 290–292.
    6. Zadro L, Williams KD, Richardson R. How low can you go? Ostracism by a computer is sufficient to lower self-reported levels of belonging, control, self-esteem, and meaningful existence. Journal of Experimental Social Psychology 2004; 40: 560–567.
    7. Sebastian CL, Tan GCY, Roiser JP, Viding E, Dumontheil I, Blakemore S-J. Developmental influences on the neural bases of responses to social rejection: implications of social neuroscience for education. NeuroImage 2011; 57: 686–694.
    8. Gradin VB, Waiter G, Kumar P, Stickle C, Milders M, Matthews K et al. Abnormal neural responses to social exclusion in schizophrenia. PLoS ONE 2012; 7: e42608.

    Comments added by Openfmri Curators

    ===========================================

    Defacing

    Defacing was performed by the submitter.

    Quality Control

    MRIQC was not run on this dataset due to issues we are having with the software. It will be included in the next revision.

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and the dsXXXXXX accession number.
    3) Send an email to submissions@openfmri.org. Please include the dsXXXXXX accession number in your email.

    Bids-validator Output

    1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in the /derivatives folder. (code: 1 - NOT_INCLUDED)
      /participants.json Evidence: participants.json

      Summary:         Available Tasks:    Available Modalities:
      116 Files, 2.01GB    Cyberball        T1w
      36 - Subjects                  bold
      1 - Session
    
  15. Magnitude Effect

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Ian Ballard; Bokyung Kim; Anthony Liatsis; Gökhan Aydogan; Jonathan D. Cohen; Samuel M. McClure (2018). Magnitude Effect [Dataset]. https://openneuro.org/datasets/ds000223/versions/00001
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Description: Magnitude Effect Temporal Discounting Experiment

    Please cite the following references if you use these data:

    Ballard, I.B., Kim, B., Liatsis, A., Aydogan, G., Cohen, J.D., & McClure, S.M. More is meaningful: The magnitude effect in intertemporal choice depends on self-control. Psychological Science.

    This dataset is made available under the Public Domain Dedication and License v1.0, whose full text can be found at http://www.opendatacommons.org/licenses/pddl/1.0/. We hope that all users will follow the ODC Attribution/Share-Alike Community Norms (http://www.opendatacommons.org/norms/odc-by-sa/); in particular, while not legally required, we hope that all users of the data will acknowledge the OpenfMRI project and Ian Ballard in any publications.

    The data were acquired at two different sites using two different scanning protocols. Images with swapped phase encoding for field map correction with topup were only collected at one of the sites. The JSON sidecar for each NIfTI file gives the parameters used at that site.

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Slice acquisition order was interleaved bottom-up for both sites. Site 1: sub-01 through sub-06. Site 2: sub-07 through sub-19.

    We collected data from two different sites. The first set was collected on a 3.0 Tesla Siemens Allegra scanner located at Princeton University. The second set was collected on a 3.0 Tesla GE Discovery scanner located at the Banner Alzheimer Institute (BAI) in Phoenix, Arizona. High-resolution T1-weighted images were first acquired (Princeton: 1×1×1 mm resolution; BAI: 0.9×0.9×0.9 mm resolution; MP-RAGE sequence). Whole-brain blood oxygenation level-dependent (BOLD) weighted echo-planar images were acquired using an interleaved acquisition (TR = 2000 ms; TE = 30 ms; flip angle = 90° (Princeton), 77.2° (BAI); slices: 30 total, 4 mm thickness (Princeton), 36 total, 3.4 mm thickness (BAI); FOV: 192 mm (Princeton), 222 mm (BAI); matrix = 64×64 (Princeton), 74×74 (BAI); prescription: 30° (Princeton) or 0° (BAI) off the anterior commissure-posterior commissure line). The data sets were acquired several years apart and differences arose from an evolving understanding of the best parameters for acquiring fMRI data.
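    For anyone pooling the two sites, the BOLD parameters above can be kept in a small lookup keyed by site, together with the subject split given under General Comments (a sketch; the names below are ours, and the values are transcribed from this description):

```python
# BOLD EPI parameters per site, transcribed from the dataset description.
BOLD_PARAMS = {
    "princeton": {"scanner": "Siemens Allegra 3T", "tr_ms": 2000, "te_ms": 30,
                  "flip_deg": 90.0, "n_slices": 30, "slice_mm": 4.0,
                  "fov_mm": 192, "matrix": (64, 64)},
    "bai":       {"scanner": "GE Discovery 3T", "tr_ms": 2000, "te_ms": 30,
                  "flip_deg": 77.2, "n_slices": 36, "slice_mm": 3.4,
                  "fov_mm": 222, "matrix": (74, 74)},
}

def site_for(sub: int) -> str:
    """Site 1 (Princeton) scanned sub-01..sub-06; Site 2 (BAI) sub-07..sub-19."""
    return "princeton" if sub <= 6 else "bai"
```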

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000223/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000223.
    3) Send an email to submissions@openfmri.org. Please include the ds000223 accession number in your email.

    Known Issues

    • Field maps were collected at only one of the sites, so field map data is missing for the other site's subjects
    • Several acquisition parameters differ across the sites

      Bids-validator Output

      1: You should define 'EffectiveEchoSpacing' for this file. If you don't provide this information field map correction will not be possible. (code: 8 - EFFECTIVE_ECHO_SPACING_NOT_DEFINED)
        /sub-07/func/sub-07_task-mag_run-01_bold.nii.gz
        /sub-07/func/sub-07_task-mag_run-02_bold.nii.gz
        /sub-07/func/sub-07_task-mag_run-03_bold.nii.gz
        /sub-07/func/sub-07_task-mag_run-04_bold.nii.gz
        /sub-08/func/sub-08_task-mag_run-01_bold.nii.gz
        /sub-08/func/sub-08_task-mag_run-02_bold.nii.gz
        /sub-08/func/sub-08_task-mag_run-03_bold.nii.gz
        /sub-08/func/sub-08_task-mag_run-04_bold.nii.gz
        /sub-09/func/sub-09_task-mag_run-01_bold.nii.gz
        /sub-09/func/sub-09_task-mag_run-02_bold.nii.gz
        ... and 44 more files having this issue (Use --verbose to see them all).
      
      2: You should define 'SliceTiming' for this file. If you don't provide this information slice time correction will not be possible. (code: 13 - SLICE_TIMING_NOT_DEFINED)
        /sub-01/func/sub-01_task-mag_run-01_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-02_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-03_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-04_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-01_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-02_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-03_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-04_bold.nii.gz
        /sub-03/func/sub-03_task-mag_run-01_bold.nii.gz
        /sub-03/func/sub-03_task-mag_run-02_bold.nii.gz
        ... and 68 more files having this issue (Use --verbose to see them all).
      
      3: Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
        /sub-01/func/sub-01_task-mag_run-05_bold.json
        /sub-01/func/sub-01_task-mag_run-05_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-05_events.tsv
        /sub-01/func/sub-01_task-mag_run-06_bold.json
        /sub-01/func/sub-01_task-mag_run-06_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-06_events.tsv
        /sub-02/func/sub-02_task-mag_run-05_bold.json
        /sub-02/func/sub-02_task-mag_run-05_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-05_events.tsv
        /sub-02/func/sub-02_task-mag_run-06_bold.json
        ... and 150 more files having this issue (Use --verbose to see them all).
      
      4: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
        /sub-01/anat/sub-01_T1w.nii.gz
        /sub-01/func/sub-01_task-mag_run-01_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-02_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-03_bold.nii.gz
        /sub-01/func/sub-01_task-mag_run-04_bold.nii.gz
        /sub-02/anat/sub-02_T1w.nii.gz
        /sub-02/func/sub-02_task-mag_run-01_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-02_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-03_bold.nii.gz
        /sub-02/func/sub-02_task-mag_run-04_bold.nii.gz
        ... and 69 more files having this issue (Use --verbose to see them all).
      
      Summary:         Available Tasks:    Available Modalities:
      380 Files, 5.52GB    mag           T1w
      19 - Subjects                  bold
      1 - Session                   fieldmap
      

  16. Cross-Sectional Multidomain Lexical Processing

    • openneuro.org
    Updated Apr 4, 2022
    Cite
    Jordan Bigio; Tali Bitan; Douglas Bolger; Douglas Burman; Fan Cao; Tai-Li Chou; Nadia Cone; Jessica Gayda; Dong Lu; Marisa Lytle; Jenni Minas; James R. Booth (2022). Cross-Sectional Multidomain Lexical Processing [Dataset]. http://doi.org/10.18112/openneuro.ds002236.v1.1.1
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Known Issues

    • The BIDS validator warns that some stimulus files are not included in events.tsv. This is because each trial contained two stimuli, listed as stim1_file and stim2_file in the events.tsv files. All stimuli were confirmed to exist in the stimuli/ directory and to be used in the events.tsv files.
    • Slices were acquired interleaved from bottom to top, odd first.
    • Age at phenotyping / standardized testing and the shifted (de-identified) date of birth are provided in participants.tsv. The shifted (de-identified) acquisition date of each imaging file is provided in the sidecar JSON file for that image. The date of standardized testing is not available.
  17. Training of loss aversion modulates neural sensitivity toward potential...

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Mei-Yen Chen; Corey N. White; Nathan Giles; Albert Elumn; Sagar Parikh; Ungi Kim; W. Todd Maddox; Russell A. Poldrack (2018). Training of loss aversion modulates neural sensitivity toward potential gains [Dataset]. https://openneuro.org/datasets/ds000053/versions/00001
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Training of loss aversion modulates neural sensitivity toward potential gains

    Aims:

    We investigated behavioral and neural mechanisms for modulating loss aversion.

    Methods

    Behavior task: We adapted the gambling task (Tom et al., 2007) by introducing contexts and feedback that encourage participants to make more or less loss-averse choices.

    fMRI: We used a general linear model to find brain activation that correlates with the magnitude of potential gains or potential losses during learning and the post-learning probe. We also used psychophysiological interaction analysis (seeded at the vmPFC) to identify the brain areas showing interaction with the vmPFC over the course of training.

    General findings and importance:

    Training primarily modulated behavioral and neural sensitivity toward potential gains, and this modulation was reflected in connectivity between regions involved in cognitive control and those involved in value representation. These findings highlight the importance of experience in the development of biases in decision-making.

    Sample Size

    Sixty human participants completed the behavioral paradigm in the MRI scanner (31 females, 29 males; age range 18-30 years, mean 22.9). Two participants were discarded from the brain imaging analyses; one due to a missing anatomical image, and the other due to excessive head movement (more than one-third of the volumes were considered “bad time points” according to the motion correction procedures detailed in the Preprocessing section).

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Quality Control

    MRIQC was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and the dsXXXXXX accession number.
    3) Send an email to submissions@openfmri.org. Please include the dsXXXXXX accession number in your email.

    Known Issues

    Data for sub-055 (M, 22) is missing.

    Bids-validator Output

  18. Data from: Consensus-seeking and conflict-resolving: an fMRI study on...

    • openneuro.org
    Updated Sep 1, 2020
    Cite
    HanShin Jo; Chiu-Yueh Chen; Der-Yow Chen; Ming-Heng Weng; Chun-Chia Kung (2020). Consensus-seeking and conflict-resolving: an fMRI study on college couples’ shopping interaction [Dataset]. http://doi.org/10.18112/openneuro.ds003103.v1.0.1
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Consensus-seeking and conflict-resolving: an fMRI study on college couples’ shopping interaction

    Aims:

    We investigated how couples interacted while performing an online-shopping task during fMRI scanning.

    Methods

    fMRI: This fMRI study investigated the shopping interactions of 30 college couples, one lying inside and the other sitting outside the scanner, viewing the same item on two connected PCs and making preference ratings and subsequent buy/not-buy decisions.

    General findings and importance:

    The behavioral results showed clear modulation of one's own decisions by the significant other's preferences. The contrasts of "shop-together vs. shop-alone" trials and of "congruent (both liked or disliked the item, 68%) vs. incongruent (one liked but the other disliked, and vice versa)" together trials both revealed bilateral temporoparietal junction (TPJ) among other reward-related regions, likely reflecting mentalizing during preference harmony. Moreover, contrasting "own-high/other-low vs. own-low/other-high" incongruent trials parametrically mapped the left anterior inferior parietal lobule (l-aIPL), and the "yield (e.g., own-high/not-buy) vs. insist (e.g., own-low/not-buy)" modulation further revealed the left lateral IPL (l-lIPL), which together with the left TPJ forms a local social decision network, further constrained by the mediation analysis among left TPJ-lIPL-aIPL.

    Sample Size

    Thirty human participants completed the behavioral paradigm in the MRI scanner (16 males; mean age = 22.7 ± 2.57 years), out of the 19 participating couples.

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds******/ See the comments section at the bottom of the dataset page.

    Known Issues

    Bids-validator Output

  19. The Nencki-Symfonia EEG/ERP dataset

    • openneuro.org
    Updated Jun 21, 2025
    Cite
    Dzianok Patrycja; Antonova Ingrida; Wojciechowski Jakub; Dreszer Joanna; Kublik Ewa (2025). The Nencki-Symfonia EEG/ERP dataset [Dataset]. http://doi.org/10.18112/openneuro.ds004621.v1.0.4
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The Nencki-Symfonia EEG/ERP dataset (dataset DOI: doi.org/10.5524/100990)

    IMPORTANT NOTE: The dataset contains no errors (BIDS-1). The numerous warnings currently displayed are a result of OpenNeuro updating its validator to BIDS-2. The OpenNeuro team is actively working on refining the validator to display only meaningful warnings (more information on the OpenNeuro GitHub page). At this time, as dataset owners, we are unable to take any action to resolve these warnings.

    Description: mixed cognitive tasks [(i) an extended multi-source interference task, MSIT+; (ii) a 3-stimuli oddball task; (iii) a control, simple reaction task, SRT; and (iv) a resting-state protocol]

    Please cite the following references if you use these data: 1. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. The Nencki-Symfonia electroencephalography/event-related potential dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults. Gigascience. 2022 Mar 7;11:giac015. doi: 10.1093/gigascience/giac015. 2. Dzianok P, Antonova I, Wojciechowski J, Dreszer J, Kublik E. Supporting data for "The Nencki-Symfonia EEG/ERP dataset: Multiple cognitive tasks and resting-state data collected in a sample of healthy adults" GigaScience Database, 2022. http://doi.org/10.5524/100990

    Release history:

    26/01/2022: Initial release (GigaDB)

    15/06/2023: Added to OpenNeuro; updated README and dataset_description.json; minor updates to .json files related to BIDS errors/warnings. Updated events files (ms changed to s).

    12/10/2023: public release on OpenNeuro after deleting some additional, not needed system information from raw logfiles

    10/2024: minor correction of logfiles in the /sourcedata directory (MSIT and SRT) for sub-01 to sub-03

    02/2025 (v1.0.3): corrections to REST files for subjects sub-20 and sub-23 (EEG and .tsv files) – corrected marker names and removed redundant markers

  20. ds000218

    • openneuro.org
    Updated Jul 17, 2018
    Cite
    Jelle R. Dalenberg; Liselore Weitkamp; Remco J. Renken; L. Nanetti; Gert J. ter Horst (2018). ds000218 [Dataset]. https://openneuro.org/datasets/ds000218/versions/00001
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Data acquisition methods have been described in detail in Dalenberg et al. (2017, PLOS ONE)

    ———————————————————————————————— IMPORTANT NOTES ———————————————————————————————— Due to technical difficulties with the PRESTO sequence, several volumes were missing or broken. To fix the timing of the data, missing volumes need to be filled and broken volumes need replacement.

    Missing/broken volumes:

    Participant_id  run  volumes_missing            volumes_broken
    sub-02          r1   [1 2 3 4 5 6 7 8 9 10 11]  [561]
    sub-08          r2   [1 2 3 4 5 6 7 8 9 10]     [561]
    sub-08          r3   [1 2 3 4 5 6 7 8 9 10 11]  [559]
    sub-10          r2   [1 2 3 4 5 6 7 8 9 10]
    sub-18          r2   [1 2 3 4 5 6 7 8 9 10]     [559]
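    The fill-and-replace step can be sketched with NumPy (one plausible repair, not necessarily what the original authors did; volume indices are 1-based as in the table above):

```python
import numpy as np

def repair_run(data, missing, broken):
    """Insert placeholder volumes at 1-based `missing` positions and replace
    1-based `broken` volumes with the mean of their temporal neighbours.
    The placeholder used here for a missing volume is the run-mean volume."""
    x, y, z, t = data.shape
    full_t = t + len(missing)
    fixed = np.empty((x, y, z, full_t), dtype=float)
    mean_vol = data.mean(axis=3)
    src = 0
    for vol in range(full_t):
        if vol + 1 in missing:
            fixed[..., vol] = mean_vol          # fill a missing volume
        else:
            fixed[..., vol] = data[..., src]    # copy the next acquired volume
            src += 1
    for vol in broken:
        i = vol - 1
        lo, hi = max(i - 1, 0), min(i + 1, full_t - 1)
        fixed[..., i] = (fixed[..., lo] + fixed[..., hi]) / 2
    return fixed
```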

    Further data are missing due to difficulties with the gustometer: sub-03, sub-06, sub-16

    Stimulus information

    Flavour_Products  Description
    fc_cap            Oral Nutritional Supplement. Product brand: Forticare, flavour Cappuccino
    fc_ole            Oral Nutritional Supplement. Product brand: Forticare, flavour Orange Lemon
    fc_peg            Oral Nutritional Supplement. Product brand: Forticare, flavour Peach Ginger
    fm_apr            Oral Nutritional Supplement. Product brand: Fortimel, flavour Apricot
    fm_neu            Oral Nutritional Supplement. Product brand: Fortimel, flavour Neutral
    fm_van            Oral Nutritional Supplement. Product brand: Fortimel, flavour Vanilla

    Product compositions

    Fortimel: Milk protein concentrate, water, maltodextrin, vegetable oils, sucrose, acidity regulator (citric acid), emulsifier (soy lecithin), cocoa, flavoring (vanilla/apricot), tri-potassium citrate, choline chloride, calcium hydroxide, sodium L-ascorbate, potassium hydroxide, trisodium citrate, DL-α-tocopherol, ferrous lactate, nicotinamide, retinyl acetate, copper gluconate, manganese sulfate, zinc sulfate, sodium selenite, chromium chloride, D-calcium pantothenate, D-biotin, cholecalciferol, pyridoxine hydrochloride, pteroylmonoglutamic acid, thiamine hydrochloride, sodium fluoride, sodium molybdate, riboflavin, potassium iodide, phytomenadione.

    Forticare: Demineralised water, glucose syrup, sodium molybdate, milk protein isolate, sodium fluoride, trehalose, sucrose, vegetable oils, dietary fibres (oligofructose, inulin, cellulose, resistant starch), fish oil, whey protein concentrate (from milk), tri potassium citrate, flavour, sodium chloride, tri sodium citrate, colour (E150d), flavour, magnesium hydrogen phosphate, choline chloride, carotenoids (contains soy) (b-carotene, lutein, lycopene), sodium L-ascorbate, magnesium carbonate, potassium hydroxide, taurine, DL-a-tocopheryl acetate, L-carnitine, ferrous lactate, zinc sulphate, nicotinamide, retinyl aceteate, sodium selenite, manganese sulphate, copper gluconate, pyridoxine hydrochloride, calcium D-pantothenate, pteroylmonoglutamic acid, D-biotin, chromium chloride, cholecalciferol, cyanocobalamin, thiamin hydrochloride, sodium molybdate, sodium fluoride, riboflavin, potassium iodide, phytomenadione.

    General procedure per trial

    Period             Description
    warning_star       Warning for upcoming flavour stimulus, shown as a star on the screen (2 s)
    taste_and_swallow  Instruction to first taste (3.5 s) and then swallow (4 s) the 2 ml flavour stimulus
    judge              Passively judge the stimulus (22.5 s) while watching a fixation cross
    rate               Actively rate the stimulus on a 7-point liking scale (self-paced)
    rinse              Rinsing procedure to rinse the palate using 2 ml of a 5% artificial saliva solution (29 s)
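Given the fixed period durations above, per-trial event timings can be derived mechanically. A minimal sketch (the `trial_events` helper is hypothetical, and the nominal duration supplied for the self-paced "rate" period is an assumption, not a value from the dataset):

```python
# Durations (s) of the fixed-length periods of one trial; "rate" is
# self-paced, so its duration must be supplied per trial.
periods = [
    ("warning_star", 2.0),
    ("taste_and_swallow", 3.5 + 4.0),  # taste (3.5 s) then swallow (4 s)
    ("judge", 22.5),
    ("rate", None),                    # self-paced: duration unknown a priori
    ("rinse", 29.0),
]

def trial_events(t0, rate_duration):
    """Return (name, onset, duration) tuples for one trial starting at t0."""
    events, t = [], t0
    for name, dur in periods:
        d = rate_duration if dur is None else dur
        events.append((name, t, d))
        t += d
    return events

print(trial_events(0.0, rate_duration=3.0))
```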

    Stimulus labels

    All stimulus screens (except "warning_star" and "rinse") are coded as a combination of the general procedure and Flavour product as follows:

    taste_and_swallow_fm_neu judge_fm_neu rate_fm_neu

    taste_and_swallow_fc_ole judge_fc_ole rate_fc_ole

    taste_and_swallow_fm_van judge_fm_van rate_fm_van

    taste_and_swallow_fc_peg judge_fc_peg rate_fc_peg

    taste_and_swallow_fc_cap judge_fc_cap rate_fc_cap

    taste_and_swallow_fm_apr judge_fm_apr rate_fm_apr
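Since each label is simply a period–flavour combination, the full set of 18 labels can be generated instead of typed out. A small sketch (variable names are illustrative):

```python
from itertools import product

periods = ["taste_and_swallow", "judge", "rate"]
flavours = ["fm_neu", "fc_ole", "fm_van", "fc_peg", "fc_cap", "fm_apr"]

# One label per (flavour, period) pair, e.g. "judge_fc_cap"
labels = [f"{p}_{f}" for f, p in product(flavours, periods)]
print(len(labels))  # 6 flavours x 3 periods = 18 labels
```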

    ———————————————————————————————— SCANNER & PRESTO SEQUENCE DETAILS ————————————————————————————————

    3T Philips Intera MRI scanner
    32-channel head coil
    Variable number of TRs per run (depending on answering times)

    ———————————————————————————————— SCANNER PRESTO EXAM CARD ———————————————————————————————— Coil selection 1 = "SENSE-Head-32P"; element selection = "selection 1"; Coil selection 2 = "SENSE-Head-32AH"; element selection = "selection 1"; Dual coil = "yes"; CLEAR = "yes"; body tuned = "no"; FOV FH (mm) = 230; AP (mm) = 230; RL (mm) = 153; Voxel size FH (mm) = 3; AP (mm) = 3; RL (mm) = 3; Recon voxel size (mm) = 2.875; Fold-over suppression = "no"; Slice oversampling = "default"; Reconstruction matrix = 80; SENSE = "yes"; P reduction (AP) = 1.89999998; P os factor = 1; S reduction (RL) = 1.89999998; Overcontiguous slices = "no"; Stacks = 1; slices = 51; slice orientation = "sagittal"; fold-over direction = "AP"; fat shift direction = "P"; Stack Offc. AP (P=+mm) = -8.07165337; RL (L=+mm) = -2.93776155; FH (H=+mm) = 21.0553951; Ang. AP (deg) = 0.122744754; RL (deg) = 1.3835777; FH (deg) = -0.273202777; Chunks = 1; Large table movement = "no"; PlanAlign = "no"; REST slabs = 0; Interactive positioning = "no"; Patient position = "head first"; orientation = "supine"; Scan type = "Imaging"; Scan mode = "3D"; technique = "FFE"; Contrast enhancement = "T1"; Acquisition mode = "cartesian"; Fast Imaging mode = "EPI"; 3D non-selective = "no"; shot mode = "multishot"; EPI factor = 17; Echoes = 1; partial echo = "no"; shifted echo = "yes"; TE>TR shift = 1; M add factor = -4; P add factor = -4; S add factor = -4; TE = "user defined"; (ms) = 10; Flip angle (deg) = 7; TR = "user defined"; (ms) = 20; Halfscan = "no"; Water-fat shift = "minimum"; Shim = "auto"; Fat suppression = "no"; Water suppression = "no"; MTC = "no"; Research prepulse = "no"; Diffusion mode = "no"; SAR mode = "high"; B1 mode = "default"; PNS mode = "low"; Gradient mode = "default"; SofTone mode = "no"; Cardiac synchronization = "no"; Respiratory compensation = "no"; Navigator respiratory comp = "no"; Flow compensation = "no"; fMRI echo stabilisation = "no"; NSA = 1; Angio / Contrast enh. 
= "no"; Quantitative flow = "no"; Manual start = "yes"; Dynamic study = "individual"; dyn scans = 700; dyn scan times = "shortest"; FOV time mode = "default"; dummy scans = 2; immediate subtraction = "no"; fast next scan = "no"; synch. ext. device = "yes"; start at dyn. = 1; interval (dyn) = 1; dyn stabilization = "yes"; prospect. motion corr. = "no"; Keyhole = "no"; Arterial Spin labeling = "no"; Preparation phases = "full"; Interactive F0 = "no"; B0 field map = "no"; B1 field map = "no"; MIP/MPR = "no"; Images = " M", (3) " no"; Autoview image = " M"; Calculated images = (4) " no"; Reference tissue = "Grey matter"; Preset window contrast = "soft"; Reconstruction mode = "real time"; reuse memory = "no"; Save raw data = "no"; Hardcopy protocol = "no"; Ringing filtering = "default"; Geometry correction = "default"; Elliptical k-space shutter = "default"; IF_info_seperator = 1634755923; Total scan duration = "17:56.5"; Rel. signal level (%) = 100; Act. TR/TE (shifted) (ms) = "20 / 30"; Dyn. scan time = "00:01.5"; Time to k0 = "0.750"; ACQ matrix M x P = "76 x 58"; ACQ voxel MPS (mm) = "3.03 / 3.92 / 3.00"; REC voxel MPS (mm) = "2.88 / 2.88 / 3.00"; Scan percentage (%) = 77.272728; Act. WFS (pix) / BW (Hz) = "4.999 / 86.9"; BW in EPI freq. dir. (Hz) = "2654.6"; Min. WFS (pix) / Max. BW (Hz) = "4.976 / 87.3"; Min. TR/TE (ms) = "19 / 7.4"; ES-FFE: added M area = -15.5215797; ES-FFE: added P area = -15.5215797; ES-FFE: added S area = -15.5215797; SAR / head = "< 2 %"; Whole body / level = "0.0 W/kg / normal"; B1 rms = "0.33 uT"; PNS / level = "46 % / normal"; Sound Pressure Level (dB) = 14.8522921;
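The exam-card dump above is a flat run of semicolon-separated `key = value` pairs, so it can be loaded into a dictionary for inspection. A minimal sketch (the `parse_exam_card` helper is hypothetical; note that a few keys, e.g. `(ms)`, repeat and would overwrite each other, so a faithful parser would need to attach such continuation keys to the preceding parameter):

```python
def parse_exam_card(text):
    """Parse semicolon-separated 'key = value' pairs from a Philips
    exam-card dump into a dict; values are kept as strings."""
    params = {}
    for item in text.split(";"):
        if "=" not in item:
            continue
        key, _, value = item.partition("=")
        params[key.strip()] = value.strip().strip('"')
    return params

# Short excerpt of the dump above, for illustration
card = 'EPI factor = 17; TE = "user defined"; (ms) = 10; Flip angle (deg) = 7;'
print(parse_exam_card(card)["EPI factor"])
```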

    ———————————————————————————————— PUBLICATIONS ————————————————————————————————

    Dalenberg, J.R., Weitkamp, L., Renken, R.J., Nanetti, L., ter Horst, G.J. (2017). Flavor pleasantness processing in the ventral emotion network. PLOS ONE _, doi: _

    Comments added by Openfmri Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000218/ See the comments section at the bottom of the dataset page.
    2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000218.
    3) Send an email to submissions@openfmri.org. Please include the ds000218 accession number in your email.

    Known Issues

    N/A

    Bids-validator Output

    You should define 'SliceTiming' for this file. If you don't provide this information, slice time correction will not be possible. It can be included in one of the following locations: /task-ONSflavourtask_bold.json,
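The missing field could be added to the task's sidecar JSON along these lines. This is only a sketch: the `SliceTiming` values below are placeholders (the real values must come from the scanner protocol, and for a 3D PRESTO acquisition slice-timing correction is generally not applicable anyway), and the file is written to a temporary directory rather than the dataset root:

```python
import json
import os
import tempfile

# Placeholder sidecar content; RepetitionTime matches the 1.5 s dynamic
# scan time reported above, but the SliceTiming values are purely
# illustrative and must NOT be used as-is.
sidecar = {"RepetitionTime": 1.5, "TaskName": "ONSflavourtask"}
sidecar["SliceTiming"] = [0.0, 0.5, 1.0]

path = os.path.join(tempfile.gettempdir(), "task-ONSflavourtask_bold.json")
with open(path, "w") as f:
    json.dump(sidecar, f, indent=2)

with open(path) as f:
    print(json.load(f)["SliceTiming"])
```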

Marta Vidorreta; Ze Wang; Yulin V. Chang; David A Wolk; Maria A. Fernandez-Seara; John A. Detre (2018). ds000234 [Dataset]. https://openneuro.org/datasets/ds000234/versions/00001

ds000234

Dataset updated
Jul 17, 2018
Dataset provided by
OpenNeurohttps://openneuro.org/
Authors
Marta Vidorreta; Ze Wang; Yulin V. Chang; David A Wolk; Maria A. Fernandez-Seara; John A. Detre
License

CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically

Description

Description of the ASL sequence A sequence with pseudo-continuous labeling, background suppression and 3D RARE Stack-Of-Spirals readout with optional through-plane acceleration was implemented for this study. At the beginning of the sequence, gradients were rapidly played with alternating polarity to correct for their delay in the spiral trajectories, followed by two preparation TRs, to allow the signal to reach the steady state. A non-accelerated readout was played during the preparation TRs, in order to obtain a fully sampled k-space dataset, used for calibration of the parallel imaging reconstruction kernel, needed to reconstruct the skipped kz partitions in the accelerated images.

Description of study Non-accelerated and accelerated versions of the sequence were compared during the execution of a functional activation paradigm. For each participant, first a high-resolution anatomical T1-weighted image was acquired with a magnetization prepared rapid gradient echo (MPRAGE) sequence. Subjects underwent two perfusion runs, in which functional data were acquired with the non-accelerated and the accelerated version of the sequence, in pseudo-randomized order, during a visual-motor activation paradigm. During each run, 3 resting blocks alternated with 3 task blocks, with each block comprising 8 label-control pairs (72 s and 64 s for the non-accelerated and accelerated sequence versions, respectively). During the resting blocks, subjects were instructed to remain still while looking at a fixation cross. During the task blocks, a flashing checkerboard was displayed and subjects were asked to tap their right-hand fingers while looking at the center of the board. Labeling and PLD times were 1.5 and 1.5 s. In addition, four M0 images with long TR and no magnetization preparation were acquired per perfusion run for CBF quantification purposes.
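The M0 images allow absolute CBF quantification. Below is a minimal sketch of the standard single-compartment pCASL model (a common consensus formulation, not necessarily the pipeline the authors used): the labeling duration and PLD are the 1.5 s values stated above, while blood T1, labeling efficiency, and the blood–brain partition coefficient are typical literature defaults, and the example ΔM/M0 value is made up.

```python
import math

def cbf_pcasl(delta_m, m0, tau=1.5, pld=1.5,
              t1_blood=1.65, alpha=0.85, lam=0.9):
    """Single-compartment pCASL CBF in ml/100 g/min.

    delta_m: control - label difference signal
    m0: equilibrium magnetization (from the M0 images)
    tau: labeling duration (s); pld: post-labeling delay (s)
    t1_blood, alpha, lam: literature defaults, not dataset values.
    """
    return (6000.0 * lam * delta_m * math.exp(pld / t1_blood)) / (
        2.0 * alpha * t1_blood * m0 * (1.0 - math.exp(-tau / t1_blood)))

# Illustrative value: a 1.2% perfusion signal relative to M0
print(round(cbf_pcasl(delta_m=0.012, m0=1.0), 1))
```

With these defaults a 1.2% ΔM/M0 maps to a gray-matter-like CBF on the order of 100 ml/100 g/min; in practice the computation is applied voxelwise to the averaged control-label difference and M0 images.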

Comments added by Openfmri Curators

===========================================

General Comments

Defacing

Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

Quality Control

Mriqc was run on the dataset. Results are located in derivatives/mriqc. Learn more about it here: https://mriqc.readthedocs.io/en/latest/

Where to discuss the dataset

1) www.openfmri.org/dataset/ds000234/ See the comments section at the bottom of the dataset page.
2) www.neurostars.org Please tag any discussion topics with the tags openfmri and ds000234.
3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

Known Issues

N/A

Bids-validator Output
