20 datasets found
  1. MPI-Leipzig_Mind-Brain-Body

    • openneuro.org
    • search.kg.ebrains.eu
    Updated Jul 22, 2020
    Cite
    MPI-Leipzig_Mind-Brain-Body [Dataset]. https://openneuro.org/datasets/ds000221/versions/1.0.0
    Dataset updated
    Jul 22, 2020
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    Anahit Babayan; Blazeij Baczkowski; Roberto Cozatl; Maria Dreyer; Haakon Engen; Miray Erbey; Marcel Falkiewicz; Nicolas Farrugia; Michael Gaebler; Johannes Golchert; Laura Golz; Krzysztof Gorgolewski; Philipp Haueis; Julia Huntenburg; Rebecca Jost; Yelyzaveta Kramarenko; Sarah Krause; Deniz Kumral; Mark Lauckner; Daniel S. Margulies; Natacha Mendes; Katharina Ohrnberger; Sabine Oligschläger; Anastasia Osoianu; Jared Pool; Janis Reichelt; Andrea Reiter; Josefin Röbbig; Lina Schaare; Jonathan Smallwood; Arno Villringer
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Leipzig
    Description

    The MPI-Leipzig Mind-Brain-Body dataset contains MRI and behavioral data from 318 participants. Datasets for all participants include at least a structural quantitative T1-weighted image and a single 15-minute eyes-open resting-state fMRI session.

    The participants took part in one or two extended protocols: Leipzig Mind-Body-Brain Interactions (LEMON) and the Neuroanatomy & Connectivity Protocol (N&C). The data from the LEMON protocol is included in the ‘ses-01’ subfolder; the data from the N&C protocol is in the ‘ses-02’ subfolder.

    LEMON focuses on structural imaging. 228 participants were scanned. In addition to the quantitative T1-weighted image, the participants also have a structural T2-weighted image (226 participants), a diffusion-weighted image with 64 directions (228) and a 15-minute eyes-open resting-state session (228). New imaging sequences were introduced into the LEMON protocol after data acquisition for approximately 110 participants. Before the change, a low-resolution 2D FLAIR image was acquired for clinical purposes (110). After the change, the 2D FLAIR was replaced with a high-resolution 3D FLAIR (117). The second addition was the acquisition of gradient-echo images (112) that can be used for Susceptibility-Weighted Imaging (SWI) and Quantitative Susceptibility Mapping (QSM).

    The N&C protocol focuses on resting-state fMRI data. 199 participants were scanned with this protocol; 109 participants also took part in the LEMON protocol. Structural data was not acquired for the overlapping LEMON participants. For the unique N&C participants, only a T1-weighted and a low-resolution FLAIR image were acquired. Four 15-minute runs of eyes-open resting-state fMRI are the main component of N&C; they are complete for 194 participants, three participants have 3 runs, one participant has 2 runs, and one participant has a single run. Due to a bug in the multiband sequence used in this protocol, the echo time for the N&C resting-state is longer than in LEMON: 39.4 ms vs. 30 ms.

    Forty-five participants have complete imaging data: quantitative T1-weighted, T2-weighted, high-resolution 3D FLAIR, DWI, GRE and 75 minutes of resting-state (one 15-minute LEMON run plus four 15-minute N&C runs). Both gradient-echo and spin-echo field maps are available in both datasets for all EPI-based sequences (rsfMRI and DWI).

    Extensive behavioral data was acquired in both protocols, including trait and state questionnaires as well as behavioral tasks. Here we only list the tasks; more extensive descriptions are available in the manuscripts.

    LEMON QUESTIONNAIRES/TASKS [not yet released]

    California Verbal Learning Test (CVLT)
    Testbatterie zur Aufmerksamkeitsprüfung (TAP Alertness, Incompatibility, Working Memory)
    Trail Making Test (TMT)
    Wortschatztest (WST)
    Leistungsprüfungssystem 2 (LPS-2)
    Regensburger Wortflüssigkeitstest (RWT)

    NEO Five-Factor Inventory (NEO-FFI)
    Impulsive Behavior Scale (UPPS)
    Behavioral Inhibition and Approach System (BISBAS)
    Cognitive Emotion Regulation Questionnaire (CERQ)
    Measure of Affective Style (MARS)
    Fragebogen zur Sozialen Unterstützung (F-SozU K)
    The Multidimensional Scale of Perceived Social Support (MSPSS)
    Coping Orientations to Problems Experienced (COPE)
    Life Orientation Test-Revised (LOT-R)
    Perceived Stress Questionnaire (PSQ)
    Trier Inventory of Chronic Stress (TICS)
    The Three-Factor Eating Questionnaire (TFEQ)
    Yale Food Addiction Scale (YFAS)
    The Trait Emotional Intelligence Questionnaire (TEIQue-SF)
    Trait Scale of the State-Trait Anxiety Inventory (STAI)
    State-Trait Anger Expression Inventory (STAXI)
    Toronto Alexithymia Scale (TAS)
    Multidimensional Mood Questionnaire (MDMQ)
    New York Cognition Questionnaire (NYC-Q)

    N&C QUESTIONNAIRES

    Adult Self Report (ASR)
    Goldsmiths Musical Sophistication Index (Gold-MSI)
    Internet Addiction Test (IAT)
    Involuntary Musical Imagery Scale (IMIS)
    Multi-Gender Identity Questionnaire (MGIQ)
    Brief Self-Control Scale (SCS)
    Short Dark Triad (SD3)
    Social Desirability Scale-17 (SDS)
    Self-Esteem Scale (SE)
    Tuckman Procrastination Scale (TPS)
    Varieties of Inner Speech (VISQ)
    UPPS-P Impulsive Behavior Scale (UPPS-P)
    Attention Control Scale (ACS)
    Beck's Depression Inventory-II (BDI)
    Boredom Proneness Scale (BP)
    Epworth Sleepiness Scale (ESS)
    Hospital Anxiety and Depression Scale (HADS)
    Multimedia Multitasking Index (MMI)
    Mobile Phone Usage (MPU)
    Personality Style and Disorder Inventory (PSSI)
    Spontaneous and Deliberate Mind-Wandering (S-D-MW)
    Short New York Cognition Scale (Short-NYC-Q)
    New York Cognition Scale (NYC-Q)
    Abbreviated Math Anxiety Scale (AMAS)
    Behavioral Inhibition and Approach System (BIS/BAS)
    NEO Personality Inventory Revised (NEO-PI-R)
    Body Consciousness Questionnaire (BCQ)
    Creative Achievement Questionnaire (CAQ)
    Five Facets of Mindfulness Questionnaire (FFMQ)
    Metacognition (MCQ-30)

    N&C TASKS

    Conjunctive continuous performance task (CCPT)
    Emotional task switching (ETS)
    Adaptive visual and auditory oddball target detection task (Oddball)
    Alternative uses task (AUT)
    Remote associates test (RAT)
    Synesthesia color picker test (SYN)
    Test of creative imagery abilities (TCIA)

    Comments added by OpenfMRI Curators

    ===========================================

    General Comments

    Defacing

    Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface.

    Where to discuss the dataset

    1) www.openfmri.org/dataset/ds000221/ (see the comments section at the bottom of the dataset page)
    2) www.neurostars.org (please tag any discussion topics with the tags openfmri and ds000221)
    3) Send an email to submissions@openfmri.org (please include the accession number in your email)

    Known Issues

    N/A

    Bids-validator Output

    A verbose bids-validator output is available under '/derivatives/bidsvalidatorOutput_long'. A short version of the bids-validator output follows:

    1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in /derivatives folder. (code: 1 - NOT_INCLUDED)
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.json
        Evidence: sub-010001_ses-02_inv-1_mp2rage.json
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.nii.gz
        Evidence: sub-010001_ses-02_inv-1_mp2rage.nii.gz
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.json
        Evidence: sub-010001_ses-02_inv-2_mp2rage.json
      /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.nii.gz
        Evidence: sub-010001_ses-02_inv-2_mp2rage.nii.gz
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.json
        Evidence: sub-010002_ses-01_inv-1_mp2rage.json
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.nii.gz
        Evidence: sub-010002_ses-01_inv-1_mp2rage.nii.gz
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.json
        Evidence: sub-010002_ses-01_inv-2_mp2rage.json
      /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.nii.gz
        Evidence: sub-010002_ses-01_inv-2_mp2rage.nii.gz
      /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.json
        Evidence: sub-010003_ses-01_inv-1_mp2rage.json
      /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.nii.gz
        Evidence: sub-010003_ses-01_inv-1_mp2rage.nii.gz
      ... and 1710 more files having this issue (Use --verbose to see them all).
    
    2: Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
      /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.json
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1map.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1w.nii.gz
      /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_defacemask.nii.gz
      /sub-010001/ses-01/dwi/sub-010001_ses-01_dwi.bval
      ... and 8624 more files having this issue (Use --verbose to see them all).
    
    3: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1map.nii.gz
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1w.nii.gz
      /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_defacemask.nii.gz
      /sub-010045/ses-01/dwi/sub-010045_ses-01_dwi.nii.gz
      /sub-010087/ses-02/func/sub-010087_ses-02_task-rest_acq-PA_run-01_bold.nii.gz
      /sub-010189/ses-02/anat/sub-010189_ses-02_acq-lowres_FLAIR.nii.gz
      /sub-010201/ses-02/func/sub-010201_ses-02_task-rest_acq-PA_run-02_bold.nii.gz
    
      Summary: 14714 Files, 390.74 GB; 318 Subjects; 2 Sessions
      Available Tasks: Rest
      Available Modalities: FLAIR, T1map, T1w, defacemask, bold, T2w, dwi, fieldmap
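The INCONSISTENT_SUBJECTS warning above flags files that exist for some subjects but not others. A minimal Python sketch of that kind of check (the function name and the `expected` set are illustrative, not part of the dataset's tooling):

```python
from pathlib import Path

def inconsistent_subjects(bids_root, expected):
    """Report, per subject, which expected session-relative files are absent.

    A rough sketch of the check behind INCONSISTENT_SUBJECTS; in practice
    `expected` would be the union of relative paths across all subjects.
    """
    report = {}
    for sub in sorted(Path(bids_root).glob("sub-*")):
        missing = sorted(r for r in expected if not (sub / r).exists())
        if missing:
            report[sub.name] = missing
    return report
```

Building `expected` from the union across subjects is roughly what bids-validator does before emitting the warning.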
    
  2. Data from: Modeling short visual events through the BOLD Moments video fMRI...

    • openneuro.org
    Updated Jul 21, 2024
    Cite
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva* (2024). Modeling short visual events through the BOLD Moments video fMRI dataset and metadata. [Dataset]. http://doi.org/10.18112/openneuro.ds005165.v1.0.4
    Dataset updated
    Jul 21, 2024
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    Benjamin Lahner; Kshitij Dwivedi; Polina Iamshchinina; Monika Graumann; Alex Lascelles; Gemma Roig; Alessandro Thomas Gifford; Bowen Pan; SouYoung Jin; N.Apurva Ratan Murty; Kendrick Kay; Radoslaw Cichy*; Aude Oliva*
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the data repository for the BOLD Moments Dataset. This dataset contains brain responses to 1,102 3-second videos across 10 subjects. Each subject saw the 1,000 video training set 3 times and the 102 video testing set 10 times. Each video is additionally human-annotated with 15 object labels, 5 scene labels, 5 action labels, 5 sentence text descriptions, 1 spoken transcription, 1 memorability score, and 1 memorability decay rate.
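The per-subject trial counts implied by the design above follow from simple arithmetic (variable names are illustrative):

```python
# Stimulus presentations per subject, from the design described above.
TRAIN_VIDEOS, TRAIN_REPS = 1000, 3
TEST_VIDEOS, TEST_REPS = 102, 10

presentations = TRAIN_VIDEOS * TRAIN_REPS + TEST_VIDEOS * TEST_REPS
assert presentations == 4020  # 3000 training + 1020 testing trials
```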

    Overview of contents:

    The home folder (everything except the derivatives/ folder) contains the raw data in BIDS format before any preprocessing. Download this folder if you want to run your own preprocessing pipeline (e.g., fMRIPrep, HCP pipeline).

    To comply with licensing requirements, the stimulus set is not available here on OpenNeuro (hence the invalid BIDS validation). See the GitHub repository (https://github.com/blahner/BOLDMomentsDataset) to download the stimulus set and stimulus set derivatives (like frames). To make this dataset perfectly BIDS compliant for use with other BIDS-apps, you may need to copy the 'stimuli' folder from the downloaded stimulus set into the parent directory.

    The derivatives folder contains all data derivatives, including the stimulus annotations (./derivatives/stimuli_metadata/annotations.json), model weight checkpoints for a TSM ResNet50 model trained on a subset of Multi-Moments in Time, and prepared beta estimates from two different fMRIPrep preprocessing pipelines (./derivatives/versionA and ./derivatives/versionB).

    VersionA was used in the main manuscript, and versionB is detailed in the manuscript's supplementary materials. If you are starting a new project, we highly recommend you use the prepared data in ./derivatives/versionB/ because of its better registration, use of GLMsingle, and availability in more standard/non-standard output spaces. Code used in the manuscript is located at the derivatives version level. For example, the code used in the main manuscript is located under ./derivatives/versionA/scripts. Note that versionA prepared data is very large due to beta estimates for 9 TRs per video. See this GitHub repo for starter code demonstrating basic usage and dataset download scripts: https://github.com/blahner/BOLDMomentsDataset. See this GitHub repo for the TSM ResNet50 model training and inference code: https://github.com/pbw-Berwin/M4-pretrained

    Data collection notes: All data collection notes explained below are detailed here for the purpose of full transparency and should be of no concern to researchers using the data; i.e., these inconsistencies have been addressed and integrated into the BIDS format as if the exceptions had not occurred. The correct pairings between field maps and functional runs are detailed in the .json sidecars accompanying each field map scan.

    Subject 2: Session 1: Subject repositioned head for comfort after the third resting state scan, approximately 1 hour into the session. New scout and field map scans were taken. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Session 4: Completed over two separate days due to subject feeling sleepy. All 3 testing runs and 6/10 training runs were completed on the first day, and the last 4 training runs were completed on the second day. Each of the two days for session 4 had its own field map. This did not interfere with session 5. All scans across both days belonging to session 4 were analyzed as if they were collected on the same day. In the case of applying a susceptibility distortion correction analysis, session 4 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 4: Sessions 1 and 2: The fifth (out of 5) localizer run from session 1 was completed at the end of session 2 due to a technical error. This localizer run therefore used the field map from session 2. In the case of applying a susceptibility distortion correction analysis, session 1 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.

    Subject 10: Session 5: Subject moved a lot to readjust earplug after the third functional run (1 test and 2 training runs completed). New field map scans were collected. In the case of applying a susceptibility distortion correction analysis, session 5 therefore has two sets of field maps, denoted by “run-1” and “run-2” in the filename. The “IntendedFor” field in the field map’s identically named .json sidecar file specifies which functional scans correspond to which field map.
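Since several sessions carry two sets of field maps, the `IntendedFor` sidecar field described above is the authoritative pairing. A hedged Python sketch for collecting those pairings (the glob pattern and function name are assumptions, not dataset code):

```python
import json
from pathlib import Path

def fieldmap_pairings(fmap_dir):
    """Collect, for each field map sidecar, the functional runs it corrects,
    using the BIDS `IntendedFor` field. Adjust the glob to actual filenames."""
    pairing = {}
    for sidecar in sorted(Path(fmap_dir).glob("*.json")):
        meta = json.loads(sidecar.read_text())
        intended = meta.get("IntendedFor", [])
        if isinstance(intended, str):  # BIDS allows a single string here
            intended = [intended]
        pairing[sidecar.name] = intended
    return pairing
```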

  3. Data from: Business Improvement Districts (BID)

    • visionzero.geohub.lacity.org
    • citysurvey-lacs.opendata.arcgis.com
    Updated Oct 26, 2018
    Cite
    GIS@LADCP (2018). Business Improvement Districts (BID) [Dataset]. https://visionzero.geohub.lacity.org/datasets/business-improvement-districts-bid
    Dataset updated
    Oct 26, 2018
    Dataset authored and provided by
    GIS@LADCP
    Description

    A business improvement district is a geographically defined area within the City of Los Angeles in which services, activities and programs are paid for through a special assessment charged to all members within the district, in order to equitably distribute the benefits received and the costs incurred in providing the agreed-upon services, activities and programs. Because the assessment funds collected in a given district cannot legally be spent outside of that BID, the City creates a trust fund for each BID, with funds periodically released to support operations. Additional information can be referenced from the Office of the City Clerk's BID website. The Neighborhood and Business Improvement District Division of the Office of the City Clerk manages the Business Improvement District Program and provides various types of assistance and information to interested parties.

    Refresh Rate: As Needed
    Last Updated: Sept 11, 2020

  4. Macaca mulatta and Macaca fascicularis anatomical and functional MRI data

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jul 22, 2024
    Cite
    Zheng Wang; Zheng Wang (2024). Macaca mulatta and Macaca fascicularis anatomical and functional MRI data [Dataset]. http://doi.org/10.5281/zenodo.3402113
    Available download formats: json, application/gzip, sh, zip, txt, pdf
    Dataset updated
    Jul 22, 2024
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Zheng Wang; Zheng Wang
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    The ION dataset includes anatomical, field map, and fMRI data from 8 monkeys: 4 Macaca mulatta and 4 Macaca fascicularis.

    The data is provided both as a directory structure compressed as all_the_data.zip, and as individual files. The correspondence between the directory structure and the individual files is contained in the file tree.json. The bash command source unflatten.sh can be used to convert the individual files into the original directory structure.
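If you prefer not to use the provided unflatten.sh, the same restructuring can be sketched in Python. The assumed tree.json schema here (a flat-filename to relative-path mapping) is a guess; inspect the actual file before relying on it:

```python
import json
import shutil
from pathlib import Path

def unflatten(tree_json, src_dir, dest_dir):
    """Recreate the original directory tree from individually downloaded files.

    Assumes tree.json maps flat filenames to relative paths inside the
    archive; the real schema may differ, so check the file first.
    """
    mapping = json.loads(Path(tree_json).read_text())
    for flat_name, rel_path in mapping.items():
        target = Path(dest_dir) / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(Path(src_dir) / flat_name, target)
```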

    Scanner Specifications

    • Siemens Tim Trio 3T whole-body scanner with or without head-only gradient insert (Siemens AC88)
    • 8-channel phased-array transceiver coils
    • Optimization of the magnetic field prior to data acquisition: Manual shimming

    Sample Description

    • Sample size: 8
    • Age distribution: 3.80-5.99 years
    • Weight distribution: 5.0-10.2 kg
    • Sex distribution: 7 male, 1 female

    The full sample description is available as a .csv download.

    Scan Procedures and Parameters

    Ethics approval: All experimental procedures for nonhuman primate research were approved by the Institutional Animal Care and Use Committee in the Institute of Neuroscience and by the Biomedical Research Ethics Committee, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, and conformed to National Institutes of Health guidelines for the humane care and use of laboratory animals.

    Animal care and housing: Animals were housed in single cages

    Any applicable training: none

    Scanning preparations

    Anesthesia procedures: Anesthesia was induced with an intramuscular injection of a cocktail of dexmedetomidine (18-30 µg/kg) and midazolam (0.2-0.3 mg/kg), supplemented with atropine sulfate (0.05 mg/kg). After intubation, anesthesia was maintained using the lowest possible concentration of isoflurane gas via an MRI-compatible ventilator.

    Time between anesthesia and scanning: Scanning lasted about 1.5 hours after induction.

    Head fixation: Custom-built MRI-compatible stereotaxic frame

    Position in scanner and procedure used: Sphinx position

    Contrast agent: none

    During scanning

    Physiological monitoring: Physiological parameters including blood oxygenation, ECG, rectal temperature, respiration rate and end-tidal CO2 were monitored. Oxygen saturation was kept over 95%.

    Additional procedures: Animals were ventilated by an MRI-compatible ventilator. Body temperature was kept constant using a hot-water blanket.

    Scan sequences

    • Resting-state:
      • Gradient-echo EPI
      • TR: 2000ms
      • TE: 29ms
      • Flip angle: 77°
      • Field of view: 96 x 96 mm
      • In plane resolution: 1.5 x 1.5 mm
      • Number of slices: 32
      • Slice thickness: 2.5mm
      • GRAPPA factor: 2
      • Measurements: 200
      • Slice direction: Coronal slice
    • Structural:
      • T1 MPRAGE Sequence
      • Voxel resolution: 0.5 x 0.5 x 0.5 mm
      • TE: 3.12ms
      • TR: 2500ms
      • TI: 1100ms
      • Flip angle: 9°
      • Slice direction: 44 sagittal slices
      • Number of averages: 2
    • Additional:
      • Field map: a pair of gradient echo images
      • TE1: 4.22ms
      • TE2: 6.68ms
      • Orientation and resolution: same as resting-state images
      • Intended for EPI distortion correction

    Publications

    • Lv, Q., Yang, L., Li, G., Wang, Z., Shen, Z., Yu, W., Jiang, Q., Hou, B., Pu, J., Hu, H., & Wang, Z. (2016). Large-Scale Persistent Network Reconfiguration Induced by Ketamine in Anesthetized Monkeys: Relevance to Mood Disorders. Biological Psychiatry, 79(9), 765–775. https://doi.org/10.1016/j.biopsych.2015.02.028

    Personnel

    Zheng Wang1

    1Institute of Neuroscience, Key Laboratory of Primate Neurobiology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, China

    Acknowledgements

    The authors thank Drs. Lawrence Wald, Ravi Menon, John Gore, Franz Schmitt, Renate Jerecic, Thomas Benner, Kecheng Liu, Ignacio Vallines, and Hui Liu for their generous help and contribution to the construction of our custom-tuned gradient-insert (AC88) 3T MRI facility for nonhuman primate subjects.

    Funding

    • Hundred Talent Program of Chinese Academy of Sciences (Technology) (Zheng Wang)
    • Chinese 973 Program (2011CBA00400)
    • The Strategic Priority Research Program (B) of the Chinese Academy of Sciences (XDB02030004)
    • The Outstanding Youth Grant (Hailan Hu)

    Detailed information can be found at http://fcon_1000.projects.nitrc.org/indi/PRIME/ion.html.

  5. Single-echo/multi-echo comparison pilot

    • openneuro.org
    Updated Jun 20, 2024
    Cite
    Taylor Salo; Dylan Tisdall; Lia Brodrick; Adam Czernuszenko; David Roalf; Sage Rush-Goebel; Nick Wellman; Ted Satterthwaite (2024). Single-echo/multi-echo comparison pilot [Dataset]. http://doi.org/10.18112/openneuro.ds005250.v1.0.0
    Dataset updated
    Jun 20, 2024
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    Taylor Salo; Dylan Tisdall; Lia Brodrick; Adam Czernuszenko; David Roalf; Sage Rush-Goebel; Nick Wellman; Ted Satterthwaite
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Single-echo/multi-echo comparison pilot

    This dataset contains ABCD-protocol single-echo BOLD scans, along with complex-valued, multi-echo BOLD scans for comparison. The multi-echo BOLD protocol uses the CMRR MB-EPI sequence, and comes from collaborators at UMinn. These scans include five echoes with both magnitude and phase reconstruction.

    The primary goal of this dataset was to evaluate the usability of the multi-echo fMRI protocol in a larger study, via direct comparison to the ABCD fMRI protocol, as well as via test-retest reliability analyses. However, these data may be useful to others (e.g., for testing complex-valued models, applying phase regression to multi-echo data, testing multi-echo denoising methods).

    Dataset contents

    This dataset includes 8 participants, each with between 1 and 3 sessions. MR data were acquired using a 3-Tesla Siemens Prisma MRI scanner.

    The imaging data were converted to NIfTI-1 format with dcm2niix v1.0.20220505, using heudiconv 0.13.1.

    In each session, the following scans were acquired:

    Structural data

    A T1-weighted anatomical scan (256 slices; repetition time, TR=1900 ms; echo time, TE=2.93 ms; flip angle, FA=9 degrees; field of view, FOV=176x262.144 mm, matrix size=176x256; voxel size=1x0.977x0.977 mm).

    Functional data

    One run of Penn fractal n-back task five-echo fMRI data (72 slices; repetition time, TR=1761 ms; echo times, TE=14.2, 38.93, 63.66, 88.39, 113.12 ms; flip angle, FA=68 degrees; field of view, FOV=220x220 mm, matrix size=110x110; voxel size=2x2x2 mm; multiband acceleration factor=6). Both magnitude and phase data were reconstructed for this run. The run was 7:03 minutes in length, including the three no-radiofrequency-excitation volumes at the end. After the _noRF volumes were split into separate files, each run was 6:58 minutes long.

    Two runs of open-eye resting-state five-echo fMRI data (72 slices; repetition time, TR=1761 ms; echo times, TE=14.2, 38.93, 63.66, 88.39, 113.12 ms; flip angle, FA=68 degrees; field of view, FOV=220x220 mm, matrix size=110x110; voxel size=2x2x2 mm; multiband acceleration factor=6). Both magnitude and phase data were reconstructed for these runs. Each run was 5:59 minutes in length, including the three no-radiofrequency-excitation volumes at the end. After the _noRF volumes were split into separate files, each run was 5:54 minutes long.
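The reported run durations are consistent with the three no-RF volumes: at TR = 1761 ms, splitting them off removes about 5 seconds, matching both the 7:03 to 6:58 and the 5:59 to 5:54 differences. A quick check:

```python
TR_S = 1.761        # multi-echo repetition time (1761 ms) quoted above
NORF_VOLUMES = 3    # no-RF noise volumes appended to each multi-echo run

trimmed_s = NORF_VOLUMES * TR_S  # time removed when _noRF volumes are split off
assert round(trimmed_s) == 5     # ~5.28 s; reported durations round to 5 s
```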

    Two runs of open-eye resting-state single-echo fMRI data acquired according to the ABCD protocol (60 slices; repetition time, TR=800 ms; echo time, TE=30 ms; flip angle, FA=52 degrees; field of view, FOV=216x216 mm, matrix size=90x90; voxel size=2.4x2.4x2.4 mm; multiband acceleration factor=6). Only magnitude data were reconstructed for these runs. Each run was 6:00 minutes in length.

    Field maps

    Two sets of field maps were acquired for the multi-echo fMRI scans.

    One set was a multiband, multi-echo gradient echo PEpolar-type field map (acq-MEGE), acquired with the same parameters as the multi-echo fMRI scans (except without magnitude+phase reconstruction). For each acquisition, we have created a copy of the single-band reference image from the first echo as the primary field map.

    The other set was a multi-echo spin-echo PEpolar-type field map (acq-MESE). We have also created a copy of the first echo for each direction as a standard field map.

    The single-echo copies of both the acq-MEGE and the acq-MESE field maps have B0FieldIdentifier fields and IntendedFor fields, though we used the acq-MESE field maps for the B0FieldSource fields of the multi-echo fMRI scans. Therefore, tools which leverage the B0* fields, such as fMRIPrep, should use the single-echo acq-MESE scans for distortion correction.

    Single-echo PEpolar-type EPI field maps (acq-SESE) with parameters matching the single-echo fMRI data were also acquired for distortion correction.

    Dataset idiosyncrasies

    Multi-echo field maps

    There are two sets of PEpolar-style field maps for the multi-echo BOLD scans: one gradient echo and one spin echo. Each field map set contains five echoes, like the BOLD scans. However, because distortion shouldn't vary across echoes (at least not at 3T), there is no need for multi-echo PEpolar-style field maps, and tools like fMRIPrep can't use them. As such, we have made a copy of the spin echo field map's first echo without the echo entity for BIDS compliance, as well as a copy of the gradient echo field map's first echo's single-band reference image.
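The compliance workaround described above (copying the first echo to a name without the echo entity) can be sketched as follows. The function name is illustrative; this is not the dataset's actual curation script:

```python
import re
import shutil
from pathlib import Path

def single_echo_copy(first_echo_path, dest_dir):
    """Copy a multi-echo field map's first echo to a BIDS-compliant name
    with the echo entity removed, mirroring the workaround described above."""
    src = Path(first_echo_path)
    target = Path(dest_dir) / re.sub(r"_echo-\d+", "", src.name)
    shutil.copy2(src, target)
    return target
```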

    No radio frequency excitation scans

    The multi-echo BOLD scans included three no-radio-frequency noise scans acquired at the end of the scan, which have been split off into files with the _noRF suffix. These noise scans can be used to suppress thermal noise with NORDIC denoising. BOLD runs that were stopped early or failed to fully reconstruct may be missing these noise scans.

    The _noRF suffix is not (as of 2024/03/22) supported within BIDS, but there is an open pull request adding it (https://github.com/bids-standard/bids-specification/pull/1451).

    NORDIC-denoised BOLD runs

    We have run NORDIC on the multi-echo scans, using the noRF files when available. The NORDIC-denoised data have the rec-nordic entity in the filenames.

    We have made copies of the associated single-band reference images as well.

    sub-08 ses-noHC

    Subject 08's ses-noHC was accidentally acquired without the head coil plugged in. We included the session in the dataset in case anyone might find it useful, but do not recommend using the data for analyses.

    sub-04 ses-2 and ses-3

    Subject 04 had to stop session 2 early, so a separate session was acquired to finish acquiring the remaining scans.

    Excluded data

    Physio (PPG + chest belt) data were acquired for a subset of the scans, but, due to equipment issues, the data were unusable and have been excluded from the dataset.

    There was also an MEGRE field map sequence in the protocol, provided by Dr. Andrew Van, but there were reconstruction errors at the scanner, so these field maps were not usable. We've chosen to exclude the remaining files from the dataset.

    In some cases, we noticed reconstruction errors on final volumes in the multi-echo BOLD runs. When that happened, we dropped any trailing volumes, so that all files from a given run are the same length. For some runs, this involved entirely removing the noRF scans.

    Penn Fractal N-Back events files

    The events files for the fractal n-back task are not included in version 1.0.0 of the dataset. We will add them in a future patch release.

    Notes about acquisition

    The multi-echo BOLD scans were acquired on a 3T Siemens Prisma scanner running VE11C. The same protocol has exhibited consistent reconstruction errors on XA30.

  6. London Plan Business Improvement Districts

    • data.europa.eu
    Updated May 24, 2017
    Cite
    GLA GIS team (2017). London Plan Business Improvement Districts [Dataset]. https://data.europa.eu/88u/dataset/london-plan-business-improvement-districts-1
    Dataset updated
    May 24, 2017
    Dataset authored and provided by
    GLA GIS team
    Description

    A Business Improvement District (BID) is a defined area within which local businesses are required to pay an additional levy. The collected tax will be invested locally to fund projects within the district's boundaries. The BID is often funded primarily through the levy but can also draw on other public and private funding streams. There are currently 70 London BIDs with new ones being developed every year.

    Note: The dataset was created by the GLA's GIS team through digitisation of maps published by the BID creators. These original maps vary in quality and scale of resolution; as a result, the GLA cannot guarantee the exact boundary locations.

    There is an inconsistency between the number of BID polygons and the number of BIDs; this is because some BID polygons have multiple BID types, e.g. Town Centre, Property and Industrial.

    Please direct any queries you may have to the contacts below.

  7. City Centre Litter Bin Survey DCC - Dataset - data.gov.ie

    • data.gov.ie
    Cite
    data.gov.ie (2011). City Centre Litter Bin Survey DCC - Dataset - data.gov.ie [Dataset]. https://data.gov.ie/dataset/dublin-city-centre-litter-bin-survey
    Dataset updated
    Oct 14, 2011
    Dataset provided by
    data.gov.ie
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Survey of the location of street litter bins in Dublin City Centre, including the Central Business District (CBD) and part of the South-East administrative area of the city. This 2008 survey was carried out in the CBD and extended to include part of the South-East administrative area. The CBD was selected as it has a higher concentration of street bins in an area with a high concentration of commercial, office and public buildings. It also comprises 97% of the city's Business Improvement District (BIDS) area. The CBD comprises 5% of the area of the city within the City Boundary and, with 1,000 litter bins, is estimated to hold 22% of the city's litter bins.

    It was decided to link the survey to the administrative and constituency boundaries in the city: street name, District Electoral Division (DED), Dublin City Council administrative areas and Local Electoral Area (Committee). Each bin is assigned a 7-digit identity number. The first four digits are the published DED number, which relates to the respective DED and is therefore shared by all bins within a DED; Dublin City DED, LEA and DCC area maps are also available as open data. The last 3 digits are unique to the bin within its DED.

    There are a total of 30 items of information, including location and description. The location data, i.e. DED, will facilitate transfer of data between constituencies and areas in the event of boundary changes. Information fields include bin type, advertising space, cigarette butt box, location description (footpath, residential, retail/shop, school, public building, park or other) and adjoining Local Authority (shared roads). Bin types for the survey were divided into eight categories:

    A. Grey Round and Curved Top
    B. Grey Round and Curved Top
    C. Blue Square Triangular Top
    D. Blue Square
    E. Black Round
    F. Silver Round
    G. Black Round with Wide base
    H. Black Round with Narrow base

    Spatial projection: two sets of co-ordinates, Irish Grid (NG Eastings and Northings) and Irish Transverse Mercator (ITM Eastings and Northings), are given for each bin.
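    The 7-digit identity number described above can be decomposed programmatically. A minimal sketch; the example identity is hypothetical:

```python
def parse_bin_id(bin_id: str) -> tuple[str, str]:
    """Split a 7-digit bin identity into the 4-digit published DED number
    and the 3-digit serial that is unique within that DED."""
    if len(bin_id) != 7 or not bin_id.isdigit():
        raise ValueError("expected a 7-digit numeric bin identity")
    return bin_id[:4], bin_id[4:]

# Hypothetical bin identity: DED 1234, bin 056 within that DED.
ded, serial = parse_bin_id("1234056")
```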

  8. CoSpine Database_pain_dataset

    • openneuro.org
    Cite
    Zhaoxing Wei; Xiaomin Lin; Lingfei Guo; Jixin Liu; Li Hu; Yaou Liu; Yazhuo Kong (2025). CoSpine Database_pain_dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005883.v1.1.0
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Zhaoxing Wei; Xiaomin Lin; Lingfei Guo; Jixin Liu; Li Hu; Yaou Liu; Yazhuo Kong
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Simultaneous cortico-spinal fMRI (CoSpine) Pain-dataset

    Version: v1.1.0
    Date: 2025-06-19
    BIDS-compliant

    1. Dataset overview

    This is the first open-access, BIDS-compliant cortico-spinal task-based fMRI dataset. It enables exploration of cortical and spinal responses to thermal pain in healthy adults by providing simultaneous brain-and-spinal fMRI, physiological recordings, and behavioural ratings.

    • Task: 4 s thermal pain stimuli followed by self-report ratings
    • Modalities: fMRI (task-pain BOLD), T1-weighted structural MRI, field maps (GRE magnitude/phase, reversed phase-encoding B0), physiological signals (pulse & respiration), events/behavioural logs
    • Subjects: 39

    2. Experimental design

    • Pain stimulus: contact thermode, temperatures 42–47 °C (subject-specific)
    • Ratings:
      • PR – pain intensity (0–10 VAS)
      • UpR – unpleasantness (0–10 VAS)

    Full stimulus & rating onset/duration are recorded in each *_events.tsv.
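    The events files can be read with any TSV parser. A minimal sketch using only the standard library; the rows below are hypothetical, mirroring the documented columns:

```python
import csv
import io

# Hypothetical *_events.tsv content following the documented columns.
tsv = (
    "onset\tduration\ttrial_type\tPR\tUpR\n"
    "12.0\t4.0\tpain\tn/a\tn/a\n"
    "20.5\t6.0\trating\t7\t6\n"
)
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
# Onsets of the 4 s heat stimuli.
pain_onsets = [float(r["onset"]) for r in rows if r["trial_type"] == "pain"]
```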

    3. Data acquisition

    • Scanner: 3 T Siemens Prisma (Erlangen, Germany)
    • Coil: Standard 64-channel head-neck coil

    4. Dataset structure

    dataset/
    ├── dataset_description.json
    ├── README
    ├── task-pain_events.json
    ├── participants.tsv + .json
    ├── sub-01
    │  ├── anat
    │  │  ├── sub-01_T1w.json
    │  │  └── sub-01_T1w.nii.gz
    │  ├── fmap
    │  │  ├── sub-01_acq-GRE_magnitude1.json
    │  │  ├── sub-01_acq-GRE_magnitude1.nii.gz
    │  │  ├── sub-01_acq-GRE_magnitude2.json
    │  │  ├── sub-01_acq-GRE_magnitude2.nii.gz
    │  │  ├── sub-01_acq-GRE_phase1.json
    │  │  ├── sub-01_acq-GRE_phase1.nii.gz
    │  │  ├── sub-01_dir-AP_epi.json
    │  │  ├── sub-01_dir-AP_epi.nii.gz
    │  │  ├── sub-01_dir-PA_epi.json
    │  │  └── sub-01_dir-PA_epi.nii.gz
    │  └── func
    │    ├── sub-01_task-pain_bold.json
    │    ├── sub-01_task-pain_bold.nii.gz
    │    ├── sub-01_task-pain_dicom001_anonymized.IMA
    │    ├── sub-01_task-pain_events.tsv
    │    ├── sub-01_task-pain_recording-pulse_physio.json
    │    ├── sub-01_task-pain_recording-pulse_physio.tsv.gz
    │    ├── sub-01_task-pain_recording-respiratory_physio.json
    │    └── sub-01_task-pain_recording-respiratory_physio.tsv.gz
    ├── derivatives/
    │  └── preprocessed/
    │    ├── dataset_description.json
    │    └── sub-01/
    

    5. Column & label definitions

    *_events.tsv

    • trial_type: pain = heat stimulus; rating = VAS response period
    • PR: pain intensity (0–10)
    • UpR: unpleasantness (0–10)

    participants.tsv

    • age: years
    • sex: m / f
    • education: years
    • heat_temperature: °C
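    For instance, the mean stimulation temperature across participants can be computed directly from participants.tsv. A minimal sketch with hypothetical rows matching the documented columns:

```python
import csv
import io

# Hypothetical participants.tsv rows following the documented columns.
tsv = (
    "participant_id\tage\tsex\teducation\theat_temperature\n"
    "sub-01\t25\tf\t16\t45.5\n"
    "sub-02\t31\tm\t18\t43.0\n"
)
rows = list(csv.DictReader(io.StringIO(tsv), delimiter="\t"))
mean_temp = sum(float(r["heat_temperature"]) for r in rows) / len(rows)
```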

    6. Quality control / preprocessing

    • Structural images have been defaced to protect participant identity.
    • Optimized preprocessing pipeline applied using:
      • FSL 6.0.7
      • Spinal Cord Toolbox 5.9

    Includes distortion correction (TOPUP) and physiological noise correction.
    Preprocessed data available in derivatives/preprocessed/.

    Contact

    • Zhaoxing Wei – zhaoxing.wei@dartmouth.edu
    • Yazhuo Kong – kongyz@psych.ac.cn
  9. Data from: A novel interface for rt-fMRI neurofeedback using music

    • data.niaid.nih.gov
    Cite
    Pereira, João (2025). A novel interface for rt-fMRI neurofeedback using music [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14727260
    Dataset updated
    Feb 25, 2025
    Dataset provided by
    Sousa, Teresa
    Pereira, João
    Castelo-Branco, Miguel
    Sayal, Alexandre
    Direito, Bruno
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A novel interface for rt-fMRI neurofeedback using music

    This dataset was acquired with the goal of validating a musical interface for real-time fMRI neurofeedback that is adaptable to various experimental paradigms. Using a previously explored motor imagery connectivity-based framework, we evaluate its feasibility and efficacy by comparing the modulation of bilateral premotor cortex (PMC) activity during functional runs with real versus sham (random) feedback. The experiment involves a 60-minute MRI session, including anatomical scans, a PMC localizer run, and four functional runs (two with active feedback and two with sham feedback). Pre- and post-session questionnaires assess mood, musical background, and subjective feedback experiences. During functional runs, participants practice motor imagery of finger-tapping, with feedback delivered as a dynamic, pre-validated chord progression that evolves or regresses based on the correlation between left and right PMC activity.

    fMRI dataset specifications:

    BIDS

    22 subjects

    T1w, T2w, defaced using pydeface

    GRE fieldmaps, one per subject

    4 Neurofeedback runs, two active and two sham

    Study pre-registration: OSF Registries

  10. City Owned Parcels: Live

    • hub.arcgis.com
    • arc-gis-hub-home-arcgishub.hub.arcgis.com
    Cite
    Open_Data_Admin (2020). City Owned Parcels: Live [Dataset]. https://hub.arcgis.com/datasets/RochesterNY::city-owned-parcels-live
    Dataset updated
    Mar 9, 2020
    Dataset authored and provided by
    Open_Data_Admin
    Description

    Please note: this data is live (updated nightly) to reflect the latest changes in the City's systems of record.

    Overview of the Data: This dataset is a polygon feature layer with the boundaries of all tax parcels owned by the City of Rochester. This includes all public parks and municipal buildings, as well as vacant land and structures currently owned by the City. The data includes fields describing each property, including property type, date of sale, land value, dimensions, and more.

    About City Owned Properties: The City's real estate inventory is managed by the Division of Real Estate in the Department of Neighborhood and Business Development. Properties like municipal buildings and parks are expected to remain in long-term ownership of the City. Properties such as vacant land and vacant structures are ones the City is actively seeking to reposition for redevelopment to increase the City's tax base and economic activity. The City acquires many of these properties through the tax foreclosure auction process when no private entity bids the minimum bid. Some of these properties stay in the City's ownership for years, while others are quickly sold to development partners. For more information please visit the City's webpage for the Division of Real Estate: https://www.cityofrochester.gov/realestate/

    Data Dictionary:

    • SBL: The twenty-digit unique identifier assigned to a tax parcel.
    • PRINTKEY: A unique identifier for a tax parcel, typically in the format of "Tax map section – Block – Lot".
    • Street Number: The street number where the tax parcel is located.
    • Street Name: The street name where the tax parcel is located.
    • NAME: The street number and street name for the tax parcel.
    • City: The city where the tax parcel is located.
    • Property Class Code: The standardized code to identify the type and/or use of the tax parcel. For a full list of codes, view the NYS Real Property System (RPS) property classification codes guide.
    • Property Class: The name of the property class associated with the property class code.
    • Property Type: The type of property associated with the property class code. There are nine different types of property according to RPS: 100: Agricultural; 200: Residential; 300: Vacant Land; 400: Commercial; 500: Recreation & Entertainment; 600: Community Services; 700: Industrial; 800: Public Services; 900: Wild, forested, conservation lands and public parks.
    • First Owner Name: The name of the property owner of the tax parcel. If there are multiple owners, then the first one is displayed.
    • Postal Address: The USPS postal address for the landowner.
    • Postal City: The USPS postal city, state, and zip code for the landowner.
    • Lot Frontage: The length (in feet) of how wide the lot is across the street.
    • Lot Depth: The length (in feet) of how far the lot goes back from the street.
    • Stated Area: The area of the tax parcel.
    • Current Land Value: The current value (in USD) of the land on the tax parcel.
    • Current Total Assessed Value: The current value (in USD) assigned by a tax assessor, which takes into consideration the land value, buildings on the land, etc.
    • Current Taxable Value: The amount (in USD) of the assessed value that can be taxed.
    • Tentative Land Value: The current value (in USD) of the land on the tax parcel, subject to change based on appeals, reassessments, and public review.
    • Tentative Total Assessed Value: The preliminary estimate (in USD) of the tax parcel's assessed value, which includes tentative land value and tentative improvement value.
    • Tentative Taxable Value: The preliminary estimate (in USD) of the tax parcel's value used to calculate property taxes.
    • Sale Date: The date (MM/DD/YYYY) when the tax parcel was sold.
    • Sale Price: The price (in USD) that the tax parcel was sold for.
    • Book: The record book that the property deed or sale is recorded in.
    • Page: The page in the record book where the property deed or sale is recorded.
    • Deed Type: The type of deed associated with the tax parcel sale.
    • RESCOM: Notes whether the tax parcel is zoned for residential (R) or commercial (C) use.
    • BISZONING: Notes the zoning district the tax parcel is in. For more information on zoning, visit the City's Zoning District map.
    • OWNERSHIPCODE: Code noting the type of ownership (if applicable).
    • Number of Residential Units: Notes how many residential units are on the tax parcel (if applicable).
    • LOW_STREET_NUM: The lowest street number of the tax parcel.
    • HIGH_STREET_NUM: The highest street number of the tax parcel.
    • GISEXTDATE: The date and time when the data was last updated.
    • SALE_DATE_datefield: The recorded date of sale of the tax parcel (if available).

    Source: This data comes from the Department of Neighborhood and Business Development, Bureau of Real Estate.

  11. whole-spine

    • openneuro.org
    Cite
    Nathan Molinier; Sandrine Bédard; Mathieu Boudreau; Julien Cohen-Adad; Virginie Callot; Eva Alonso-Ortiz; Charles Pageot; Nilser Laines-Medina (2025). whole-spine [Dataset]. http://doi.org/10.18112/openneuro.ds005616.v1.1.1
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Nathan Molinier; Sandrine Bédard; Mathieu Boudreau; Julien Cohen-Adad; Virginie Callot; Eva Alonso-Ortiz; Charles Pageot; Nilser Laines-Medina
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Whole-Spine Anatomical MRI dataset & B0 simulations

    Dataset Description

    This dataset includes structural MRI (T1-weighted) and simulated ΔB0 field maps for sixty volunteers. Participants were scanned using two Siemens 3T MRI scanners (MAGNETOM Tim Trio and Verio) equipped with head, neck, and spine coils. The scans cover anatomical regions extending from the head to the torso, including the lateral torso and most of both lungs.

    All data is organized in BIDS format and is available on OpenNeuro.

    Participants

    • Total Participants: 60
    • Males: 32
    • Females: 18
    • Undisclosed sex: 10
    • Age: Mean = 27.1 years, SD = 6.5, Range = 21-56 years
    • Weight: Mean = 66.7 kg, SD = 9.5, Range = 45-90 kg
    • Height: Mean = 175.6 cm, SD = 8.8, Range = 155-192 cm

    MRI Acquisition

    • Scanner Models: Siemens MAGNETOM Tim Trio and Verio (3T)
    • Coils Used: Head, neck, and spine coils
    • Structural Images: T1-weighted MPRAGE
    • Resolution: 1 mm isotropic
    • Field of View (FOV): From head to torso, including lateral regions of both lungs

    Data Processing

    Structural Data Segmentation

    1. Automated Segmentation Tools:

      • TotalSegmentator MRI: Used for segmenting the full body, sinuses, trachea, ear canal, and lungs, based on training with 10 manually segmented subjects.
      • Samseg: Used for segmenting brain, eyes, and skull.
      • TotalSpineSeg: Used for segmenting spinal cord, vertebrae, and intervertebral disks.
    2. Post-Processing Steps:

      • Tissue islands were removed, holes were closed, and tissue masks for specific regions (skull, brain, eyes, sinus, and ear canal) were smoothed using a custom pipeline (GitHub repo, release v1.1, commit: 4f3c471db542fa9b12f308aaeece401323980965).
      • Tissue masks were then merged into a single NIfTI file with the following voxel assignments: background (air), body, brain, spine, lungs, skull, trachea, sinus, ear canal, and eyes.

    Susceptibility Assignment

    Each anatomical label in the segmentation volumes was assigned a specific susceptibility value (χ) as defined in this Github repository:

    • Air: 0.35 ppm
    • Sinus & Ear Canals: -2 ppm
    • Trachea & Lungs: -4.2 ppm
    • Brain: -9.04 ppm
    • Body & Eyes: -9.05 ppm
    • Spinal Canal & Disks: -9.055 ppm
    • Skull & Vertebrae: -11 ppm
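    Building the input for the simulation amounts to replacing each tissue label with its χ value. A minimal numpy sketch; the integer label codes here are hypothetical, since the dataset defines its own voxel assignments:

```python
import numpy as np

# Hypothetical label codes mapped to the documented χ values (ppm).
CHI_PPM = {
    0: 0.35,    # background (air)
    1: -9.05,   # body
    2: -9.04,   # brain
    3: -9.055,  # spinal canal & disks
    4: -4.2,    # trachea & lungs
    5: -11.0,   # skull & vertebrae
}

def labels_to_chi(labels: np.ndarray) -> np.ndarray:
    """Replace each integer tissue label with its susceptibility value (ppm)."""
    chi = np.zeros(labels.shape, dtype=float)
    for code, value in CHI_PPM.items():
        chi[labels == code] = value
    return chi

chi_map = labels_to_chi(np.array([[0, 1], [4, 5]]))
```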

    Field Map Simulation

    Field maps (ΔB0) were generated by applying a convolution in the Fourier domain between the susceptibility maps and an analytical dipole distribution. Key parameters:

    • Implementation: Python (GitHub repo)
    • Padding:
      • Edge-value padding applied on five volume surfaces
      • Constant-value padding applied on the dorsal surface
      • Padding Size: 50 voxels per surface
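    The Fourier-domain dipole convolution can be sketched as follows. This is a simplified numpy illustration of the k-space dipole multiplication, not the dataset's actual implementation (which lives in the linked GitHub repo, with the padding described above):

```python
import numpy as np

def simulate_b0_hz(chi_ppm, voxel_mm=(1.0, 1.0, 1.0), b0_tesla=3.0):
    """Simulate a ΔB0 map (Hz) from a susceptibility map (ppm) by
    multiplying with the analytical dipole kernel in k-space."""
    gamma = 42.576e6  # proton gyromagnetic ratio, Hz/T
    freqs = [np.fft.fftfreq(n, d=d) for n, d in zip(chi_ppm.shape, voxel_mm)]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        dipole = 1.0 / 3.0 - kz**2 / k2  # B0 assumed along z
    dipole[k2 == 0] = 0.0  # undefined DC term set to zero
    db0_ppm = np.real(np.fft.ifftn(np.fft.fftn(chi_ppm) * dipole))
    return db0_ppm * 1e-6 * gamma * b0_tesla

# A uniform susceptibility distribution produces no internal field offset.
flat = simulate_b0_hz(np.full((8, 8, 8), -9.05))
```

    Only susceptibility differences between tissues (e.g. air–tissue interfaces at the lungs and sinuses) generate field perturbations, which is why a uniform volume yields zero.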

    Dataset Files and Structure

    This dataset is organized according to the BIDS format. Key directories and files include:

    • /sub-
    • /derivatives: Includes simulated ΔB0 field maps and segmentation labels
  12. Data from: A large-scale fMRI dataset for human action recognition

    • openneuro.org
    Cite
    Ming Zhou; Zhengxin Gong; Yuxuan Dai; Yushan Wen; Youyi Liu; Zonglei Zhen (2023). A large-scale fMRI dataset for human action recognition [Dataset]. http://doi.org/10.18112/openneuro.ds004488.v1.1.1
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Ming Zhou; Zhengxin Gong; Yuxuan Dai; Yushan Wen; Youyi Liu; Zonglei Zhen
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary

    Human action recognition is one of our critical living abilities, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few categories of actions from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representational information about the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.

    Data record

    The data were organized according to the Brain-Imaging-Data-Structure (BIDS) Specification version 1.7.0 and can be accessed from the OpenNeuro public repository (accession number: ds004488). The raw data of each subject were stored in "sub-<ID>" directories. The preprocessed volume data and the derived surface-based data were stored in "derivatives/fmriprep" and "derivatives/ciftify" directories, respectively. The video clip stimuli were stored in the "stimuli" directory.

    Video clip stimuli

    The video clip stimuli selected from HACS are deposited in the "stimuli" folder. Each of the 180 action categories holds a folder in which 120 unique video clips are stored.

    Raw data

    The data for each participant are distributed in three sub-folders: the "anat" folder for the T1 MRI data, the "fmap" folder for the field map data, and the "func" folder for the functional MRI data. The events file in each "func" folder contains the onset, duration, and trial type (category index) for each scanning run.

    Preprocessed volume data from fMRIPrep

    The preprocessed volume-based fMRI data are in each subject's native space, saved as "sub-

    Preprocessed surface data from ciftify

    Under the "results" folder, the preprocessed surface-based data are saved in standard fsLR space, named as "sub-

  13. Postnatal Affective MRI Dataset

    • openneuro.org
    Cite
    PhD Heidemarie Laurent; Megan K. Finnegan; Katherine Haigler (2020). Postnatal Affective MRI Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds003136.v1.0.0
    Dataset updated
    Sep 12, 2020
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    PhD Heidemarie Laurent; Megan K. Finnegan; Katherine Haigler
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Postnatal Affective MRI Dataset

    Authors

    Heidemarie Laurent, Megan K. Finnegan, and Katherine Haigler

    The Postnatal Affective MRI Dataset (PAMD) includes MRI and psych data from 25 mothers at three months postnatal, with further psych data collected at three additional timepoints (six, twelve, and eighteen months postnatal). Mother-infant dyad psychosocial tasks and cortisol samples were also collected at all four timepoints, but these data are not included in this dataset. In-scanner tasks involved viewing own- and other-infant affective videos and viewing and labeling adult affective faces. This repository includes de-identified MRI, in-scanner task, demographic, and psych data from this study.

    Citation

    Laurent, H., Finnegan, M. K., & Haigler, K. (2020). Postnatal Affective MRI Dataset. OpenNeuro. Retrieved from OpenNeuro.org.

    Acknowledgments

    Saumya Agrawal was instrumental in getting the PAMD dataset into a BIDS-compliant structure.

    Funding

    This work was supported by the Society for Research in Child Development Victoria Levin Award "Early Calibration of Stress Systems: Defining Family Influences and Health Outcomes" to Heidemarie Laurent and by the University of Oregon College of Arts and Sciences.

    Contact

    For questions about this dataset or to request access to alcohol- and tobacco-related psych data, please contact Dr. Heidemarie Laurent, hlaurent@illinois.edu.

    References

    Laurent, H. K., Wright, D., & Finnegan, M. K. (2018). Mindfulness-related differences in neural response to own-infant negative versus positive emotion contexts. Developmental Cognitive Neuroscience, 30, 70-76. https://doi.org/10.1016/j.dcn.2018.01.002

    Finnegan, M. K., Kane, S., Heller, W., & Laurent, H. (2020). Mothers' neural response to valenced infant interactions predicts postnatal depression and anxiety. PLoS One (under review).

    MRI Acquisition

    The PAMD dataset was acquired in 2015 at the University of Oregon Robert and Beverly Lewis Center for Neuroimaging with a 3T Siemens Allegra 3 magnet. A standard 32-channel phase array birdcage coil was used to acquire data from the whole brain. Sessions began with a shimming routine to optimize signal-to-noise ratio, followed by a fast localizer scan (FISP) and Siemens Autoalign routine, a field map, then the 4 functional runs and anatomical scan.

    Anatomical: T1-weighted 3D MPRAGE sequence, TI=1100 ms, TR=2500 ms, TE=3.41 ms, flip angle=7°, 176 sagittal slices, 1.0 mm thick, 256×176 matrix, FOV=256 mm.

    Fieldmap: gradient echo sequence, TR=0.4 ms, TE=0.00738 ms, deltaTE=2.46 ms, 4 mm thick, 64×64×32×2 matrix.

    Task: T2-weighted gradient echo sequence, TR=2000 ms, TE=30 ms, flip angle=90°, 32 contiguous slices acquired ascending and interleaved, 4 mm thick, 64×64 voxel matrix, 226 vols per run.

    Participants

    Mothers (n=25) of 3-month-old infants were recruited from the Women, Infants, and Children program and other community agencies serving low-income women in a midsize Pacific Northwest city. Mothers' ages ranged from 19 to 33 (M=26.4, SD=3.8). Most mothers were Caucasian (72%; 12% Latina, 8% Asian American, 8% other) and married or living with a romantic partner (88%). Although most reported some education past high school (84%), only 24% had completed college or received a graduate degree, and their median household income was between $20,000 and $29,999. For more than half of the mothers (56%), this was their first child (36% second child, 8% third child). Most infants were born on time (4% before 37 weeks and 8% after 41 weeks of pregnancy), and none had serious health problems. A vaginal delivery was reported by 56% of mothers, with 88% breastfeeding and 67% bed-sharing with their infant at the time of assessment. Over half of the mothers (52%) reported having engaged in some form of contemplative practice (mostly yoga; only 8% indicated some form of meditation), and 31% reported currently engaging in that practice. All women gave informed consent prior to participation, and all study procedures were approved by the University of Oregon Institutional Review Board. Due to a task malfunction, participant 178's scanning session was split over two days, with the anatomical acquired in ses-01 and the field maps and tasks acquired in ses-02.

    Study overview

    Mothers visited the lab to complete assessments at four timepoints postnatal: the first session occurred when mothers were approximately three months postnatal (T1), the second session at approximately six months postnatal (T2), the third session at approximately twelve months postnatal (T3), and the fourth and last session at approximately eighteen months postnatal (T4). MRI scans were acquired shortly after the first session (T1).

    Assessment data

    Assessments collected during sessions include demographic, relationship, attachment, mental health, and infant-related questionnaires. For a full list of included measures and the timepoints at which they were acquired, please refer to PAMD_codebook.tsv in the phenotype folder. Data have been made available in the phenotype folder as 'PAMD_T1_psychdata', 'PAMD_T2_psychdata', 'PAMD_T3_psychdata', and 'PAMD_T4_psychdata'. To protect participants' privacy, all identifiers and questions relating to drugs or alcohol have been removed. If you would like access to drug- and alcohol-related questions, please contact the principal investigator, Dr. Heidemarie Laurent, to request access. Assessment data will be uploaded shortly.

    Post-scan ratings

    After the scan session, mothers watched all of the infant videos and rated the infant's and their own emotional valence and intensity for each video. For valence, mothers were asked "In this video clip, how positive or negative is your baby's emotion?" and "While watching this video clip, how positive or negative is your emotion?" from -100 (negative) to +100 (positive). For emotional intensity, mothers were asked "In this video clip, how intense is your baby's emotion?" and "While watching this video clip, how intense is your emotion?" on a scale of 0 (no intensity) to 100 (maximum intensity). Post-scan ratings are available in the phenotype folder as "PAMD_Post-ScanRatings."

    MRI Tasks

    Neural Reactivity to Own- and Other-Infant Affect

    File Name: task-infant 
    

    Approximately three months postnatal, a graduate research assistant visited mothers' homes to conduct a structured clinical interview and video-record the mother interacting with her infant during a peekaboo and an arm-restraint task, designed to elicit positive and negative emotions, respectively. The mother and infant were face-to-face for both tasks. For the peekaboo task, the mother covered her face with her hands and said "baby," then opened her hands and said "peekaboo" (Montague and Walker-Andrews, 2001). This continued for three minutes, or until the infant showed expressions of joy. For the arm-restraint task, the mother changed her baby's diaper and then held the infant's arms to their sides for up to two minutes (Moscardino and Axia, 2006). The mother was told to keep her face neutral and not talk to her infant during this task. This procedure was repeated with a mother-infant dyad that was not included in the rest of the study to generate other-infant videos. Videos were edited to 15-second clips that showed maximum positive and negative affect. Presentation® software (Version 14.7, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com) was used to present positive and negative own- and other-infant clips and rest blocks in counterbalanced order during two 7.5-minute runs. Participants were instructed to watch the videos and respond as they normally would without additional task demands. To protect participants' and their infants' privacy, infant videos will not be made publicly available. However, the mothers' post-scan ratings of their infant's, the other infant's, and their own emotional valence and intensity can be found in the phenotype folder as "PAMD_Post-ScanRatings."

    Observing and Labeling Affective Faces

    File Name: task-affect 
    

    Face stimuli were selected from a standardized set of images (Tottenham, Borscheid, Ellersten, Markus, & Nelson, 2002). Presentation Software (version 14.7, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com) was used to show participants race-matched adult target faces displaying emotional expressions (positive: three happy faces; negative: one fear, one sad, one anger; two from each category were open-mouthed; one close-mouthed) and were instructed to "observe" or choose the correct affect label for the target image. In the observe task, subjects viewed an emotionally evocative face without making a response. During the affect-labeling task, subjects chose the correct affect label (e.g., "scared," "angry," "happy," "surprised") from a pair of words shown at the bottom of the screen (Lieberman et al., 2007). Each block was preceded by a 3-second instruction screen cueing participants for the current task ("observe" and "affect labeling") and consisted of five affective faces presented for 5 seconds each, with a 1- to 3-second jittered fixation cross between stimuli. Each run consisted of twelve blocks (six observe; six label) counterbalanced within the run and in a semi-random order of trials within blocks (no more than four in a row of positive or negative and, in the affect-labeling task, of the correct label on the right or left side).

    .nii to BIDS

    The raw DICOMs were anonymized and converted to BIDS format using the following procedure (for more details, see https://github.com/Haigler/PAMD_BIDS/).

    1. Deidentifying DICOMs: batch anonymization of the DICOMs using DicomBrowser (https://nrg.wustl.edu/software/dicom-browser/)

    2. Conversion to .nii and BIDS structure: Anonymized DICOMs were converted to
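    The destination naming used by such a conversion can be sketched in outline. This is a hypothetical illustration only: the `bids_path` helper and its entities are assumptions, and the actual pipeline is in the linked PAMD_BIDS repository (conversion is typically done with a tool such as dcm2niix):

```python
from pathlib import Path

def bids_path(root, sub, modality, suffix, ext=".nii.gz"):
    """Build a BIDS-style destination path for a converted scan.

    Hypothetical helper for illustration; session/task entities and the
    converter call itself are omitted.
    """
    return Path(root) / f"sub-{sub}" / modality / f"sub-{sub}_{suffix}{ext}"

print(bids_path("PAMD", "01", "anat", "T1w"))
# PAMD/sub-01/anat/sub-01_T1w.nii.gz (POSIX separators)
```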

  14. AutomatedZShimSpinalCord

    • openneuro.org
    Updated Mar 14, 2022
    Merve Kaptan; Falk Eippert (2022). AutomatedZShimSpinalCord [Dataset]. http://doi.org/10.18112/openneuro.ds004068.v1.0.0
    Explore at:
    Dataset updated
    Mar 14, 2022
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Merve Kaptan; Falk Eippert
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This is a data set consisting of spinal cord MRI data from 48 participants. For each participant, there is i) a T2-weighted anatomical image, ii) two different kinds of field-maps (vendor-based and in-house), iii) z-shim reference EPI scans, and iv) EPI images obtained under different z-shim sequence variants (no z-shim, manual z-shim, automated z-shim) and with different echo times. For a detailed description please see the following article: Kaptan, M., Vannesjo, S. J., Mildner, T., Horn, U., Hartley-Davies, R., Oliva, V., Brooks, J. C. W., Weiskopf, N., Finsterbusch, J., & Eippert, F. (2021). Automated slice-specific z-shimming for fMRI of the human spinal cord. BioRxiv, 2021.07.27.454049. https://doi.org/10.1101/2021.07.27.454049

    Citing this dataset

    Should you make use of this data set in any publication, please cite the following article: Kaptan, M., Vannesjo, S. J., Mildner, T., Horn, U., Hartley-Davies, R., Oliva, V., Brooks, J. C. W., Weiskopf, N., Finsterbusch, J., & Eippert, F. (2021). Automated slice-specific z-shimming for fMRI of the human spinal cord. BioRxiv, 2021.07.27.454049. https://doi.org/10.1101/2021.07.27.454049

    License

    This data set is made available under the Creative Commons CC0 license. For more information, see https://creativecommons.org/share-your-work/public-domain/cc0/

    Data set

    This data set is organized according to the Brain Imaging Data Structure specification. For more information on this data specification, see https://bids-specification.readthedocs.io/en/stable/

    Each participant’s data are in one subdirectory (e.g., sub-ZS001), which contains the raw NIfTI data (after DICOM to NIfTI conversion) for this particular participant, as well as the associated metadata.

    The z-shim indices that were selected during scanning for both the automated and the manual sequence variants can be found in the following files (in the “derivatives” parent directory), which contain the data for all participants: autozshimPicks_duringScan.csv and manualzshimPicks_duringScan.csv.

    Please note that the EPI time-series data (250 volumes acquired under different z-shimming sequence variants) are not shared. Derivatives based on these volumes that were used to produce the results reported in the article by Kaptan et al. (such as tSNR maps) are shared for each participant and can be found in each participant’s subdirectory of the “derivatives” parent directory. For more details about the preprocessing pipeline and the description of each derivative, please see the following GitHub link: https://github.com/eippertlab/zshim-spinalcord

    Also note that due to technical problems, in the first 3 participants (sub-ZS001, sub-ZS002, sub-ZS003) we did not acquire the second in-house field map (...run-02_fieldmap.nii.gz).

    Should you have any questions about this data set, please contact mkaptan@cbs.mpg.de or eippert@cbs.mpg.de.

  15. Data from: Disarming emotional memories using Targeted Memory Reactivation...

    • openneuro.org
    Updated Oct 3, 2024
    Viviana Greco; Tamas A. Foldes; Neil A. Harrison; Kevin Murphy; Marta Wawrzuta; Mahmoud E. A. Abdellahi; Penelope A. Lewis (2024). Disarming emotional memories using Targeted Memory Reactivation during Rapid Eye Movement sleep [Dataset]. http://doi.org/10.18112/openneuro.ds005530.v1.0.4
    Explore at:
    Dataset updated
    Oct 3, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Viviana Greco; Tamas A. Foldes; Neil A. Harrison; Kevin Murphy; Marta Wawrzuta; Mahmoud E. A. Abdellahi; Penelope A. Lewis
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Disarming emotional memories using Targeted Memory Reactivation during Rapid Eye Movement sleep

    This dataset contains fMRI and EEG data from a study investigating the effects of Targeted Memory Reactivation (TMR) during REM sleep on emotional reactivity.

    Study Design

    Participants rated the arousal of 48 affective images paired with semantically matching sounds. Half of these sounds were cued during REM in the subsequent overnight sleep cycle. Participants rated the images in an MRI scanner with pulse oximetry 48 hours after encoding, and again two weeks later.

    Sessions

    1. Baseline: Initial arousal ratings and overnight sleep with TMR
    2. Session 48-H: fMRI scanning and arousal ratings (48 hours after baseline)
    3. Session 2-Wk: Online follow-up (2 weeks after baseline)

    Data Acquisition

    • fMRI: Acquired using a Siemens Magnetom Prisma 3T scanner with a 32-channel head coil
    • Heart Rate: Recorded using pulse oximetry during the fMRI session
    • EEG: Recorded during the overnight sleep session

    Dataset Contents

    This initial upload contains:

    • T1-weighted structural images
    • Functional MRI data from Session 48-H
    • B0 field maps

    Preprocessing

    fMRI data were preprocessed using fMRIPrep 20.2.7. Details of the preprocessing pipeline can be found in the methods section of the associated publication.

    T1-weighted structural scans were defaced using pydeface version 2.0.2 to ensure participant anonymity.

    Additional Information

    For more detailed information about the study design, methods, and results, please refer to the associated publication (citation to be added upon publication).

    This dataset was initially converted to BIDS format using ezBIDS (https://brainlife.io/ezbids).

    Contact

    For questions about this dataset, please contact: Dr Tamas Foldes foldesta@cardiff.ac.uk

  16. The NIMH Healthy Research Volunteer Dataset

    • openneuro.org
    Updated Feb 18, 2025
    Allison C. Nugent; Adam G Thomas; Margaret Mahoney; Alison Gibbons; Jarrod Smith; Antoinette Charles; Jacob S Shaw; Jeffrey D Stout; Anna M Namyst; Arshitha Basavaraj; Eric Earl; Dustin Moraczewski; Emily Guinee; Michael Liu; Travis Riddle; Joseph Snow; Shruti Japee; Morgan Andrews; Adriana Pavletic; Stephen Sinclair; Vinai Roopchansingh; Peter A Bandettini; Joyce Chung (2025). The NIMH Healthy Research Volunteer Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005752.v2.1.0
    Explore at:
    Dataset updated
    Feb 18, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Allison C. Nugent; Adam G Thomas; Margaret Mahoney; Alison Gibbons; Jarrod Smith; Antoinette Charles; Jacob S Shaw; Jeffrey D Stout; Anna M Namyst; Arshitha Basavaraj; Eric Earl; Dustin Moraczewski; Emily Guinee; Michael Liu; Travis Riddle; Joseph Snow; Shruti Japee; Morgan Andrews; Adriana Pavletic; Stephen Sinclair; Vinai Roopchansingh; Peter A Bandettini; Joyce Chung
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    The National Institute of Mental Health (NIMH) Research Volunteer (RV) Data Set

    A comprehensive dataset characterizing healthy research volunteers in terms of clinical assessments, mood-related psychometrics, neuropsychological tests of cognitive function, structural and functional magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), and a comprehensive magnetoencephalography (MEG) battery.

    In addition, blood samples are currently banked for future genetic analysis. All data collected in this protocol are broadly shared in the OpenNeuro repository, in the Brain Imaging Data Structure (BIDS) format. In addition, task paradigms and basic pre-processing scripts are shared on GitHub. This dataset is unprecedented in its depth of characterization of a healthy population and will allow a wide array of investigations into normal cognition and mood regulation.

    This dataset is licensed under the Creative Commons Zero (CC0) v1.0 License.

    Release Notes

    Release v2.0.0

    This release includes data collected between 2020-06-03 (cut-off date for v1.0.0) and 2024-04-01. Notable changes in this release:

    1. 769 new participants have been added along with re-evaluation data for 15 participants. Total unique participants count is now 1859.
    2. visit and age_at_visit columns added to phenotype files to distinguish between visits and intervals between them.
    3. Follow-up online survey data included.
    4. Replaced Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) with General Anxiety Disorder-7 (GAD7) and Patient Health Questionnaire 9 (PHQ9) surveys, respectively.
    5. Discontinued the Perceived Health rating survey.
    6. Added Brief Trauma Questionnaire (BTQ) and Big Five personality survey to online screening questionnaires.
    7. MRI:
      • Replaced ADNI-3 resting state sequence with a multi-echo sequence with higher spatial resolution.
      • Replaced field map scans with a shorter reversed-blipped EPI scan.
    8. MEG:
      • Some participants have 6-minute empty room data instead of the shorter duration empty room acquisition.

    See the CHANGES file for complete version-wise changelog.

    Participant Eligibility

    To be eligible for the study, participants need to be medically healthy adults over 18 years of age with the ability to read, speak and understand English. All participants provided electronic informed consent for online pre-screening, and written informed consent for all other procedures. Participants with a history of mental illness or suicidal or self-injury thoughts or behavior are excluded. Additional exclusion criteria include current illicit drug use, abnormal medical exam, and less than an 8th grade education or IQ below 70. Current NIMH employees, or first degree relatives of NIMH employees are prohibited from participating. Study participants are recruited through direct mailings, bulletin boards and listservs, outreach exhibits, print advertisements, and electronic media.

    Clinical Measures

    All potential volunteers visit the study website, check a box indicating consent, and fill out preliminary screening questionnaires. The questionnaires include basic demographics, the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), the DSM-5 Self-Rated Level 1 Cross-Cutting Symptom Measure, the DSM-5 Level 2 Cross-Cutting Symptom Measure - Substance Use, the Alcohol Use Disorders Identification Test (AUDIT), the Edinburgh Handedness Inventory, and a brief clinical history checklist. The WHODAS 2.0 is a 15 item questionnaire that assesses overall general health and disability, with 14 items distributed over 6 domains: cognition, mobility, self-care, “getting along”, life activities, and participation. The DSM-5 Level 1 cross-cutting measure uses 23 items to assess symptoms across diagnoses, although an item regarding self-injurious behavior was removed from the online self-report version. The DSM-5 Level 2 cross-cutting measure is adapted from the NIDA ASSIST measure, and contains 15 items to assess use of both illicit drugs and prescription drugs without a doctor’s prescription. The AUDIT is a 10 item screening assessment used to detect harmful levels of alcohol consumption, and the Edinburgh Handedness Inventory is a systematic assessment of handedness. These online results do not contain any personally identifiable information (PII). At the conclusion of the questionnaires, participants are prompted to send an email to the study team. These results are reviewed by the study team, who determines if the participant is appropriate for an in-person interview.

    Participants who meet all inclusion criteria are scheduled for an in-person screening visit to determine if there are any further exclusions to participation. At this visit, participants receive a History and Physical exam, Structured Clinical Interview for DSM-5 Disorders (SCID-5), the Beck Depression Inventory-II (BDI-II), Beck Anxiety Inventory (BAI), and the Kaufman Brief Intelligence Test, Second Edition (KBIT-2). The purpose of these cognitive and psychometric tests is two-fold. First, these measures are designed to provide a sensitive test of psychopathology. Second, they provide a comprehensive picture of cognitive functioning, including mood regulation. The SCID-5 is a structured interview, administered by a clinician, that establishes the absence of any DSM-5 axis I disorder. The KBIT-2 is a brief (20 minute) assessment of intellectual functioning administered by a trained examiner. There are three subtests, including verbal knowledge, riddles, and matrices.

    Biological and physiological measures

    Biological and physiological measures are acquired, including blood pressure, pulse, weight, height, and BMI. Blood and urine samples are taken and a complete blood count, acute care panel, hepatic panel, thyroid stimulating hormone, viral markers (HCV, HBV, HIV), c-reactive protein, creatine kinase, urine drug screen and urine pregnancy tests are performed. In addition, three additional tubes of blood samples are collected and banked for future analysis, including genetic testing.

    Imaging Studies

    Participants were given the option to enroll in optional magnetic resonance imaging (MRI) and magnetoencephalography (MEG) studies.

    MRI

    On the same visit as the MRI scan, participants are administered a subset of tasks from the NIH Toolbox Cognition Battery. The four tasks assess attention and executive functioning (Flanker Inhibitory Control and Attention Task), executive functioning (Dimensional Change Card Sort Task), episodic memory (Picture Sequence Memory Task), and working memory (List Sorting Working Memory Task). The MRI protocol used was initially based on the ADNI-3 basic protocol, but was later modified to include portions of the ABCD protocol in the following manner:

    1. The T1 scan from ADNI3 was replaced by the T1 scan from the ABCD protocol.
    2. The Axial T2 2D FLAIR acquisition from ADNI2 was added, and fat saturation turned on.
    3. Fat saturation was turned on for the pCASL acquisition.
    4. The high-resolution in-plane hippocampal 2D T2 scan was removed, and replaced with the whole brain 3D T2 scan from the ABCD protocol (which is resolution and bandwidth matched to the T1 scan).
    5. The slice-select gradient reversal method was turned on for DTI acquisition, and reconstruction interpolation turned off.
    6. Scans for distortion correction were added (reversed-blip scans for DTI and resting state scans).
    7. The 3D FLAIR sequence was made optional, and replaced by one where the prescription and other acquisition parameters provide resolution and geometric correspondence between the T1 and T2 scans.

    MEG

    The optional MEG studies were added to the protocol approximately one year after the study was initiated, thus there are relatively fewer MEG recordings in comparison to the MRI dataset. MEG studies are performed on a 275 channel CTF MEG system. The position of the head was localized at the beginning and end of the recording using three fiducial coils. These coils were placed 1.5 cm above the nasion, and at each ear, 1.5 cm from the tragus on a line between the tragus and the outer canthus of the eye. For some participants, photographs were taken of the three coils and used to mark the points on the T1 weighted structural MRI scan for co-registration. For the remainder of the participants, a BrainSight neuro-navigation unit was used to coregister the MRI, anatomical fiducials, and localizer coils directly prior to MEG data acquisition.

    Specific Survey and Test Data within Data Set

    NOTE: In the release 2.0 of the dataset, two measures Brief Trauma Questionnaire (BTQ) and Big Five personality survey were added to the online screening questionnaires. Also, for the in-person screening visit, the Beck Anxiety Inventory (BAI) and Beck Depression Inventory-II (BDI-II) were replaced with the General Anxiety Disorder-7 (GAD7) and Patient Health Questionnaire 9 (PHQ9) surveys, respectively. The Perceived Health rating survey was discontinued.

    1. Preliminary Online Screening Questionnaires

    Survey or Test                                       BIDS TSV Name
    Alcohol Use Disorders Identification Test (AUDIT)    audit.tsv
    Brief Trauma Questionnaire (BTQ)                     btq.tsv
    Big-Five Personality                                 big_five_personality.tsv
    Demographics                                         demographics.tsv
    Drug Use Questionnaire
  17. PAFIN: PennLINC AFfective INstability

    • openneuro.org
    Updated May 22, 2025
    Juliette B. H. Brook; Taylor Salo; Audrey C. Luo; Joëlle Bagautdinova; Sage Rush; Aaron F. Alexander-Bloch; Erica B. Baller; Monica E. Calkins; Matt Cieslak; Elena C. Cooper; John A. Detre; Mark A. Elliot; Damien A. Fair; Phoebe Freedman; Philip R. Gehrman; Ruben C. Gur; Raquel E. Gur; Arno Klein; Nina Laney; Timothy O. Laumann; Kahini Mehta; Kathleen Merikangas; Michael Milham; Jonathan A. Mitchell; Tyler M. Moore; Steven M. Nelson; Kosha Ruparel; Brooke L. Sevchik; Sheila Shanmugan; Haochang Shou; Manuel Taso; Lauren K. White; Daniel H. Wolf; M. Dylan Tisdall; David R. Roalf; Theodore D. Satterthwaite (2025). PAFIN: PennLINC AFfective INstability [Dataset]. http://doi.org/10.18112/openneuro.ds006131.v1.0.0
    Explore at:
    Dataset updated
    May 22, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Juliette B. H. Brook; Taylor Salo; Audrey C. Luo; Joëlle Bagautdinova; Sage Rush; Aaron F. Alexander-Bloch; Erica B. Baller; Monica E. Calkins; Matt Cieslak; Elena C. Cooper; John A. Detre; Mark A. Elliot; Damien A. Fair; Phoebe Freedman; Philip R. Gehrman; Ruben C. Gur; Raquel E. Gur; Arno Klein; Nina Laney; Timothy O. Laumann; Kahini Mehta; Kathleen Merikangas; Michael Milham; Jonathan A. Mitchell; Tyler M. Moore; Steven M. Nelson; Kosha Ruparel; Brooke L. Sevchik; Sheila Shanmugan; Haochang Shou; Manuel Taso; Lauren K. White; Daniel H. Wolf; M. Dylan Tisdall; David R. Roalf; Theodore D. Satterthwaite
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    PAFIN: PennLINC AFfective INstability

    PAFIN is an ongoing study acquiring a wide range of imaging and phenotypic data in a sample of 100 participants.

    The first release of the dataset contains data from the first 10 participants in the study, along with two pilot subjects.

    Dataset contents

    Structural MRI

    Each participant has one T1-weighted and one T2-weighted structural scan. Additionally, we acquired a 5-echo MEGRE scan with magnitude and phase reconstruction for quantitative susceptibility mapping.

    Functional MRI

    Each participant has two fMRI runs: one with an AP phase encoding direction while watching the Pixar short "Bao" and one with a PA phase encoding direction while watching the Pixar short "Your Friend the Rat".

    One of the pilot subjects has a resting-state run instead of a "Bao" run.

    Each run includes 5 echoes and magnitude+phase reconstruction.

    Concurrent physiological recordings were acquired using a respiration belt and a pulse oximeter, although we have not performed quality control on these recordings yet.

    The dataset includes both the raw and NORDIC-denoised versions of the fMRI data, as NORDIC is not implemented in fMRIPrep yet.

    Diffusion MRI

    One 92-direction compressed sensing diffusion spectrum imaging (CS-DSI) run was acquired for all participants. Magnitude and phase reconstruction was enabled in order to improve denoising.

    Perfusion MRI

    One single-delay, background-suppressed PCASL scan was acquired for all participants (post-labeling delay = 1.8 s, labeling duration = 1.8 s).

    A reference scan was also acquired to assist in ASL calibration. This reference scan includes two volumes: an M0 scan and a presaturated inversion recovery volume. The M0 volume is retained in the perfusion folder with the m0scan suffix, and a copy of the M0 volume, along with the inversion recovery volume, are retained in the anat folder with the TDP suffix. This type of scan is not supported in BIDS, so we used the custom TDP suffix (transit delay prescan). We may change this in the future if BIDS starts supporting this type of scan.

    Field maps

    Multi-echo gradient-echo PEpolar-style field maps (acq-func+meepi) were acquired for all participants. For each acquisition, we have created a copy of the first echo (acq-func) as the primary field map.

    Field maps were also acquired for the diffusion data (acq-dwi) with the opposite phase encoding direction as the CS-DSI scans.

    Phenotypic data

    A number of self-report measures were collected from participants. These measures include the following:

    • Accountable Health Communities Health-Related Social Needs Screening Tool (phenotype/ahc_hrsn)
    • Affective Reactivity Index (phenotype/ari)
    • Borderline Evaluation of Severity over Time (phenotype/best)
    • BIS/BAS Reward Subscale Child (phenotype/bisbas_reward_child)
    • Child and Adolescent Trauma Screen (phenotype/cats)
    • Difficulties in Emotion Regulation Scale (phenotype/ders)
    • Emotion Regulation Questionnaire - Short Form (phenotype/erq_s)
    • Extended Strengths and Weaknesses Assessment of ADHD Symptoms and Normal Behavior (phenotype/eswan_adhd)
    • Extended Strengths and Weaknesses Assessment of DMDD Symptoms and Normal Behavior (phenotype/eswan_dmdd)
    • Ultra-Short Munich Chronotype Questionnaire (phenotype/mctq_us)
    • Neighborhood Community Cohesion Short-Form (phenotype/ncc)
    • Neighborhood Safety and Crime Short-Form (phenotype/nsc)
    • Patient Health Questionnaire 8-Item (phenotype/phq8)
    • PRIME Screen Revised 5-Item (PRIME-5-SR) (phenotype/prime5_sr)
    • PROMIS Pediatric Sleep Disturbance Short Form 4a
    • Perceived Stress Scale (PSS-4) (phenotype/pss4)
    • Socioeconomics (self-report) (phenotype/ses_sr)
    • Sleep Reduction Screening Questionnaire (phenotype/srsq)

    Actigraphy

    Other data

    The acquisition date and time for each session was rounded to the 15th of the month and the nearest hour, and stored in the *_sessions.tsv files.
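    The rounding scheme just described can be written out as a minimal sketch of the stated scheme (an illustration, not the study's actual anonymization code):

```python
from datetime import datetime, timedelta

def anonymize_acq_time(dt):
    """Round to the nearest hour, then fix the day to the 15th of the
    month, per the *_sessions.tsv description. Illustrative only;
    edge cases such as end-of-month rollover are ignored here."""
    nearest_hour = (dt + timedelta(minutes=30)).replace(minute=0, second=0, microsecond=0)
    return nearest_hour.replace(day=15)

print(anonymize_acq_time(datetime(2025, 3, 7, 13, 42)))
# 2025-03-15 14:00:00
```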

  18. CoSpine Database_motor_dataset

    • openneuro.org
    Updated Jul 1, 2025
    Zhaoxing Wei; Xiaomin Lin; Lingfei Guo; Jixin Liu; Li Hu; Yaou Liu; Yazhuo Kong (2025). CoSpine Database_motor_dataset [Dataset]. http://doi.org/10.18112/openneuro.ds005884.v1.1.0
    Explore at:
    Dataset updated
    Jul 1, 2025
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Zhaoxing Wei; Xiaomin Lin; Lingfei Guo; Jixin Liu; Li Hu; Yaou Liu; Yazhuo Kong
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Simultaneous cortico-spinal fMRI (CoSpine) Motor-dataset

    Version: v1.0.0
    Date: 2025-06-30
    BIDS-compliant

    1. Dataset Overview

    This is the first open-access, BIDS-compliant cortico-spinal task-based fMRI dataset using voluntary motor tasks. It enables exploration of cortico-spinal dynamics during repetitive hand movements by providing simultaneous brain-and-spinal fMRI, physiological recordings, and task structure.

    • Task: Voluntary hand grasping (left and right)
    • Modalities: fMRI (task-motor BOLD), structural MRI (T1w), EPI-based field maps (reverse phase encoding), physiological signals (pulse & respiration), events/behavioural logs
    • Subjects: 22 healthy participants

    2. Experimental Design

    • Task: Participants performed hand grasping using a 6.5 cm diameter dynamometer.
    • Cueing: Auditory cues presented via MRI-compatible headphones.
    • Design: 12 blocks in total (6 for each hand). Each block consisted of 8 grasps at 1 Hz.
    • ITI: Randomized between 16–20 seconds.
    • Software: Stimuli were presented and synchronized using E-Prime 2.0 on Windows.

    Event timings (onset, duration) and trial types (motorL, motorR) are documented in each *_events.tsv.
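    The block structure above (blocks of 8 grasps at 1 Hz, separated by a randomized 16-20 s ITI) can be sketched as a toy timing generator. The function and its defaults are illustrative assumptions; the real timings are those recorded in the *_events.tsv files:

```python
import random

def simulate_run(n_blocks=6, grasps_per_block=8, rate_hz=1.0, seed=0):
    """Generate illustrative (onset, duration) pairs for one hand's run."""
    rng = random.Random(seed)
    onset, events = 0.0, []
    for _ in range(n_blocks):
        duration = grasps_per_block / rate_hz        # 8 grasps at 1 Hz = 8 s
        events.append((round(onset, 1), duration))
        onset += duration + rng.uniform(16.0, 20.0)  # jittered inter-block interval
    return events

print(simulate_run())
```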

    3. Data Acquisition

    • Scanner: 3T Siemens Prisma (Erlangen, Germany)
    • Head coil: Standard 64-channel head-neck coil
    • fMRI protocol: Simultaneous acquisition of brain and spinal cord
    • Physiology: Pulse and respiration recorded during all task runs

    4. Dataset Structure

    dataset/
    ├── dataset_description.json
    ├── README
    ├── task-motor_events.json
    ├── participants.tsv + .json
    ├── sub-01/
    │  ├── anat/
    │  │  ├── sub-01_T1w.nii.gz
    │  │  └── sub-01_T1w.json
    │  ├── fmap/
    │  │  ├── sub-01_dir-AP_epi.nii.gz
    │  │  ├── sub-01_dir-AP_epi.json
    │  │  ├── sub-01_dir-PA_epi.nii.gz
    │  │  └── sub-01_dir-PA_epi.json
    │  └── func/
    │    ├── sub-01_task-motorL_bold.nii.gz
    │    ├── sub-01_task-motorL_bold.json
    │    ├── sub-01_task-motorL_events.tsv
    │    ├── sub-01_task-motorL_recording-pulse_physio.tsv.gz
    │    ├── sub-01_task-motorL_recording-pulse_physio.json
    │    ├── sub-01_task-motorL_recording-respiratory_physio.tsv.gz
    │    ├── sub-01_task-motorL_recording-respiratory_physio.json
    │    ├── sub-01_task-motorR_bold.nii.gz
    │    ├── sub-01_task-motorR_bold.json
    │    ├── sub-01_task-motorR_events.tsv
    │    ├── sub-01_task-motorR_recording-pulse_physio.tsv.gz
    │    ├── sub-01_task-motorR_recording-pulse_physio.json
    │    ├── sub-01_task-motorR_recording-respiratory_physio.tsv.gz
    │    └── sub-01_task-motorR_recording-respiratory_physio.json
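    Given this layout, the files belonging to one task can be picked out by their filename entities. A toy filter over bare filenames (illustrative only; tools such as PyBIDS provide proper BIDS querying):

```python
def select_runs(filenames, task):
    """Keep only the BOLD images for one task, using BIDS filename entities."""
    return [f for f in filenames if f"_task-{task}_" in f and f.endswith("_bold.nii.gz")]

files = [
    "sub-01_task-motorL_bold.nii.gz",
    "sub-01_task-motorL_events.tsv",
    "sub-01_task-motorR_bold.nii.gz",
]
print(select_runs(files, "motorL"))
# ['sub-01_task-motorL_bold.nii.gz']
```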
    

    5. Quality Control / Preprocessing

    • Structural T1-weighted images have been defaced for privacy.
    • Preprocessed data is placed in derivatives/preprocessed/.

    Contact

    Zhaoxing Wei (zhaoxing.wei@dartmouth.edu)
    Yazhuo Kong (kongyz@psych.ac.cn)

  19. 7T resting state test-retest

    • openneuro.org
    Updated Jul 14, 2018
    Chris Gorgolewski; Natacha Mendes; Domenica Wilfling; Elisabeth Wladimirow; Claudine J. Gauthier; Tyler Bonnen; Florence J.M. Ruby; Robert Trampel; Pierre-Louis Bazin; Roberto Cozatl; Jonathan Smallwood; Daniel S. Margulies (2018). 7T resting state test-retest [Dataset]. https://openneuro.org/datasets/ds001168/versions/00002
    Explore at:
    Dataset updated
    Jul 14, 2018
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Chris Gorgolewski; Natacha Mendes; Domenica Wilfling; Elisabeth Wladimirow; Claudine J. Gauthier; Tyler Bonnen; Florence J.M. Ruby; Robert Trampel; Pierre-Louis Bazin; Roberto Cozatl; Jonathan Smallwood; Daniel S. Margulies
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Here we present a test-retest dataset of functional magnetic resonance imaging (fMRI) data acquired at rest. 22 participants were scanned during two sessions spaced one week apart. Each session includes two 1.5 mm isotropic whole-brain scans and one 0.75 mm isotropic scan of the prefrontal cortex, giving a total of six time-points. Additionally, the dataset includes measures of mood, sustained attention, blood pressure, respiration, pulse, and the content of self-generated thoughts (mind wandering). This data enables the investigation of sources of both intra- and inter-session variability not only limited to physiological changes, but also including alterations in cognitive and affective states, at high spatial resolution. The dataset is accompanied by a detailed experimental protocol and source code of all stimuli used.

    Structural scan

    For structural images a 3D MP2RAGE sequence was used: 3D-acquisition with field of view 224×224×168 mm3 (H-F; A-P; R-L), imaging matrix 320×320×240, 0.7 mm3 isotropic voxel size, Time of Repetition (TR)=5.0 s, Time of Echo (TE)=2.45 ms, Time of Inversion (TI) 1/2=0.9 s/2.75 s, Flip Angle (FA) 1/2=5°/3°, Bandwidth (BW)=250 Hz/Px, Partial Fourier 6/8, and GRAPPA acceleration with iPAT factor of 2 (24 reference lines).

    Field map

    For estimating B0 inhomogeneities, a 2D gradient echo sequence was used. It was acquired in axial orientation with field of view 192×192 mm2 (R-L; A-P), imaging matrix 64×64, 35 slices with 3.0 mm thickness, 3.0 mm3 isotropic voxel size, TR=1.5 s, TE1/2=6.00 ms/7.02 ms (which gives delta TE=1.02 ms), FA=72°, and BW=256 Hz/Px.
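    The two echo times are what make this scan usable as a field map: the phase accrued between echoes over delta TE = 1.02 ms maps to an off-resonance frequency via the standard relation delta f = delta phi / (2*pi*delta TE). A small worked check (the standard formula, not code from the dataset):

```python
import math

# Echo times of the 2D gradient-echo field map (seconds)
te1, te2 = 6.00e-3, 7.02e-3
delta_te = te2 - te1  # 1.02 ms, as stated above

def b0_offset_hz(delta_phase_rad, delta_te_s):
    """Off-resonance frequency implied by the inter-echo phase difference."""
    return delta_phase_rad / (2.0 * math.pi * delta_te_s)

print(round(b0_offset_hz(math.pi, delta_te), 1))
# 490.2  (a pi phase difference corresponds to ~490 Hz off-resonance)
```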

    Whole-brain rs-fMRI

    Whole-brain rs-fMRI scans were acquired using a 2D sequence. It used axial orientation, field of view 192×192 mm2 (R-L; A-P), imaging matrix 128×128, 70 slices with 1.5 mm thickness, 1.5 mm3 isotropic voxel size, TR=3.0 s, TE=17 ms, FA=70°, BW=1,116 Hz/Px, Partial Fourier 6/8, GRAPPA acceleration with iPAT factor of 3 (36 reference lines), and 300 repetitions resulting in 15 min of scanning time. Before the scan subjects were instructed to stay awake, keep their eyes open and focus on a cross. In order to avoid a pronounced g-factor penalty when using a 24-channel receive coil, the acceleration factor was kept at a maximum of 3, preventing the acquisition of whole-brain data sets at submillimeter resolution. However, as 7 T provides the necessary SNR for such high spatial resolutions, a second experiment was performed with only partial brain coverage but with a 0.75 mm isotropic resolution.

    Prefrontal cortex rs-fMRI

    The submillimeter rs-fMRI scan was acquired with a zoomed EPI 2D acquisition sequence. It was acquired in axial orientation with a skewed saturation pulse suppressing signal from the posterior part of the brain (see Figure 2). The position of the field of view was motivated by the involvement of medial prefrontal cortex in the default mode network and mind wandering. This location can also improve our understanding of the functional anatomy of the prefrontal cortex, which is understudied in comparison to primary sensory cortices. Field of view was 150×45 mm2 (R-L; A-P), imaging matrix=200×60, 40 slices with 0.75 mm thickness, 0.75 mm3 isotropic voxel size, TR=4.0 s, TE=26 ms, FA=70°, BW=1,042 Hz/Px, Partial Fourier 6/8. A total of 150 repetitions were acquired resulting in 10 min of scanning time. Before the scan subjects were instructed to stay awake, keep their eyes open and focus on a cross.

    Known issues

    • sub-07 Sessions 1 & 2: The shimming window was offset, causing minor signal deterioration. The same shimming window was used for both sessions.
    • sub-08 Session 1: The light bulb in the projector died during the first resting-state scan. A projector from another scanner was used as a replacement; the replacement took approximately 30 min, during which the participant remained in the scanner.
    • sub-11 Sessions 1 & 2: All whole-brain scans were accidentally acquired with a voxel size of 3 mm instead of 1.5 mm.
    • sub-12 Sessions 1 & 2: The second fieldmap was run after the second whole-brain scan, not before; the same order was used for the second session.
    • sub-13 Session 1: The second magnitude image of the first fieldmap was damaged during transfer from the scanner (one slice is missing). The phase image is intact, so the fieldmap can still be reconstructed.
    • sub-19 Session 1: The second physiological recording (corresponding to the second whole-brain scan) was stopped before the scan finished; the third physiological recording (corresponding to the prefrontal scan) was started after the scan had started.
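Given the sub-11 issue above, it can be worth screening voxel sizes before analysis. A minimal sketch of such a check; in practice the voxel sizes would be read from the NIfTI headers (e.g. with nibabel's `header.get_zooms()`), and the file names and values below are illustrative only:

```python
# Map of scan file -> (x, y, z) voxel size in mm. In a real pipeline these
# would come from the NIfTI headers rather than being hard-coded.
zooms = {
    "sub-10_task-rest_bold.nii.gz": (1.5, 1.5, 1.5),
    "sub-11_task-rest_bold.nii.gz": (3.0, 3.0, 3.0),  # known acquisition error
}

expected_mm = 1.5  # protocol voxel size for the whole-brain scans
tolerance = 0.01

# Flag any scan whose voxel size deviates from the protocol value
flagged = [
    name for name, vox in zooms.items()
    if any(abs(v - expected_mm) > tolerance for v in vox)
]
print(flagged)  # the sub-11 scan is flagged
```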
  20. Discrimination-Estimation Task

    • openneuro.org
    Updated Nov 13, 2024
    Hyunwoo Gu; Joonwon Lee; Sungje Kim; Jaeseob Lim; Hyang-Jung Lee; Heeseung Lee; Minjin Choe; Dong-Gyu Yoo; Jun Hwan (Joshua) Ryu; Sukbin Lim; Sang-Hun Lee (2024). Discrimination-Estimation Task [Dataset]. http://doi.org/10.18112/openneuro.ds005381.v1.0.0
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Hyunwoo Gu; Joonwon Lee; Sungje Kim; Jaeseob Lim; Hyang-Jung Lee; Heeseung Lee; Minjin Choe; Dong-Gyu Yoo; Jun Hwan (Joshua) Ryu; Sukbin Lim; Sang-Hun Lee
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Discrimination-Estimation Task

    This project aimed to investigate the dynamic interplay of decision-making and working memory. A sequential discrimination-estimation task (DET) was developed to probe decision-making and working memory. For details about the experimental paradigm, please refer to https://www.biorxiv.org/content/10.1101/2023.06.28.546818v1.

    BIDS dataset

    Data were converted from DICOM source files using dcm2niix. Scans were divided into the main task (DET), the retinotopy-mapping task (Retino), and the resting scan (Resting). Stimulus information and behavioral responses for the main task are contained in the events.tsv files.
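BIDS events.tsv files are plain tab-separated tables, so the trial information can be read with standard tools. A sketch of loading one; `onset` and `duration` are the BIDS-required columns, while the `trial_type` values here are illustrative and not taken from this dataset:

```python
import csv
import io

# Illustrative events.tsv content, standing in for a real file on disk
events_tsv = io.StringIO(
    "onset\tduration\ttrial_type\n"
    "0.0\t2.0\tdiscrimination\n"
    "4.5\t2.0\testimation\n"
)

# Parse the tab-separated table into one dict per trial
events = list(csv.DictReader(events_tsv, delimiter="\t"))
onsets = [float(row["onset"]) for row in events]
print(onsets)  # [0.0, 4.5]
```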

    Derivatives

    Currently, the derivatives contain the outputs of fMRIPrep 20.2.0, run with the fieldmap-free distortion-correction option (--use-syn-sdc). The freesurfer directory under each participant contains the FreeSurfer reconstructions from the fMRIPrep preprocessing stream.



MPI-Leipzig_Mind-Brain-Body

38 scholarly articles cite this dataset.
Description


The N&C protocol focuses on resting-state fMRI data. 199 participants were scanned with this protocol; 109 of them also took part in the LEMON protocol. Structural data were not acquired again for the overlapping LEMON participants. For the participants unique to N&C, only a T1-weighted and a low-resolution FLAIR image were acquired. Four 15-minute runs of eyes-open resting-state fMRI are the main component of N&C; they are complete for 194 participants, three participants have 3 runs, one participant has 2 runs, and one participant has a single run. Due to a bug in the multiband sequence used in this protocol, the echo time for the N&C resting-state runs is longer than in LEMON: 39.4 ms vs 30 ms.

Forty-five participants have complete imaging data: quantitative T1-weighted, T2-weighted, high-resolution 3D FLAIR, DWI, GRE, and 75 minutes of resting-state fMRI. Both gradient-echo and spin-echo field maps are available in both datasets for all EPI-based sequences (rs-fMRI and DWI).

Extensive behavioral data were acquired in both protocols, including trait and state questionnaires as well as behavioral tasks. Here we only list the tasks; more extensive descriptions are available in the manuscripts.

LEMON QUESTIONNAIRES/TASKS [not yet released]

California Verbal Learning Test (CVLT)
Testbatterie zur Aufmerksamkeitsprüfung (TAP: Alertness, Incompatibility, Working Memory)
Trail Making Test (TMT)
Wortschatztest (WST)
Leistungsprüfungssystem 2 (LPS-2)
Regensburger Wortflüssigkeitstest (RWT)

NEO Five-Factor Inventory (NEO-FFI)
Impulsive Behavior Scale (UPPS)
Behavioral Inhibition and Approach System (BIS/BAS)
Cognitive Emotion Regulation Questionnaire (CERQ)
Measure of Affective Style (MARS)
Fragebogen zur Sozialen Unterstützung (F-SozU K)
Multidimensional Scale of Perceived Social Support (MSPSS)
Coping Orientations to Problems Experienced (COPE)
Life Orientation Test-Revised (LOT-R)
Perceived Stress Questionnaire (PSQ)
Trier Inventory of Chronic Stress (TICS)
Three-Factor Eating Questionnaire (TFEQ)
Yale Food Addiction Scale (YFAS)
Trait Emotional Intelligence Questionnaire (TEIQue-SF)
Trait Scale of the State-Trait Anxiety Inventory (STAI)
State-Trait Anger Expression Inventory (STAXI)
Toronto Alexithymia Scale (TAS)
Multidimensional Mood Questionnaire (MDMQ)
New York Cognition Questionnaire (NYC-Q)

N&C QUESTIONNAIRES

Adult Self Report (ASR)
Goldsmiths Musical Sophistication Index (Gold-MSI)
Internet Addiction Test (IAT)
Involuntary Musical Imagery Scale (IMIS)
Multi-Gender Identity Questionnaire (MGIQ)
Brief Self-Control Scale (SCS)
Short Dark Triad (SD3)
Social Desirability Scale-17 (SDS)
Self-Esteem Scale (SE)
Tuckman Procrastination Scale (TPS)
Varieties of Inner Speech Questionnaire (VISQ)
UPPS-P Impulsive Behavior Scale (UPPS-P)
Attention Control Scale (ACS)
Beck Depression Inventory-II (BDI)
Boredom Proneness Scale (BP)
Epworth Sleepiness Scale (ESS)
Hospital Anxiety and Depression Scale (HADS)
Multimedia Multitasking Index (MMI)
Mobile Phone Usage (MPU)
Personality Style and Disorder Inventory (PSSI)
Spontaneous and Deliberate Mind-Wandering (S-D-MW)
Short New York Cognition Scale (Short-NYC-Q)
New York Cognition Scale (NYC-Q)
Abbreviated Math Anxiety Scale (AMAS)
Behavioral Inhibition and Approach System (BIS/BAS)
NEO Personality Inventory-Revised (NEO-PI-R)
Body Consciousness Questionnaire (BCQ)
Creative Achievement Questionnaire (CAQ)
Five Facets of Mindfulness Questionnaire (FFMQ)
Metacognition Questionnaire-30 (MCQ-30)

N&C TASKS

Conjunctive Continuous Performance Task (CCPT)
Emotional Task Switching (ETS)
Adaptive visual and auditory oddball target detection task (Oddball)
Alternative Uses Task (AUT)
Remote Associates Test (RAT)
Synesthesia Color Picker Test (SYN)
Test of Creative Imagery Abilities (TCIA)

Comments added by OpenfMRI Curators

===========================================

General Comments

Defacing

Pydeface was used on all anatomical images to ensure de-identification of subjects. The code can be found at https://github.com/poldracklab/pydeface

Where to discuss the dataset

1) www.openfmri.org/dataset/ds000221/: see the comments section at the bottom of the dataset page.
2) www.neurostars.org: please tag any discussion topics with the tags openfmri and ds000221.
3) Send an email to submissions@openfmri.org. Please include the accession number in your email.

Known Issues

N/A

Bids-validator Output

A verbose bids-validator output is available under '/derivatives/bidsvalidatorOutput_long'. A short version of the output follows:

1: This file is not part of the BIDS specification, make sure it isn't included in the dataset by accident. Data derivatives (processed data) should be placed in /derivatives folder. (code: 1 - NOT_INCLUDED)
  /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.json
    Evidence: sub-010001_ses-02_inv-1_mp2rage.json
  /sub-010001/ses-02/anat/sub-010001_ses-02_inv-1_mp2rage.nii.gz
    Evidence: sub-010001_ses-02_inv-1_mp2rage.nii.gz
  /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.json
    Evidence: sub-010001_ses-02_inv-2_mp2rage.json
  /sub-010001/ses-02/anat/sub-010001_ses-02_inv-2_mp2rage.nii.gz
    Evidence: sub-010001_ses-02_inv-2_mp2rage.nii.gz
  /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.json
    Evidence: sub-010002_ses-01_inv-1_mp2rage.json
  /sub-010002/ses-01/anat/sub-010002_ses-01_inv-1_mp2rage.nii.gz
    Evidence: sub-010002_ses-01_inv-1_mp2rage.nii.gz
  /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.json
    Evidence: sub-010002_ses-01_inv-2_mp2rage.json
  /sub-010002/ses-01/anat/sub-010002_ses-01_inv-2_mp2rage.nii.gz
    Evidence: sub-010002_ses-01_inv-2_mp2rage.nii.gz
  /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.json
    Evidence: sub-010003_ses-01_inv-1_mp2rage.json
  /sub-010003/ses-01/anat/sub-010003_ses-01_inv-1_mp2rage.nii.gz
    Evidence: sub-010003_ses-01_inv-1_mp2rage.nii.gz
  ... and 1710 more files having this issue (Use --verbose to see them all).

2: Not all subjects contain the same files. Each subject should contain the same number of files with the same naming unless some files are known to be missing. (code: 38 - INCONSISTENT_SUBJECTS)
  /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.json
  /sub-010001/ses-01/anat/sub-010001_ses-01_T2w.nii.gz
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.json
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-highres_FLAIR.nii.gz
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.json
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-lowres_FLAIR.nii.gz
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1map.nii.gz
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_T1w.nii.gz
  /sub-010001/ses-01/anat/sub-010001_ses-01_acq-mp2rage_defacemask.nii.gz
  /sub-010001/ses-01/dwi/sub-010001_ses-01_dwi.bval
  ... and 8624 more files having this issue (Use --verbose to see them all).

3: Not all subjects/sessions/runs have the same scanning parameters. (code: 39 - INCONSISTENT_PARAMETERS)
  /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1map.nii.gz
  /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_T1w.nii.gz
  /sub-010007/ses-02/anat/sub-010007_ses-02_acq-mp2rage_defacemask.nii.gz
  /sub-010045/ses-01/dwi/sub-010045_ses-01_dwi.nii.gz
  /sub-010087/ses-02/func/sub-010087_ses-02_task-rest_acq-PA_run-01_bold.nii.gz
  /sub-010189/ses-02/anat/sub-010189_ses-02_acq-lowres_FLAIR.nii.gz
  /sub-010201/ses-02/func/sub-010201_ses-02_task-rest_acq-PA_run-02_bold.nii.gz

  Summary:                  Available Tasks:    Available Modalities:
  14714 Files, 390.74 GB    Rest                FLAIR
  318 - Subjects                                T1map
  2 - Sessions                                  T1w
                                                defacemask
                                                bold
                                                T2w
                                                dwi
                                                fieldmap
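The INCONSISTENT_SUBJECTS check (code 38) amounts to comparing each subject's set of session-relative filenames against the union across all subjects. A toy sketch of the idea, with illustrative file lists rather than the real dataset tree:

```python
# Per-subject relative file lists (illustrative, not the actual dataset)
subjects = {
    "sub-010001": {"anat/T1w.nii.gz", "anat/T2w.nii.gz", "dwi/dwi.nii.gz"},
    "sub-010002": {"anat/T1w.nii.gz", "dwi/dwi.nii.gz"},
}

# Union of all files seen in any subject
all_files = set().union(*subjects.values())

# Files absent from a subject relative to the union trigger code 38
missing = {
    sub: sorted(all_files - files)
    for sub, files in subjects.items()
    if all_files - files
}
print(missing)  # {'sub-010002': ['anat/T2w.nii.gz']}
```

As the validator note says, such differences are only a problem when the files are not known to be missing; here, many of them reflect the protocol changes described above (e.g. 2D vs 3D FLAIR).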