100+ datasets found
  1. NIfTI-MRS Example Data

    • zenodo.org
    application/gzip
    Updated Jul 10, 2021
    Cite
    William T Clarke (2021). NIfTI-MRS Example Data [Dataset]. http://doi.org/10.5281/zenodo.5085449
    Available download formats: application/gzip
    Dataset updated
    Jul 10, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    William T Clarke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Example data accompanying the data format specification for the NIfTI-MRS file format. NIfTI-MRS is a NIfTI-derived format for storing magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI) data.

    Example data is generated from code at the NIfTI-MRS GitHub repository.

    Each file is named example_{n}.nii.gz and corresponds to the following list:

    1. Manually converted SVS - Water suppressed STEAM
    2. spec2nii converted SVS - Water suppressed STEAM
    3. spec2nii converted edited SVS - MEGA-PRESS
    4. Manually converted 31P FID-MRSI (CSI sequence)
    5. spec2nii converted sLASER-localised 1H MRSIs
    6. j-difference editing example (MEGA-PRESS)
    7. Variable echo time: full representation
    8. Variable echo time: short representation
    9. Multiple dynamic parameters "Fingerprinting"
    10. FSL-MRS processed MEGA-PRESS spectrum
  2. 4D Nifti files for seed-based correlations

    • figshare.com
    application/gzip
    Updated Dec 29, 2024
    Cite
    Vincent van de Ven (2024). 4D Nifti files for seed-based correlations [Dataset]. http://doi.org/10.6084/m9.figshare.28057481.v1
    Available download formats: application/gzip
    Dataset updated
    Dec 29, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Vincent van de Ven
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Experiment information: Transcranial alternating current stimulation was delivered during 3T fMRI using concentric circular electrodes over area P4, in stimulation periods that each included attention-task and resting-state blocks. MR images were preprocessed, normalized, and denoised using the CONN toolbox for Matlab (https://web.conn-toolbox.org/) and analysed using seed-based correlations.

    File information: This repository contains gzipped 4D NIfTI files of seed-based correlations as calculated using the Matlab CONN toolbox. Each filename refers to SBC_Subj.nii.gz. Each 4D NIfTI file contains 11 volumes that correspond to the 11 experiment conditions:

    Volume 1 = null; Volume 2 = rest 10 Hz; Volume 3 = task 10 Hz; Volume 4 = rest 20 Hz; Volume 5 = task 20 Hz; Volume 6 = rest 40 Hz; Volume 7 = task 40 Hz; Volume 8 = rest 5 Hz; Volume 9 = task 5 Hz; Volume 10 = rest No Stimulation; Volume 11 = task No Stimulation.

    Seed correlation values are Fisher Z-normalized (for more information, see the CONN toolbox or the associated manuscript).
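The fixed volume-to-condition ordering above lends itself to a small lookup table. The sketch below is illustrative only (the `volume_index` helper and the nibabel snippet in the comment are not part of the dataset); it converts the 1-based volume numbers of the description into the 0-based indices used in code:

```python
# Condition order of the 11 volumes in each SBC_Subj.nii.gz file,
# taken from the dataset description (list position = 0-based volume index).
CONDITIONS = [
    "null",
    "rest 10 Hz", "task 10 Hz",
    "rest 20 Hz", "task 20 Hz",
    "rest 40 Hz", "task 40 Hz",
    "rest 5 Hz", "task 5 Hz",
    "rest No Stimulation", "task No Stimulation",
]

def volume_index(condition: str) -> int:
    """Map a condition name to its 0-based volume index."""
    return CONDITIONS.index(condition)

# With nibabel installed, one condition's correlation map could be pulled as:
#   img = nib.load("SBC_Subj.nii.gz")
#   rest40 = img.get_fdata()[..., volume_index("rest 40 Hz")]
print(volume_index("task 20 Hz"))  # volume 5 in the description → index 4
```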

  3. NIfTI data files

    • data.niaid.nih.gov
    Updated Jun 14, 2021
    Cite
    Atkinson, David (2021). NIfTI data files [Dataset]. https://data.niaid.nih.gov/resources?id=ZENODO_4940071
    Dataset updated
    Jun 14, 2021
    Dataset provided by
    University College London
    Authors
    Atkinson, David
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    NIfTI files to support the SIRF Exercises regarding Geometry.

    Data from static phantoms in MRI and a PET/MR phantom - see readme file.

  4. Brats2020 Nifti format for DeepMedic

    • kaggle.com
    zip
    Updated May 21, 2024
    Cite
    DarkStell (2024). Brats2020 Nifti format for DeepMedic [Dataset]. https://www.kaggle.com/datasets/darksteeldragon/brats2020-nifti-format-for-deepmedic
    Available download formats: zip (4468643987 bytes)
    Dataset updated
    May 21, 2024
    Authors
    DarkStell
    License

    Public Domain (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Overview: This dataset consists of brain tumor images from the BraTS2020 challenge, formatted as NIfTI (.nii) files and specifically prepared for use with the DeepMedic framework. It includes both training and validation sets. This is my first dataset publication; I will update and fine-tune it, so please feel free to ask questions or suggest what I should edit.

    Content:

    Training Data: Contains multimodal MRI scans (T1, T1Gd, T2, FLAIR) with corresponding ground truth segmentations.
    Validation Data: Includes multimodal MRI scans without ground truth segmentations for model validation.
    Metadata: Information about each scan and its respective patient.
    

    Use Case: Ideal for training and evaluating deep learning models for brain tumor segmentation using the DeepMedic framework.

    License: Public Domain (CC0)

    Size: 4.48 GB

    Usage: This dataset can be used for developing and testing machine learning algorithms for medical image segmentation, particularly in the context of brain tumor analysis.

    Citations: Please cite the BraTS2020 challenge and the authors of the dataset in any publications or presentations.

  5. spleen nifti file

    • kaggle.com
    zip
    Updated Mar 10, 2025
    Cite
    Ashraf Alsinglawi (2025). spleen nifti file [Dataset]. https://www.kaggle.com/datasets/ashrafalsinglawi/spleen-nifti-file
    Available download formats: zip (12882908 bytes)
    Dataset updated
    Mar 10, 2025
    Authors
    Ashraf Alsinglawi
    Description

    Abdominal cross-section as a NIfTI volume showing the spleen. Note: NIfTI files store all their pixel dimensions in a single place, the pixdim header field.
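As an illustration of that note, pixdim sits at a fixed byte offset in every NIfTI-1 header, so it can even be read without an imaging library (in practice nibabel's header access does this for you; the offset below is from the NIfTI-1 specification, and the synthetic header is made up purely for the demo):

```python
import struct

PIXDIM_OFFSET = 76  # byte offset of the pixdim[8] float array in a NIfTI-1 header

def read_pixdim(header_bytes: bytes) -> tuple:
    """Return the 8 pixdim floats from a little-endian NIfTI-1 header.
    pixdim[1:4] are the voxel dimensions along x, y, z (usually in mm)."""
    return struct.unpack_from("<8f", header_bytes, PIXDIM_OFFSET)

# Synthetic 348-byte header carrying 1.5 x 1.5 x 3.0 mm voxels (demo only;
# a real .nii.gz would first need gzip decompression).
hdr = bytearray(348)
struct.pack_into("<8f", hdr, PIXDIM_OFFSET, 1.0, 1.5, 1.5, 3.0, 0, 0, 0, 0)
print(read_pixdim(bytes(hdr))[1:4])  # → (1.5, 1.5, 3.0)
```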

  6. HCP-YA Tractography Atlas (NIFTI Files)

    • data.niaid.nih.gov
    Updated Mar 1, 2022
    Cite
    Fang-Cheng Yeh (2022). HCP-YA Tractography Atlas (NIFTI Files) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3627771
    Dataset updated
    Mar 1, 2022
    Dataset provided by
    University of Pittsburgh
    Authors
    Fang-Cheng Yeh
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Yeh, F. C., Panesar, S., Fernandes, D., Meola, A., Yoshino, M., Fernandez-Miranda, J. C., ... & Verstynen, T. (2018). Population-averaged atlas of the macroscale human structural connectome and its network topology. NeuroImage, 178, 57-68.

  7. 2D high-resolution synthetic MR images of Alzheimer's patients and healthy...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Dec 13, 2023
    Cite
    Lai, Matteo; Marzi, Chiara; Citi, Luca; Diciotti, Stefano (2023). 2D high-resolution synthetic MR images of Alzheimer's patients and healthy subjects using PACGAN [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8276785
    Dataset updated
    Dec 13, 2023
    Dataset provided by
    University of Bologna
    University of Essex
    University of Firenze
    Authors
    Lai, Matteo; Marzi, Chiara; Citi, Luca; Diciotti, Stefano
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset encompasses a NIfTI file containing a collection of 500 images, each capturing the central axial slice of a synthetic brain MRI.

    Accompanying this file is a CSV dataset that serves as a repository for the corresponding labels linked to each image:

    Label 0: Healthy Controls (HC)

    Label 1: Alzheimer's Disease (AD)

    Each image within this dataset has been generated by PACGAN (Progressive Auxiliary Classifier Generative Adversarial Network), a framework designed and implemented by the AI for Medicine Research Group at the University of Bologna.

    PACGAN is a generative adversarial network trained to generate high-resolution images belonging to different classes. In our work, we trained this framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which contains brain MRI images of AD patients and HC.

    The implementation of the training algorithm can be found within our GitHub repository, with Docker containerization.

    For further exploration, the pre-trained models are available within the Code Ocean capsule. These models can facilitate the generation of synthetic images for both classes and also aid in classifying new brain MRI images.

  8. Training Dataset for HNTSMRG 2024 Challenge

    • data.niaid.nih.gov
    Updated Jun 21, 2024
    Cite
    Wahid, Kareem; Dede, Cem; Naser, Mohamed; Fuller, Clifton (2024). Training Dataset for HNTSMRG 2024 Challenge [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_11199558
    Dataset updated
    Jun 21, 2024
    Dataset provided by
    The University of Texas MD Anderson Cancer Center
    Authors
    Wahid, Kareem; Dede, Cem; Naser, Mohamed; Fuller, Clifton
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Training Dataset for HNTSMRG 2024 Challenge

    Overview

    This repository houses the publicly available training dataset for the Head and Neck Tumor Segmentation for MR-Guided Applications (HNTSMRG) 2024 Challenge.

    Patient cohorts correspond to patients with histologically proven head and neck cancer who underwent radiotherapy (RT) at The University of Texas MD Anderson Cancer Center. The cancer types are predominantly oropharyngeal cancer or cancer of unknown primary. Images include a pre-RT T2w MRI scan (1-3 weeks before start of RT) and a mid-RT T2w MRI scan (2-4 weeks intra-RT) for each patient. Segmentation masks of primary gross tumor volumes (abbreviated GTVp) and involved metastatic lymph nodes (abbreviated GTVn) are provided for each image (derived from multi-observer STAPLE consensus).

    HNTSMRG 2024 is split into 2 tasks:

    Task 1: Segmentation of tumor volumes (GTVp and GTVn) on pre-RT MRI.

    Task 2: Segmentation of tumor volumes (GTVp and GTVn) on mid-RT MRI.

    The same patient cases will be used for the training and test sets of both tasks of this challenge. Therefore, we are releasing a single training dataset that can be used to construct solutions for either segmentation task. The test data provided (via Docker containers), however, will be different for the two tasks. Please consult the challenge website for more details.

    Data Details

    DICOM files (images and structure files) have been converted to NIfTI format (.nii.gz) for ease of use by participants via DICOMRTTool v. 1.0.

    Images are a mix of fat-suppressed and non-fat-suppressed MRI sequences. Pre-RT and mid-RT image pairs for a given patient are consistently either fat-suppressed or non-fat-suppressed.

    Though some sequences may appear to be contrast enhancing, no exogenous contrast is used.

    All images have been manually cropped from the top of the clavicles to the bottom of the nasal septum (~ oropharynx region to shoulders), allowing for more consistent image fields of view and removal of identifiable facial structures.

    The mask files have one of three possible values: background = 0, GTVp = 1, GTVn = 2 (in the case of multiple lymph nodes, they are combined into a single GTVn label). This labeling convention is similar to the 2022 HECKTOR Challenge.
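Because GTVp and GTVn share one integer-labeled mask, separating the two structures is a simple comparison. A minimal numpy sketch (the `split_labels` helper and the toy array are illustrative, not part of the dataset):

```python
import numpy as np

BACKGROUND, GTVP, GTVN = 0, 1, 2  # label convention stated in the dataset notes

def split_labels(mask: np.ndarray):
    """Return boolean masks for the primary tumor (GTVp) and nodes (GTVn)."""
    return mask == GTVP, mask == GTVN

# Toy 2D stand-in for a *_mask.nii.gz volume; a case may also be all zeros
# (complete response at mid-RT), which is valid and not an error.
mask = np.array([[0, 1, 1],
                 [2, 2, 0]])
gtvp, gtvn = split_labels(mask)
print(int(gtvp.sum()), int(gtvn.sum()))  # voxel counts per structure
```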

    150 unique patients are included in this dataset. Anonymized patient numeric identifiers are utilized.

    The entire training dataset is ~15 GB.

    Dataset Folder/File Structure

    The dataset is uploaded as a ZIP archive. Please unzip before use. NIfTI files conform to the following standardized nomenclature: ID_timepoint_image/mask.nii.gz. For mid-RT files, a "registered" suffix (ID_timepoint_image/mask_registered.nii.gz) indicates the image or mask has been registered to the mid-RT image space (see more details in Additional Notes below).

    The data is provided with the following folder hierarchy:

    Top-level folder (named "HNTSMRG24_train")
      Patient-level folder (anonymized patient ID, example: "2")
        Pre-radiotherapy data folder ("preRT")
          Original pre-RT T2w MRI volume (example: "2_preRT_T2.nii.gz")
          Original pre-RT tumor segmentation mask (example: "2_preRT_mask.nii.gz")
        Mid-radiotherapy data folder ("midRT")
          Original mid-RT T2w MRI volume (example: "2_midRT_T2.nii.gz")
          Original mid-RT tumor segmentation mask (example: "2_midRT_mask.nii.gz")
          Registered pre-RT T2w MRI volume (example: "2_preRT_T2_registered.nii.gz")
          Registered pre-RT tumor segmentation mask (example: "2_preRT_mask_registered.nii.gz")

    Note: Cases will exhibit variable presentation of ground truth mask structures. For example, a case could have only a GTVp label present, only a GTVn label present, both GTVp and GTVn labels present, or a completely empty mask (i.e., complete tumor response at mid-RT). The following case IDs have empty masks at mid-RT (indicating a complete response): 21, 25, 29, 42. These empty masks are not errors. There will similarly be some cases in the test set for Task 2 that have empty masks.

    Details Relevant for Algorithm Building

    The goal of Task 1 is to generate a pre-RT tumor segmentation mask (e.g., "2_preRT_mask.nii.gz" is the relevant label). During blind testing for Task 1, only the pre-RT MRI (e.g., "2_preRT_T2.nii.gz") will be provided to the participants' algorithms.

    The goal of Task 2 is to generate a mid-RT segmentation mask (e.g., "2_midRT_mask.nii.gz" is the relevant label). During blind testing for Task 2, the mid-RT MRI (e.g., "2_midRT_T2.nii.gz"), original pre-RT MRI (e.g., "2_preRT_T2.nii.gz"), original pre-RT tumor segmentation mask (e.g., "2_preRT_mask.nii.gz"), registered pre-RT MRI (e.g., "2_preRT_T2_registered.nii.gz"), and registered pre-RT tumor segmentation mask (e.g., "2_preRT_mask_registered.nii.gz") will be provided to the participants' algorithms.

    When building models, the resolution of the generated prediction masks should be the same as the corresponding MRI for the given task. In other words, the generated masks should be in the correct pixel spacing and origin with respect to the original reference frame (i.e., pre-RT image for Task 1, mid-RT image for Task 2). More details on the submission of models will be located on the challenge website.

    Additional Notes

    General notes.

    NIfTI format images and segmentations may be easily visualized in any NIfTI viewing software such as 3D Slicer.

    Test data will not be made public until the completion of the challenge. The complete training and test data will be published together (along with all original multi-observer annotations and relevant clinical data) at a later date via The Cancer Imaging Archive. Expected date ~ Spring 2025.

    Task 1 related notes.

    When training their algorithms for Task 1, participants can choose to use only pre-RT data or add in mid-RT data as well. Initially, our plan was to limit participants to utilizing only pre-RT data for training their algorithms in Task 1. However, upon reflection, we recognized that in a practical setting, individuals aiming to develop auto-segmentation algorithms could theoretically train models using any accessible data at their disposal. Based on current literature, we actually don't know what the best solution would be! Would the incorporation of mid-RT data for training a pre-RT segmentation model actually be helpful, or would it merely introduce harmful noise? The answer remains unclear. Therefore, we leave this choice to the participants.

    Remember, though, during testing, you will ONLY have the pre-RT image as an input to your model (naturally, since Task 1 is a pre-RT segmentation task and you won't know what mid-RT data for a patient will look like).

    Task 2 related notes.

    In addition to the mid-RT MRI and segmentation mask, we have also provided a registered pre-RT MRI and the corresponding registered pre-RT segmentation mask for each patient. We offer this data for participants who opt not to integrate any image registration techniques into their algorithms for Task 2 but still wish to use the two images as a joint input to their model. Moreover, in a real-world adaptive RT context, such registered scans are typically readily accessible. Naturally, participants are also free to incorporate their own image registration processes into their pipelines if they wish (or ignore the pre-RT images/masks altogether).

    Registrations were generated using SimpleITK, where the mid-RT image serves as the fixed image and the pre-RT image serves as the moving image. Specifically, we utilized the following steps: 1. Apply a centered transformation, 2. Apply a rigid transformation, 3. Apply a deformable transformation with Elastix using a preset parameter map (Parameter map 23 in the Elastix Zoo). This particular deformable transformation was selected as it is open-source and was benchmarked in a previous similar application (https://doi.org/10.1002/mp.16128). For cases where excessive warping was noted during deformable registration (a small minority of cases), only the rigid transformation was applied.

    Contact

    We have set up a general email address that you can message to notify all organizers at: hntsmrg2024@gmail.com. Additional specific organizer contacts:

    Kareem A. Wahid, PhD (kawahid@mdanderson.org)

    Cem Dede, MD (cdede@mdanderson.org)

    Mohamed A. Naser, PhD (manaser@mdanderson.org)

  9. resting-state fMRI data for Normal adults

    • data.mendeley.com
    Updated Feb 4, 2021
    + more versions
    Cite
    Tie-Qiang Li (2021). resting-state fMRI data for Normal adults [Dataset]. http://doi.org/10.17632/pt9d2rdv46.1
    Dataset updated
    Feb 4, 2021
    Authors
    Tie-Qiang Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Whole brain resting-state fMRI data from 227 healthy adults between 18 and 78 years old, acquired on a 3T clinical MRI scanner. The tar file contains 227 compressed nifti files. The first letter in each file name indicates the gender of the volunteer (f for female, m for male). The second and third characters are digits indicating the volunteer's age. The remaining letters are randomized to encode individual subjects. The original images were reconstructed in DICOM format and converted to NIfTI file format.
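The naming scheme above can be decoded mechanically. A hypothetical helper (the function name and the example filename are made up; only the gender/age encoding comes from the description):

```python
import re

def parse_subject(filename: str) -> dict:
    """Decode gender (1st letter) and age (2nd-3rd characters) from a file
    name, per the naming scheme in the dataset description."""
    m = re.match(r"([fm])(\d{2})", filename)
    if m is None:
        raise ValueError(f"unexpected file name: {filename}")
    return {"gender": m.group(1), "age": int(m.group(2))}

print(parse_subject("f23abcd.nii.gz"))  # → {'gender': 'f', 'age': 23}
```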

  10. Registered histology, MRI, and manual annotations of over 300 brain regions...

    • rdr.ucl.ac.uk
    txt
    Updated Oct 6, 2023
    Cite
    Eugenio Iglesias Gonzalez; Adria Casamitjana; Alessia Atzeni; Benjamin Billot; David Thomas; Emily Blackburn; James Hughes; Juri Althonayan; Loic Peter; Matteo Mancini; Nellie Robinson; Peter Schmidt; Shauna Crampsie (2023). Registered histology, MRI, and manual annotations of over 300 brain regions in 5 human hemispheres (data from ERC Starting Grant 677697 "BUNGEE-TOOLS") [Dataset]. http://doi.org/10.5522/04/24243835.v1
    Available download formats: txt
    Dataset updated
    Oct 6, 2023
    Dataset provided by
    University College London
    Authors
    Eugenio Iglesias Gonzalez; Adria Casamitjana; Alessia Atzeni; Benjamin Billot; David Thomas; Emily Blackburn; James Hughes; Juri Althonayan; Loic Peter; Matteo Mancini; Nellie Robinson; Peter Schmidt; Shauna Crampsie
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary:

    This repository includes data related to the ERC Starting Grant project 677697: "Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies" (BUNGEE-TOOLS). It includes: (a) Dense histological sections from five human hemispheres with manual delineations of >300 brain regions; (b) Corresponding ex vivo MRI scans; (c) Dissection photographs; (d) A spatially aligned version of the dataset; (e) A probabilistic atlas built from the hemispheres; and (f) Code to apply the atlas to automated segmentation of in vivo MRI scans.

    More detailed description on what this dataset includes:

    Data files and Python code for Bayesian segmentation of human brain MRI based on a next-generation, high-resolution histological atlas: "Next-Generation histological atlas for high-resolution segmentation of human brain MRI" A Casamitjana et al., in preparation. This repository contains a set of zip files, each corresponding to one directory. Once decompressed, each directory has a readme.txt file explaining its contents. The list of zip files / compressed directories is:

    • 3dAtlas.zip: nifti files with summary imaging volumes of the probabilistic atlas.

    • BlockFacePhotoBlocks.zip: nifti files with the block face photographs acquired during tissue sectioning, reconstructed into 3D volumes (in RGB).

    • Histology.zip: jpg files with the LFB and H&E stained sections.

    • HistologySegmentations.zip: 2D nifti files with the segmentations of the histological sections.

    • MRI.zip: ex vivo T2-weighted MRI scans and corresponding FreeSurfer processing files

    • SegmentationCode.zip: contains the Python code and data files that we used to segment brain MRI scans and obtain the results presented in the article (for reproducibility purposes). Note that it requires an installation of FreeSurfer. Also, note that the code is also maintained in FreeSurfer (but may not produce exactly the same results): https://surfer.nmr.mgh.harvard.edu/fswiki/HistoAtlasSegmentation

    • WholeHemispherePhotos.zip: photographs of the specimens prior to dissection

    • WholeSlicePhotos.zip: photographs of the tissue slabs prior to blocking.

    We also note that the registered images for the five cases can be found in GitHub: https://github.com/UCL/BrainAtlas-P41-16 https://github.com/UCL/BrainAtlas-P57-16 https://github.com/UCL/BrainAtlas-P58-16 https://github.com/UCL/BrainAtlas-P85-18 https://github.com/UCL/BrainAtlas-EX9-19
    These registered images can be interactively explored with the following web interface: https://github-pages.ucl.ac.uk/BrainAtlas/#/atlas

  11. COBRE preprocessed with NIAK 0.17 - lightweight release

    • figshare.com
    application/gzip
    Updated Nov 3, 2016
    Cite
    Pierre Bellec (2016). COBRE preprocessed with NIAK 0.17 - lightweight release [Dataset]. http://doi.org/10.6084/m9.figshare.4197885.v1
    Available download formats: application/gzip
    Dataset updated
    Nov 3, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Pierre Bellec
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Content

    This work is a derivative of the COBRE sample found in the International Neuroimaging Data-sharing Initiative (INDI), originally released under Creative Commons -- Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 72 patients diagnosed with schizophrenia (58 males, age range = 18-65 yrs) and 74 healthy controls (51 males, age range = 18-65 yrs). The fMRI dataset for each subject is a single nifti file (.nii.gz) featuring 150 EPI blood-oxygenation level dependent (BOLD) volumes obtained in 5 min (TR = 2 s, TE = 29 ms, FA = 75°, 32 slices, voxel size = 3x3x4 mm3, matrix size = 64x64, FOV = mm2). The data processing as well as packaging was implemented by Pierre Bellec, CRIUGM, Department of Computer Science and Operations Research, University of Montreal, 2016.

    The COBRE preprocessed fMRI release more specifically contains the following files:

    • README.md: a markdown (text) description of the release.

    • phenotypic_data.tsv.gz: a gzipped tab-separated value file, with each column representing a phenotypic variable as well as measures of data quality (related to motion). Each row corresponds to one participant, except the first row, which contains the names of the variables (see file below for a description).

    • keys_phenotypic_data.json: a json file describing each variable found in phenotypic_data.tsv.gz.

    • fmri_XXXXXXX.tsv.gz: a gzipped tab-separated value file, with each column representing a confounding variable for the time series of participant XXXXXXX (the same participant ID found in phenotypic_data.tsv.gz). Each row corresponds to a time frame, except for the first row, which contains the names of the variables (see file below for a definition).

    • keys_confounds.json: a json file describing each variable found in the files fmri_XXXXXXX.tsv.gz.

    • fmri_XXXXXXX.nii.gz: a 3D+t nifti volume at 6 mm isotropic resolution, stored as short (16 bit) integers, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Each fMRI dataset features 150 volumes.

    Usage recommendations

    Individual analyses: you may want to remove some time frames with excessive motion for each subject; see the confounding variable called scrub in fmri_XXXXXXX.tsv.gz. After removing these time frames there may not be enough usable data; we recommend a minimum of 60 time frames. A fairly large number of confounds have been made available as part of the release (slow time drifts, motion parameters, frame displacement, scrubbing, average WM/Vent signal, COMPCOR, global signal). We strongly recommend regression of slow time drifts; everything else is optional.

    Group analyses: there will also be some residual effects of motion, which you may want to regress out from connectivity measures at the group level. The number of acceptable time frames as well as a measure of residual motion (called frame displacement, as described by Power et al., NeuroImage 2012) can be found in the variables Frames OK and FD scrubbed in phenotypic_data.tsv.gz. Finally, the simplest use case with these data is to predict the overall presence of a diagnosis of schizophrenia (values Control or Patient in the phenotypic variable Subject Type). You may want to match the control and patient samples in terms of amount of motion, as well as age and sex. Note that more detailed diagnostic categories are available in the variable Diagnosis.

    Preprocessing

    The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.17, under CentOS version 6.3 with Octave (http://gnu.octave.org) version 4.0.2 and the Minc toolkit (http://www.bic.mni.mcgill.ca/ServicesSoftware/ServicesSoftwareMincToolKit) version 0.3.18. Each fMRI dataset was corrected for inter-slice difference in acquisition time, and the parameters of a rigid-body motion were estimated for each time frame. Rigid-body motion was estimated within as well as between runs, using the median volume of the first run as a target. The median volume of one selected fMRI run for each subject was coregistered with a T1 individual scan using Minctracc (Collins and Evans, 1998), which was itself non-linearly transformed to the Montreal Neurological Institute (MNI) template (Fonov et al., 2011) using the CIVET pipeline (Ad-Dab'bagh et al., 2006). The MNI symmetric template was generated from the ICBM152 sample of 152 young adults, after 40 iterations of non-linear coregistration. The rigid-body transform, fMRI-to-T1 transform and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 6 mm isotropic resolution.

    Note that a number of confounding variables were estimated and are made available as part of the release. WARNING: no confounds were actually regressed from the data, so this can be done interactively by the user, who can easily explore different analytical paths. The "scrubbing" method of Power et al. (2012) was used to identify volumes with excessive motion (frame displacement greater than 0.5 mm). A minimum of 60 unscrubbed volumes per run, corresponding to ~120 s of acquisition, is recommended for further analysis. The following nuisance parameters were estimated: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off), average signals in conservative masks of the white matter and the lateral ventricles as well as the six rigid-body motion parameters (Giove et al., 2009), anatomical COMPCOR signal in the ventricles and white matter (Chai et al., 2012), and a PCA-based estimator of the global signal (Carbonell et al., 2011). The fMRI volumes were not spatially smoothed.

    References

    Ad-Dab'bagh, Y., Einarson, D., Lyttelton, O., Muehlboeck, J. S., Mok, K., Ivanov, O., Vincent, R. D., Lepage, C., Lerch, J., Fombonne, E., Evans, A. C., 2006. The CIVET Image-Processing Environment: A Fully Automated Comprehensive Pipeline for Anatomical Neuroimaging Research. In: Corbetta, M. (Ed.), Proceedings of the 12th Annual Meeting of the Human Brain Mapping Organization. NeuroImage, Florence, Italy.

    Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126-1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082

    Carbonell, F., Bellec, P., Shmuel, A., 2011. Validation of a superposition model of global and system-specific resting state activity reveals anti-correlated networks. Brain Connectivity 1 (6), 496-510. doi:10.1089/brain.2011.0065

    Chai, X. J., Castañón, A. N., Öngür, D., Whitfield-Gabrieli, S., 2012. Anticorrelations in resting state networks without global signal regression. NeuroImage 59 (2), 1420-1428. http://dx.doi.org/10.1016/j.neuroimage.2011.08.048

    Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271-1294.

    Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313-327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033

    Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058-1064. http://dx.doi.org/10.1016/j.mri.2009.06.004

    Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142-2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
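The scrubbing recommendation in the release notes (drop frames with frame displacement above 0.5 mm, require at least 60 surviving frames) can be sketched as a small filter. The `usable_frames` helper below is illustrative only; in practice the FD series would come from the scrub/FD columns of fmri_XXXXXXX.tsv.gz:

```python
import numpy as np

FD_THRESHOLD = 0.5  # mm; scrubbing threshold used in the release (Power et al., 2012)
MIN_FRAMES = 60     # recommended minimum of usable frames (~120 s at TR = 2 s)

def usable_frames(fd: np.ndarray):
    """Return indices of frames passing the motion criterion, or None if too
    few frames survive for a reliable individual analysis."""
    keep = np.flatnonzero(fd <= FD_THRESHOLD)
    return keep if keep.size >= MIN_FRAMES else None

# Synthetic FD trace: 100 still frames followed by 50 high-motion frames.
fd = np.concatenate([np.full(100, 0.1), np.full(50, 0.9)])
frames = usable_frames(fd)
print(None if frames is None else frames.size)  # → 100
```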

  12. A high-density diffuse optical tomography dataset of naturalistic viewing

    • openneuro.org
    Updated Sep 22, 2023
    Cite
    Arefeh Sherafati*; Aahana Bajracharya*; Michael S. Jones; Emma Speh; Monalisa Munsi; Chen Hao Lin; Andrew K. Fishell; Tamara Hershey; Adam Eggebrecht; Joseph P. Culver; Jonathan E. Peelle (2023). A high-density diffuse optical tomography dataset of naturalistic viewing [Dataset]. http://doi.org/10.18112/openneuro.ds004569.v1.0.0
    Explore at:
    Dataset updated
    Sep 22, 2023
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Arefeh Sherafati*; Aahana Bajracharya*; Michael S. Jones; Emma Speh; Monalisa Munsi; Chen Hao Lin; Andrew K. Fishell; Tamara Hershey; Adam Eggebrecht; Joseph P. Culver; Jonathan E. Peelle
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    README

    This data set is described in Sherafati et al., “A high-density diffuse optical tomography dataset of naturalistic viewing.” A link will be provided once the paper is available.

    Traditional laboratory tasks offer tight experimental control, but lack the richness of our everyday human experience. As a result, many cognitive neuroscientists have been motivated to adopt experimental paradigms that are more natural, such as stories and movies. Here we describe data from 58 healthy adult participants (aged 18-76 years) who viewed at least 10 minutes of a movie (The Good, the Bad, and the Ugly, 1966); 36 of the participants viewed the clip more than once, resulting in 106 sessions of data. The data were collected on a custom high-density diffuse optical tomography system (~1200 total measurements), covering large portions of the superficial aspects of the occipital, temporal, parietal, and frontal lobes. Data are provided in both channel format and projected to standard space, using an atlas-based light model. The data are suitable for methods exploration as well as for studying a wide variety of cognitive phenomena.

    Stimuli and paradigm

    Participants watched a 10-minute clip from The Good, the Bad, and the Ugly (1966). Some participants returned for multiple sessions and watched the same clip again, facilitating test-retest reliability or validation analyses.

    Preprocessing

    Data pre-processing was done using the NeuroDOT toolbox, based on the principles of modeling light emission, diffusion, and detection through the head. The latest version of the NeuroDOT toolbox can be found at https://www.nitrc.org/projects/neurodot. These preprocessed data were then converted to .nii file format for analysis and sharing purposes. Additionally, the ndot2snirf function in NeuroDOT was used to convert the raw data to SNIRF file format, followed by the snirf2bids function to generate the other necessary metadata files to satisfy the BIDS specification for NIRS.

    Neuroimaging file types and organization

    Data are provided in three formats: NeuroDOT (.mat file), SNIRF (.snirf file), and Nifti (.nii file).

    The Nifti files follow standard BIDS naming conventions for fMRI data.

    SNIRF and NeuroDOT files are in the nirs folder for each subject. Because nirs is not a recognized modality for OpenNeuro, we have also added nirs to .bidsignore to facilitate validation.

    Important note on timing information

    For the SNIRF and NeuroDOT formatted data, the onset of the movie varies across participants. For each subject, the onset time is provided XXXX.

    For the Nifti formatted data, extraneous data points have been trimmed. Thus time 0 of the data (i.e., the first frame) coincides with the onset of the movie.

  13. Age- and ethnicity-specific brain templates and growth charts for children...

    • scidb.cn
    Updated Dec 10, 2020
    Cite
    Hao-Ming Dong; F. Xavier Castellanos; Ning Yang; Zhe Zhang; Quan Zhou; Ye He; Lei Zhang; Ting Xu; Avram J. Holmes; B.T. Thomas Yeo; Feiyan Chen; Bin Wang; Christian Beckmann; Tonya White; Olaf Sporns; Jiang Qiu; Tingyong Feng; Antao Chen; Xun Liu; Xu Chen; Xuchu Weng; Michael P. Milham; Xi-Nian Zuo (2020). Age- and ethnicity-specific brain templates and growth charts for children and adolescents at school age [Dataset]. http://doi.org/10.11922/sciencedb.00362
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Dec 10, 2020
    Dataset provided by
    Science Data Bank
    Authors
    Hao-Ming Dong; F. Xavier Castellanos; Ning Yang; Zhe Zhang; Quan Zhou; Ye He; Lei Zhang; Ting Xu; Avram J. Holmes; B.T. Thomas Yeo; Feiyan Chen; Bin Wang; Christian Beckmann; Tonya White; Olaf Sporns; Jiang Qiu; Tingyong Feng; Antao Chen; Xun Liu; Xu Chen; Xuchu Weng; Michael P. Milham; Xi-Nian Zuo
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Description: Brain growth charts and age-normed brain templates are essential resources for researchers to eventually contribute to the care of individuals with atypical developmental trajectories. The present work generates age-normed brain templates for children and adolescents at one-year intervals and the corresponding growth charts to investigate the influences of age and ethnicity using a common pediatric neuroimaging protocol. Two accelerated longitudinal cohorts with the identical experimental design were implemented in the United States and China. Anatomical magnetic resonance imaging (MRI) of typically developing school-age children (TDC) was obtained up to three times at nominal intervals of 1.25 years. The protocol generated and compared population- and age-specific brain templates and growth charts, respectively. A total of 674 Chinese pediatric MRI scans were obtained from 457 Chinese TDC, and 190 American pediatric MRI scans were obtained from 133 American TDC.

    Population- and age-specific brain templates were used to quantify warp cost, the difference between individual brains and brain templates. Volumetric growth charts for labeled brain network areas were generated. Shape analyses of cost functions supported the necessity of age-specific and ethnicity-matched brain templates, which was confirmed by growth chart analyses. These analyses revealed volumetric growth differences between the two ethnicities primarily in lateral frontal and parietal areas, regions which are most variable across individuals in regard to their structure and function. Age- and ethnicity-specific brain templates facilitate establishing unbiased pediatric brain growth charts, indicating the necessity of the brain charts and brain templates generated in tandem. These templates and growth charts, as well as related code, have been made freely available to the public for open neuroscience.

    Usage: The age-specific head and brain templates can be used in pediatric neuroimaging studies to provide a standard reference for head and brain spaces. Sample code for such uses can be found on GitHub (https://github.com/zuoxinian/CCS/tree/master/H3/GrowthCharts). The growth charts for school-age children and adolescents provide a normal growth standard for brain development across school age; together with the normative modeling methods, they offer an analytic way of implementing individualized or personalized pediatrics. All the templates and growth charts are downloadable as NIfTI files. For a given NIfTI file in the dataset, IPCAS indicates the Chinese school-age template and NKI-named files indicate the American template. Users can find the age-specific template named (IPCAS/NKI)_age(X)_brain_template.nii.gz; the different tissue templates are also provided by this dataset, named (IPCAS/NKI)_age(X)_brain_pve(_0/_1/_2/seg), in which 0 indicates CSF, 1 indicates gray matter, 2 indicates white matter, and seg indicates hard segmentation.
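The naming scheme described above can be parsed mechanically. A hedged sketch; the exact tissue-map suffixes (`pve_0`, `pve_1`, `pve_2`, `pveseg`) and the `.nii.gz` extension on the tissue maps are assumptions inferred from the description:

```python
import re

# Pattern from the description: (IPCAS/NKI)_age(X)_brain_template.nii.gz,
# plus tissue maps (IPCAS/NKI)_age(X)_brain_pve(_0/_1/_2/seg) -- suffixes assumed.
TEMPLATE_RE = re.compile(
    r"(?P<cohort>IPCAS|NKI)_age(?P<age>\d+)"
    r"_brain_(?P<kind>template|pve_0|pve_1|pve_2|pveseg)\.nii\.gz$"
)

TISSUE_LABELS = {
    "template": "head/brain template",
    "pve_0": "CSF",
    "pve_1": "gray matter",
    "pve_2": "white matter",
    "pveseg": "hard segmentation",
}

def describe_template(filename):
    """Map a template filename to (cohort, age, content), or None if unrecognized."""
    m = TEMPLATE_RE.search(filename)
    if m is None:
        return None
    return m.group("cohort"), int(m.group("age")), TISSUE_LABELS[m.group("kind")]
```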

  14. vers19 test nifti 512 axial

    • kaggle.com
    zip
    Updated Sep 27, 2021
    Cite
    mahdi (2021). vers19 test nifti 512 axial [Dataset]. https://www.kaggle.com/datasets/mbonyani/vers19testnifti512axial
    Explore at:
    zip(2362361575 bytes)Available download formats
    Dataset updated
    Sep 27, 2021
    Authors
    mahdi
    License

    https://creativecommons.org/publicdomain/zero/1.0/https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by mahdi

    Released under CC0: Public Domain


  15. Youth of Utrecht. (2024). Structural Magnetic Resonance Imaging (MRI) [Data...

    • data.individualdevelopment.nl
    Updated Oct 17, 2024
    Cite
    (2024). Youth of Utrecht. (2024). Structural Magnetic Resonance Imaging (MRI) [Data set]. Utrecht University. https://doi.org/10.60641/ecxz-6z12 [Dataset]. https://data.individualdevelopment.nl/dataset/b7b7fa0d2ca62908b3023d47c698f022
    Explore at:
    Dataset updated
    Oct 17, 2024
    Description

    Structural magnetic resonance imaging (MRI) is a non-invasive technique for examining the anatomy and pathology of the brain (as opposed to using functional magnetic resonance imaging [fMRI] to examine brain activity). In the YOUth Baby and Child cohort, fetal MRI (T2) in the mother consisted of coronal, sagittal and/or axial T2-weighted turbo fast spin-echo sequences with the following parameters: 80 2.5mm slices; echo time (TE) 180 ms; repetition time (TR) 55321 ms; flip angle 110 degrees; in-plane voxel size 1.25 x 1.25mm^2. The raw Philips DICOM files were converted to NIfTI format via dcm2niix (v20190112, https://github.com/rordenlab/dcm2niix) and 3D reconstructed with the dHCP method (https://github.com/SVRTK/SVRTK). We visually checked the scans and segmentations for quality. Neonatal MRI (T1 and T2) in the newborn child consisted of a T2-weighted turbo fast spin-echo sequence with the following parameters: 110 1.2mm slices; echo time (TE) 150 ms; repetition time (TR) 4851 ms; flip angle 90 degrees; in-plane voxel size 1.2 x 1.2mm^2. The raw Philips DICOM files were converted to NIfTI format via dcm2niix (v20190112, https://github.com/rordenlab/dcm2niix) and volumes were calculated with the dHCP method (https://github.com/BioMedIA/dhcp-structural-pipeline). We visually checked the scans and segmentations for quality. In the YOUth Child and Adolescent cohort, adolescent MRI (T1) consisted of a T1-weighted 3D fast-field echo scan with the following parameters: 200 0.8 mm contiguous slices; echo time (TE) 4.6 ms; repetition time (TR) 10 ms; flip angle 8°; in-plane voxel size 0.75 × 0.75 mm^2. The raw Philips DICOM files were converted to NIfTI format via dcm2niix (v20190112, https://github.com/rordenlab/dcm2niix) using the flag '-p n' (no Philips precise float scaling), and subsequently defaced using mri_deface (v1.22, https://surfer.nmr.mgh.harvard.edu/fswiki/mri_deface), resulting in 4-byte float gzipped NIfTI files.

  16. Dataset: Methods for computing the maximum performance of computational...

    • zenodo.org
    bin
    Updated Jan 24, 2020
    Cite
    Sr Agustin Lage-Castellanos; Sr Agustin Lage-Castellanos (2020). Dataset: Methods for computing the maximum performance of computational models of fMRI responses. [Dataset]. http://doi.org/10.5281/zenodo.1489531
    Explore at:
    binAvailable download formats
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Sr Agustin Lage-Castellanos; Sr Agustin Lage-Castellanos
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Accompanying data for the revised version of the manuscript “Methods for computing the maximum performance of computational models of fMRI responses,” written by Agustin Lage-Castellanos, Giancarlo Valente, Elia Formisano, and Federico De Martino, submitted for publication in PLOS Computational Biology, November 2018.

    The dataset (01.rar) contains the fMRI time series for one subject in Nifti format acquired on an actively shielded MAGNETOM 7T whole body system driven by a Siemens console at Scannexus (www.scannexus.nl). Every folder (24 runs, one folder per run) contains 150 Nifti files, one Nifti file for each fMRI volume. Preprocessing consisted of slice scan-time correction (with sinc interpolation), 3-dimensional motion correction, and temporal high pass filtering (removing drifts of 4 cycles or less per run).

    The MATLAB file dmS01_24runs.mat contains a 24-length cell array of fMRI design matrices, one for every run. Every fMRI design matrix is of size 150 volumes x 51 covariates. The first 42 columns correspond to the stimuli presented (42 sounds per run). Columns 43 and 44 correspond to the run mean and the linear trend covariates. The rest of the columns correspond to the covariates obtained with GLMdenoise. The MATLAB variable stimulus, of size 24 x 42, contains the index of the sounds presented at every run. A total of 168 sounds were presented; each sound was presented 6 times across the 24 fMRI runs.
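The column layout described above (42 stimulus regressors, then run mean, linear trend, and GLMdenoise covariates, 51 columns in total) can be sketched in plain Python. The function name and the zero-filled stimulus columns are illustrative assumptions, not the authors' code:

```python
def make_design_matrix(n_volumes=150, n_stimuli=42, n_noise=7):
    """Sketch of one run's design matrix layout: 42 stimulus columns,
    a run-mean column, a linear-trend column, then noise covariates
    (42 + 2 + 7 = 51 columns). Stimulus onsets are left as placeholders.
    """
    n_cols = n_stimuli + 2 + n_noise
    X = [[0.0] * n_cols for _ in range(n_volumes)]
    for t in range(n_volumes):
        X[t][n_stimuli] = 1.0                       # column 43: run mean
        X[t][n_stimuli + 1] = t / (n_volumes - 1)   # column 44: linear trend
    return X
```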

    The file SPMgls0.rar contains the Beta images in Nifti format for every column of the fMRI design matrix, including noise covariates, for every fMRI run. This model was estimated assuming i.i.d. fMRI noise (OLS). The code for computing the noise ceiling is available in the file nccodes.rar, together with two examples of its use. SPM is required.

  17. Milky-Vodka

    • openneuro.org
    Updated Jul 14, 2018
    Cite
    Y. Yeshurun (2018). Milky-Vodka [Dataset]. https://openneuro.org/datasets/ds001131/versions/00001
    Explore at:
    Dataset updated
    Jul 14, 2018
    Dataset provided by
    OpenNeurohttps://openneuro.org/
    Authors
    Y. Yeshurun
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Task: This dataset contains 18 subjects passively listening to audio Story1 (“Milky Way”) and 18 subjects passively listening to audio Story2 (“Vodka”). Each subject has one functional run of 297 TRs, during which s/he heard an audio story (“Milky Way” or “Vodka”) while viewing a gray screen. Subjects were instructed to attend to the details of the narrative.

    Image acquisition: All imaging data were acquired using a 3T Siemens Skyra MRI scanner with a 16-channel head coil. Functional images were obtained with a T2*-weighted EPI sequence: TR=1500ms, TE=28ms, FOV=192mm, flip angle=64, thickness=4mm (3x3x4mm voxels). 27 oblique axial slices aligned to the anterior/posterior-commissure line were collected in interleaved order. To align scans, high-resolution MPRAGE T1-weighted anatomical images were collected.

    File descriptions:

    * There are 18 subjects that listened to Story1 (18 nifti files, “subj1” to “subj18”) and 18 subjects that listened to Story2 (18 nifti files, “subj19” to “subj36”)
    * Story1 audio file (“milky.aiff”) and Story2 audio file (“Vodka.aiff”)
    * Story1 text (“MilkyWayNew.doc”) and Story2 text (“vodkaNew.doc”)

    The scan included: 18 seconds of music, 3 seconds of silence, and then the story started. At the end of the story there were 15 seconds of silence. As the TR was 1.5 seconds, the story actually started at TR=15 and ended at TR=283. This is true for all the subjects except Subj18 and Subj27, who started at TR=11 and ended at TR=279 (due to trigger issues in the scanner).
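The stated onset arithmetic checks out: 18 s of music plus 3 s of silence at TR = 1.5 s puts the story start at the 15th TR. A small hedged helper (1-indexed TRs assumed, matching the description; `seconds_to_tr` is a hypothetical name):

```python
def seconds_to_tr(onset_seconds, tr=1.5):
    """Convert a stimulus onset in seconds to a 1-indexed TR number."""
    return int(onset_seconds / tr) + 1

# 18 s of music + 3 s of silence precede the story, so the story
# begins at seconds_to_tr(18 + 3) == 15, matching the description above.
story_onset_tr = seconds_to_tr(18 + 3)
```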

    Note from Neggin Keshavarzian, Data Curator, neggink@princeton.edu: All identifiable information in this dataset has been removed; facial information removed via brain extraction, and other identifiable metadata removed from the NIFTI headers.

  18. MUG500+(B) Standardized Nifti Augmented Data

    • figshare.com
    bin
    Updated Dec 22, 2023
    + more versions
    Cite
    Dhiren Oswal; Yugal Khanter (2023). MUG500+(B) Standardized Nifti Augmented Data [Dataset]. http://doi.org/10.6084/m9.figshare.24872079.v1
    Explore at:
    binAvailable download formats
    Dataset updated
    Dec 22, 2023
    Dataset provided by
    figshare
    Figsharehttp://figshare.com/
    Authors
    Dhiren Oswal; Yugal Khanter
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The following data are an extension of MUG500(B), DOI: 10.6084/m9.figshare.9616319. If using this dataset, please cite 10.6084/m9.figshare.24872079.

    The MUG500(B) set has "nrrd" defective skull scan files, with their respective implants in "stl" format. From these 29 pairs, 27 pairs had single defects and 2 pairs had double defects (hence two implants for one particular file). This dataset includes the converted "Nifti" (float32) formatted 29 pairs of craniotomy-defective skulls and their manually designed implants, with each implant-skull pair resampled to one common ROI. Each pair with double defects was split into separate samples by combining the volume of the defective skull with one implant at a time; hence the total number of available samples is 31. This standardized, resampled Nifti data is augmented with affine registration to obtain 930 pairs (31*30).

    The folder structure of the augmented data is:

    Augmented_data
    └── 03to01
        └── 4 files
    └── 03to02
        └── 4 files
    ...
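The 930 = 31 × 30 registration pairs and their `03to01`-style folder names can be enumerated as ordered sample pairs. A sketch; the zero-padding width and 1-based sample numbering are assumptions inferred from the folder names above:

```python
from itertools import permutations

def augmentation_pair_names(n_samples=31):
    """Enumerate ordered sample pairs as zero-padded folder names like '03to01'.

    31 samples give 31 * 30 = 930 registration pairs, matching the count above.
    """
    return [f"{a:02d}to{b:02d}" for a, b in permutations(range(1, n_samples + 1), 2)]
```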

  19. Data for: A principled approach to synthesize neuroimaging data for...

    • musc.digitalcommonsdata.com
    Updated Apr 26, 2021
    + more versions
    Cite
    Kenneth Vaden (2021). Data for: A principled approach to synthesize neuroimaging data for replication and exploration [Dataset]. http://doi.org/10.17632/3w9662wjpr.1
    Explore at:
    Dataset updated
    Apr 26, 2021
    Authors
    Kenneth Vaden
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The synthetic predictor tables and fully synthetic neuroimaging data produced for the analysis of fully synthetic data in the current study are available from Mendeley Data. Ten fully synthetic datasets include synthetic gray matter images (nifti files) that were generated for analysis with simulated participant data (text files). An archive file, predictor_tables.tar.gz, contains ten fully synthetic predictor tables with information for 264 simulated subjects. Due to large file sizes, a separate archive was created for each set of synthetic gray matter image data: RBS001.tar.gz, …, RBS010.tar.gz. Regression analyses were performed for each synthetic dataset, then average statistic maps were made for each contrast, which were then smoothed (see the accompanying paper for additional information).
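The archive names described above (one predictor-table archive plus ten per-dataset image archives) can be generated programmatically. A trivial sketch; the list ordering is an assumption:

```python
# One archive for the predictor tables, plus one per synthetic dataset RBS001..RBS010
archives = ["predictor_tables.tar.gz"] + [f"RBS{i:03d}.tar.gz" for i in range(1, 11)]
```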

    The supplementary materials also include commented MATLAB and R code to implement the current neuroimaging data synthesis methods (SKexample.zip). The example data were selected from an earlier fMRI study (Kuchinsky et al., 2012) to demonstrate that the current approach can be used with other types of neuroimaging data. The example code can also be adapted to produce fully synthetic group-level datasets based on observed neuroimaging data from other sources. The zip archive includes a document with important information for performing the example analyses, and details that should be communicated with recipients of a synthetic neuroimaging dataset.

    Kuchinsky, S.E., Vaden, K.I., Keren, N.I., Harris, K.C., Ahlstrom, J.B., Dubno, J.R., Eckert, M.A., 2012. Word intelligibility and age predict visual cortex activity during word listening. Cerebral Cortex 22, 1360–71. https://doi.org/10.1093/cercor/bhr211

  20. Morphospace of white matter cognition: ambiguous.nii

    • neurovault.org
    nifti
    Updated Jul 3, 2025
    + more versions
    Cite
    (2025). Morphospace of white matter cognition: ambiguous.nii [Dataset]. http://identifiers.org/neurovault.image:903154
    Explore at:
    niftiAvailable download formats
    Dataset updated
    Jul 3, 2025
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    fMRI data were projected into the white matter using the Functionnectome. The number of subjects refers to the number of fMRI literature sources.


    Collection description

    Description: To characterise the organisation of cognitive processes in the brain's white matter, we created a morphospace following the procedure described in Pacella et al. (2024). We used 506 meta-analytic association maps from Neurosynth (www.neurosynth.org) projected to the white matter, which are statistical representations of fMRI activation patterns linked to specific cognitive terms. Here, we upload the files to build this morphospace:

    - `Images/` – NIfTI files for 506 cognitive terms (from Neurosynth) projected to the white matter

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    meta-analysis

    Cognitive paradigm (task)

    Other evaluation task

    Map type

    Other
