100+ datasets found
  1. NIfTI data files

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jun 14, 2021
    Cite
    David Atkinson (2021). NIfTI data files [Dataset]. http://doi.org/10.5281/zenodo.4940072
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 14, 2021
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    David Atkinson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    NIfTI files to support the SIRF Exercises regarding Geometry.

    Data from static phantoms in MRI and a PET/MR phantom - see readme file.

  2. HCP-YA Tractography Atlas (NIFTI Files)

    • data.niaid.nih.gov
    Updated Mar 1, 2022
    Cite
    Fang-Cheng Yeh (2022). HCP-YA Tractography Atlas (NIFTI Files) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3627771
    Explore at:
    Dataset updated
    Mar 1, 2022
    Dataset authored and provided by
    Fang-Cheng Yeh
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Yeh, F. C., Panesar, S., Fernandes, D., Meola, A., Yoshino, M., Fernandez-Miranda, J. C., ... & Verstynen, T. (2018). Population-averaged atlas of the macroscale human structural connectome and its network topology. NeuroImage, 178, 57-68.

  3. resting-state fMRI data for Normal adults

    • data.mendeley.com
    Updated Feb 10, 2021
    Cite
    Tie-Qiang Li (2021). resting-state fMRI data for Normal adults [Dataset]. http://doi.org/10.17632/pt9d2rdv46.2
    Explore at:
    Dataset updated
    Feb 10, 2021
    Authors
    Tie-Qiang Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Whole-brain resting-state fMRI data from 227 healthy adults between 18 and 78 years old, acquired on a 3T clinical MRI scanner. The tar file contains 227 compressed nifti files. The first letter of each file name indicates the volunteer's gender (f for female, m for male), the next two digits give the volunteer's age, and the remaining characters are randomized to encode individual subjects. The original images were reconstructed into dicom format and then converted to nifti file format.
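The naming convention described above can be decoded with a small helper. A minimal sketch, assuming the pattern is exactly one gender letter followed by a two-digit age (the example filename below is hypothetical, not taken from the dataset):

```python
import re

def parse_subject_filename(name: str) -> dict:
    """Decode gender and age from names of the assumed form
    '<f|m><two-digit age><random characters>...'."""
    m = re.match(r"^([fm])(\d{2})", name)
    if m is None:
        raise ValueError(f"unexpected filename: {name!r}")
    return {
        "gender": "female" if m.group(1) == "f" else "male",
        "age": int(m.group(2)),
    }

# Hypothetical example filename:
info = parse_subject_filename("f23xqzr.nii")
# info == {"gender": "female", "age": 23}
```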

  4. Brain/MINDS Marmoset Brain MRI NA216 (In Vivo) and eNA91 (Ex Vivo) datasets

    • dataportal.brainminds.jp
    nifti-1
    Updated Jan 30, 2024
    Cite
    Junichi Hata; Ken Nakae; Daisuke Yoshimaru; Hideyuki Okano (2024). Brain/MINDS Marmoset Brain MRI NA216 (In Vivo) and eNA91 (Ex Vivo) datasets [Dataset]. http://doi.org/10.24475/bminds.mri.thj.4624
    Explore at:
    Available download formats: nifti-1 (102 GB)
    Dataset updated
    Jan 30, 2024
    Dataset provided by
    RIKEN Center for Brain Science
    Brain/MINDS — Brain Mapping by Integrated Neurotechnologies for Disease Studies
    Authors
    Junichi Hata; Ken Nakae; Daisuke Yoshimaru; Hideyuki Okano
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Japan Agency for Medical Research and Development (AMED)
    Description

    The Brain/MINDS Marmoset MRI NA216 and eNA91 datasets currently constitute the largest public marmoset brain MRI resource (483 individuals), with in vivo and ex vivo data for a large variety of image modalities covering a wide age range of marmoset subjects.
    * The in vivo part covers a total of 455 individuals, ranging in age from 0.6 to 12.7 years (mean age: 3.86 ± 2.63), with standard brain data (NA216) from 216 of these individuals (mean age: 4.46 ± 2.62).
    It includes T1WI, T2WI, T1WI/T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, rs-fMRI in awake and anesthetized states, NIfTI files (.nii.gz) of label data, and csv files of individual brain and population-average connectome matrices (structural and functional).
    * The ex vivo part covers a subset of 91 individuals with a mean age of 5.27 ± 2.39 years.
    It includes NIfTI files (.nii.gz) of standard brain, T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, and label data, and csv files of individual brain and population-average structural connectome matrices.

  5. Royal Brisbane_TOFMRA_Intracranial Aneurysm_Database

    • openneuro.org
    Updated Jun 3, 2024
    Cite
    Chloe M. de Nys; Ee Shern Liang; Marita Prior; Maria A. Woodruff; James I. Novak; Ashley R. Murphy; Zhiyong Li; Craig D. Winter; Mark C. Allenby (2024). Royal Brisbane_TOFMRA_Intracranial Aneurysm_Database [Dataset]. http://doi.org/10.18112/openneuro.ds005096.v1.0.1
    Explore at:
    Dataset updated
    Jun 3, 2024
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    Chloe M. de Nys; Ee Shern Liang; Marita Prior; Maria A. Woodruff; James I. Novak; Ashley R. Murphy; Zhiyong Li; Craig D. Winter; Mark C. Allenby
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Brisbane
    Description

    This dataset was obtained through Metro North Hospital and Health Service and the Royal Brisbane and Women's Hospital (RBWH). It comprises TOF-MRA scans of 63 patients, containing 85 intracranial aneurysms, each with clinician segmentation and clinical annotation files (measurements, demographics, imaging parameters). A unique feature of this dataset is that 24 patients have interval surveillance imaging with clinical annotations for any aneurysm shape changes.

    Please cite the following reference if you use this dataset: "Time-of-Flight MRA of Intracranial Aneurysms with Interval Surveillance, Clinical Segmentation and Annotations" doi: https://doi.org/10.1038/s41597-024-03397-8

    Dataset Organisation

    Each subject's original folder contains that subject's TOF-MRA angiography.

    The "derivatives" folder contains clinician segmentations and annotations within a given subject on a single scan session. The majority of subjects will include 5 sub-folders as below:

    1) "3D Model": contains an STL (standard tessellation language) model for the aneurysm only, the aneurysm with parent artery, and the parent artery only. Subjects with multiple aneurysms have one STL model per aneurysm.
    2) "Mask": contains the nifti angiography with an overlay of the aneurysm mask, assigned a pixel value of zero and burnt into the image.
    3) "Nifti Aneurysm Only": contains a nifti file of only the aneurysm segmented mask.
    4) "Nifti with Parent Artery": contains a nifti file of the aneurysm-with-parent-artery segmented mask.
    5) "Slicer": contains the 3D Slicer scene used for segmentation and annotation of the aneurysms and parent arteries.

    Details related to access to the data

    • Contact person: Mark C. Allenby (m.allenby@uq.edu.au)

    Overview

    • TOF-MRA scans acquired through Metro North Health at various institutions between 2008 and 2022.
    • This dataset is intended for research into intracranial aneurysms, and can be used in applications such as the development of automated detection algorithms, in computational fluid dynamics studies, for surgical training models and developing surgical guides, and for evaluating cell behaviour and dynamics in in vitro aneurysm models.
    • The supplementary documents provided include a summary of MR image acquisition parameters for all subjects; patient gender and age; aneurysm location and size; and signal-to-noise (SNR) and contrast-to-noise (CNR) ratios for all clinically annotated scans. Users of this dataset may need to consult the SNR and CNR for patients intended for further research and analysis, to ensure accurate segmentation can be achieved.

    Missing Data

    Aneurysms in some subjects were segmented and annotated using DSA or CTA scans; only STL models are provided for these subjects, as the original files require further de-identification.

  6. Brain/MINDS 3D Marmoset Reference Brain Atlas 2017

    • dataportal.brainminds.jp
    nifti-1
    Updated Apr 22, 2020
    Cite
    Alexander Woodward; Tsutomu Hashikawa; Masahide Maeda; Takaaki Kaneko; Keigo Hikishima; Atsushi Iriki; Hideyuki Okano; Yoko Yamaguchi (2020). Brain/MINDS 3D Marmoset Reference Brain Atlas 2017 [Dataset]. http://doi.org/10.24475/bma.2799
    Explore at:
    Available download formats: nifti-1 (72 MB), nifti-1 (103.35 MB)
    Dataset updated
    Apr 22, 2020
    Dataset provided by
    Brain/MINDS — Brain Mapping by Integrated Neurotechnologies for Disease Studies
    Neuroinformatics Japan Center
    Authors
    Alexander Woodward; Tsutomu Hashikawa; Masahide Maeda; Takaaki Kaneko; Keigo Hikishima; Atsushi Iriki; Hideyuki Okano; Yoko Yamaguchi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Japan Agency for Medical Research and Development (AMED)
    Description

    The dataset includes NIfTI files of MRI T2 ex-vivo data; reconstructed Nissl stained images of the same brain, registered to the shape of the MRI; brain region segmentation (with separate color lookup table); and gray, mid-cortical and white matter boundary segmentation. In addition, a 3D Slicer scene file is provided that can be used for testing the dataset within the freely downloadable 3D Slicer software (https://www.slicer.org/). The scene file can be dragged directly into 3D Slicer and the atlas can be used immediately. Files can be downloaded individually or as one zip file.
    The atlas can also be viewed online via the Zooming Atlas Viewer (ZAV).

  7. MUG500+(B) Standardized Nifti Augmented Data

    • figshare.com
    bin
    Updated Dec 22, 2023
    Cite
    Dhiren Oswal; Yugal Khanter (2023). MUG500+(B) Standardized Nifti Augmented Data [Dataset]. http://doi.org/10.6084/m9.figshare.24872079.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 22, 2023
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    Dhiren Oswal; Yugal Khanter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is an extension of MUG500(B), DOI: 10.6084/m9.figshare.9616319. If using this dataset, please cite 10.6084/m9.figshare.24872079.

    The MUG500(B) set provides "nrrd" defective skull scan files with their respective implants in "stl" format. Of these 29 pairs, 27 pairs had single defects and 2 pairs had double defects, hence two implants for one particular file. This dataset includes the 29 pairs of craniotomy-defective skulls and their manually designed implants converted to "Nifti" format (float32), with each implant/skull pair resampled to one common ROI. Each pair with double defects was split into separate samples by combining the volume of the defective skull with one implant at a time; hence the total number of available samples is 31. This standardized, resampled nifti data was augmented with affine registration to obtain 930 pairs (31*30).

    The folder structure of the augmented data is:

    Augmented_data
    └── 03to01
        └── 4 files
    └── 03to02
        └── 4 files
    ................
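The 930-pair figure follows from registering each of the 31 samples to every other sample; a quick sanity check of that arithmetic (the pairing scheme is inferred from the "NNtoMM" folder names, e.g. 03to01):

```python
# Each of the 31 standardized samples is affinely registered to each
# of the other 30, giving ordered (moving, fixed) pairs.
samples = range(1, 32)
pairs = [(a, b) for a in samples for b in samples if a != b]
print(len(pairs))  # 930 = 31 * 30
```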

  8. Data from: Characterization of the Fiber Orientations in Non-Crimp Glass...

    • explore.openaire.eu
    • zenodo.org
    Updated Jun 4, 2020
    Cite
    N Jeppesen; VA Dahl; AN Christensen; AB Dahl; LP Mikkelsen (2020). Characterization of the Fiber Orientations in Non-Crimp Glass Fiber Reinforced Composites using Structure Tensor [Dataset]. http://doi.org/10.5281/zenodo.3877521
    Explore at:
    Dataset updated
    Jun 4, 2020
    Authors
    N Jeppesen; VA Dahl; AN Christensen; AB Dahl; LP Mikkelsen
    Description

    This dataset contains data, notebooks and code used in the publication:

    [1] Jeppesen, N., V.A. Dahl, A.N. Christensen, A.B. Dahl, L.P. Mikkelsen, Characterization of the fiber orientations in non-crimp glass fiber reinforced composites using structure tensor. IOP Conf. Ser.: Mater. Sci. Eng. 942, 012037, https://doi.org/10.1088/1757-899X/942/1/012037, 2020

    If you reference this dataset, please also consider referencing the paper above.

    HF401TT-13_FoV16.5_Stitch.zip contains an X-ray CT scan of a non-crimp glass fiber composite sample saved in the TXM file format. HF401TT-13_FoV16.5_Stitch.txm.nii contains a cut-out of the TXM data, where the air around the sample has been removed and the result has been saved in the NIfTI file format.

    The two notebooks, StructureTensorFiberAnalysisDemo and StructureTensorFiberAnalysisAdvancedDemo, rely on the NIfTI scan data, HF401TT-13_FoV16.5_Stitch.txm.nii. They demonstrate how to do structure tensor orientation analysis on the data. The HF401TT-13_FoV16.5_Stitch notebook uses the TXM scan data, HF401TT-13_FoV16.5_Stitch.zip, and can be used to recreate the published experimental results.

    To run the notebooks, the Python file structure_tensor_workers.py must be in the same directory as the notebook. By default, the notebooks expect the following folder structure:

    /notebooks: folder with the notebooks and Python files.
    /originals: folder with the data (TXM/NIfTI files).
    /tmp: folder for temporary files and output generated by running the notebooks.
    /notebooks/figures: folder for exporting figures as files (only needed if you want to save figures as files).

  9. NIfTI-MRS Example Data

    • zenodo.org
    application/gzip
    Updated Jul 10, 2021
    Cite
    William T Clarke (2021). NIfTI-MRS Example Data [Dataset]. http://doi.org/10.5281/zenodo.5085449
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Jul 10, 2021
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    William T Clarke
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Example data accompanying the data format specification for the NIfTI-MRS file format. NIfTI-MRS is a NIfTI derived format for storing magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI) data.

    Example data is generated from code at the NIfTI-MRS GitHub repository.

    Each file is named example_{n}.nii.gz and corresponds to the following list:

    1. Manually converted SVS - Water suppressed STEAM
    2. spec2nii converted SVS - Water suppressed STEAM
    3. spec2nii converted edited SVS - MEGA-PRESS
    4. Manually converted 31P FID-MRSI (CSI sequence)
    5. spec2nii converted sLASER-localised 1H MRSIs
    6. j-difference editing example (MEGA-PRESS)
    7. Variable echo time: full representation
    8. Variable echo time: short representation
    9. Multiple dynamic parameters "Fingerprinting"
    10. FSL-MRS processed MEGA-PRESS spectrum
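Given that naming scheme, the ten example files can be enumerated programmatically. A trivial sketch; actually opening them would need a NIfTI-MRS-aware reader, which this listing does not specify:

```python
# Build the expected filenames example_1.nii.gz ... example_10.nii.gz
filenames = [f"example_{n}.nii.gz" for n in range(1, 11)]
print(filenames[0], filenames[-1])  # example_1.nii.gz example_10.nii.gz
```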
  10. Registered histology, MRI, and manual annotations of over 300 brain regions...

    • rdr.ucl.ac.uk
    txt
    Updated Oct 6, 2023
    Cite
    Eugenio Iglesias Gonzalez; Adria Casamitjana; Alessia Atzeni; Benjamin Billot; David Thomas; Emily Blackburn; James Hughes; Juri Althonayan; Loic Peter; Matteo Mancini; Nellie Robinson; Peter Schmidt; Shauna Crampsie (2023). Registered histology, MRI, and manual annotations of over 300 brain regions in 5 human hemispheres (data from ERC Starting Grant 677697 "BUNGEE-TOOLS") [Dataset]. http://doi.org/10.5522/04/24243835.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Oct 6, 2023
    Dataset provided by
    University College London
    Authors
    Eugenio Iglesias Gonzalez; Adria Casamitjana; Alessia Atzeni; Benjamin Billot; David Thomas; Emily Blackburn; James Hughes; Juri Althonayan; Loic Peter; Matteo Mancini; Nellie Robinson; Peter Schmidt; Shauna Crampsie
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Summary:

    This repository includes data related to the ERC Starting Grant project 677697: "Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies" (BUNGEE-TOOLS). It includes: (a) Dense histological sections from five human hemispheres with manual delineations of >300 brain regions; (b) Corresponding ex vivo MRI scans; (c) Dissection photographs; (d) A spatially aligned version of the dataset; (e) A probabilistic atlas built from the hemispheres; and (f) Code to apply the atlas to automated segmentation of in vivo MRI scans.

    A more detailed description of what this dataset includes:

    Data files and Python code for Bayesian segmentation of human brain MRI based on a next-generation, high-resolution histological atlas: "Next-Generation histological atlas for high-resolution segmentation of human brain MRI" A Casamitjana et al., in preparation. This repository contains a set of zip files, each corresponding to one directory. Once decompressed, each directory has a readme.txt file explaining its contents. The list of zip files / compressed directories is:

    • 3dAtlas.zip: nifti files with summary imaging volumes of the probabilistic atlas.

    • BlockFacePhotoBlocks.zip: nifti files with the block face photographs acquired during tissue sectioning, reconstructed into 3D volumes (in RGB).

    • Histology.zip: jpg files with the LFB and H&E stained sections.

    • HistologySegmentations.zip: 2D nifti files with the segmentations of the histological sections.

    • MRI.zip: ex vivo T2-weighted MRI scans and corresponding FreeSurfer processing files

    • SegmentationCode.zip: contains the Python code and data files that we used to segment brain MRI scans and obtain the results presented in the article (for reproducibility purposes). Note that it requires an installation of FreeSurfer. The code is also maintained in FreeSurfer (but may not produce exactly the same results): https://surfer.nmr.mgh.harvard.edu/fswiki/HistoAtlasSegmentation

    • WholeHemispherePhotos.zip: photographs of the specimens prior to dissection

    • WholeSlicePhotos.zip: photographs of the tissue slabs prior to blocking.

    We also note that the registered images for the five cases can be found on GitHub: https://github.com/UCL/BrainAtlas-P41-16 https://github.com/UCL/BrainAtlas-P57-16 https://github.com/UCL/BrainAtlas-P58-16 https://github.com/UCL/BrainAtlas-P85-18 https://github.com/UCL/BrainAtlas-EX9-19
    These registered images can be interactively explored with the following web interface: https://github-pages.ucl.ac.uk/BrainAtlas/#/atlas

  11. ckanext-papaya

    • catalog.civicdataecosystem.org
    Updated Jun 4, 2025
    Cite
    (2025). ckanext-papaya [Dataset]. https://catalog.civicdataecosystem.org/dataset/ckanext-papaya
    Explore at:
    Dataset updated
    Jun 4, 2025
    Description

    The ckanext-papaya extension enhances CKAN by providing specialized viewers for medical imaging data. Specifically, it enables the display of NIFTI (.nii) and DICOM (.dcm) files directly within the CKAN interface using the Papaya JavaScript viewer. The extension supports both single DICOM files and DICOM archives uploaded as ZIP files, facilitating easy access to and visualization of medical imaging datasets.

    Key Features:
    • NIFTI and DICOM viewing: renders NIFTI (.nii) and DICOM (.dcm) files directly in CKAN using the Papaya viewer.
    • DICOM ZIP archive support: allows users to upload ZIP archives containing DICOM files, which are then extracted and displayed using Papaya. Only files with the .dcm extension within the ZIP are read; other file types are ignored.
    • Automatic view creation: automatically creates Papaya views for newly uploaded NIFTI files, single DICOM files, and DICOM ZIP archives. Note that existing resources may need the view to be added manually.
    • Client-side rendering: leverages the Papaya JavaScript framework for client-side rendering of medical images, eliminating the need for a separate server and providing a streamlined visualization experience directly within the user's browser.
    • Temporary unzipping mechanism: the extension unzips DICOM archives temporarily to extract DICOM files for viewing, and immediately deletes the extracted files to conserve server space and maintain security.

    Technical Integration: the extension integrates with CKAN by adding a new view type that utilizes the Papaya JavaScript library. To enable it, the papaya plugin must be added to the ckan.plugins setting in CKAN's configuration file. To avoid enabling the Papaya viewer for all zip files, configure ckan.views.default_views accordingly. No other configuration settings are currently needed.

    Benefits & Impact: by incorporating ckanext-papaya, CKAN instances that host medical imaging datasets can offer a streamlined, in-browser viewing experience for NIFTI and DICOM files. This eliminates the need for users to download and use external viewers, simplifying data exploration and improving accessibility to medical imaging data shared through CKAN. It lowers the barrier to exploring medical imaging data, without requiring prior knowledge of the underlying datasets.
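Per the description, enabling the extension comes down to two settings in CKAN's `.ini` configuration file. A minimal sketch; the other plugin and view names shown alongside `papaya` are illustrative placeholders, not requirements:

```ini
; Enable the Papaya viewer plugin (the surrounding plugin list is illustrative)
ckan.plugins = stats text_view papaya

; Optional, per the description: keep Papaya from claiming every zip upload
; by listing only the default views you actually want
ckan.views.default_views = image_view text_view
```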

  12. Data for: A principled approach to synthesize neuroimaging data for...

    • musc.digitalcommonsdata.com
    Updated Apr 26, 2021
    Cite
    Kenneth Vaden (2021). Data for: A principled approach to synthesize neuroimaging data for replication and exploration [Dataset]. http://doi.org/10.17632/3w9662wjpr.1
    Explore at:
    Dataset updated
    Apr 26, 2021
    Authors
    Kenneth Vaden
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The synthetic predictor tables and fully synthetic neuroimaging data produced for the analysis in the current study are available from Mendeley Data. Ten fully synthetic datasets include synthetic gray matter images (nifti files) generated for analysis with simulated participant data (text files). The archive file predictor_tables.tar.gz contains ten fully synthetic predictor tables with information for 264 simulated subjects. Due to large file sizes, a separate archive was created for each set of synthetic gray matter image data: RBS001.tar.gz, …, RBS010.tar.gz. Regression analyses were performed for each synthetic dataset; average statistic maps were then made for each contrast and smoothed (see the accompanying paper for additional information).

    The supplementary materials also include commented MATLAB and R code to implement the current neuroimaging data synthesis methods (SKexample.zip). The example data were selected from an earlier fMRI study (Kuchinsky et al., 2012) to demonstrate that the current approach can be used with other types of neuroimaging data. The example code can also be adapted to produce fully synthetic group-level datasets based on observed neuroimaging data from other sources. The zip archive includes a document with important information for performing the example analyses, and details that should be communicated with recipients of a synthetic neuroimaging dataset.

    Kuchinsky, S.E., Vaden, K.I., Keren, N.I., Harris, K.C., Ahlstrom, J.B., Dubno, J.R., Eckert, M.A., 2012. Word intelligibility and age predict visual cortex activity during word listening. Cerebral Cortex 22, 1360–71. https://doi.org/10.1093/cercor/bhr211

  13. HiPaS_dataset

    • huggingface.co
    Updated Jun 1, 2025
    Cite
    Jasper Eppink (2025). HiPaS_dataset [Dataset]. https://huggingface.co/datasets/JasperEppink/HiPaS_dataset
    Explore at:
    Dataset updated
    Jun 1, 2025
    Authors
    Jasper Eppink
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This dataset contains nifti files created for the dataset described in the paper "Deep learning-driven pulmonary artery and vein segmentation reveals demography-associated vasculature anatomical differences". The authors also released code and datasets at https://github.com/Arturia-Pendragon-Iris/HiPaS_AV_Segmentation. If you find this dataset helpful, please cite their paper "Chu, Y., Luo, G., Zhou, L. et al. Deep learning-driven pulmonary artery and vein segmentation reveals… See the full description on the dataset page: https://huggingface.co/datasets/JasperEppink/HiPaS_dataset.

  14. pyAFQ Stanford HARDI tractography and mapping, updated

    • figshare.com
    bin
    Updated Aug 3, 2023
    Cite
    John Kruper; Ariel Rokem (2023). pyAFQ Stanford HARDI tractography and mapping, updated [Dataset]. http://doi.org/10.6084/m9.figshare.23848884.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Aug 3, 2023
    Dataset provided by
    Figshare: http://figshare.com/
    Authors
    John Kruper; Ariel Rokem
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This tractography and registration mapping was created from the data in this PURL on the Stanford Digital Repository: https://searchworks.stanford.edu/view/yx282xq2090. The data were processed using the pyAFQ software (https://github.com/yeatmanlab/pyAFQ) and Dipy to create tensor-based tracts. These were down-sampled by a factor of 1000 and stored in the trk file. The data were also registered to the MNI T2 template using the SyN algorithm (Avants et al. 2008), implemented in Dipy. Both forward and backward transformation images were saved and are stored in the nifti files.

  15. UK7T Network harmonized neuroimaging protocols

    • ora.ox.ac.uk
    plain, zip
    Updated Jan 1, 2018
    Cite
    Clarke, W (2018). UK7T Network harmonized neuroimaging protocols [Dataset]. http://doi.org/10.5287/bodleian:mzROQd2N8
    Explore at:
    Available download formats: plain (2274), zip (0), zip (70728), zip (2773852)
    Dataset updated
    Jan 1, 2018
    Dataset provided by
    University of Oxford
    Authors
    Clarke, W
    License

    Open Data Commons Attribution License (ODC-By) v1.0: https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Area covered
    United Kingdom
    Description

    This deposit contains the UK7T Network's harmonised MRI neuroimaging protocols. It also contains example datasets from a single identical subject from each of the Network's scanners collected using these protocols.

    The protocols for each of the three types of scanner are supplied in .pdf or .chm format. Notes on the pulse sequence version and reconstruction code are contained in explanatory .txt files.

    The example data is supplied in both DICOM (.ima) and NIfTI (.nii.gz) format. Due to the size and number of the "Multi-band EPI" Philips DICOM files, these are not included. They are available on request, whilst the NIfTI files for these scans are included.

    NIfTI files were converted by dcm2niix (https://github.com/rordenlab/dcm2niix) and can be viewed using e.g. FSLeyes (https://fsl.fmrib.ox.ac.uk).

    This data release is intended to accompany explanatory publications. The data were collected as part of the "UK7T Network" (http://www.uk7t.org/), a consortium of 7 tesla (7T) MRI capable sites in the UK. It was established with the aim of promoting the use of 7T MRI and ensuring all sites in the consortium have the ability to collect and meaningfully compare high-quality neuroimaging data. This deposit is the result of the Network's harmonisation efforts.

  16. Menelik head voxel model (in NIFTI format)

    • bridges.monash.edu
    • researchdata.edu.au
    application/gzip
    Updated May 30, 2023
    Cite
    Behailu Kibret; Malin Premaratne (2023). Menelik head voxel model (in NIFTI format) [Dataset]. http://doi.org/10.4225/03/5b2c63a5428ae
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    May 30, 2023
    Dataset provided by
    Monash University
    Authors
    Behailu Kibret; Malin Premaratne
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Menelik head voxel model in NIFTI format.
    Dimension: 396x514x442 (xyz)
    Voxel size: 0.5 0.5 0.5 (mm)
    Byte type: 1-byte unsigned integer
    Compression: GZIP
    Data Orientation: Columns (X+), Rows (Y-), Slices (Z+)
    File Format: NIFTI
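The fields quoted above (dimensions, voxel size, 1-byte unsigned data) all live in the fixed 348-byte NIfTI-1 header, which can be inspected with nothing but the standard library. A minimal sketch: the synthetic header built below simply mirrors the values listed for this model and is not read from the actual file (for the real release you would first decompress, e.g. `raw = gzip.open("<file>.nii.gz", "rb").read(348)`):

```python
import struct

def read_nifti1_header(raw: bytes) -> dict:
    """Parse a few key fields of a little-endian NIfTI-1 header."""
    dim = struct.unpack_from("<8h", raw, 40)        # dim[0] = number of dims
    datatype, bitpix = struct.unpack_from("<2h", raw, 70)
    pixdim = struct.unpack_from("<8f", raw, 76)     # pixdim[1:4] = voxel size
    magic = raw[344:348].rstrip(b"\x00").decode()   # 'n+1' for single-file .nii
    return {
        "shape": dim[1:1 + dim[0]],
        "datatype": datatype,                       # 2 = DT_UINT8 (1-byte unsigned)
        "bitpix": bitpix,
        "voxel_size": pixdim[1:1 + dim[0]],
        "magic": magic,
    }

# Synthetic header carrying the values stated for the Menelik model:
hdr = bytearray(348)
struct.pack_into("<i", hdr, 0, 348)                               # sizeof_hdr
struct.pack_into("<8h", hdr, 40, 3, 396, 514, 442, 1, 1, 1, 1)    # dim
struct.pack_into("<2h", hdr, 70, 2, 8)                            # uint8, 8 bits
struct.pack_into("<8f", hdr, 76, 1.0, 0.5, 0.5, 0.5, 0, 0, 0, 0)  # pixdim
hdr[344:348] = b"n+1\x00"

info = read_nifti1_header(bytes(hdr))
print(info["shape"], info["voxel_size"])  # (396, 514, 442) (0.5, 0.5, 0.5)
```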

  17. COBRE preprocessed with NIAK 0.17 - lightweight release

    • figshare.com
    application/gzip
    Updated Nov 3, 2016
    Cite
    Pierre Bellec (2016). COBRE preprocessed with NIAK 0.17 - lightweight release [Dataset]. http://doi.org/10.6084/m9.figshare.4197885.v1
    Explore at:
    Available download formats: application/gzip
    Dataset updated
    Nov 3, 2016
    Dataset provided by
    figshare
    Authors
    Pierre Bellec
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Content

    This work is a derivative of the COBRE sample found in the International Neuroimaging Data-sharing Initiative (INDI), originally released under Creative Commons Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 72 patients diagnosed with schizophrenia (58 males, age range = 18-65 yrs) and 74 healthy controls (51 males, age range = 18-65 yrs). The fMRI dataset for each subject is a single NIfTI file (.nii.gz) of 150 EPI blood-oxygenation level dependent (BOLD) volumes acquired in 5 min (TR = 2 s, TE = 29 ms, FA = 75°, 32 slices, voxel size = 3x3x4 mm3, matrix size = 64x64, FOV = mm2). Data processing and packaging were implemented by Pierre Bellec, CRIUGM, Department of Computer Science and Operations Research, University of Montreal, 2016.

    The COBRE preprocessed fMRI release contains the following files:

    • README.md: a markdown (text) description of the release.

    • phenotypic_data.tsv.gz: a gzipped tab-separated value file, with each column representing a phenotypic variable as well as measures of data quality (related to motion). Each row corresponds to one participant, except the first row, which contains the names of the variables (see the file below for a description).

    • keys_phenotypic_data.json: a JSON file describing each variable found in phenotypic_data.tsv.gz.

    • fmri_XXXXXXX.tsv.gz: a gzipped tab-separated value file, with each column representing a confounding variable for the time series of participant XXXXXXX (the same participant ID found in phenotypic_data.tsv.gz). Each row corresponds to a time frame, except for the first row, which contains the names of the variables (see the file below for a definition).

    • keys_confounds.json: a JSON file describing each variable found in the files fmri_XXXXXXX.tsv.gz.

    • fmri_XXXXXXX.nii.gz: a 3D+t NIfTI volume at 6 mm isotropic resolution, stored as short (16-bit) integers, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Each fMRI dataset features 150 volumes.

    Usage recommendations

    Individual analyses: you may want to remove time frames with excessive motion for each subject; see the confounding variable called scrub in fmri_XXXXXXX.tsv.gz. After removing these time frames there may not be enough usable data left; we recommend a minimum of 60 time frames. A fairly large number of confounds have been made available as part of the release (slow time drifts, motion parameters, frame displacement, scrubbing, average WM/Vent signal, COMPCOR, global signal). We strongly recommend regression of slow time drifts; everything else is optional.

    Group analyses: there will also be some residual effects of motion, which you may want to regress out from connectivity measures at the group level. The number of acceptable time frames as well as a measure of residual motion (called frame displacement, as described by Power et al., NeuroImage 2012) can be found in the variables Frames OK and FD scrubbed in phenotypic_data.tsv.gz. Finally, the simplest use case with these data is to predict the overall presence of a diagnosis of schizophrenia (values Control or Patient in the phenotypic variable Subject Type). You may want to match the control and patient samples in terms of amount of motion, as well as age and sex. Note that more detailed diagnostic categories are available in the variable Diagnosis.

    Preprocessing

    The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.17, under CentOS version 6.3 with Octave (http://gnu.octave.org) version 4.0.2 and the Minc toolkit (http://www.bic.mni.mcgill.ca/ServicesSoftware/ServicesSoftwareMincToolKit) version 0.3.18. Each fMRI dataset was corrected for inter-slice differences in acquisition time, and the parameters of a rigid-body motion were estimated for each time frame. Rigid-body motion was estimated within as well as between runs, using the median volume of the first run as a target. The median volume of one selected fMRI run for each subject was coregistered with an individual T1 scan using Minctracc (Collins and Evans, 1997), which was itself non-linearly transformed to the Montreal Neurological Institute (MNI) template (Fonov et al., 2011) using the CIVET pipeline (Ad-Dab'bagh et al., 2006). The MNI symmetric template was generated from the ICBM152 sample of 152 young adults, after 40 iterations of non-linear coregistration. The rigid-body transform, fMRI-to-T1 transform and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 6 mm isotropic resolution.

    A number of confounding variables were estimated and are made available as part of the release. WARNING: no confounds were actually regressed from the data, so this can be done interactively by users, who can easily explore different analytical paths. The "scrubbing" method of Power et al. (2012) was used to identify volumes with excessive motion (frame displacement greater than 0.5 mm). A minimum of 60 unscrubbed volumes per run, corresponding to ~120 s of acquisition, is recommended for further analysis. The following nuisance parameters were estimated: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off); average signals in conservative masks of the white matter and the lateral ventricles, as well as the six rigid-body motion parameters (Giove et al., 2009); anatomical COMPCOR signals in the ventricles and white matter (Chai et al., 2012); and a PCA-based estimator of the global signal (Carbonell et al., 2011). The fMRI volumes were not spatially smoothed.

    References

    • Ad-Dab'bagh, Y., Einarson, D., Lyttelton, O., Muehlboeck, J. S., Mok, K., Ivanov, O., Vincent, R. D., Lepage, C., Lerch, J., Fombonne, E., Evans, A. C., 2006. The CIVET Image-Processing Environment: A Fully Automated Comprehensive Pipeline for Anatomical Neuroimaging Research. In: Corbetta, M. (Ed.), Proceedings of the 12th Annual Meeting of the Human Brain Mapping Organization. NeuroImage, Florence, Italy.

    • Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., Jul. 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126–1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082

    • Carbonell, F., Bellec, P., Shmuel, A., 2011. Validation of a superposition model of global and system-specific resting state activity reveals anti-correlated networks. Brain Connectivity 1 (6), 496–510. doi:10.1089/brain.2011.0065

    • Chai, X. J., Castañón, A. N., Öngür, D., Whitfield-Gabrieli, S., Jan. 2012. Anticorrelations in resting state networks without global signal regression. NeuroImage 59 (2), 1420–1428. http://dx.doi.org/10.1016/j.neuroimage.2011.08.048

    • Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271–1294.

    • Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., Jan. 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313–327. http://dx.doi.org/10.1016/j.neuroimage.2010.07.033

    • Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., Oct. 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058–1064. http://dx.doi.org/10.1016/j.mri.2009.06.004

    • Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., Feb. 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142–2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
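    Because no confounds were regressed from the released volumes, cleaning is left to the user. A minimal sketch of confound regression by ordinary least squares, using NumPy with synthetic stand-ins for one voxel time series and two confound columns (the variable names below are illustrative, not the actual column names in fmri_XXXXXXX.tsv.gz):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_frames = 150  # one COBRE run has 150 EPI volumes

    # Synthetic stand-ins for a voxel time series and two confounds
    # (e.g. a slow drift and one motion parameter).
    drift = np.linspace(0.0, 1.0, n_frames)
    motion = rng.normal(size=n_frames)
    signal = rng.normal(size=n_frames) + 2.0 * drift + 0.5 * motion

    # Ordinary least squares: project the confounds (plus intercept) out.
    X = np.column_stack([np.ones(n_frames), drift, motion])
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    residual = signal - X @ beta  # cleaned time series
    ```

    The same design-matrix approach extends to the full set of released confounds; the residual is, by construction, orthogonal to every regressor included in X.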

  18. 2D high-resolution synthetic MR images of Alzheimer's patients and healthy...

    • data.niaid.nih.gov
    Updated Dec 13, 2023
    Cite
    Marzi, Chiara (2023). 2D high-resolution synthetic MR images of Alzheimer's patients and healthy subjects using PACGAN [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8276785
    Explore at:
    Dataset updated
    Dec 13, 2023
    Dataset provided by
    Citi, Luca
    Diciotti, Stefano
    Lai, Matteo
    Marzi, Chiara
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset encompasses a NIfTI file containing a collection of 500 images, each capturing the central axial slice of a synthetic brain MRI.

    Accompanying this file is a CSV dataset that serves as a repository for the corresponding labels linked to each image:

    Label 0: Healthy Controls (HC)

    Label 1: Alzheimer's Disease (AD)

    Each image within this dataset has been generated by PACGAN (Progressive Auxiliary Classifier Generative Adversarial Network), a framework designed and implemented by the AI for Medicine Research Group at the University of Bologna.

    PACGAN is a generative adversarial network trained to generate high-resolution images belonging to different classes. In our work, we trained this framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which contains brain MRI images of AD patients and HC.

    The implementation of the training algorithm, packaged as a Docker container, can be found in our GitHub repository.

    For further exploration, the pre-trained models are available within the Code Ocean capsule. These models can facilitate the generation of synthetic images for both classes and also aid in classifying new brain MRI images.
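    Since all 500 slices ship in a single NIfTI file, the image count can be read directly from the fixed 348-byte NIfTI-1 header. A standard-library sketch of that header layout; the bytes below are a synthetic stand-in for the start of the released file (whose file name and exact dimension ordering are assumptions here), which in practice would be read with nibabel or an equivalent library:

    ```python
    import struct

    # Stand-in for the first 348 bytes of a NIfTI-1 file. dim[0] holds the
    # number of dimensions and dim[1..7] the extent of each axis; here the
    # 500 slices are assumed to sit in dim[3].
    hdr = bytearray(348)
    struct.pack_into("<i", hdr, 0, 348)  # sizeof_hdr, always 348 for NIfTI-1
    struct.pack_into("<8h", hdr, 40, 3, 256, 256, 500, 1, 1, 1, 1)  # dim[0..7]
    hdr[344:348] = b"n+1\x00"  # magic string for a single-file .nii

    sizeof_hdr, = struct.unpack_from("<i", hdr, 0)
    dim = struct.unpack_from("<8h", hdr, 40)
    n_images = dim[3]
    ```

    Checking sizeof_hdr == 348 is the usual sanity test that a file really is NIfTI-1 before trusting the rest of the header.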

  19. The Phantom of Bern: repeated scans of two volunteers with eight different...

    • openneuro.org
    Updated Apr 26, 2023
    + more versions
    Cite
    M. Rebsamen; D. Romascano; M. Capiglioni; R. Wiest; P. Radojewski; C. Rummel (2023). The Phantom of Bern: repeated scans of two volunteers with eight different combinations of MR sequence parameters [Dataset]. http://doi.org/10.18112/openneuro.ds004560.v1.0.0
    Explore at:
    Dataset updated
    Apr 26, 2023
    Dataset provided by
    OpenNeuro: https://openneuro.org/
    Authors
    M. Rebsamen; D. Romascano; M. Capiglioni; R. Wiest; P. Radojewski; C. Rummel
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Bern
    Description

    The Phantom of Bern: repeated scans of two volunteers with eight different combinations of MR sequence parameters

    The Phantom of Bern consists of eight same-session re-scans of T1-weighted MRI with different combinations of sequence parameters, acquired on two healthy subjects. The subjects have agreed in writing to the publication of these data, including the original anonymized DICOM files, and have waived the requirement of defacing. Usage is permitted under the terms of the data use agreement stated below.

    CONTENT

    The BIDS directory is organized as follows:

    └── PhantomOfBern/
      ├─ code/
      │
      ├─ derivatives/
      │ ├─ dldirect_v1-0-0/
      │ │ ├─ results/ # Folder with flattened subject/session inputs and outputs of DL+DiReCT
      │ │ └─ stats2table/ # Folder with tables summarizing all DL+DiReCT outputs
      │ ├─ freesurfer_v6-0-0/
      │ │ ├─ results/ # Folder with flattened subject/session inputs and outputs of freesurfer
      │ │ └─ stats2table/ # Folder with tables summarizing all freesurfer outputs
      │ └─ siena_v2-6/
      │   ├─ SIENA_results.csv # Siena's main output
      │   └─ ... # Flattened subject/session inputs and outputs of SIENA
      │
      ├─ sourcedata/
      │ ├─ POBHC0001/
      │ │ └─ 17473A/
      │ │   └─ ... # Anonymized DICOM folders
      │ └─ POBHC0002/
      │   └─ 14610A/
      │    └─ ... # Anonymized DICOM folders
      │
      ├─ sub-<label>/
      │ └─ ses-<label>/
      │   └─ anat/ # Folder with scan's json and nifti files
      ├─ ...
    

    ACKNOWLEDGEMENT

    The dataset can be cited as:

    M. Rebsamen, D. Romascano, M. Capiglioni, R. Wiest, P. Radojewski, C. Rummel. The Phantom of Bern:
    repeated scans of two volunteers with eight different combinations of MR sequence parameters.
    OpenNeuro, 2023.
    

    If you use these data, please also cite the original paper:

    M. Rebsamen, M. Capiglioni, R. Hoepner, A. Salmen, R. Wiest, P. Radojewski, C. Rummel. Growing importance
    of brain morphometry analysis in the clinical routine: The hidden impact of MR sequence parameters.
    Journal of Neuroradiology, 2023.
    

    DATA USE AGREEMENT

    The Phantom of Bern is distributed under the following terms, to which you agree by downloading and/or using the dataset:

    1. Any intentional identification of a subject or disclosure of his or her confidential information violates the promise of confidentiality given to the providers of the information. Therefore, all users of the dataset agree:
    • To use these datasets solely for research and development or statistical purposes and not for investigation of specific subjects

    • To make no use of the identity of any subject discovered inadvertently, and to advise the providers of any such discovery (crummel@web.de)

    2. When publicly presenting any results or algorithms that benefited from the use of the Phantom of Bern, you should acknowledge it, see above. Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from the Phantom of Bern data should cite the publications listed above.

    3. Redistribution of data (complete or in parts) in any manner without explicit inclusion of this data use agreement is prohibited.

    4. Usage of the data for testing commercial tools is explicitly allowed. Usage for military purposes is prohibited.

    5. The original collector and provider of the data (see acknowledgement) and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.

    FUNDING

    This work was supported by the Swiss National Science Foundation under grant numbers 204593 (ScanOMetrics) and CRSII5_180365 (The Swiss-First Study).

  20. Youth Mental Health Program

    • researchdata.edu.au
    Updated Jun 2, 2014
    Cite
    The University of Sydney (2014). Youth Mental Health Program [Dataset]. https://researchdata.edu.au/youth-mental-health-program/400660
    Explore at:
    Dataset updated
    Jun 2, 2014
    Dataset provided by
    The University of Sydney
    Description

    This data collection contains:

    • Magnetic Resonance Imaging (MRI) brain image clinical research data;
    • Electroencephalography (EEG) clinical research data;
    • Neuropsychological test data

    Data is stored within the Brain and Mind Research Institute's Distributed and Reflective Informatics System (DaRIS)

    Magnetic Resonance Imaging Data

    2 dimensional (2D) DICOM (.dcm) format Magnetic Resonance Imaging data is collected using a GE Medical Systems Discovery MR750 MR instrument for the following modalities:

    • 3 dimensional (3D) T1 structural brain scans;
    • 4 dimensional (4D) T2 functional brain scans;
    • 4 dimensional (4D) DTI (Diffusion Tensor Image) brain scans

    Each modality (T1, T2 and DTI) is processed through Kepler workflows, which utilise the following software:

    3D T1 Structural MR Scans

    Each T1 MR series contains 196 individual 2D .dcm format images. Images are pre-processed using a Kepler workflow to produce brain-extracted images that are reoriented to the standard (MNI152) template.

    3D T1 Structural MR workflow

    1. DICOM data is converted to FSL NifTI using the following command:
      $ mcverter -o {output directory} -f fsl -n {input T1 directory}
    2. NifTI data is reoriented to standard (MNI152) template using the following command:
      $ fslreorient2std {input file} {output file name}
    3. Brain extraction is then performed on the reoriented NifTI images using the following command:
      $ bet2 {input file} {output file} 0.27 -B
    4. The Kepler workflow then packages the brain-extracted and reoriented NifTI images in a .zip file and stores the file in DaRIS
    5. The processed file is related to the parent DICOM series through the metadata xpath(derivation/input)
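    Steps 1-3 above can be strung together in a small driver script. The sketch below only assembles the command lines as subprocess-style argument lists (the commands are taken from the steps; the intermediate file names are hypothetical placeholders, and mcverter plus FSL would need to be on PATH for a real run, so nothing is executed here):

    ```python
    import subprocess  # used only in the commented-out real run below

    def t1_commands(dicom_dir, nifti_dir, t1_nifti, reoriented, brain):
        """Steps 1-3 of the T1 workflow as argv lists (paths are placeholders)."""
        return [
            ["mcverter", "-o", nifti_dir, "-f", "fsl", "-n", dicom_dir],
            ["fslreorient2std", t1_nifti, reoriented],
            ["bet2", reoriented, brain, "0.27", "-B"],
        ]

    cmds = t1_commands("dicom/T1", "nifti", "nifti/T1", "T1_reorient", "T1_brain")
    # Real run (requires mcverter and FSL installed):
    # for cmd in cmds:
    #     subprocess.run(cmd, check=True)
    ```

    Keeping the commands as data makes the pipeline easy to log and test before handing it to subprocess.run.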

    4D T2 Functional MR Scans

    Each T2 MR series contains 5460 individual 2D .dcm format images. The Kepler "Multivariate Exploratory Linear Optimized Decomposition into Independent Components (MELODIC)" workflow is used to process the data

    4D T2 Functional MR workflow

    1. DICOM data is converted to FSL NifTI using the following command:
      $ mcverter -o {output directory} -f fsl -n -d {input T2 directory}
    2. FSL NifTI data is processed using the following command:
      $ feat {design.fsf} where the design.fsf file contains the configuration information for the process.
    3. The Kepler workflow then packages the MELODIC output files and folders in a .zip file and stores the file in DaRIS
    4. The processed file is related to the parent DICOM series through the metadata xpath(derivation/input)

    4D DTI MR Scans

    Each DTI MR series contains 4235 individual 2D .dcm format images. The Kepler DTI workflow is used to process the data

    4D DTI MR workflow

    1. DICOM data is converted to FSL NifTI using the following command:
      $ mcverter -o {output directory} -f fsl -n -d -s 7 {input DTI directory}
    2. FSL NifTI data is processed using the scripts contained within the fsl-dti file
    3. The Kepler workflow then packages the DTI output files in a .zip file and stores the file in DaRIS
    4. The processed file is related to the parent DICOM series through the metadata xpath(derivation/input)

    Electroencephalography and Neuropsychological Data

    This data set contains electroencephalography and neuropsychological data collected from research subjects undergoing a standard battery of neuropsychological tests (e.g. the Cambridge Neuropsychological Test Automated Battery (CANTAB), which examines cognitive function). Data is also collected through the following forms, which are completed by the subject and members of the research team:

    • Youth Mental Health - Self Report
    • Youth Mental Health - Assessment Protocol
    • Youth Mental Health - Psychiatrist Protocol