Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
NIfTI files to support the SIRF Exercises regarding Geometry.
Data from static phantoms in MRI and a PET/MR phantom - see readme file.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Yeh, F. C., Panesar, S., Fernandes, D., Meola, A., Yoshino, M., Fernandez-Miranda, J. C., ... & Verstynen, T. (2018). Population-averaged atlas of the macroscale human structural connectome and its network topology. NeuroImage, 178, 57-68.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Whole brain resting-state fMRI data from 227 healthy adults between 18 and 78 years old acquired at a 3T clinical MRI scanner. The tar file contains 227 compressed nifti files. The 1st alphabet in the file names indicates the gender of the volunteers (f for female and m for male). The 2nd and 3rd digits indicate the age of volunteers. The remaining alphabets are randomized to encode individual subjects. The original images were reconstructed into dicom format and converted into nifti file format.
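The naming convention above is mechanical enough to parse programmatically. A minimal sketch (the example file name is hypothetical, not an actual file from the release):

```python
import re

def parse_subject_filename(name):
    """Parse the naming convention described above: the first letter
    encodes gender (f/m), the next two digits the age, and the rest is
    a randomized subject code."""
    m = re.match(r"^([fm])(\d{2})([A-Za-z]+)\.nii(\.gz)?$", name)
    if not m:
        raise ValueError(f"unexpected file name: {name}")
    return {
        "gender": "female" if m.group(1) == "f" else "male",
        "age": int(m.group(2)),
        "subject_code": m.group(3),
    }

info = parse_subject_filename("f42xkqz.nii.gz")  # hypothetical name
print(info["gender"], info["age"])
```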
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Brain/MINDS Marmoset MRI NA216 and eNA91 datasets currently constitute the largest public marmoset brain MRI resource (483 individuals), and include in vivo and ex vivo data for a large variety of image modalities covering a wide age range of marmoset subjects.
* The in vivo part corresponds to a total of 455 individuals, ranging in age from 0.6 to 12.7 years (mean age: 3.86 ± 2.63), and standard brain data (NA216) from 216 of these individuals (mean age: 4.46 ± 2.62).
T1WI, T2WI, T1WI/T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, rs-fMRI in awake and anesthetized states, NIfTI files (.nii.gz) of label data, and CSV files of individual brain and population average connectome matrices (structural and functional) are included.
* The ex vivo part comprises data mainly from a subset of 91 individuals with a mean age of 5.27 ± 2.39 years.
It includes NIfTI files (.nii.gz) of standard brain, T2WI, DTI metrics (FA, FAc, MD, RD, AD), DWI, and label data, and csv files of individual brain and population average structural connectome matrices.
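The connectome matrices are plain CSV files, so they can be loaded directly into a matrix. A minimal sketch with made-up 3x3 values (real matrices cover all atlas regions); structural connectomes are expected to be symmetric:

```python
import io
import numpy as np

# Made-up 3x3 connectome values for illustration only.
csv_text = "0.0,0.8,0.1\n0.8,0.0,0.3\n0.1,0.3,0.0\n"
mat = np.loadtxt(io.StringIO(csv_text), delimiter=",")
print(mat.shape, np.allclose(mat, mat.T))  # square and symmetric
```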
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset was obtained through Metro North Hospital and Health Service and the Royal Brisbane and Women's Hospital (RBWH). Description: This dataset comprises 63 patients with TOF-MRA scans, containing 85 intracranial aneurysms, each with a clinician segmentation and clinical annotation files (measurements, demographics, imaging parameters). A unique feature of this dataset is that 24 patients have interval surveillance imaging with clinical annotations for any aneurysm shape changes.
Please cite the following reference if you use this dataset: "Time-of-Flight MRA of Intracranial Aneurysms with Interval Surveillance, Clinical Segmentation and Annotations" doi: https://doi.org/10.1038/s41597-024-03397-8
Every original folder contains the TOF-MRA angiography of each subject.
The "derivatives" folder contains clinician segmentations and annotations within a given subject on a single scan session. The majority of subjects will include 5 sub-folders as below:
1) "3D Model": Contains an STL (standard tessellation language) model for the aneurysm only, the aneurysm with parent artery, and the parent artery only. Subjects with multiple aneurysms will have an STL model per aneurysm.
2) "Mask": Contains the nifti angiography with an overlay of the aneurysm mask, assigned a pixel value of zero and burnt into the image.
3) "Nifti Aneurysm Only": Contains a nifti file of only the aneurysm segmented mask.
4) "Nifti with Parent Artery": Contains a nifti file of the aneurysm with parent artery segmented mask.
5) "Slicer": Contains the 3D Slicer scene used for segmentation and annotation of the aneurysms and parent arteries.
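Because the mask voxels are burnt into the angiography with value zero, the aneurysm region can in principle be recovered by thresholding. A sketch on a synthetic array (this assumes the background never hits exactly zero, which may not hold for real scans):

```python
import numpy as np

# Synthetic stand-in for the overlay volume: non-zero background with a
# small burnt-in (zero-valued) aneurysm region.
rng = np.random.default_rng(0)
vol = rng.integers(1, 500, size=(8, 8, 4))  # values in [1, 500), never 0
vol[2:4, 2:4, 1] = 0                        # burnt-in aneurysm voxels
aneurysm_mask = vol == 0
print(int(aneurysm_mask.sum()))             # number of aneurysm voxels
```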
Aneurysms in some subjects were segmented and annotated using DSA or CTA scans, and thus only STL models have been provided for these subjects, as the original files require further de-identification.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset includes NIfTI files of MRI T2 ex-vivo data; reconstructed Nissl stained images of the same brain, registered to the shape of the MRI; brain region segmentation (with separate color lookup table); and gray, mid-cortical and white matter boundary segmentation. In addition, a 3D Slicer scene file is provided that can be used for testing the dataset within the freely downloadable 3D Slicer software (https://www.slicer.org/). The scene file can be dragged directly into 3D Slicer and the atlas can be used immediately. Files can be downloaded individually or as one zip file.
The atlas can be viewed online via the Zooming Atlas Viewer (ZAV).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data is an extension of MUG500(B), DOI: 10.6084/m9.figshare.9616319. If using this dataset, please cite 10.6084/m9.figshare.24872079.
The MUG500(B) set has "nrrd" defective skull scan files, with their respective implants in "stl" format. Of these 29 pairs, 27 had single defects and 2 had double defects, hence two implants for one particular file.
This dataset includes the 29 pairs of craniotomy-defective skulls converted to "NIfTI" format (float32), and their manually designed implants, with each skull/implant pair resampled to one common ROI. Each pair with double defects was split into separate samples by combining the volume of the defective skull with one implant at a time; hence 31 samples are available in total.
This standardized, resampled NIfTI data is augmented with affine registration to obtain 930 pairs (31*30). The folder structure of the augmented data is:
Augmented_data
└── 03to01
    └── 4 files
└── 03to02
    └── 4 files
................
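The 930 pairs follow from registering each of the 31 samples to every other one (31*30 ordered pairs), and the folder names appear to encode moving-to-fixed sample indices. A sketch of that bookkeeping:

```python
# 31 samples, registered pairwise in both directions (no self-pairs).
n = 31
pairs = [(m, f) for m in range(1, n + 1) for f in range(1, n + 1) if m != f]
folders = [f"{m:02d}to{f:02d}" for m, f in pairs]  # e.g. "03to01"
print(len(folders), folders[:2])
```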
This dataset contains data, notebooks and code used in the publication:
[1] Jeppesen, N., V.A. Dahl, A.N. Christensen, A.B. Dahl, L.P. Mikkelsen, Characterization of the fiber orientations in non-crimp glass fiber reinforced composites using structure tensor. IOP Conf. Ser.: Mater. Sci. Eng. 942, 012037, https://doi.org/10.1088/1757-899X/942/1/012037, 2020
If you reference this dataset, please also consider referencing the paper above.
HF401TT-13_FoV16.5_Stitch.zip contains an X-ray CT scan of a non-crimp glass fiber composite sample saved in the TXM file format. HF401TT-13_FoV16.5_Stitch.txm.nii contains a cut-out of the TXM data, where air around the sample has been removed and the result has been saved in the NIfTI file format.
The two notebooks, StructureTensorFiberAnalysisDemo and StructureTensorFiberAnalysisAdvancedDemo, rely on the NIfTI scan data, HF401TT-13_FoV16.5_Stitch.txm.nii. They demonstrate how to do structure tensor orientation analysis on the data. The HF401TT-13_FoV16.5_Stitch notebook uses the TXM scan data, HF401TT-13_FoV16.5_Stitch.zip. It can be used to recreate the published experimental results.
To run the notebooks, the Python file structure_tensor_workers.py must be in the same directory as the notebook. By default, the notebooks expect the following folder structure:
/notebooks: Folder with the notebooks and Python files.
/originals: Folder with the data (TXM/NIfTI files).
/tmp: Folder for temporary files and output generated by running the notebooks.
/notebooks/figures: Folder for exporting figures as files (only needed if you want to save figures as files).
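The notebooks perform structure tensor orientation analysis with dedicated tooling and Gaussian smoothing; the core quantity, outer products of image gradients, can be sketched in plain NumPy on a 2D toy image (no smoothing, purely illustrative):

```python
import numpy as np

def structure_tensor_2d(img):
    """Unsmoothed 2D structure tensor components from image gradients."""
    gy, gx = np.gradient(img.astype(float))  # gradients along rows, columns
    return gx * gx, gx * gy, gy * gy

img = np.zeros((16, 16))
img[:, 8:] = 1.0  # vertical edge: intensity varies along x only
Sxx, Sxy, Syy = structure_tensor_2d(img)
# Energy concentrates in Sxx, reflecting the horizontal gradient direction.
print(Sxx.sum() > 0.0, Syy.sum() == 0.0)
```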
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Example data accompanying the data format specification for the NIfTI-MRS file format. NIfTI-MRS is a NIfTI derived format for storing magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI) data.
Example data is generated from code at the NIfTI-MRS GitHub repository.
Each file is named example_{n}.nii.gz and corresponds to the following list:
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Summary:
This repository includes data related to the ERC Starting Grant project 677697: "Building Next-Generation Computational Tools for High Resolution Neuroimaging Studies" (BUNGEE-TOOLS). It includes: (a) Dense histological sections from five human hemispheres with manual delineations of >300 brain regions; (b) Corresponding ex vivo MRI scans; (c) Dissection photographs; (d) A spatially aligned version of the dataset; (e) A probabilistic atlas built from the hemispheres; and (f) Code to apply the atlas to automated segmentation of in vivo MRI scans.
More detailed description on what this dataset includes:
Data files and Python code for Bayesian segmentation of human brain MRI based on a next-generation, high-resolution histological atlas: "Next-Generation histological atlas for high-resolution segmentation of human brain MRI" A Casamitjana et al., in preparation. This repository contains a set of zip files, each corresponding to one directory. Once decompressed, each directory has a readme.txt file explaining its contents. The list of zip files / compressed directories is:
3dAtlas.zip: nifti files with summary imaging volumes of the probabilistic atlas.
BlockFacePhotoBlocks.zip: nifti files with the block face photographs acquired during tissue sectioning, reconstructed into 3D volumes (in RGB).
Histology.zip: jpg files with the LFB and H&E stained sections.
HistologySegmentations.zip: 2D nifti files with the segmentations of the histological sections.
MRI.zip: ex vivo T2-weighted MRI scans and corresponding FreeSurfer processing files
SegmentationCode.zip: contains the Python code and data files that we used to segment brain MRI scans and obtain the results presented in the article (for reproducibility purposes). Note that it requires an installation of FreeSurfer. The code is also maintained in FreeSurfer (but may not produce exactly the same results): https://surfer.nmr.mgh.harvard.edu/fswiki/HistoAtlasSegmentation
WholeHemispherePhotos.zip: photographs of the specimens prior to dissection
WholeSlicePhotos.zip: photographs of the tissue slabs prior to blocking.
We also note that the registered images for the five cases can be found on GitHub:
https://github.com/UCL/BrainAtlas-P41-16
https://github.com/UCL/BrainAtlas-P57-16
https://github.com/UCL/BrainAtlas-P58-16
https://github.com/UCL/BrainAtlas-P85-18
https://github.com/UCL/BrainAtlas-EX9-19
These registered images can be interactively explored with the following web interface:
https://github-pages.ucl.ac.uk/BrainAtlas/#/atlas
The ckanext-papaya extension enhances CKAN by providing specialized viewers for medical imaging data. Specifically, it enables the display of NIFTI (.nii) and DICOM (.dcm) files directly within the CKAN interface using the Papaya JavaScript viewer. The extension supports both single DICOM files and DICOM archives uploaded as ZIP files, facilitating easy access to and visualization of medical imaging datasets.
Key Features:
NIFTI and DICOM Viewing: Renders NIFTI (.nii) and DICOM (.dcm) files directly in CKAN using the Papaya viewer.
DICOM ZIP Archive Support: Allows users to upload ZIP archives containing DICOM files, which are then extracted and displayed using Papaya. Only files with the .dcm extension within the ZIP are read; other file types are ignored.
Automatic View Creation: Automatically creates Papaya views for newly uploaded NIFTI files, single DICOM files, and DICOM ZIP archives. Note that existing resources may need the view to be added manually.
Client-Side Rendering: Leverages the Papaya JavaScript framework for client-side rendering of medical images, eliminating the need for a separate server. This provides a streamlined visualization experience directly within the user's browser.
Temporary Unzipping Mechanism: The extension unzips DICOM archives temporarily to extract DICOM files for viewing, and immediately deletes these extracted files to conserve server space and maintain security.
Technical Integration: The ckanext-papaya extension integrates with CKAN by adding a new view type that utilizes the Papaya JavaScript library. To enable it, the papaya plugin must be added to the ckan.plugins setting in CKAN's configuration file. To avoid enabling the Papaya viewer for all zip files, configure ckan.views.default_views accordingly. No other configuration settings are currently needed.
Benefits & Impact: By incorporating ckanext-papaya, CKAN instances that host medical imaging datasets can offer a streamlined, in-browser viewing experience for NIFTI and DICOM files. This eliminates the need for users to download and use external viewers, simplifying data exploration and improving accessibility to medical imaging data shared through CKAN. It lowers the barrier to exploring medical imaging data for users without prior knowledge of the underlying datasets.
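Enabling the plugin is a one-line change in the CKAN configuration file; a sketch of the relevant excerpt (the other plugin and view names shown are illustrative defaults, not requirements):

```ini
# production.ini (illustrative excerpt)
ckan.plugins = stats text_view image_view papaya
# Optionally set default views so plain zip files do not default to Papaya:
ckan.views.default_views = image_view text_view
```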
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The synthetic predictor tables and fully synthetic neuroimaging data produced for the analysis of fully synthetic data in the current study are available from Mendeley Data. Ten fully synthetic datasets include synthetic gray matter images (nifti files) that were generated for analysis with simulated participant data (text files). An archive file predictor_tables.tar.gz contains ten fully synthetic predictor tables with information for 264 simulated subjects. Due to large file sizes, a separate archive was created for each set of synthetic gray matter image data: RBS001.tar.gz, …, RBS010.tar.gz. Regression analyses were performed for each synthetic dataset, then average statistic maps were made for each contrast, which were then smoothed (see accompanying paper for additional information).
The supplementary materials also include commented MATLAB and R code to implement the current neuroimaging data synthesis methods (SKexample.zip). The example data were selected from an earlier fMRI study (Kuchinsky et al., 2012) to demonstrate that the current approach can be used with other types of neuroimaging data. The example code can also be adapted to produce fully synthetic group-level datasets based on observed neuroimaging data from other sources. The zip archive includes a document with important information for performing the example analyses, and details that should be communicated with recipients of a synthetic neuroimaging dataset.
Kuchinsky, S.E., Vaden, K.I., Keren, N.I., Harris, K.C., Ahlstrom, J.B., Dubno, J.R., Eckert, M.A., 2012. Word intelligibility and age predict visual cortex activity during word listening. Cerebral Cortex 22, 1360–71. https://doi.org/10.1093/cercor/bhr211
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This dataset contains NIfTI files created for the dataset described in the paper "Deep learning-driven pulmonary artery and vein segmentation reveals demography-associated vasculature anatomical differences". The authors also released their code and datasets at https://github.com/Arturia-Pendragon-Iris/HiPaS_AV_Segmentation. If you find this dataset helpful, please cite their paper "Chu, Y., Luo, G., Zhou, L. et al. Deep learning-driven pulmonary artery and vein segmentation reveals… See the full description on the dataset page: https://huggingface.co/datasets/JasperEppink/HiPaS_dataset.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This tractography and registration mapping was created from the data in this PURL on the Stanford Digital Repository: https://searchworks.stanford.edu/view/yx282xq2090. The data were processed using the pyAFQ software (https://github.com/yeatmanlab/pyAFQ) and Dipy to create tensor-based tracts. These were down-sampled by a factor of 1000 and stored in the trk file. The data were also registered to the MNI T2 template using the SyN algorithm (Avants et al. 2008), implemented in Dipy. Both forward and backward transformation images were saved and are stored in the nifti files.
Open Data Commons Attribution License (ODC-By) v1.0 https://www.opendatacommons.org/licenses/by/1.0/
License information was derived automatically
This deposit contains the UK7T Network's harmonised MRI neuroimaging protocols. It also contains example datasets from a single identical subject from each of the Network's scanners collected using these protocols.
The protocols for each of the three types of scanner are supplied in .pdf or .chm format. Notes on the pulse sequence version and reconstruction code are contained in explanatory .txt files.
The example data is supplied in both DICOM (.ima) and NIfTI (.nii.gz) format. Due to the size and number of the "Multi-band EPI" Philips DICOM files, these are not included. They are available on request, whilst the NIfTI files for these scans are included.
NIfTI files were converted by dcm2niix (https://github.com/rordenlab/dcm2niix) and can be viewed using e.g. FSLeyes (https://fsl.fmrib.ox.ac.uk).
This data release is intended to accompany explanatory publications. The data were collected as part of the "UK7T Network" (http://www.uk7t.org/), a consortium of 7 tesla (7T) MRI capable sites in the UK. It was established with the aim of promoting the use of 7T MRI and ensuring all sites in the consortium have the ability to collect and meaningfully compare high quality neuroimaging data. This deposit is the result of the Network's harmonisation efforts.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Menelik head voxel model in NIFTI format.
Dimensions: 396 x 514 x 442 (x, y, z)
Voxel size: 0.5 x 0.5 x 0.5 mm
Byte type: 1-byte unsigned integer
Compression: GZIP
Data orientation: Columns (X+), Rows (Y-), Slices (Z+)
File format: NIFTI
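A quick sanity check of the stated geometry: at 1 byte per voxel, the uncompressed image data should occupy nx*ny*nz bytes (plus the NIfTI header), and the field of view follows from the 0.5 mm voxel size:

```python
# Stated dimensions and voxel format of the Menelik model.
nx, ny, nz = 396, 514, 442
bytes_per_voxel = 1  # 1-byte unsigned integer
data_bytes = nx * ny * nz * bytes_per_voxel
fov_mm = (nx * 0.5, ny * 0.5, nz * 0.5)  # 0.5 mm isotropic voxels
print(data_bytes, fov_mm)
```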
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Content
This work is a derivative from the COBRE sample found in the International Neuroimaging Data-sharing Initiative (INDI), originally released under Creative Commons -- Attribution Non-Commercial. It includes preprocessed resting-state functional magnetic resonance images for 72 patients diagnosed with schizophrenia (58 males, age range = 18-65 yrs) and 74 healthy controls (51 males, age range = 18-65 yrs). The fMRI dataset for each subject is a single nifti file (.nii.gz), featuring 150 EPI blood-oxygenation level dependent (BOLD) volumes obtained in 5 min (TR = 2 s, TE = 29 ms, FA = 75°, 32 slices, voxel size = 3x3x4 mm3, matrix size = 64x64, FOV = mm2). The data processing as well as packaging was implemented by Pierre Bellec, CRIUGM, Department of Computer Science and Operations Research, University of Montreal, 2016.
The COBRE preprocessed fMRI release more specifically contains the following files:
README.md: a markdown (text) description of the release.
phenotypic_data.tsv.gz: a gzipped tab-separated value file, with each column representing a phenotypic variable as well as measures of data quality (related to motion). Each row corresponds to one participant, except the first row, which contains the names of the variables (see file below for a description).
keys_phenotypic_data.json: a json file describing each variable found in phenotypic_data.tsv.gz.
fmri_XXXXXXX.tsv.gz: a gzipped tab-separated value file, with each column representing a confounding variable for the time series of participant XXXXXXX (the same participant ID found in phenotypic_data.tsv.gz).
Each row corresponds to a time frame, except for the first row, which contains the names of the variables (see file below for a definition).
keys_confounds.json: a json file describing each variable found in the files fmri_XXXXXXX.tsv.gz.
fmri_XXXXXXX.nii.gz: a 3D+t nifti volume at 6 mm isotropic resolution, stored as short (16 bit) integers, in the MNI non-linear 2009a symmetric space (http://www.bic.mni.mcgill.ca/ServicesAtlases/ICBM152NLin2009). Each fMRI dataset features 150 volumes.
Usage recommendations
Individual analyses: You may want to remove some time frames with excessive motion for each subject; see the confounding variable called scrub in fmri_XXXXXXX.tsv.gz. After removing these time frames there may not be enough usable data; we recommend a minimum of 60 time frames. A fairly large number of confounds have been made available as part of the release (slow time drifts, motion parameters, frame displacement, scrubbing, average WM/Vent signal, COMPCOR, global signal). We strongly recommend regression of slow time drifts. Everything else is optional.
Group analyses: There will also be some residual effects of motion, which you may want to regress out from connectivity measures at the group level. The number of acceptable time frames as well as a measure of residual motion (called frame displacement, as described by Power et al., Neuroimage 2012) can be found in the variables Frames OK and FD scrubbed in phenotypic_data.tsv.gz. Finally, the simplest use case with these data is to predict the overall presence of a diagnosis of schizophrenia (values Control or Patient in the phenotypic variable Subject Type). You may want to try to match the control and patient samples in terms of amounts of motion, as well as age and sex.
Note that more detailed diagnostic categories are available in the variable Diagnosis.
Preprocessing
The datasets were analysed using the NeuroImaging Analysis Kit (NIAK, https://github.com/SIMEXP/niak) version 0.17, under CentOS version 6.3 with Octave (http://gnu.octave.org) version 4.0.2 and the Minc toolkit (http://www.bic.mni.mcgill.ca/ServicesSoftware/ServicesSoftwareMincToolKit) version 0.3.18. Each fMRI dataset was corrected for inter-slice difference in acquisition time and the parameters of a rigid-body motion were estimated for each time frame. Rigid-body motion was estimated within as well as between runs, using the median volume of the first run as a target. The median volume of one selected fMRI run for each subject was coregistered with a T1 individual scan using Minctracc (Collins and Evans, 1998), which was itself non-linearly transformed to the Montreal Neurological Institute (MNI) template (Fonov et al., 2011) using the CIVET pipeline (Ad-Dab'bagh et al., 2006). The MNI symmetric template was generated from the ICBM152 sample of 152 young adults, after 40 iterations of non-linear coregistration. The rigid-body transform, fMRI-to-T1 transform and T1-to-stereotaxic transform were all combined, and the functional volumes were resampled in the MNI space at a 6 mm isotropic resolution.
Note that a number of confounding variables were estimated and are made available as part of the release. WARNING: no confounds were actually regressed from the data, so this can be done interactively by the user, who will be able to explore different analytical paths easily. The "scrubbing" method of Power et al. (2012) was used to identify the volumes with excessive motion (frame displacement greater than 0.5 mm). A minimum number of 60 unscrubbed volumes per run, corresponding to ~120 s of acquisition, is recommended for further analysis.
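The scrubbing rule described above (drop frames with frame displacement above 0.5 mm, require at least 60 surviving frames) reduces to a simple threshold count. A sketch on a synthetic FD trace (the values are made up, not from the release):

```python
import numpy as np

# Synthetic frame-displacement trace: 120 still frames, 30 high-motion frames.
fd = np.concatenate([np.full(120, 0.1), np.full(30, 0.8)])
keep = fd <= 0.5            # frames that survive scrubbing
n_ok = int(keep.sum())
print(n_ok, n_ok >= 60)     # enough usable frames for analysis?
```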
The following nuisance parameters were estimated: slow time drifts (basis of discrete cosines with a 0.01 Hz high-pass cut-off), average signals in conservative masks of the white matter and the lateral ventricles as well as the six rigid-body motion parameters (Giove et al., 2009), anatomical COMPCOR signal in the ventricles and white matter (Chai et al., 2012), and a PCA-based estimator of the global signal (Carbonell et al., 2011). The fMRI volumes were not spatially smoothed.
References
Ad-Dab'bagh, Y., Einarson, D., Lyttelton, O., Muehlboeck, J. S., Mok, K., Ivanov, O., Vincent, R. D., Lepage, C., Lerch, J., Fombonne, E., Evans, A. C., 2006. The CIVET Image-Processing Environment: A Fully Automated Comprehensive Pipeline for Anatomical Neuroimaging Research. In: Corbetta, M. (Ed.), Proceedings of the 12th Annual Meeting of the Human Brain Mapping Organization. Neuroimage, Florence, Italy.
Bellec, P., Rosa-Neto, P., Lyttelton, O. C., Benali, H., Evans, A. C., Jul. 2010. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage 51 (3), 1126–1139. http://dx.doi.org/10.1016/j.neuroimage.2010.02.082
Carbonell, F., Bellec, P., Shmuel, A., 2011. Validation of a superposition model of global and system-specific resting state activity reveals anti-correlated networks. Brain Connectivity 1 (6), 496–510. doi:10.1089/brain.2011.0065
Chai, X. J., Castañón, A. N., Öngür, D., Whitfield-Gabrieli, S., Jan. 2012. Anticorrelations in resting state networks without global signal regression. NeuroImage 59 (2), 1420–1428. http://dx.doi.org/10.1016/j.neuroimage.2011.08.048
Collins, D. L., Evans, A. C., 1997. Animal: validation and applications of nonlinear registration-based segmentation. International Journal of Pattern Recognition and Artificial Intelligence 11, 1271–1294.
Fonov, V., Evans, A. C., Botteron, K., Almli, C. R., McKinstry, R. C., Collins, D. L., Jan. 2011. Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54 (1), 313–327.
http://dx.doi.org/10.1016/j.neuroimage.2010.07.033
Giove, F., Gili, T., Iacovella, V., Macaluso, E., Maraviglia, B., Oct. 2009. Images-based suppression of unwanted global signals in resting-state functional connectivity studies. Magnetic Resonance Imaging 27 (8), 1058–1064. http://dx.doi.org/10.1016/j.mri.2009.06.004
Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., Petersen, S. E., Feb. 2012. Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage 59 (3), 2142–2154. http://dx.doi.org/10.1016/j.neuroimage.2011.10.018
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset encompasses a NIfTI file containing a collection of 500 images, each capturing the central axial slice of a synthetic brain MRI.
Accompanying this file is a CSV dataset that serves as a repository for the corresponding labels linked to each image:
Label 0: Healthy Controls (HC)
Label 1: Alzheimer's Disease (AD)
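Assuming the CSV simply maps an image index to its label (the column names here are a guess, not confirmed by the dataset), pairing slices with class names could look like:

```python
import csv
import io

# Hypothetical excerpt of the labels CSV; column names are assumptions.
csv_text = "index,label\n0,0\n1,1\n2,0\n"
labels = {int(r["index"]): int(r["label"])
          for r in csv.DictReader(io.StringIO(csv_text))}
class_names = {0: "HC", 1: "AD"}  # label coding stated above
print([class_names[labels[i]] for i in range(3)])
```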
Each image within this dataset has been generated by PACGAN (Progressive Auxiliary Classifier Generative Adversarial Network), a framework designed and implemented by the AI for Medicine Research Group at the University of Bologna.
PACGAN is a generative adversarial network trained to generate high-resolution images belonging to different classes. In our work, we trained this framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, which contains brain MRI images of AD patients and HC.
The implementation of the training algorithm can be found within our GitHub repository, with Docker containerization.
For further exploration, the pre-trained models are available within the Code Ocean capsule. These models can facilitate the generation of synthetic images for both classes and also aid in classifying new brain MRI images.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Phantom of Bern: repeated scans of two volunteers with eight different combinations of MR sequence parameters
The Phantom of Bern consists of eight same-session re-scans of T1-weighted MRI with different combinations of sequence parameters, acquired on two healthy subjects. The subjects have agreed in writing to the publication of these data, including the original anonymized DICOM files, and waived the requirement of defacing. Usage is permitted under the terms of the data usage agreement stated below.
The BIDS directory is organized as follows:
└── PhantomOfBern/
├─ code/
│
├─ derivatives/
│ ├─ dldirect_v1-0-0/
│ │ ├─ results/ # Folder with flattened subject/session inputs and outputs of DL+DiReCT
│ │ └─ stats2table/ # Folder with tables summarizing all DL+DiReCT outputs
│ ├─ freesurfer_v6-0-0/
│ │ ├─ results/ # Folder with flattened subject/session inputs and outputs of freesurfer
│ │ └─ stats2table/ # Folder with tables summarizing all freesurfer outputs
│ └─ siena_v2-6/
│ ├─ SIENA_results.csv # Siena's main output
│ └─ ... # Flattened subject/session inputs and outputs of SIENA
│
├─ sourcedata/
│ ├─ POBHC0001/
│ │ └─ 17473A/
│ │ └─ ... # Anonymized DICOM folders
│ └─ POBHC0002/
│ └─ 14610A/
│ └─ ... # Anonymized DICOM folders
│
├─ sub-<label>/
│ └─ ses-<label>/
│ └─ anat/ # Folder with scan's json and nifti files
├─ ...
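A minimal way to enumerate the anat NIfTI files in a tree laid out like this (the snippet builds a throwaway skeleton of the layout first, so the file name is illustrative):

```python
import tempfile
from pathlib import Path

# Build a throwaway skeleton of the BIDS layout sketched above.
root = Path(tempfile.mkdtemp())
anat = root / "sub-001" / "ses-01" / "anat"
anat.mkdir(parents=True)
(anat / "sub-001_ses-01_T1w.nii.gz").touch()

# Glob over the sub-<label>/ses-<label>/anat/ pattern.
found = sorted(root.glob("sub-*/ses-*/anat/*.nii.gz"))
print([p.name for p in found])
```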
The dataset can be cited as:
M. Rebsamen, D. Romascano, M. Capiglioni, R. Wiest, P. Radojewski, C. Rummel. The Phantom of Bern:
repeated scans of two volunteers with eight different combinations of MR sequence parameters.
OpenNeuro, 2023.
If you use these data, please also cite the original paper:
M. Rebsamen, M. Capiglioni, R. Hoepner, A. Salmen, R. Wiest, P. Radojewski, C. Rummel. Growing importance
of brain morphometry analysis in the clinical routine: The hidden impact of MR sequence parameters.
Journal of Neuroradiology, 2023.
The Phantom of Bern is distributed under the following terms, to which you agree by downloading and/or using the dataset:
To use these datasets solely for research and development or statistical purposes and not for investigation of specific subjects
To make no use of the identity of any subject discovered inadvertently, and to advise the providers of any such discovery (crummel@web.de)
When publicly presenting any results or algorithms that benefited from the use of the Phantom of Bern, you should acknowledge it, see above. Papers, book chapters, books, posters, oral presentations, and all other printed and digital presentations of results derived from the Phantom of Bern data should cite the publications listed above.
Redistribution of data (complete or in parts) in any manner without explicit inclusion of this data use agreement is prohibited.
Usage of the data for testing commercial tools is explicitly allowed. Usage for military purposes is prohibited.
The original collector and provider of the data (see acknowledgement) and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
This work was supported by the Swiss National Science Foundation under grant numbers 204593 (ScanOMetrics) and CRSII5_180365 (The Swiss-First Study).
This data collection contains:
Data is stored within the Brain and Mind Research Institute's Distributed and Reflective Informatics System (DaRIS)
Two-dimensional (2D) DICOM (.dcm) format Magnetic Resonance Imaging data are collected using a GE Medical Systems Discovery MR750 MR instrument for the following modalities:
Each modality (T1, T2 and DTI) is processed through Kepler workflows. The Kepler workflows utilise the following software:
Each T1 MR series contains 196 individual 2D .dcm format images. Images are pre-processed using a Kepler workflow to produce brain-extracted images that are reoriented to the MNI152 standard template:
$ mcverter -o {output directory} -f fsl -n {input T1 directory}
$ fslreorient2std {input file} {output file name}
$ bet2 {input file} {output file} 0.27 -B
Each T2 MR series contains 5460 individual 2D .dcm format images. The Kepler "Multivariate Exploratory Linear Optimized Decomposition into Independent Components (MELODIC)" workflow is used to process the data
$ mcverter -o {output directory} -f fsl -n -d {input T2 directory}
$ feat {design.fsf}
where the design.fsf file contains the configuration information for the process.
Each DTI MR series contains 4235 individual 2D .dcm format images. The Kepler DTI workflow is used to process the data:
$ mcverter -o {output directory} -f fsl -n -d -s 7 {input DTI directory}
fsl-dti file
This data set contains electroencephalography and neuropsychological data collected from research subjects undergoing a standard battery of neuropsychological tests (e.g. the Cambridge Neuropsychological Test Automated Battery (CANTAB), which examines cognitive function). Data are also collected through the following forms, which are completed by the subject and members of the research team: