License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0 (license information was derived automatically).
The dataset used is the OASIS MRI dataset (https://sites.wustl.edu/oasisbrains/), which consists of 80,000 brain MRI images. The images have been divided into four classes based on Alzheimer's progression. The dataset aims to provide a valuable resource for analyzing and detecting early signs of Alzheimer's disease.
To make the dataset accessible, the original .img and .hdr files were converted into NIfTI format (.nii) using FSL (FMRIB Software Library). The converted MRI images of 461 patients have been uploaded to a GitHub repository, which can be accessed in multiple parts.
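As an illustration of this conversion step, a nibabel-based sketch is shown below; the original conversion used FSL, and the folder names here are hypothetical.

```python
# Convert Analyze (.img/.hdr) volumes to NIfTI (.nii.gz) with nibabel.
# This is a sketch of the conversion idea; the dataset itself was converted with FSL.
from pathlib import Path
import nibabel as nib

src_dir = Path("oasis_raw")      # hypothetical folder with .img/.hdr pairs
dst_dir = Path("oasis_nifti")    # hypothetical output folder
dst_dir.mkdir(exist_ok=True)

for img_path in sorted(src_dir.glob("*.img")):
    vol = nib.load(str(img_path))  # reads the paired .hdr automatically
    nifti = nib.Nifti1Image(vol.get_fdata(), vol.affine)
    nib.save(nifti, dst_dir / (img_path.stem + ".nii.gz"))
```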
For the neural network training, 2D images were used as input. The brain volumes were sliced along the z-axis into 256 slices, and slices 100 through 160 were selected for each patient. This approach resulted in a comprehensive dataset for analysis.
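A minimal sketch of the slicing step, assuming the z-axis is the third array axis and that the 100-160 range is inclusive; the file path is hypothetical.

```python
# Slice a 3D brain volume along the z-axis and keep slices 100-160.
import nibabel as nib

vol = nib.load("oasis_nifti/subject_0001.nii.gz").get_fdata()  # hypothetical file
z_slices = [vol[:, :, z] for z in range(100, 161)]  # 61 axial slices per subject
print(len(z_slices), z_slices[0].shape)
```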
Patient classification was performed based on the provided metadata and Clinical Dementia Rating (CDR) values, resulting in four classes: demented, very mild demented, mild demented, and non-demented. These classes enable the detection and study of different stages of Alzheimer's disease progression.
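The CDR-to-class mapping below illustrates the labelling rule described above; the exact cut-offs used by the dataset authors are not stated here, so the values are assumptions based on common OASIS-1 usage.

```python
# Map Clinical Dementia Rating (CDR) scores to the four class labels.
# Assumed cut-offs: CDR 0 = non-demented, 0.5 = very mild, 1 = mild, >=2 = demented.
def cdr_to_class(cdr: float) -> str:
    if cdr == 0:
        return "non-demented"
    if cdr == 0.5:
        return "very mild demented"
    if cdr == 1:
        return "mild demented"
    return "demented"

print(cdr_to_class(0.5))  # "very mild demented"
```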
During the dataset preparation, the .nii MRI scans were converted to .jpg files. Although this conversion presented some challenges, the files were successfully processed using appropriate tools. The resulting dataset size is 1.3 GB.
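One possible way to carry out the .nii-to-.jpg conversion is sketched below; the min-max intensity scaling and output naming are assumptions, not the dataset authors' exact procedure.

```python
# Save selected axial slices of a NIfTI volume as 8-bit JPEG files.
import os
import nibabel as nib
import numpy as np
from PIL import Image

os.makedirs("slices", exist_ok=True)
vol = nib.load("oasis_nifti/subject_0001.nii.gz").get_fdata()  # hypothetical file
for z in range(100, 161):
    sl = vol[:, :, z]
    sl = (255 * (sl - sl.min()) / (np.ptp(sl) + 1e-8)).astype(np.uint8)  # min-max scale
    Image.fromarray(sl).save(f"slices/subject_0001_z{z:03d}.jpg")
```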
With this comprehensive dataset, the project aims to explore various neural network models and achieve optimal results in Alzheimer's disease detection and analysis.
Acknowledgments: “Data were provided by OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R. Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382”
Citation: OASIS-1: Cross-Sectional: https://doi.org/10.1162/jocn.2007.19.9.1498
If you are looking for a processed NIfTI image version of this dataset, please click here.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
The early detection of Alzheimer's Disease (AD) is thought to be important for effective intervention and management. Here, we explore deep learning methods for the early detection of AD. We consider both genetic risk factors and functional magnetic resonance imaging (fMRI) data. However, we found that the genetic factors do not notably enhance the imaging-based AD prediction. Thus, we focus on building an effective imaging-only model. In particular, we utilize data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), employing a 3D Convolutional Neural Network (CNN) to analyze fMRI scans. Despite the limitations posed by our dataset (small size and imbalanced nature), our CNN model demonstrates accuracy levels reaching 92.8% and an area under the ROC curve of 0.95. Our research highlights the complexities inherent in integrating multimodal medical datasets. It also demonstrates the potential of deep learning in medical imaging for AD prediction.
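A minimal PyTorch sketch of a small 3D CNN classifier of the kind described above; the layer sizes, input shape, and two-class output are assumptions, not the authors' exact architecture.

```python
# A small 3D CNN for volumetric classification (illustrative only).
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):  # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

logits = Simple3DCNN()(torch.randn(2, 1, 64, 64, 64))  # dummy batch
print(logits.shape)  # torch.Size([2, 2])
```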
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Alzheimer's disease (AD) is one of the most common neurodegenerative diseases. In the last decade, studies on AD diagnosis have attached great significance to artificial intelligence-based diagnostic algorithms. Among the diverse modalities of imaging data, T1-weighted MR and FDG-PET are widely used for this task. In this paper, we propose a convolutional neural network (CNN) to integrate all the multi-modality information included in both T1-MR and FDG-PET images of the hippocampal area for the diagnosis of AD. Unlike traditional machine learning algorithms, this method does not require manually extracted features; instead, it utilizes 3D image-processing CNNs to learn features for the diagnosis or prognosis of AD. To test the performance of the proposed network, we trained the classifier with paired T1-MR and FDG-PET images from the ADNI datasets, including 731 cognitively unimpaired (CN) subjects, 647 subjects with AD, 441 subjects with stable mild cognitive impairment (sMCI), and 326 subjects with progressive mild cognitive impairment (pMCI). We obtained accuracies of 90.10% for the CN vs. AD task, 87.46% for the CN vs. pMCI task, and 76.90% for the sMCI vs. pMCI task. The proposed framework yields state-of-the-art performance. Finally, the results demonstrate that (1) segmentation is not a prerequisite when using a CNN for the classification, and (2) the combination of the two imaging modalities generates better results.
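A sketch of a two-branch 3D CNN that fuses T1-MR and FDG-PET patches of the hippocampal area, illustrating the multi-modality idea; the architecture details below are assumptions rather than the paper's exact network.

```python
# Two-branch 3D CNN: one branch per modality, fused by feature concatenation.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        nn.Flatten(),
    )

class DualModalityCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.mri_branch, self.pet_branch = branch(), branch()
        self.classifier = nn.Linear(32, n_classes)  # 16 features per modality

    def forward(self, mri, pet):
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.classifier(fused)

model = DualModalityCNN()
out = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
print(out.shape)  # torch.Size([2, 2])
```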
License: CC0 1.0, https://spdx.org/licenses/CC0-1.0.html
Beta-amyloid (Aβ) is a histopathological hallmark of Alzheimer's disease dementia, but high levels of Aβ in the brain can also be found in a substantial proportion of nondemented subjects. Here we investigated which 2-year rates of brain and cognitive changes are present in nondemented subjects with high and low Aβ levels, as assessed with cerebrospinal fluid and molecular positron emission tomography (PET)-based biomarkers of Aβ. In subjects with mild cognitive impairment, increased brain Aβ levels were associated with significantly faster cognitive decline, progression of gray matter atrophy within temporal and parietal brain regions, and a trend for a faster decline in parietal Fludeoxyglucose (FDG)-PET metabolism. Changes in gray matter and FDG-PET mediated the association between Aβ and cognitive decline. In contrast, elderly cognitively healthy controls (HC) with high Aβ levels showed only a faster medial temporal lobe and precuneus volume decline compared with HC with low Aβ. In conclusion, the current results suggest not only that both functional and volumetric brain changes are associated with high Aβ years before the onset of dementia but also that HC with substantial Aβ levels show higher resistance to Aβ pathology, lack other pathologies that condition neurotoxic effects of Aβ, or have accumulated Aβ for a shorter time period.

Methods. Subjects: The study included 465 subjects, of which 124 were elderly cognitively HC subjects, 229 subjects were diagnosed with amnestic MCI, and 112 subjects had probable AD, recruited within the North American multicenter Alzheimer's Disease Neuroimaging Initiative (ADNI; for the database, see www.loni.ucla.edu/ADNI). ADNI was launched in 2003 by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering (NIBIB), the Food and Drug Administration, private pharmaceutical companies, and nonprofit organizations, as a $60 million, 5-year public-private partnership. The primary goal of ADNI has been to test whether serial magnetic resonance imaging (MRI), PET, other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of MCI and early Alzheimer's disease (AD). The initial goal of ADNI was to recruit 800 adults, ages 55 to 90, to participate in the research: approximately 200 cognitively normal older individuals to be followed for 3 years, 400 people with MCI to be followed for 3 years, and 200 people with early AD to be followed for 2 years. For up-to-date information, see www.adni-info.org. The current sample was restricted to those subjects who had either a PIB-PET assessment or a CSF Aβ1-42 measurement. Within this subset, PIB-PET was available in 103 subjects, including 19 HC, 65 MCI, and 19 AD subjects. The CSF Aβ1-42 concentration was assessed in a total of 116 HC, 199 MCI, and 102 AD subjects (see Fig. 1 for further information on subjects and data inclusion). In 55 subjects, both CSF Aβ1-42 and PIB-PET were assessed. The observation interval covered 2 years; neuropsychological assessment, FDG-PET scanning, and MRI acquisition were conducted at baseline, 6, 12, and 24 months. All collected data are freely accessible online to researchers (http://www.loni.ucla.edu/ADNI). General inclusion criteria included an age between 55 and 90 years, a modified Hachinski score ≤4, education of at least 6th-grade level, and stable treatment of at least 4 weeks in the case of treatment with permitted medication (for the full list, see http://www.adni-info.org, Procedures Manual).
The diagnosis of AD was made according to the NINCDS-ADRDA criteria (McKhann et al. 1984). Inclusion criteria for AD encompassed subjective memory complaint, memory impairment as assessed by an education-adjusted score on delayed recall of a single paragraph from the Wechsler Logical Memory II Subscale (0–7 years of education, ≤2; 8–15 years, ≤4; 16 years or more, ≤8), a Mini Mental State Exam (MMSE) score between 20 and 26, and a clinical dementia rating (CDR) score of 0.5 or 1. For the diagnosis of amnestic MCI, the subjects had to show subjective memory impairment and objective memory impairment identical to that for AD, a CDR of 0.5 including a memory box score of 0.5 or greater, and an MMSE score between 24 and 30, with unimpaired general cognitive ability and functional performance such that they did not meet criteria for dementia. HC had to show normal performance on the Logical Memory II Subscale adjusted for education (0–7 years, ≥3; 8–15 years, ≥5; 16 or more years, ≥9) and absence of significant impairment in cognitive function or activities of daily living (Ewers et al. 2010).

CSF Measurement: All CSF samples collected at the different centers were shipped on dry ice to the Penn ADNI Biomarker Core Laboratory at the University of Pennsylvania, Philadelphia, for storage at -80°C until further analysis at the laboratory. More details on data collection of the CSF samples can be found at http://www.adni-info.org, under "ADNI study procedures." The CSF concentrations of Aβ1-42, t-tau, and p-tau181 were measured in the baseline CSF samples using the multiplex xMAP Luminex platform (Luminex Corp, Austin, TX) at the Penn ADNI Biomarker Core Laboratory. For a detailed description, see Shaw et al. (2009).

PIB-PET, FDG-PET, MRI Acquisition, and ROI Measurement: All MRI data were acquired on 1.5-T MRI scanners using volumetric T1-weighted sequences to map brain structures, optimized for the different scanners as indicated at http://www.loni.ucla.edu/ADNI/Research/Cores/index (Jack, Bernstein, et al. 2008). FreeSurfer software version 4.5 (Dale et al. 1999; Fischl et al. 1999) was employed to measure longitudinal changes in regional brain volumes. Briefly, the image-processing pipeline using FreeSurfer consisted of five stages: an affine registration with Talairach space, an initial volumetric labeling, bias field correction, nonlinear alignment to the Talairach space, and a final labeling of the volume. The fully automated labeling of volumes is achieved by warping a population-based brain atlas to the target brain and by maximizing an a posteriori probability of the labels given specific constraints. A full description of the FreeSurfer processing steps can be found in Fischl et al. (2002). The procedures have been extensively validated. MRI-volume ROIs were selected based on previous meta-analyses of the MRI gray matter volume measures that were most predictive of AD, including the hippocampus, middle temporal gyrus, superior temporal gyrus, amygdala, parahippocampus, entorhinal cortex, inferior parietal lobe, precuneus, and thalamus (Schroeter et al. 2009). PET data were acquired on multiple instruments of varying resolution. PIB scans were collected as 4 × 5 min frames beginning 50 min after injection of tracer. FDG scans were collected as 6 × 5 min frames beginning 30 min after injection of approximately 5 mCi of tracer. Attenuation correction was performed either via transmission scan or computed tomography.
Images were uploaded to the Laboratory of Neuroimaging, where they were processed to provide standard orientation, voxel size, and resolution. FDG-PET ROIs were constructed based on a meta-analysis of the location of FDG-PET changes in brain regions that are typically affected in AD, as described previously (Jagust et al. 2009; Landau et al. 2009). FDG uptake was normalized to a reference region composed of the pons and cerebellum and measured in the target ROIs, which included the bilateral angular gyrus, posterior cingulate/precuneus, and inferior temporal cortex, as described previously (Jagust et al. 2009). PIB-PET uptake was normalized to the cerebellum to generate maps of the PIB-PET score used for further statistical analysis. Target ROIs were drawn on a structural MRI template from a single 79-year-old MCI subject scanned at the University of Pittsburgh. This image was deemed an "average" older subject with typical atrophy and ventricular size. Each subject's PIB-PET score map was coregistered with SPM5 to the individual MRI, which was in turn normalized to the MCI template with SPM5, permitting the transformation of the subject's PIB-PET data to the template space. ROIs in which PIB uptake is known to predominate were averaged across the left and right hemispheres and comprised the prefrontal, lateral temporal, anterior cingulate gyrus, parietal, and posterior cingulate/precuneus regions. Further information is available at the ADNI webpage (http://www.loni.ucla.edu/ADNI/).

Neuropsychological Tests: Global cognitive ability was assessed with the neuropsychological test battery Alzheimer's Disease Assessment Scale, cognitive section (ADAS-cog) (Rosen et al. 1984). The ADAS-cog score is the total score on a number of tests of learning and memory, language production, language comprehension, constructional praxis, ideational praxis, and orientation (see the ADNI procedures manual for details at http://www.adni-info.org/Scientists/ProceduresManuals.aspx). A higher ADAS-cog score indicates lower cognitive performance. Episodic memory was assessed with the Rey Auditory Verbal Learning Test (RAVLT), using the score on the 30-min delayed recall of a list of 15 words that had been repeatedly presented and recalled during the learning phase of 5 verbal presentations of the list (Rey 1964). The test score corresponds to the number of words recalled on the 30-min delayed test. For details on the administration and scoring, see the "Procedures Manual" (http://www.adni-info.org/Scientists/ProceduresManuals.aspx).
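The FDG normalization step above (mean uptake in each target ROI divided by mean uptake in a pons plus cerebellum reference region) can be illustrated with a short sketch; the file names and label values below are hypothetical, and this is not the actual ADNI/LONI processing code.

```python
# Compute reference-normalized FDG uptake (SUVR-style ratios) in target ROIs.
import nibabel as nib
import numpy as np

pet = nib.load("fdg_pet_in_template_space.nii.gz").get_fdata()        # hypothetical
labels = nib.load("roi_labels_in_template_space.nii.gz").get_fdata()  # hypothetical

REFERENCE_LABELS = [90, 91]  # e.g. pons, cerebellum (assumed label values)
TARGET_LABELS = {"angular": 10, "post_cingulate_precuneus": 11, "inf_temporal": 12}

ref_mean = pet[np.isin(labels, REFERENCE_LABELS)].mean()
suvr = {name: pet[labels == lab].mean() / ref_mean for name, lab in TARGET_LABELS.items()}
print(suvr)
```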
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Mean (SD) of the volumes (in mm³) in the left hippocampus in the baseline images of the labelled ADNI data set of 30 images for method validation.
License: CC0 1.0, https://spdx.org/licenses/CC0-1.0.html
This paper for the 20th anniversary of the Alzheimer's Disease Neuroimaging Initiative (ADNI) provides an overview of magnetic resonance imaging (MRI) of medial temporal lobe (MTL) subregions in ADNI using a dedicated high-resolution T2-weighted sequence. A review of the work that supported the inclusion of this imaging modality in ADNI Phase 3 is followed by a brief description of the ADNI MTL imaging and analysis protocols and a summary of studies that have used these data. This review is supplemented by a new analysis that uses novel surface-based tools to characterize MTL neurodegeneration across biomarker-defined AD stages. This analysis reveals a pattern of spreading cortical thinning associated with increasing levels of tau pathology in the presence of elevated beta-amyloid, with apparent epicenters in the transentorhinal region and inferior hippocampal subfields. The paper concludes with an outlook for high-resolution imaging of the MTL in ADNI Phase 4. Methods: This dataset contains the template and model package for use with CRASHS. CRASHS is a surface analysis pipeline for medial temporal lobe anatomical structures. It uses the output of the ASHS pipeline (automatic segmentation of hippocampal subfields) as input. The CRASHS template contains geometrical models and deep learning networks used internally by the CRASHS software to match an individual's temporal lobe to a template. Please see the CRASHS documentation for instructions on using the package.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
SUPERSEDED - THIS ITEM HAS BEEN REPLACED BY http://datashare.is.ed.ac.uk/handle/10283/2214. This dataset contains structural magnetic resonance imaging (MRI)-derived data from 20 participants enrolled in the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. These data are likelihood maps of cerebrospinal fluid, grey matter and normal-appearing white matter, and binary masks of white matter hyperintensities (WMH), all obtained from MRI acquired at three consecutive study visits spaced 12 months apart. Acknowledgements: Investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in the analysis or processing which produced the derived data presented here. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf. Data collection and sharing for ADNI was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: AbbVie, Alzheimer's Association; Alzheimer's Drug Discovery Foundation; Araclon Biotech; BioClinica, Inc.; Biogen; Bristol-Myers Squibb Company; CereSpir, Inc.; Cogstate; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; EuroImmun; F. Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; Fujirebio; GE Healthcare; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Lumosity; Lundbeck; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Neurotrack Technologies; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Takeda Pharmaceutical Company; and Transition Therapeutics. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Therapeutic Research Institute at the University of Southern California. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
In this study, structural images of 1048 healthy subjects from the Human Connectome Project Young Adult study and 94 from the ADNI-3 study were processed by an in-house tractography pipeline and analyzed together with pre-processed data of the same subjects from braingraph.org. Whole-brain structural connectome features were used to build a simple correlation-based regression machine learning model to predict the intelligence and age of healthy subjects. Our results showed that different forms of intelligence, as well as age, are predictable to a certain degree from diffusion tensor imaging detecting anatomical fiber tracts in the living human brain. Although we did not identify significant differences in prediction capability for the investigated features depending on the imaging feature extraction method, we did find that crystallized intelligence was consistently better predictable than fluid intelligence from structural connectivity data across all datasets. Our findings suggest a practical and scalable processing and analysis framework to explore broader research topics employing brain MR imaging.
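A minimal sketch of a correlation-based regression model of the kind described above, using synthetic data: the connectome features most correlated with the target in the training set are selected, then a linear model is fit. The selection threshold and model choice are assumptions, not the authors' exact pipeline.

```python
# Correlation-based feature selection + linear regression on synthetic connectome data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5000))                    # subjects x connectome edge weights
y = X[:, :20].sum(axis=1) + rng.normal(size=200)    # synthetic "intelligence" score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
r = np.array([np.corrcoef(X_tr[:, j], y_tr)[0, 1] for j in range(X.shape[1])])
selected = np.argsort(-np.abs(r))[:50]              # keep the 50 most correlated edges
model = LinearRegression().fit(X_tr[:, selected], y_tr)
print("test R^2:", model.score(X_te[:, selected], y_te))
```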
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Demographic data of patients in the database (ADNI 1075-T1).
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
The role of structural brain magnetic resonance imaging (MRI) is becoming increasingly emphasized in the early diagnostics of Alzheimer's disease (AD). This study aimed to assess the improvement in classification accuracy that can be achieved by combining features from different structural MRI analysis techniques. The automatically estimated MR features used are hippocampal volume, tensor-based morphometry, cortical thickness, and a novel technique based on manifold learning. Baseline MRIs acquired from all 834 subjects (231 healthy controls (HC), 238 stable mild cognitive impairment (S-MCI), 167 MCI-to-AD progressors (P-MCI), 198 AD) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database were used for evaluation. We compared the classification accuracy achieved with linear discriminant analysis (LDA) and support vector machines (SVM). The best results achieved with individual features are 90% sensitivity and 84% specificity (HC/AD classification), 64%/66% (S-MCI/P-MCI), and 82%/76% (HC/P-MCI) with the LDA classifier. The combination of all features improved these results to 93% sensitivity and 85% specificity (HC/AD), 67%/69% (S-MCI/P-MCI), and 86%/82% (HC/P-MCI). Compared with previously published results on the ADNI database using individual MR-based features, the presented results show that a comprehensive analysis of MRI images combining multiple features improves classification accuracy and predictive power in detecting early AD. The most stable and reliable classification was achieved when combining all available features.
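The feature-combination idea can be sketched as follows: feature sets from different MRI analysis techniques are concatenated per subject and fed to LDA and linear-SVM classifiers. The synthetic data and feature dimensions below are placeholders, not the ADNI features used in the study.

```python
# Concatenate multiple MRI-derived feature sets and compare LDA and linear SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
hippo_vol = rng.normal(size=(n, 1))    # hippocampal volume (synthetic)
tbm = rng.normal(size=(n, 50))         # tensor-based morphometry features (synthetic)
thickness = rng.normal(size=(n, 68))   # cortical thickness features (synthetic)
y = rng.integers(0, 2, size=n)         # HC vs AD labels (synthetic)

X = np.hstack([hippo_vol, tbm, thickness])   # combined feature vector per subject
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="linear"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```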
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Alzheimer’s disease (AD) significantly impacts millions globally, causing progressive memory loss and cognitive decline. While a cure remains elusive, early detection can mitigate effects and improve quality of life. Recent AD research has shown promise using deep learning algorithms on brain MRI images for stage prediction. This paper introduces a novel approach integrating two CNN algorithms, ResNet and EfficientNet, with a post-processing algorithm to enhance AD diagnosis. Empirical analyses on public datasets ADNI and OASIS evaluate the technique, leveraging the complementary strengths of the CNN models and a weighted averaging ensemble learning method. The proposed approach’s uniqueness lies in combining multiple CNN architectures with a specialised post-processing algorithm. Notable accuracies achieved are 98.59% for EfficientNet, 94.59% for ResNet, and 98.97% with post-processing on ADNI, and 97.25% for EfficientNet, 99.36% for ResNet, and 99.41% with post-processing on OASIS. This work addresses existing methods’ limitations and demonstrates superior predictive performance, contributing to AD diagnosis advancements and highlighting deep learning’s potential in healthcare applications.
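A minimal sketch of the weighted-averaging ensemble step described above: per-class probabilities from the two CNNs are combined with fixed weights before taking the argmax. The weights and probability values below are illustrative, not the paper's.

```python
# Weighted-averaging ensemble of two classifiers' class probabilities.
import numpy as np

def weighted_average_ensemble(p_resnet: np.ndarray, p_efficientnet: np.ndarray,
                              w_resnet: float = 0.4, w_efficientnet: float = 0.6) -> np.ndarray:
    """Combine per-class probabilities (n_samples x n_classes) from two CNNs."""
    fused = w_resnet * p_resnet + w_efficientnet * p_efficientnet
    return fused.argmax(axis=1)  # predicted stage per sample

p1 = np.array([[0.7, 0.2, 0.05, 0.05], [0.1, 0.6, 0.2, 0.1]])  # ResNet outputs (dummy)
p2 = np.array([[0.6, 0.3, 0.05, 0.05], [0.2, 0.5, 0.2, 0.1]])  # EfficientNet outputs (dummy)
print(weighted_average_ensemble(p1, p2))  # e.g. [0 1]
```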
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Imaging parameters and contrast-to-noise ratio (CNR) efficiencies for our optimized parameters, FreeSurfer, the Siemens default, and ADNI.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Deep convolutional neural networks (DCNNs) have achieved great success for image classification in medical research. Deep learning with brain imaging is the imaging method of choice for the diagnosis and prediction of Alzheimer's disease (AD). However, it is also well known that DCNNs are "black boxes" owing to their low interpretability to humans. The lack of transparency of deep learning compromises its application to prediction and mechanism investigation in AD. To overcome this limitation, we develop a novel general framework that integrates deep learning, feature selection, causal inference, and genetic-imaging data analysis for predicting and understanding AD. The proposed algorithm not only improves the prediction accuracy but also identifies the brain regions underlying the development of AD and causal paths from genetic variants to AD via image mediation. The proposed algorithm is applied to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with diffusion tensor imaging (DTI) in 151 subjects (51 AD and 100 non-AD) who were measured at four time points: baseline, 6 months, 12 months, and 24 months. The algorithm identified brain regions underlying AD consisting of the temporal lobes (including the hippocampus) and the ventricular system.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
We present an improved image analysis pipeline to detect the percent brain volume change (PBVC) using SIENA (Structural Image Evaluation, using Normalization, of Atrophy) in populations with Alzheimer's dementia. Our proposed approach uses the improved brain extraction mask from BEaST (Brain Extraction based on nonlocal Segmentation Technique) instead of the conventional BET (Brain Extraction Tool) for SIENA. We compared four variants of BET as well as BEaST, and applied these five methods to scan-rescan ADNI MRIs from 332 subjects, longitudinal ADNI MRIs and their repeat scans over time from the same 332 subjects, and longitudinal OASIS MRIs from 123 subjects. The results showed that BEaST brain masks were consistent in scan-rescan reproducibility. The cross-sectional scan-rescan error in the absolute percent brain volume difference measured by SIENA was smallest (p≤0.0187) with the proposed BEaST-SIENA. We evaluated the statistical power in terms of effect size, and the best performance was achieved with BEaST-SIENA (1.2789 for ADNI and 1.095 for OASIS). The absolute difference in PBVC between the scan dataset (volume change from baseline to year 1) and the rescan dataset (volume change from baseline repeat scan to year-1 repeat scan) was also the smallest with BEaST-SIENA compared to the BET-based SIENA variants, and BEaST-SIENA showed the highest correlation when compared to the BET-based SIENA variants. In conclusion, our study shows that BEaST was robust in terms of reproducibility and consistency, and that SIENA's reproducibility and statistical power are improved in multiple datasets when used in combination with BEaST.
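In many atrophy-pipeline comparisons, the effect size referred to above is a standardized group difference such as Cohen's d; whether that matches the exact definition used here is an assumption. A minimal sketch with illustrative PBVC values:

```python
# Cohen's d: standardized mean difference between two groups of PBVC measurements.
import numpy as np

def cohens_d(group_a: np.ndarray, group_b: np.ndarray) -> float:
    na, nb = len(group_a), len(group_b)
    pooled_sd = np.sqrt(((na - 1) * group_a.var(ddof=1) + (nb - 1) * group_b.var(ddof=1))
                        / (na + nb - 2))
    return (group_a.mean() - group_b.mean()) / pooled_sd

pbvc_ad = np.array([-2.1, -1.8, -2.4, -1.9])   # illustrative annual PBVC (%) in AD
pbvc_hc = np.array([-0.5, -0.4, -0.7, -0.6])   # illustrative annual PBVC (%) in controls
print(cohens_d(pbvc_ad, pbvc_hc))
```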
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Summary of five variations of SIENA.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/ (license information was derived automatically).
Graphical, voxel-based, and region-based analyses have become popular approaches to studying neurodegenerative disorders such as Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). These methods have been used previously for the classification or discrimination of AD in subjects in a prodromal stage called stable MCI (MCIs), which does not convert to AD but remains stable over a period of time, and converting MCI (MCIc), which converts to AD, but the results reported across similar studies are often inconsistent. Furthermore, the classification accuracy for MCIs vs. MCIc is limited. In this study, we propose combining different neuroimaging modalities (sMRI, FDG-PET, AV45-PET, DTI, and rs-fMRI) with the apolipoprotein-E genotype to form a multimodal system for the discrimination of AD, and to increase the classification accuracy. Initially, we used two well-known analyses to extract features from each neuroimage for the discrimination of AD: whole-brain parcellation analysis (or region-based analysis) and voxel-wise analysis (or voxel-based morphometry). We also investigated graphical analysis (nodal and group) for all six binary classification groups (AD vs. HC, MCIs vs. MCIc, AD vs. MCIc, AD vs. MCIs, HC vs. MCIc, and HC vs. MCIs). Data for a total of 129 subjects (33 AD, 30 MCIs, 31 MCIc, and 35 HCs) for each imaging modality were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) homepage. These data also include two APOE genotype data points for the subjects. Moreover, we used the 2-mm AICHA atlas with the NiftyReg registration toolbox to extract 384 brain regions from each PET (FDG and AV45) and sMRI image. For the rs-fMRI images, we used the DPARSF toolbox in MATLAB for the automatic extraction of data and the results for REHO, ALFF, and fALFF. We also used the pyClusterROI script for the automatic parcellation of each rs-fMRI image into 200 brain regions. For the DTI images, we used the FSL (Version 6.0) toolbox for the extraction of fractional anisotropy (FA) images to calculate tract-based spatial statistics. Moreover, we used the PANDA toolbox to obtain 50 white-matter-region-parcellated FA images on the basis of the 2-mm JHU-ICBM-labeled template atlas. To integrate the different modalities and their complementary information into one form, and to optimize the classifier, we used the multiple kernel learning (MKL) framework. The obtained results indicated that our multimodal approach yields a significant improvement in accuracy over any single modality alone. The areas under the curve obtained by the proposed method were 97.78, 96.94, 95.56, 96.25, 96.67, and 96.59% for the AD vs. HC, MCIs vs. MCIc, AD vs. MCIc, AD vs. MCIs, HC vs. MCIc, and HC vs. MCIs binary classifications, respectively. Our proposed multimodal method improved the classification results for the MCIs vs. MCIc groups compared with the unimodal classification results. Our study found that the (left/right) precentral region was present in all six binary classification groups (this region can be considered the most significant region). Furthermore, using nodal network topology, we found that FDG, AV45-PET, and rs-fMRI were the most important neuroimages and showed many affected regions relative to other modalities. We also compared our results with recently published results.
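As a rough illustration of the multiple kernel learning idea (one kernel per modality combined into a single kernel for an SVM), here is a sketch with synthetic data and fixed kernel weights; real MKL learns the modality weights jointly, and the feature dimensions are placeholders.

```python
# Combine per-modality kernels with fixed weights and train a precomputed-kernel SVM.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 120
modalities = {                          # synthetic per-modality feature matrices
    "sMRI": rng.normal(size=(n, 100)),
    "FDG-PET": rng.normal(size=(n, 80)),
    "DTI": rng.normal(size=(n, 50)),
}
y = rng.integers(0, 2, size=n)          # AD vs HC labels (synthetic)

weights = {"sMRI": 0.4, "FDG-PET": 0.4, "DTI": 0.2}   # fixed, illustrative weights
K = sum(w * rbf_kernel(modalities[m]) for m, w in weights.items())

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```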