100+ datasets found
  1. Brain MRI ND-5 Dataset

    • ieee-dataport.org
    Updated Aug 23, 2025
    Cite
    Md. Nasif Safwan (2025). Brain MRI ND-5 Dataset [Dataset]. https://ieee-dataport.org/documents/brain-mri-nd-5-dataset
    Explore at:
    Dataset updated
    Aug 23, 2025
    Authors
    Md. Nasif Safwan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    meningiomas

  2. brain-mri-dataset

    • huggingface.co
    Updated Feb 16, 2024
    Cite
    Unique Data (2024). brain-mri-dataset [Dataset]. https://huggingface.co/datasets/UniqueData/brain-mri-dataset
    Explore at:
    Dataset updated
    Feb 16, 2024
    Authors
    Unique Data
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Brain Cancer MRI Object Detection & Segmentation Dataset

    The dataset consists of .dcm files containing brain MRI scans of patients with cancer. The images are labeled by doctors and accompanied by reports in PDF format. The dataset includes 10 studies, taken from different angles, which provide a comprehensive understanding of brain tumor structure.

      MRI study angles in the dataset
      💴 For Commercial Usage: Full version of the dataset includes… See the full description on the dataset page: https://huggingface.co/datasets/UniqueData/brain-mri-dataset.
    
  3. Brain Tumor MRI Dataset

    • ieee-dataport.org
    Updated Apr 30, 2025
    + more versions
    Cite
    Jyotismita Chaki (2025). Brain Tumor MRI Dataset [Dataset]. https://ieee-dataport.org/documents/brain-tumor-mri-dataset
    Explore at:
    Dataset updated
    Apr 30, 2025
    Authors
    Jyotismita Chaki
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is collected from Kaggle ( https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset ). It is a combination of the following three datasets: figshare, the SARTAJ dataset, and Br35H.

  4. RIDER NEURO MRI

    • cancerimagingarchive.net
    dicom, n/a
    Cite
    The Cancer Imaging Archive, RIDER NEURO MRI [Dataset]. http://doi.org/10.7937/K9/TCIA.2015.VOSN3HN1
    Explore at:
    Available download formats: dicom, n/a
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Nov 8, 2011
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    RIDER Neuro MRI contains imaging data on 19 patients with recurrent glioblastoma who underwent repeat imaging sets. These images were obtained approximately 2 days apart (with the exception of one patient, RIDER Neuro MRI-1086100996, whose images were obtained one day apart).

    DCE‐MRI: All 19 patients had repeat dynamic contrast‐enhanced MRI (DCE‐MRI) datasets on the same 1.5T imaging magnet. On the basis of T2‐weighted images, technologists chose 16 image locations using 5 mm thick contiguous slices. For T1 mapping, multi‐flip 3D FLASH images were obtained using flip angles of 5, 10, 15, 20, 25 and 30 degrees, TR of 4.43 ms, TE of 2.1 ms, and 2 signal averages. Dynamic images were obtained during intravenous injection of 0.1 mmol/kg of Magnevist at 3 cc/second, started 24 seconds after the scan had begun. The dynamic images were acquired using a 3D FLASH technique with a flip angle of 25 degrees, TR of 3.8 ms, TE of 1.8 ms, and a 1 x 1 x 5 mm voxel size. The 16-slice imaging set was obtained every 4.8 sec.
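The multi-flip FLASH acquisition described above is the standard input for variable flip angle (VFA) T1 mapping. As a hedged illustration (the dataset ships no analysis code; only TR and the flip angles come from the description, while the tissue T1 and M0 values below are invented), the usual linearized SPGR fit can be sketched as:

```python
import numpy as np

# TR and flip angles from the acquisition description; T1_true and M0 are
# hypothetical demonstration values, not RIDER measurements.
TR = 4.43                                        # ms
flips = np.deg2rad([5, 10, 15, 20, 25, 30])      # multi-flip FLASH angles
T1_true, M0 = 1200.0, 1000.0

# Simulate noiseless spoiled gradient echo (FLASH) signals
E1 = np.exp(-TR / T1_true)
S = M0 * np.sin(flips) * (1 - E1) / (1 - E1 * np.cos(flips))

# Linearized VFA fit: S/sin(a) = E1 * (S/tan(a)) + M0*(1 - E1)
slope, intercept = np.polyfit(S / np.tan(flips), S / np.sin(flips), 1)
T1_est = -TR / np.log(slope)   # recovers ~1200 ms in this noiseless case
```

In practice the fit is run voxel-wise on the six registered flip-angle volumes, with noise making the estimate approximate rather than exact.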

    DTI: Seventeen of the 19 patients also underwent repeat diffusion tensor imaging (DTI). Whole-brain DTI was obtained using TR 6000 ms, TE 100 ms, 90 degree flip angle, 4 signal averages, matrix 128 x 128, 1.72 x 1.72 x 5 mm voxel size, 12 tensor directions, iPAT 2, and a b value of 1000 sec/mm2.

    Contrast‐enhanced 3D FLASH: All 19 patients underwent whole brain 3D FLASH imaging in the sagittal plane after the administration of Magnevist. For this sequence, TR was 8.6 ms, TE 4.1 ms, 20 degree flip angle, 1 signal average, matrix 256 x 256; 1mm isotropic voxel size.

    Contrast‐enhanced 3D FLAIR: All 17 patients who had repeat DTI sets also had 3D FLAIR sequences in the sagittal plane after the administration of Magnevist. For this sequence, the TR was 6000 ms, TE 353 ms, and TI 2200ms; 180 degree flip angle, 1 signal average, matrix 256 x 216; 1 mm isotropic voxel size. Note: before transmission to NCIA, all image sets with 1mm isotropic voxel size were “defaced” using MIPAV software or manually.


    About the RIDER project

    The Reference Image Database to Evaluate Therapy Response (RIDER) is a targeted data collection used to generate an initial consensus on how to harmonize data collection and analysis for quantitative imaging methods applied to measure the response to drug or radiation therapy. The National Cancer Institute (NCI) has exercised a series of contracts with specific academic sites for collection of repeat "coffee break," longitudinal phantom, and patient data for a range of imaging modalities (currently computed tomography [CT], positron emission tomography [PET]/CT, dynamic contrast-enhanced magnetic resonance imaging [DCE-MRI], and diffusion-weighted [DW] MRI) and organ sites (currently lung, breast, and neuro). The methods for data collection, analysis, and results are described in the Combined RIDER White Paper Report (Sept 2008).

    The long-term goal is to provide a resource to permit harmonized methods for data collection and analysis across different commercial imaging platforms to support multi-site clinical trials, using imaging as a biomarker for therapy response. Thus, the database should permit an objective comparison of methods for data collection and analysis as a national and international resource, as described in the first RIDER white paper report (2006).

  5. brain tumor dataset

    • figshare.com
    zip
    Updated Dec 21, 2024
    Cite
    Jun Cheng (2024). brain tumor dataset [Dataset]. http://doi.org/10.6084/m9.figshare.1512427.v8
    Explore at:
    Available download formats: zip
    Dataset updated
    Dec 21, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Jun Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This brain tumor dataset contains 3064 T1-weighted contrast-enhanced images with three kinds of brain tumor. Detailed information about the dataset can be found in the README file. The README has been updated to add the image acquisition protocol and MATLAB code to convert the .mat files to jpg images.
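The README mentions MATLAB code for the .mat-to-jpg conversion. A rough Python equivalent is sketched below; note the field name "image" and the synthetic array are stand-ins (the actual field layout of this dataset's .mat files is documented in its README, not reproduced here):

```python
import os
import tempfile

import numpy as np
from PIL import Image
from scipy.io import loadmat, savemat

# Create a synthetic .mat file standing in for one 12-bit MRI slice.
work = tempfile.mkdtemp()
mat_path = os.path.join(work, "slice.mat")
savemat(mat_path, {"image": (np.random.rand(64, 64) * 4095).astype(np.uint16)})

# Load the array back, window it to 8-bit, and save as JPEG.
data = loadmat(mat_path)["image"].astype(np.float64)
rng = float(data.max() - data.min()) or 1.0      # avoid divide-by-zero
img8 = (255 * (data - data.min()) / rng).astype(np.uint8)
jpg_path = os.path.join(work, "slice.jpg")
Image.fromarray(img8).save(jpg_path)
```

The min-max windowing step matters because JPEG is 8-bit: writing raw 12-bit intensities without rescaling would clip or wrap the values.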

  6. Augmented Alzheimer MRI Dataset

    • kaggle.com
    Updated Sep 20, 2022
    Cite
    uraninjo (2022). Augmented Alzheimer MRI Dataset [Dataset]. https://www.kaggle.com/datasets/uraninjo/augmented-alzheimer-mri-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 20, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    uraninjo
    License

    http://www.gnu.org/licenses/lgpl-3.0.html

    Description

    The data consists of MRI images with four classes, present in both the training and testing sets:

    1. Mild Demented
    2. Moderate Demented
    3. Non Demented
    4. Very Mild Demented

    The data contains two folders: one holds the augmented images and the other the originals. The originals can be used as a validation or test set.

    Data is augmented from an existing dataset. Original images can be seen in Data Explorer. https://www.kaggle.com/datasets/tourist55/alzheimers-dataset-4-class-of-images

    My purpose in publishing this dataset is to enable the use of the augmented images as well as the originals; the importance of augmentation can be a little underrated.

  7. Brain tumor multimodal image (CT & MRI)

    • kaggle.com
    Updated Dec 3, 2024
    Cite
    Md. Golam Murtoza (2024). Brain tumor multimodal image (CT & MRI) [Dataset]. https://www.kaggle.com/datasets/murtozalikhon/brain-tumor-multimodal-image-ct-and-mri
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Kaggle
    Authors
    Md. Golam Murtoza
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains a collection of multimodal medical images, specifically CT (Computed Tomography) and MRI (Magnetic Resonance Imaging) scans, for brain tumor detection and analysis. It is designed to assist researchers and healthcare professionals in developing AI models for the automatic detection, classification, and segmentation of brain tumors. The dataset features images from both modalities, providing comprehensive insight into the structural and functional variations in the brain associated with various types of tumors.

    The dataset includes high-resolution CT and MRI images captured from multiple patients, with each image labeled with the corresponding tumor type (e.g., glioma, meningioma, etc.) and its location within the brain. This combination of CT and MRI images aims to leverage the strengths of both imaging techniques: CT scans for clear bone structure visualization and MRI for soft tissue details, enabling a more accurate analysis of brain tumors.

    I collected these data from different sources and modified them for maximum accuracy.

    Brain Tumor CT scan Images source

    1. CT Brain Segmentation Computer Vision Project -- https://universe.roboflow.com/joshua-zgc7b/ct-brain-segmentation
    2. CT and MRI brain scans -- https://www.kaggle.com/datasets/darren2020/ct-to-mri-cgan
    3. CT Head Scans(jpg files) -- https://www.kaggle.com/datasets/clarksaben/ct-head-scans
    4. Head CT Images for Classification -- https://www.kaggle.com/datasets/nipaanjum/head-ct-images-for-classification
    5. Anonymous brain -- from private data
    6. Unpaired MR-CT Brain Dataset for Unsupervised Image Translation -- https://data.mendeley.com/datasets/z4wc364g79/1

    Brain Tumor MRI images source

    1. Brain Tumor (MRI Scans) -- https://www.kaggle.com/datasets/rm1000/brain-tumor-mri-scans
    2. Brain Tumor MRIs -- https://www.kaggle.com/datasets/vinayjayanti/brain-tumor-mris
    3. Siardataset -- https://www.kaggle.com/datasets/masoumehsiar/siardataset
    4. Brain tumors 256x256 -- https://www.kaggle.com/datasets/thomasdubail/brain-tumors-256x256
    5. Brain Tumor MRI Image Classification Dataset -- https://www.kaggle.com/datasets/iashiqul/brain-tumor-mri-image-classification-dataset
    6. Brain Tumor MRI (yes or no) -- https://www.kaggle.com/datasets/mohamada2274/brain-tumor-mri-yes-or-no
    7. BRAIN TUMOR CLASS CLASS Computer Vision Project -- https://universe.roboflow.com/college-sf5ih/brain-tumor-class-class
    8. Brain Tumor Detection Computer Vision Project -- https://universe.roboflow.com/tuan-nur-afrina-zahira/brain-tumor-detection-bmmqz
    9. Tumor Detection Computer Vision Project -- https://universe.roboflow.com/brain-tumor-detection-wsera/tumor-detection-ko5jp
  8. Lumbar Spine MRI Dataset

    • data.mendeley.com
    • opendatalab.com
    Updated Apr 3, 2019
    + more versions
    Cite
    Sud Sudirman (2019). Lumbar Spine MRI Dataset [Dataset]. http://doi.org/10.17632/k57fr854j2.2
    Explore at:
    Dataset updated
    Apr 3, 2019
    Authors
    Sud Sudirman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data set contains anonymised clinical MRI studies, or sets of scans, of 515 patients with symptomatic back pain. Each patient can have one or more MRI studies associated with them. Each study contains slices, i.e., individual images taken from either the sagittal or axial view, of the lowest three vertebrae and the lowest three IVDs. The axial-view slices are mainly taken from the last three IVDs, including the one between the last vertebra and the sacrum. The orientation of the slices of the last IVD is made to follow the spine curve, whereas those of the other IVDs are usually made in blocks, i.e., parallel to each other. There are between four and five slices per IVD, and they run from the top of the IVD towards its bottom. Many of the top and bottom slices cut through the vertebrae, leaving between one and three slices that cut the IVD cleanly and show purely the image of that IVD. In most cases, the total number of slices in the axial view ranges from 12 to 15; however, in some cases there may be up to 20 slices because the study contains slices of more than the last three vertebrae. The scans in the sagittal view also vary, but all contain at least the last seven vertebrae and the sacrum. While the number of vertebrae varies, each scan always includes the first two sacral links.

    There are a total of 48,345 MRI slices in our dataset. The majority of the slices have an image resolution of 320x320 pixels; however, there are slices from three studies with 320x310 pixel resolution. The pixels in all slices have 12-bit precision, which is higher than standard 8-bit greyscale images. For all axial-view slices, the slice thickness is uniformly 4 mm with a centre-to-centre distance between adjacent slices of 4.4 mm. The horizontal and vertical pixel spacing is 0.6875 mm uniformly across all axial-view slices.
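The geometry figures above translate directly into field-of-view and storage estimates. A quick back-of-envelope check (assuming the 12-bit values are stored as 16-bit integers, which is typical for DICOM but not stated in the description):

```python
# Figures taken from the dataset description above.
n_slices = 48_345
rows = cols = 320
pixel_spacing_mm = 0.6875

# In-plane field of view for a 320-pixel axial slice.
fov_mm = cols * pixel_spacing_mm             # 220.0 mm

# Raw pixel storage, assuming 2 bytes per 12-bit pixel (assumption).
raw_gb = n_slices * rows * cols * 2 / 1e9    # roughly 9.9 GB
```

So each axial slice covers about a 22 cm square, and the whole collection is on the order of 10 GB of raw pixel data before any DICOM metadata or compression.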

    The majority of the MRI studies were taken with the patient in the Head-First-Supine position, with the rest taken in the Feet-First-Supine position. Each study can last between 15 and 45 minutes, and a patient may have one or more studies associated with them, taken at different times or a few days apart.

    You can download and read the research papers detailing our methodology on boundary delineation for lumbar spinal stenosis detection using the URLs provided in the Related Links at the end of this page. You can also check out other dataset and source code related to this program from that section.

    We kindly request you to cite our papers when using our data or program in your research.

  9. Brain MRI Dataset

    • unidata.pro
    dicom, json
    Cite
    Unidata L.L.C-FZ, Brain MRI Dataset [Dataset]. https://unidata.pro/datasets/brain-mri-image-dicom/
    Explore at:
    Available download formats: dicom, json
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Unidata’s Brain MRI dataset offers unique MRI scans and radiologist reports, aiding AI in detecting and diagnosing brain pathologies

  10. Large dataset of infancy and early childhood brain MRIs (T1w and T2w)

    • zenodo.org
    zip
    Updated Aug 2, 2023
    + more versions
    Cite
    Tugba Akinci D'Antonoli; Ramona-Alexandra Todea; Alexandre Datta; Bram Stieltjes; Nora Leu; Friederike Prüfer; Jakob Wasserthal (2023). Large dataset of infancy and early childhood brain MRIs (T1w and T2w) [Dataset]. http://doi.org/10.5281/zenodo.8055666
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 2, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tugba Akinci D'Antonoli; Ramona-Alexandra Todea; Alexandre Datta; Bram Stieltjes; Nora Leu; Friederike Prüfer; Jakob Wasserthal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains 833 brain MRI images (T1w and T2w) from infancy and early childhood. The age of the subjects is between 0 and 36 months. It contains a wide range of pathologies as well as healthy subjects. It is quite a diverse dataset acquired in clinical routine over several years (images acquired with the same scanner but different protocols).

    The T1w images are resampled to the shape of the T2w images. Then both are skull stripped.

    All details about this dataset can be found in the paper "Development and Evaluation of Deep Learning Models for Automated Estimation of Myelin Maturation Using Pediatric Brain MRI Scans". If you use this dataset please cite our paper: https://pubs.rsna.org/doi/10.1148/ryai.220292

    The metadata can be found in the table meta.csv.

    Description of columns:
    myelinisation: myelin maturation status in terms of delayed, normal or accelerated according to evaluation by an expert radiologist. For more detail please see the paper.
    age: the chronological age (in months) since birth.
    age_corrected: the chronological age (in months) corrected for premature birth by subtracting the number of months the baby was born before 37 weeks of gestation; hence a preterm newborn can get a negative age.
    doctor_predicted_age: the predicted age (in months) of the myelin maturation by expert radiologist (subjects with delayed myelin maturation will get lower values than their chronological age).
    diagnosis: list of pathologies found in this dataset according to expert radiology reports.
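A hedged sketch of how meta.csv might be consumed with pandas. The column names follow the description above, but the row values are invented for illustration and are not taken from the actual dataset:

```python
import pandas as pd

# Toy stand-in for meta.csv; values are invented, columns follow the
# description in the dataset documentation above.
meta = pd.DataFrame({
    "myelinisation": ["normal", "delayed", "accelerated"],
    "age": [12, 24, 6],
    "age_corrected": [11, 24, 6],
    "doctor_predicted_age": [12, 18, 8],
})

# Delayed subjects get a predicted myelin age below chronological age,
# so the gap below is positive for delayed maturation.
meta["delay_months"] = meta["age_corrected"] - meta["doctor_predicted_age"]
delayed = meta[meta["myelinisation"] == "delayed"]
```

Analyses against the real file would start from `pd.read_csv("meta.csv")` instead of the toy frame.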

  11. MRI Image Dataset

    • shaip.com
    • ky.shaip.com
    • +3more
    json
    Updated Feb 21, 2023
    Cite
    Shaip (2023). MRI Image Dataset [Dataset]. https://www.shaip.com/offerings/mri-images-datasets/
    Explore at:
    Available download formats: json
    Dataset updated
    Feb 21, 2023
    Dataset authored and provided by
    Shaip
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Shaip offers best-in-class MRI scan image datasets for accurately training machine learning models. We offer MRI scan datasets for different body parts such as the brain, abdomen, breast, head, hip, knee, spine, and more.

  12. Alzheimer_MRI

    • huggingface.co
    • opendatalab.com
    Updated Jul 4, 2023
    Cite
    Falahgs Saleh (2023). Alzheimer_MRI [Dataset]. https://huggingface.co/datasets/Falah/Alzheimer_MRI
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 4, 2023
    Authors
    Falahgs Saleh
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Alzheimer_MRI Disease Classification Dataset

    The Falah/Alzheimer_MRI Disease Classification dataset is a valuable resource for researchers and healthcare applications. This dataset focuses on the classification of Alzheimer's disease based on MRI scans. The dataset consists of brain MRI images labeled into four categories:

    '0': Mild_Demented, '1': Moderate_Demented, '2': Non_Demented, '3': Very_Mild_Demented
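When training a classifier against this dataset, the integer-to-name mapping above can be captured directly; a trivial sketch (the constant and helper names are our own, not part of the dataset):

```python
# Integer labels to class names, as listed in the dataset description above.
ALZHEIMER_LABELS = {
    0: "Mild_Demented",
    1: "Moderate_Demented",
    2: "Non_Demented",
    3: "Very_Mild_Demented",
}

def label_name(idx: int) -> str:
    """Translate a predicted class index into its human-readable name."""
    return ALZHEIMER_LABELS[idx]
```

This keeps model outputs (argmax indices) and report text consistent with the dataset's labeling.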

      Dataset Information
    

    Train split:

    Name: train Number of… See the full description on the dataset page: https://huggingface.co/datasets/Falah/Alzheimer_MRI.

  13. OpenBHB: a Multi-Site Brain MRI Dataset for Age Prediction and Debiasing

    • ieee-dataport.org
    Updated Feb 5, 2025
    + more versions
    Cite
    Benoit Dufumier (2025). OpenBHB: a Multi-Site Brain MRI Dataset for Age Prediction and Debiasing [Dataset]. https://ieee-dataport.org/open-access/openbhb-multi-site-brain-mri-dataset-age-prediction-and-debiasing
    Explore at:
    Dataset updated
    Feb 5, 2025
    Authors
    Benoit Dufumier
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GSP

  14. The University of California San Francisco Preoperative Diffuse Glioma MRI

    • cancerimagingarchive.net
    • stage.cancerimagingarchive.net
    bval and zip +4
    Updated Apr 7, 2023
    Cite
    The Cancer Imaging Archive (2023). The University of California San Francisco Preoperative Diffuse Glioma MRI [Dataset]. http://doi.org/10.7937/tcia.bdgf-8v37
    Explore at:
    Available download formats: bvec and zip, bval and zip, n/a, csv, nifti and bvec
    Dataset updated
    Apr 7, 2023
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    May 30, 2025
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    Introduction

    MRI-based artificial intelligence (AI) research on patients with brain gliomas has been rapidly increasing in popularity in recent years in part due to a growing number of publicly available MRI datasets. Notable examples include The Cancer Genome Atlas Glioblastoma dataset (TCGA-GBM) consisting of 262 subjects and the International Brain Tumor Segmentation (BraTS) challenge dataset consisting of 542 subjects (including 243 preoperative cases from TCGA-GBM). The public availability of these glioma MRI datasets has fostered the growth of numerous emerging AI techniques including automated tumor segmentation, radiogenomics, and MRI-based survival prediction. Despite these advances, existing publicly available glioma MRI datasets have been largely limited to only 4 MRI contrasts (T2, T2/FLAIR, and T1 pre- and post-contrast) and imaging protocols vary significantly in terms of magnetic field strength and acquisition parameters. Here we present the University of California San Francisco Preoperative Diffuse Glioma MRI (UCSF-PDGM) dataset. The UCSF-PDGM dataset includes 501 subjects with histopathologically-proven diffuse gliomas who were imaged with a standardized 3 Tesla preoperative brain tumor MRI protocol featuring predominantly 3D imaging, as well as advanced diffusion and perfusion imaging techniques. The dataset also includes isocitrate dehydrogenase (IDH) mutation status for all cases and O[6]-methylguanine-DNA methyltransferase (MGMT) promotor methylation status for World Health Organization (WHO) grade III and IV gliomas. The UCSF-PDGM has been made publicly available in the hopes that researchers around the world will use these data to continue to push the boundaries of AI applications for diffuse gliomas.

    Methods

    Patient Population

    Data collection was performed in accordance with relevant guidelines and regulations and was approved by the University of California San Francisco institutional review board with a waiver for consent. The dataset population consisted of 501* adult patients with histopathologically confirmed grade II-IV diffuse gliomas who underwent preoperative MRI, initial tumor resection, and tumor genetic testing at a single medical center between 2015 and 2021. Patients with any prior history of brain tumor treatment were excluded; however, history of tumor biopsy was not considered an exclusion criterion.

    Genetic Biomarker Testing

    All subjects’ tumors were tested for IDH mutations by genetic sequencing of tissue acquired during biopsy or resection. All grade III and IV tumors were tested for MGMT methylation status using a methylation sensitive quantitative PCR assay.

    Study participant demographic data

    The 501* cases included in the UCSF-PDGM include 55 (11%) grade II, 42 (9%) grade III, and 403 (80%) grade IV tumors. There was a male predominance for all tumor grades (56%, 60%, and 60%, respectively for grades II-IV). IDH mutations were identified in a majority of grade II (83%) and grade III (67%) tumors and a small minority of grade IV tumors (8%). MGMT promoter hypermethylation was detected in 63% of grade IV gliomas and was not tested for in a majority of lower grade gliomas. 1p/19q codeletion was detected in 20% of grade II tumors and a small minority of grade III (5%) and IV (<1%) tumors. Tabulated details and glossary are available in the Data Access and Detailed Description tabs below.

    Image Acquisition

    All preoperative MRI was performed on a 3.0 tesla scanner (Discovery 750, GE Healthcare, Waukesha, Wisconsin, USA) and a dedicated 8-channel head coil (Invivo, Gainesville, Florida, USA). The imaging protocol included 3D T2-weighted, T2/FLAIR-weighted, susceptibility-weighted (SWI), diffusion-weighted (DWI), pre- and post-contrast T1-weighted images, 3D arterial spin labeling (ASL) perfusion images, and 2D 55-direction high angular resolution diffusion imaging (HARDI). Over the study period, two gadolinium-based contrast agents were used: gadobutrol (Gadovist, Bayer, LOC) at a dose of 0.1 mL/kg and gadoterate (Dotarem, Guerbet, Aulnay-sous-Bois, France) at a dose of 0.2 mL/kg.

    Image Pre-Processing

    HARDI data were eddy current corrected and processed using the Eddy and DTIFIT modules from FSL 6.0.2 yielding isotropic diffusion weighted images (DWI) and several quantitative diffusivity maps: mean diffusivity (MD), axial diffusivity (AD), radial diffusivity (RD), and fractional anisotropy (FA). Eddy correction was performed with outlier replacement on and topup correction off. DTIFIT was performed with simple least squares regression. Each image contrast was registered and resampled to the 3D space defined by the T2/FLAIR image (1 mm isotropic resolution) using automated non-linear registration (Advanced Normalization Tools). Resampled co-registered data were then skull stripped using a previously described and publicly available deep-learning algorithm: https://www.github.com/ecalabr/brain_mask/.

    Tumor Segmentation

    Multicompartment tumor segmentation of study data was undertaken as part of the 2021 BraTS challenge. Briefly, image data first underwent automated segmentation using an ensemble model consisting of prior BraTS challenge winning segmentation algorithms. Images were then manually corrected by trained radiologists and approved by 2 expert reviewers. Segmentation included three major tumor compartments: enhancing tumor, non-enhancing/necrotic tumor, and surrounding FLAIR abnormality (sometimes referred to as edema).

    The UCSF-PDGM adds to an existing body of publicly available diffuse glioma MRI datasets that are commonly used in AI research applications. As MRI-based AI research applications continue to grow, new data are needed to foster development of new techniques and increase the generalizability of existing algorithms. The UCSF-PDGM not only significantly increases the total number of publicly available diffuse glioma MRI cases, but also provides a unique contribution in terms of MRI technique. The inclusion of 3D sequences and advanced MRI techniques like ASL and HARDI provides a new opportunity for researchers to explore the potential utility of cutting-edge clinical diagnostics for AI applications. In addition, these advanced imaging techniques may prove useful for radiogenomic studies focused on identification of IDH mutations or MGMT promoter methylation.

    The UCSF-PDGM dataset, particularly when combined with existing publicly available datasets, has the potential to fuel the next phase of radiologic AI research on diffuse gliomas. However, the UCSF-PDGM dataset’s potential will only be realized if the radiology AI research community takes advantage of this new data resource. We hope that this dataset sparks inspiration in the next generation of AI researchers, and we look forward to the new techniques and discoveries that the UCSF-PDGM will generate.

  15. Brain Tumor MRI Dataset

    • cubig.ai
    zip
    Updated May 28, 2025
    Cite
    CUBIG (2025). Brain Tumor MRI Dataset [Dataset]. https://cubig.ai/store/products/295/brain-tumor-mri-dataset
    Explore at:
    Available download formats: zip
    Dataset updated
    May 28, 2025
    Dataset authored and provided by
    CUBIG
    License

    https://cubig.ai/store/terms-of-service

    Measurement technique
    Privacy-preserving data transformation via differential privacy, Synthetic data generation using AI techniques for model training
    Description

    1) Data Introduction • The Brain Tumor MRI Dataset is a collection of Magnetic Resonance Imaging (MRI) images curated for the classification of brain tumors. The dataset consists of MRI scans categorized into four classes: glioma, meningioma, pituitary tumor, and no tumor.

    2) Data Utilization (1) Characteristics of the Brain Tumor MRI Dataset: • This dataset has been constructed as training data for artificial intelligence (AI) models aimed at the early detection and precise classification of brain tumors. It helps improve the accuracy and efficiency of medical diagnoses. • Each image is labeled with the tumor type, making the dataset well-suited for multiclass classification tasks.

    (2) Applications of the Brain Tumor MRI Dataset: • Development of tumor classification models: The dataset can be used to develop AI systems that automatically classify the type of brain tumor. • Detection of tumor location and boundaries: The dataset can be utilized to train models that not only detect the presence of a tumor but also identify its location and size, contributing to effective pre-surgical planning.

  16. Medical Imagining (CT scan, MRI, X-ray, and Microscopic Imagery) Data

    • data.mendeley.com
    Updated Jul 11, 2024
    + more versions
    Cite
    Sibtain Syed (2024). Medical Imagining (CT scan, MRI, X-ray, and Microscopic Imagery) Data [Dataset]. http://doi.org/10.17632/5kbjrgsncf.3
    Explore at:
    Dataset updated
    Jul 11, 2024
    Authors
    Sibtain Syed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data comprises five datasets of medical images collected by the contributors, which can be used for classifying lung cancer, bone fracture, brain tumor, skin lesions, and renal malignancy, respectively. Each dataset includes images of multiple diseases and malignancies. Classification can be performed using the ResNet50 CNN architecture and other DCNN models. The data has also been used in a research article by the contributor.

  17. h

    FOMO-MRI

    • huggingface.co
    Updated Jul 23, 2025
    Cite
    FOMO25 Challenge MICCAI 2025 (2025). FOMO-MRI [Dataset]. https://huggingface.co/datasets/FOMO25/FOMO-MRI
    Explore at:
    Dataset updated
    Jul 23, 2025
    Dataset authored and provided by
    FOMO25 Challenge MICCAI 2025
    License

    https://choosealicense.com/licenses/other/

    Description

    FOMO-60K: Brain MRI Dataset for Large-Scale Self-Supervised Learning with Clinical Data

    Dataset paper preprint: A large-scale heterogeneous 3D magnetic resonance brain imaging dataset for self-supervised learning. https://arxiv.org/pdf/2506.14432. This repo contains FOMO-60K, a large-scale dataset which includes MRIs from the clinic, containing 60K+ brain MRI scans. This dataset is released in conjunction with the FOMO25: Foundation Model Challenge for Brain MRI hosted at MICCAI… See the full description on the dataset page: https://huggingface.co/datasets/FOMO25/FOMO-MRI.

  18. u

    Spine MRI Dataset

    • unidata.pro
    dicom, jpg
    Cite
    Unidata L.L.C-FZ, Spine MRI Dataset [Dataset]. https://unidata.pro/datasets/spine-mri-image-dicom/
    Explore at:
    Available download formats: jpg, dicom
    Dataset authored and provided by
    Unidata L.L.C-FZ
    Description

    Unidata Spine MRI dataset provides comprehensive spinal scans, improving AI’s ability to detect and diagnose spinal conditions

  19. Data from: T2-weighted Kidney MRI Segmentation

    • zenodo.org
    • kaggle.com
    csv, zip
    Updated Aug 17, 2021
    Cite
    Alexander J Daniel; Charlotte E Buchanan; Thomas Allcock; Daniel Scerri; Eleanor F Cox; Benjamin L Prestwich; Susan T Francis (2021). T2-weighted Kidney MRI Segmentation [Dataset]. http://doi.org/10.5281/zenodo.5153568
    Explore at:
    Available download formats: zip, csv
    Dataset updated
    Aug 17, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Alexander J Daniel; Charlotte E Buchanan; Thomas Allcock; Daniel Scerri; Eleanor F Cox; Benjamin L Prestwich; Susan T Francis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A dataset containing 100 T2-weighted abdominal MRI scans and manually defined kidney masks. This MRI sequence is designed to optimise contrast between the kidneys and surrounding tissue to increase the accuracy of segmentation. Half of the scans were acquired from healthy control subjects and the other half from Chronic Kidney Disease (CKD) patients. Ten of the subjects were scanned five times in the same session to enable assessment of the precision of Total Kidney Volume (TKV) measurements. More information about each subject can be found in the included csv file. This dataset was used to train a Convolutional Neural Network (CNN) to automatically segment the kidneys.

    For more information about the dataset please refer to this article.

    For an executable that allows automated segmentation of the kidneys from this dataset please refer to this software.
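The repeat scans described above support a simple precision analysis. A minimal sketch with synthetic volumes — the real values live in the dataset's csv file, whose column names are not documented here, so both the numbers and the column names below are illustrative:

```python
# Sketch: assessing Total Kidney Volume (TKV) repeatability from
# subjects scanned five times each. Volumes are synthetic; the
# "subject"/"tkv_ml" column names are assumptions, not the csv schema.
import pandas as pd

df = pd.DataFrame({
    "subject": ["sub01"] * 5 + ["sub02"] * 5,
    "tkv_ml":  [310.2, 308.9, 312.1, 309.5, 311.0,
                402.7, 399.8, 401.5, 404.1, 400.9],
})

# Within-subject coefficient of variation (%) — a common precision metric.
cv = df.groupby("subject")["tkv_ml"].agg(lambda v: 100 * v.std() / v.mean())
print(cv.round(2))
```

A sub-1% coefficient of variation across repeat scans would indicate highly repeatable TKV measurements.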

  20. Postnatal Affective MRI Dataset

    • openneuro.org
    Updated Sep 12, 2020
    Cite
    Heidemarie Laurent, PhD; Megan K. Finnegan; Katherine Haigler (2020). Postnatal Affective MRI Dataset [Dataset]. http://doi.org/10.18112/openneuro.ds003136.v1.0.0
    Explore at:
    Dataset updated
    Sep 12, 2020
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Heidemarie Laurent, PhD; Megan K. Finnegan; Katherine Haigler
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Postnatal Affective MRI Dataset

    Authors Heidemarie Laurent, Megan K. Finnegan, and Katherine Haigler

    The Postnatal Affective MRI Dataset (PAMD) includes MRI and psych data from 25 mothers at three months postnatal, with additional psych data collected at three additional timepoints (six, twelve, and eighteen months postnatal). Mother-infant dyad psychosocial tasks and cortisol samples were also collected at all four timepoints, but this data is not included in this dataset. In-scanner tasks involved viewing own- and other-infant affective videos and viewing and labeling adult affective faces. This repository includes de-identified MRI, in-scanner task, demographic, and psych data from this study.

    Citation Laurent, H., Finnegan, M. K., & Haigler, K. (2020). Postnatal Affective MRI Dataset. OpenNeuro. Retrieved from OpenNeuro.org.

    Acknowledgments Saumya Agrawal was instrumental in getting the PAMD dataset into a BIDS-compliant structure.

    Funding This work was supported by the Society for Research in Child Development Victoria Levin Award "Early Calibration of Stress Systems: Defining Family Influences and Health Outcomes" to Heidemarie Laurent and by the University of Oregon College of Arts and Sciences.

    Contact For questions about this dataset or to request access to alcohol- and tobacco-related psych data, please contact Dr. Heidemarie Laurent, hlaurent@illinois.edu.

    References Laurent, H. K., Wright, D., & Finnegan, M. K. (2018). Mindfulness-related differences in neural response to own-infant negative versus positive emotion contexts. Developmental Cognitive Neuroscience 30: 70-76. https://doi.org/10.1016/j.dcn.2018.01.002.

    Finnegan, M. K., Kane, S., Heller, W., & Laurent, H. (2020). Mothers' neural response to valenced infant interactions predicts postnatal depression and anxiety. PLoS One (under review).

    MRI Acquisition The PAMD dataset was acquired in 2015 at the University of Oregon Robert and Beverly Lewis Center for Neuroimaging with a 3T Siemens Allegra 3 magnet. A standard 32-channel phased-array birdcage coil was used to acquire data from the whole brain. Sessions began with a shimming routine to optimize signal-to-noise ratio, followed by a fast localizer scan (FISP) and Siemens Autoalign routine, a field map, then the four functional runs and the anatomical scan.

    Anatomical: T1-weighted 3D MPRAGE sequence, TI=1100 ms, TR=2500 ms, TE=3.41 ms, flip angle=7°, 176 sagittal slices, 1.0 mm thick, 256×176 matrix, FOV=256 mm.

    Fieldmap: gradient echo sequence, TR=0.4 ms, TE=0.00738 ms, deltaTE=2.46 ms, 4 mm thick, 64×64×32×2 matrix.

    Task: T2-weighted gradient echo sequence, TR=2000 ms, TE=30 ms, flip angle=90°, 32 contiguous slices acquired ascending and interleaved, 4 mm thick, 64×64 voxel matrix, 226 vols per run.
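The task parameters above allow a quick sanity check of run length, which can be cross-checked against the run durations reported in the task descriptions below:

```python
# Sanity check: functional run duration implied by the task sequence
# parameters above (TR = 2000 ms, 226 volumes per run).
TR_S = 2.0     # repetition time in seconds
N_VOLS = 226   # volumes per functional run

run_seconds = TR_S * N_VOLS
print(run_seconds / 60)  # minutes per run
```

226 volumes at a 2 s TR gives 452 s, i.e. just over 7.5 minutes per run, consistent with the two 7.5-minute runs described for the infant task.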

    Participants Mothers (n=25) of 3-month-old infants were recruited from the Women, Infants, and Children program and other community agencies serving low-income women in a midsize Pacific Northwest city. Mothers' ages ranged from 19 to 33 (M=26.4, SD=3.8). Most mothers were Caucasian (72%, 12% Latina, 8% Asian American, 8% other) and married or living with a romantic partner (88%). Although most reported some education past high school (84%), only 24% had completed college or received a graduate degree, and their median household income was between $20,000 and $29,999. For more than half of the mothers (56%), this was their first child (36% second child, 8% third child). Most infants were born on time (4% before 37 weeks and 8% after 41 weeks of pregnancy), and none had serious health problems. A vaginal delivery was reported by 56% of mothers, with 88% breastfeeding and 67% bed-sharing with their infant at the time of assessment. Over half of the mothers (52%) reported having engaged in some form of contemplative practice (mostly yoga and only 8% indicated some form of meditation), and 31% reported currently engaging in that practice. All women gave informed consent prior to participation, and all study procedures were approved by the University of Oregon Institutional Review Board. Due to a task malfunction, participant 178's scanning session was split over two days, with the anatomical acquired in ses-01, and the field maps and tasks acquired in ses-02.

    Study overview Mothers visited the lab to complete assessments at four timepoints postnatal: the first session occurred when mothers were approximately three months postnatal (T1), the second session at approximately six months postnatal (T2), the third session at approximately twelve months postnatal (T3), and the fourth and last session at approximately eighteen months postnatal (T4). MRI scans were acquired shortly after their first session (T1).

    Assessment data Assessments collected during sessions include demographic, relationship, attachment, mental health, and infant-related questionnaires. For a full list of included measures and timepoints at which they were acquired, please refer to PAMD_codebook.tsv in the phenotype folder. Data has been made available and included in the phenotype folder as 'PAMD_T1_psychdata', 'PAMD_T2_psychdata', 'PAMD_T3_psychdata', 'PAMD_T4_psychdata'. To protect participants' privacy, all identifiers and questions relating to drugs or alcohol have been removed. If you would like access to drug- and alcohol-related questions, please contact the principal investigator, Dr. Heidemarie Laurent, to request access. Assessment data will be uploaded shortly.

    Post-scan ratings After the scan session, mothers watched all of the infant videos and rated the infant's and their own emotional valence and intensity for each video. For valence, mothers were asked "In this video clip, how positive or negative is your baby's emotion?" and "While watching this video clip, how positive or negative is your emotion?" from -100 (negative) to +100 (positive). For emotional intensity, mothers were asked "In this video clip, how intense is your baby's emotion?" and "While watching this video clip, how intense is your emotion?" on a scale of 0 (no intensity) to 100 (maximum intensity). Post-scan ratings are available in the phenotype folder as "PAMD_Post-ScanRatings."

    MRI Tasks

    Neural Reactivity to Own- and Other-Infant Affect

    File Name: task-infant 
    

    Approximately three months postnatal, a graduate research assistant visited mothers’ homes to conduct a structured clinical interview and video-record the mother interacting with her infant during a peekaboo and arm-restraint task, designed to elicit positive and negative emotions, respectively. The mother and infant were face-to-face for both tasks. For the peekaboo task, the mother covered her face with her hands and said "baby," then opened her hands and said "peekaboo" (Montague and Walker-Andrews, 2001). This continued for three minutes, or until the infant showed expressions of joy. For the arm-restraint task, the mother changed her baby's diaper and then held the infant's arms at the infant's sides for up to two minutes (Moscardino and Axia, 2006). The mother was told to keep her face neutral and not talk to her infant during this task. This procedure was repeated with a mother-infant dyad that was not included in the rest of the study to generate other-infant videos. Videos were edited to 15-second clips that showed maximum positive and negative affect. Presentation® software (Version 14.7, Neurobehavioral Systems, Inc. Berkeley, CA, www.neurobs.com) was used to present positive and negative own- and other-infant clips and rest blocks in counterbalanced order during two 7.5-minute runs. Participants were instructed to watch the videos and respond as they normally would without additional task demands. To protect participants' and their infants' privacy, infant videos will not be made publicly available. However, the mothers' post-scan rating of their infant's, the other infant's, and their own emotional valence and intensity can be found in the phenotype folder as "PAMD_Post-ScanRatings."

    Observing and Labeling Affective Faces

    File Name: task-affect 
    

    Face stimuli were selected from a standardized set of images (Tottenham, Borscheid, Ellersten, Markus, & Nelson, 2002). Presentation Software (version 14.7, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com) was used to show participants race-matched adult target faces displaying emotional expressions (positive: three happy faces; negative: one fear, one sad, one anger; two from each category were open-mouthed; one close-mouthed) and were instructed to "observe" or choose the correct affect label for the target image. In the observe task, subjects viewed an emotionally evocative face without making a response. During the affect-labeling task, subjects chose the correct affect label (e.g., "scared," "angry," "happy," "surprised") from a pair of words shown at the bottom of the screen (Lieberman et al., 2007). Each block was preceded by a 3-second instruction screen cueing participants for the current task ("observe" and "affect labeling") and consisted of five affective faces presented for 5 seconds each, with a 1- to 3-second jittered fixation cross between stimuli. Each run consisted of twelve blocks (six observe; six label) counterbalanced within the run and in a semi-random order of trials within blocks (no more than four in a row of positive or negative and, in the affect-labeling task, of the correct label on the right or left side).

    .nii to BIDS

    The raw DICOMs were anonymized and converted to BIDS format using the following procedure (for more details, see https://github.com/Haigler/PAMD_BIDS/).

    1. Deidentifying DICOMS: Batch Anonymization of the DICOMS using DicomBrowser (https://nrg.wustl.edu/software/dicom-browser/)

    2. Conversion to .nii and BIDS structure: Anonymized DICOMs were converted to .nii format and organized into the BIDS directory structure.
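The conversion step above is commonly done with a DICOM-to-NIfTI converter such as dcm2niix. A minimal sketch of that approach — the actual converter and naming scheme used are documented in the linked PAMD_BIDS repository, and the paths here are illustrative:

```python
# Sketch: building a dcm2niix command that writes gzipped NIfTI into a
# BIDS-style functional folder. dcm2niix is an assumption (the repo may
# use a different converter); all paths are hypothetical examples.
import shutil
import subprocess
from pathlib import Path

def dcm2niix_cmd(dicom_dir: Path, bids_dir: Path, sub: str, task: str) -> list:
    out_dir = bids_dir / sub / "func"
    # BIDS functional filename pattern, e.g. sub-178_task-infant_bold
    fname = f"{sub}_task-{task}_bold"
    return ["dcm2niix", "-z", "y", "-f", fname, "-o", str(out_dir), str(dicom_dir)]

cmd = dcm2niix_cmd(Path("raw/178"), Path("bids"), "sub-178", "infant")
print(" ".join(cmd))

# Only run if the tool is installed and the input directory exists.
if shutil.which("dcm2niix") and Path("raw/178").is_dir():
    Path("bids/sub-178/func").mkdir(parents=True, exist_ok=True)
    subprocess.run(cmd, check=True)
```

The `-z y` flag requests gzip compression and `-f` sets the output filename template, so the converted files land directly in a BIDS-compatible layout.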
