100+ datasets found
  1. MARIO: Monitoring Age-related Macular Degeneration Progression In Optical Coherence Tomography

    • zenodo.org
    bin
    Updated Apr 25, 2025
    Cite
    Gwenolé Quellec; Rachid Zeghlache (2025). MARIO: Monitoring Age-related Macular Degeneration Progression In Optical Coherence Tomography [Dataset]. http://doi.org/10.5281/zenodo.15270469
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Gwenolé Quellec; Rachid Zeghlache
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description
    This dataset was created for the MARIO challenge, held as a satellite event of the MICCAI 2024 conference.

    Context

    Age-related Macular Degeneration (AMD) is a progressive degeneration of the macula, the central part of the retina, affecting nearly 196 million people worldwide. It can appear from the age of 50, and more frequently from the age of 65 onwards, significantly weakening visual capacities without destroying them. It is a complex and multifactorial pathology in which genetic and environmental risk factors are intertwined. Advanced stages of the disease (atrophy and neovascularization) affect nearly 20% of patients: they are the leading cause of severe visual impairment and blindness in developed countries. Since their introduction in 2007, anti-vascular endothelial growth factor (anti-VEGF) treatments have proven their ability to slow disease progression and even improve visual function in neovascular forms of AMD. Their effectiveness is optimized by keeping the time between diagnosis and the start of treatment short, and by performing regular checks and retreating as soon as necessary. It is now widely accepted that the indication for anti-VEGF treatment is based on the presence of exudative signs (subretinal and intraretinal fluid, intraretinal hyperreflective spots, etc.) visible on optical coherence tomography (OCT), a 3-D imaging modality.

    Existing work on AI for AMD prediction focuses mainly on the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages; no current work addresses predicting AMD evolution under the close monitoring of patients on an anti-VEGF treatment plan. Reliably detecting a change in neovascular activity by monitoring exudative signs is therefore crucial for the correct implementation of anti-VEGF treatment strategies, which are now individualized.

    Objectives

    The objective of the MARIO dataset, and of the associated challenge, is to evaluate existing and new algorithms to recognize the evolution of neovascular activity in OCT scans of patients with exudative AMD, for the purpose of improving the planning of anti-VEGF treatments.

    Two tasks have been proposed:

    • The first task focuses on pairs of 2D slices (B-scans) from two consecutive OCT acquisitions. The goal is to classify the evolution between these two slices (before and after), which clinicians typically examine side by side on their screens.
    • The second task focuses on single 2D slices. The goal is to predict the evolution over the next 3 months for closely monitored patients enrolled in an anti-VEGF treatment plan.

    See details on the MARIO challenge webpage: https://youvenz.github.io/MARIO_challenge.github.io/

  2. RSNA-ASNR-MICCAI-BraTS-2021

    • cancerimagingarchive.net
    • dev.cancerimagingarchive.net
    dicom and nifti, n/a +1
    Cite
    The Cancer Imaging Archive, RSNA-ASNR-MICCAI-BraTS-2021 [Dataset]. http://doi.org/10.7937/jc8x-9874
    Explore at:
    Available download formats: n/a, xlsx, dicom and nifti
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Aug 25, 2023
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    This dataset includes brain MRI scans of adult brain glioma patients, comprising four structural modalities (T1, T1c, T2, T2-FLAIR) with manually generated ground-truth labels for each tumor sub-region (enhancement, necrosis, edema), as well as MGMT promoter methylation status. The scans are drawn from existing TCIA collections, plus cases contributed by individual institutions willing to share them under a CC BY license. The BraTS dataset is a retrospective collection of brain tumor structural mpMRI scans of 2,040 patients (1,480 here), acquired from multiple institutions under standard clinical conditions but with different equipment and imaging protocols, resulting in vastly heterogeneous image quality that reflects diverse clinical practice across institutions. The four structural mpMRI scans included in the BraTS challenge are a) native T1, b) post-contrast T1-weighted (T1Gd, gadolinium), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, acquired with different protocols and various scanners from multiple institutions. Furthermore, the O[6]-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is provided as a binary label. Notably, MGMT is a DNA repair enzyme, and methylation of its promoter in newly diagnosed glioblastoma has been identified as a favorable prognostic factor and a predictor of chemotherapy response. The dataset is curated for computational image analysis: tumor segmentation and prediction of MGMT promoter methylation status.
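    For orientation, one record in such a collection pairs the four modality volumes with a binary MGMT label. A minimal sketch of that structure (the file names and case ID below are hypothetical placeholders, not the dataset's actual layout):

```python
# Sketch of one BraTS-style record: four structural modalities plus a
# binary MGMT promoter methylation label. Paths are hypothetical.
from dataclasses import dataclass


@dataclass
class BratsCase:
    case_id: str
    modalities: dict       # modality name -> path to NIfTI volume
    mgmt_methylated: int   # binary label: 1 = methylated, 0 = unmethylated


case = BratsCase(
    case_id="00001",
    modalities={
        "T1": "00001/t1.nii.gz",        # native T1
        "T1c": "00001/t1c.nii.gz",      # post-contrast T1 (T1Gd)
        "T2": "00001/t2.nii.gz",        # T2-weighted
        "FLAIR": "00001/flair.nii.gz",  # T2-FLAIR
    },
    mgmt_methylated=1,
)

# All four structural modalities must be present for a complete case.
assert set(case.modalities) == {"T1", "T1c", "T2", "FLAIR"}
```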

    A note about available TCIA data which were converted for use in this Challenge: (Training, Validation, Test)

    Dr. Bakas's group here provides brain-extracted Segmentation-task BraTS 2021 challenge TRAINING and VALIDATION set data in NIfTI, together with their segmentations; these do not pose a DUA-level risk of potential facial reidentification. The same group has provided some of the brain-extracted BraTS challenge TEST data in NIfTI with segmentations (here and here, from the 2018 challenge; request via TCIA's Helpdesk). The group also provides brain-extracted Classification-task BraTS 2021 challenge TRAINING and VALIDATION set data as DICOM→NIfTI→dcm files, registered to the original orientation; these data files do not strictly adhere to the DICOM standard. BraTS 2021 Classification challenge TEST files are unavailable at this time. You may want the original corresponding DICOM-format files drawn from TCIA Collections; please note that these original data are not brain-extracted and may pose enough reidentification risk that TCIA must keep them behind an explicit usage agreement. Please also note that the record of exactly which DICOM series became which NIfTI volume has, unfortunately, been lost to time; the available lists below represent our best effort at reconstructing the link to the BraTS source files.

  3. MICCAI 2015 Challenge on Multimodal Brain Tumor Segmentation (BraTS2015)

    • academictorrents.com
    bittorrent
    Updated Sep 19, 2017
    Cite
    None (2017). MICCAI 2015 Challenge on Multimodal Brain Tumor Segmentation (BraTS2015) [Dataset]. https://academictorrents.com/details/c4f39a0a8e46e8d2174b8a8a81b9887150f44d50
    Explore at:
    Available download formats: bittorrent (5340438240 bytes)
    Dataset updated
    Sep 19, 2017
    Authors
    None
    License

    No license specified: https://academictorrents.com/nolicensespecified

    Description

    Brain tumor image data used in this article were obtained from the MICCAI Challenge on Multimodal Brain Tumor Segmentation. The challenge database contains fully anonymized images from the Cancer Imaging Archive. Label values are:

    • 1 for necrosis
    • 2 for edema
    • 3 for non-enhancing tumor
    • 4 for enhancing tumor
    • 0 for everything else

    There are 3 requirements for the successful upload and validation of your segmentation:

    • Use the MHA filetype to store your segmentations (not mhd) [use short or ushort if you experience any upload problems].
    • Keep the same labels as the provided truth.mha (see above).
    • Name your segmentations according to this template: VSD.your_description.###.mha, replacing the ### with the ID of the corresponding FLAIR MR images. This allows the system to relate your segmentation to the correct training truth.

    Download an example list for the training data and testing data.

    Publications: B. H. Menze et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging, 2015.
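    The VSD.your_description.###.mha naming template can be generated programmatically; a small sketch (the description string and FLAIR image ID below are illustrative):

```python
def vsd_name(description: str, flair_id: int) -> str:
    """Build a VSD.<description>.<ID>.mha segmentation file name,
    where <ID> is the ID of the corresponding FLAIR MR image."""
    return f"VSD.{description}.{flair_id}.mha"


# Illustrative description and FLAIR image ID:
print(vsd_name("my_method", 2359))  # -> VSD.my_method.2359.mha
```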

  4. Dataset TrainBatch2 for the MICCAI-2022-Challenge: Airway Tree Modeling (ATM'22)

    • explore.openaire.eu
    • zenodo.org
    Updated May 29, 2022
    Cite
    Minghui Zhang; Yangqian Wu; Hanxiao Zhang; Weihao Yu; Yun Gu (2022). Dataset TrainBatch2 for the MICCAI-2022-Challenge: Airway Tree Modeling (ATM'22) [Dataset]. http://doi.org/10.5281/zenodo.6590774
    Explore at:
    Dataset updated
    May 29, 2022
    Authors
    Minghui Zhang; Yangqian Wu; Hanxiao Zhang; Weihao Yu; Yun Gu
    Description

    [Attention]: ATM_164_0000.nii.gz has a misaligned label relative to its CT image; please discard this case!

    Dataset for the MICCAI-2022-Challenge: Airway Tree Modeling (ATM'22). This is TrainBatch2. If you use this dataset in your research, you must cite the papers in the References below!

    References:

    • Zhang M, Yu X, Zhang H, et al. FDA: Feature Decomposition and Aggregation for Robust Airway Segmentation. In: Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health. Springer, Cham, 2021: 25-34.
    • Zheng H, Qin Y, Gu Y, et al. Alleviating class-wise gradient imbalance for pulmonary airway segmentation. IEEE Transactions on Medical Imaging, 2021, 40(9): 2452-2462.
    • Yu W, Zheng H, Zhang M, et al. BREAK: Bronchi Reconstruction by gEodesic transformation And sKeleton embedding. In: 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). IEEE, 2022: 1-5.
    • Qin Y, Chen M, Zheng H, et al. AirwayNet: a voxel-connectivity aware approach for accurate airway segmentation using convolutional neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2019: 212-220.
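    When assembling a training list, the flagged case can be filtered out up front; a minimal sketch (the other file names are illustrative):

```python
# Drop the case flagged as having a misaligned label (ATM_164_0000.nii.gz)
# before building the training list. Other file names are illustrative.
EXCLUDED = {"ATM_164_0000.nii.gz"}


def usable_cases(filenames):
    """Return the training files with the flagged case removed."""
    return [f for f in filenames if f not in EXCLUDED]


files = ["ATM_163_0000.nii.gz", "ATM_164_0000.nii.gz", "ATM_165_0000.nii.gz"]
print(usable_cases(files))  # the flagged case is dropped
```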

  5. Miccai Dataset

    • universe.roboflow.com
    zip
    Updated Apr 15, 2023
    Cite
    cholec (2023). Miccai Dataset [Dataset]. https://universe.roboflow.com/cholec/miccai-npxmk
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 15, 2023
    Dataset authored and provided by
    cholec
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Surgical Instruments Bounding Boxes
    Description

    MICCAI

    ## Overview
    
    MICCAI is a dataset for object detection tasks - it contains Surgical Instruments annotations for 670 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  6. Data from: MICCAI Brain Tumor Segmentation (BraTS) 2020 Benchmark: "Prediction of Survival and Pseudoprogression"

    • explore.openaire.eu
    Updated Mar 20, 2020
    Cite
    Spyridon Bakas; Bjoern Menze; Christos Davatzikos; Jayashree Kalpathy-Cramer; Keyvan Farahani; Michel Bilello; Suyash Mohan; John B. Freymann; Justin S. Kirby; Manmeet Ahluwalia; Volodymyr Statsevych; Raymond Huang; Hassan Fathallah-Shaykh; Roland Wiest; Andras Jakab; Rivka R. Colen; Aikaterini Kotrotsou; Daniel Marcus; Mikhail Milchenko; Arash Nazeri; Marc-Andre Weber; Abhishek Mahajan; Ujjwal Baid (2020). MICCAI Brain Tumor Segmentation (BraTS) 2020 Benchmark: "Prediction of Survival and Pseudoprogression" [Dataset]. http://doi.org/10.5281/zenodo.3718904
    Explore at:
    Dataset updated
    Mar 20, 2020
    Authors
    Spyridon Bakas; Bjoern Menze; Christos Davatzikos; Jayashree Kalpathy-Cramer; Keyvan Farahani; Michel Bilello; Suyash Mohan; John B. Freymann; Justin S. Kirby; Manmeet Ahluwalia; Volodymyr Statsevych; Raymond Huang; Hassan Fathallah-Shaykh; Roland Wiest; Andras Jakab; Rivka R. Colen; Aikaterini Kotrotsou; Daniel Marcus; Mikhail Milchenko; Arash Nazeri; Marc-Andre Weber; Abhishek Mahajan; Ujjwal Baid
    Description

    This is the challenge design document for the "MICCAI Brain Tumor Segmentation (BraTS) 2020 Benchmark: 'Prediction of Survival and Pseudoprogression'", accepted for MICCAI 2020. BraTS 2020 utilizes multi-institutional MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Compared to BraTS'17-'19, this year BraTS includes both pre-operative and post-operative scans (i.e., including surgically imposed cavities) and attempts to quantify the uncertainty of the predicted segmentations. Furthermore, to pinpoint the clinical relevance of the segmentation task, BraTS'20 also focuses on 1) the prediction of patient overall survival from pre-operative scans (Task 2) and 2) the distinction between true tumor recurrence and treatment-related effects on the post-operative scans (Task 3), via integrative analyses of quantitative imaging phenomic features and machine learning algorithms. Ground truth annotations are created and approved by expert neuroradiologists for every subject included in the training, validation, and testing datasets to quantitatively evaluate the predicted tumor segmentations (Task 1). Furthermore, the quantitative evaluation of the clinically relevant tasks (i.e., overall survival (Task 2) and distinction between tumor recurrence and treatment-related effects (Task 3)) is performed according to real clinical data. Participants are free to choose whether they want to focus on only one task or on multiple tasks.

  7. MICCAI 2018 – Computational Precision Medicine Challenge: 18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer

    • figshare.com
    png
    Updated May 30, 2023
    Cite
    Clifton D. Fuller; Hesham Elhalawani; Abdallah Mohamed (2023). MICCAI 2018 – Computational Precision Medicine Challenge: 18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer [Dataset]. http://doi.org/10.6084/m9.figshare.15075195.v2
    Explore at:
    Available download formats: png
    Dataset updated
    May 30, 2023
    Dataset provided by
    figshare
    Authors
    Clifton D. Fuller; Hesham Elhalawani; Abdallah Mohamed
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The Computational Precision Medicine (CPM) 2018 event was held on September 16 in Granada, Spain, in conjunction with MICCAI 2018. As part of the CPM program, a series of imaging Grand Challenges was offered, hosted by Kaggle. This competition, "18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer", was organized as a Medical Image Computing and Computer Assisted Intervention (MICCAI) Computational Precision Medicine (CPM) grand challenge. Contestants were tasked to predict, using primary tumor 18F-FDG PET-derived radiomics features +/- matched clinical data, whether a tumor arising from the oropharynx will be controlled by definitive radiation treatment (RT). The head and neck radiation oncology team at the University of Texas MD Anderson Cancer Center (MDACC) has curated and harmonized a multi-institutional dataset of 248 oropharynx cancer (OPC) patients, using our in-house 'LAMBDA-RAD' data management platform. Scans came from six institutions: the US (MDACC); Canada [four clinical institutions in Québec: Hôpital Général Juif de Montréal (HGJ), Centre Hospitalier Universitaire de Sherbrooke (CHUS), Centre Hospitalier de l'Université de Montréal (CHUM), and Hôpital Maisonneuve-Rosemont de Montréal (HMR)]; and Europe (MAASTRO Clinic, The Netherlands). The challenge was open from June 15, 2018, 11:59 p.m. to Aug. 30, 2018, midnight UT; this repository serves as a durable FAIR (re)use repository for the challenge data. Details on the 2018 CPM Challenges can be found at: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=37224869. The archived challenge website can be viewed at: https://web.archive.org/web/20190106050801/http://miccai.cloudapp.net/competitions/77 and at: https://web.archive.org/web/20210418000112/https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=37224869. The Kaggle-in-class host page for the challenge and results can be found at: https://www.kaggle.com/c/pet-radiomics-challenges.

  8. Trackerless 3D Freehand Ultrasound Reconstruction Challenge 2024 - Train Dataset (Part 3)

    • zenodo.org
    zip
    Updated Jun 30, 2025
    + more versions
    Cite
    Qi Li; Shaheer U. Saeed; Yuliang Huang; Dean C. Barratt; Matthew J. Clarkson; Tom Vercauteren; Yipeng Hu (2025). Trackerless 3D Freehand Ultrasound Reconstruction Challenge 2024 - Train Dataset (Part 3) [Dataset]. http://doi.org/10.5281/zenodo.11355500
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 30, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Qi Li; Shaheer U. Saeed; Yuliang Huang; Dean C. Barratt; Matthew J. Clarkson; Tom Vercauteren; Yipeng Hu
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    This Challenge is open-ended, and we welcome your submission. Please register your team via this form: https://forms.office.com/e/dPg47ktV7M. You can submit your algorithm for the TUS-REC2024 Challenge via this form: https://forms.office.com/e/QChhNkLYiu, and we will test your submitted Docker image on the test set.

    We are organising TUS-REC2025 at MICCAI2025. More information is available on the TUS-REC2025 challenge website and Baseline code repo.

    This is the third part of the Challenge dataset. Link to first part; Link to second part. Link to validation dataset.

    For detailed information please refer to the Challenge website. Baseline code is also provided, which can be found at this repo.

    Dataset structure: The dataset contains 50 .h5 files. Each corresponds to one subject and stores the coordinates of landmarks for the 24 scans of that subject. For each scan, the coordinates are stored in a NumPy array of shape [20, 3]: the first column is the frame index; the second and third columns are the coordinates of the landmarks in the image coordinate system.
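    Splitting such a [20, 3] landmark array into frame indices and in-plane coordinates can be sketched with NumPy (the array below is synthetic; real arrays would be read from the .h5 files, e.g. with h5py):

```python
import numpy as np

# Synthetic stand-in for one scan's landmark array of shape [20, 3]:
# column 0 = frame index, columns 1-2 = landmark coordinates in the image.
landmarks = np.zeros((20, 3))
landmarks[:, 0] = np.arange(20)                  # illustrative frame indices
landmarks[:, 1:] = np.random.rand(20, 2) * 100   # illustrative coordinates

frame_idx = landmarks[:, 0].astype(int)  # which frame each landmark lies on
coords = landmarks[:, 1:]                # (x, y) positions within that frame

assert frame_idx.shape == (20,) and coords.shape == (20, 2)
```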

    Data Usage Policy:

    • The training and validation data provided may be utilized within the research scope of this challenge and in subsequent research-related publications. However, commercial use of the training and validation data is prohibited. In cases where the intended use is ambiguous, participants accessing the data are requested to abstain from further distribution or use outside the scope of this challenge.
    • If you use our dataset in your publication, please cite the challenge paper and some of the following optional articles:
      • Challenge paper:
      • Optional articles:
        • Qi Li, Ziyi Shen, Qianye Yang, Dean C. Barratt, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Nonrigid Reconstruction of Freehand Ultrasound without a Tracker." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 689-699. Cham: Springer Nature Switzerland, 2024. doi: 10.1007/978-3-031-72083-3_64.
        • Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker." IEEE Transactions on Biomedical Engineering, vol. 71, no. 3, pp. 1033-1042, 2024. doi: 10.1109/TBME.2023.3325551.
        • Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Trackerless freehand ultrasound with sequence modelling and auxiliary transformation over past and future frames." In 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), pp. 1-5. IEEE, 2023. doi: 10.1109/ISBI53787.2023.10230773.
        • Qi Li, Ziyi Shen, Qian Li, Dean C. Barratt, Thomas Dowrick, Matthew J. Clarkson, Tom Vercauteren, and Yipeng Hu. "Privileged Anatomical and Protocol Discrimination in Trackerless 3D Ultrasound Reconstruction." In International Workshop on Advances in Simplifying Medical Ultrasound, pp. 142-151. Cham: Springer Nature Switzerland, 2023. doi: 10.1007/978-3-031-44521-7_14.
  9. MICCAI 2013 Challenge on Multimodal Brain Tumor Segmentation (BraTS2013)

    • academictorrents.com
    bittorrent
    Updated Mar 9, 2018
    Cite
    MICCAI 2013 Challenge on Multimodal Brain Tumor Segmentation (BraTS2013) [Dataset]. https://academictorrents.com/details/39c5a52bda7b5b701cecfc454a79d385868d4f3d
    Explore at:
    Available download formats: bittorrent (19706785133 bytes)
    Dataset updated
    Mar 9, 2018
    Authors
    None
    License

    No license specified: https://academictorrents.com/nolicensespecified

    Description

    A publicly available set of training data can be downloaded for algorithmic tweaking and tuning from the Virtual Skeleton Database. The training data consists of multi-contrast MR scans of 30 glioma patients (both low-grade and high-grade, and both with and without resection) along with expert annotations for "active tumor" and "edema". For each patient, T1, T2, FLAIR, and post-Gadolinium T1 MR images are available. All volumes were linearly co-registered to the T1 contrast image, skull stripped, and interpolated to 1mm isotropic resolution. No attempt was made to put the individual patients in a common reference space. The MR scans, as well as the corresponding reference segmentations, are distributed in the ITK- and VTK-compatible MetaIO file format. Patients with high- and low-grade gliomas have file names "BRATS_HG" and "BRATS_LG", respectively. All images are stored as signed 16-bit integers, but only positive values are used. The manual seg

  10. Annotations for Body Organ Localization based on MICCAI LiTS Dataset

    • ieee-dataport.org
    Updated Jun 17, 2025
    Cite
    Xuanang Xu (2025). Annotations for Body Organ Localization based on MICCAI LiTS Dataset [Dataset]. https://ieee-dataport.org/documents/annotations-body-organ-localization-based-miccai-lits-dataset
    Explore at:
    Dataset updated
    Jun 17, 2025
    Authors
    Xuanang Xu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    liver (131/70)

  11. MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge Dataset

    • paperswithcode.com
    • opendatalab.com
    Updated Sep 30, 2020
    Cite
    (2020). MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge Dataset [Dataset]. https://paperswithcode.com/dataset/miccai-2015-multi-atlas-abdomen-labeling
    Explore at:
    Dataset updated
    Sep 30, 2020
    Description

    Under Institutional Review Board (IRB) supervision, 50 abdomen CT scans were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. The 50 scans were captured during the portal venous contrast phase with variable volume sizes (512 x 512 x 85 to 512 x 512 x 198) and fields of view (approx. 280 x 280 x 280 mm3 to 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm. The standard registration data were generated by NiftyReg.
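    The in-plane resolutions quoted above follow directly from the field of view and the 512 x 512 matrix size (spacing = FOV / matrix dimension); a quick arithmetic check:

```python
def in_plane_spacing(fov_mm: float, matrix_size: int) -> float:
    """Voxel spacing in mm: field of view divided by matrix dimension."""
    return fov_mm / matrix_size


# 280 mm FOV on a 512 x 512 matrix -> ~0.547 mm (the stated ~0.54 mm low end)
print(round(in_plane_spacing(280, 512), 3))
# 500 mm FOV on a 512 x 512 matrix -> ~0.977 mm (the stated ~0.98 mm high end)
print(round(in_plane_spacing(500, 512), 3))
```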

  12. BraTS21 - Preprocessed train images with Torchio

    • kaggle.com
    zip
    Updated Sep 17, 2021
    Cite
    Jonathan Rico (2021). BraTS21 - Preprocessed train images with Torchio [Dataset]. https://www.kaggle.com/rickandjoe/brats21-preprocessed-train-images-with-torchio
    Explore at:
    Available download formats: zip (20667279 bytes)
    Dataset updated
    Sep 17, 2021
    Authors
    Jonathan Rico
    License

    Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    RSNA-MICCAI Brain Tumor Radiogenomic Classification Competition

    Content

    1. I preprocessed the competition's training images using Torchio so that all have an 'Axial' view.
    2. I then combined the FLAIR, T2w, and T1wCE volumes into a single 3-channel image.
    3. The size is 224x224, so you can use EfficientNetB0 with input size (224, 224, 3).
    4. There is also a folder for tumor segmentation. I used simple thresholding and median filtering on the FLAIR MRI for this segmentation, so the segmentations/localizations are not especially clean.
    5. The tumor segmentation masks are 2D NumPy arrays.
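    Combining three modality slices into one 3-channel array, as in step 2, can be sketched with NumPy (the slices here are synthetic stand-ins for 224x224 FLAIR/T2w/T1wCE slices already resized, e.g. with Torchio):

```python
import numpy as np

# Synthetic stand-ins for three co-registered 224x224 modality slices
# (in practice: FLAIR, T2w, and T1wCE, resized beforehand).
flair = np.random.rand(224, 224)
t2w = np.random.rand(224, 224)
t1wce = np.random.rand(224, 224)

# Stack along a trailing channel axis -> (224, 224, 3), the input shape
# expected by an EfficientNetB0-style network.
combined = np.stack([flair, t2w, t1wce], axis=-1)
assert combined.shape == (224, 224, 3)
```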

    Acknowledgements

    Thanks to everyone!

  13. MICCAI'2015 Gland Segmentation Challenge Contest Dataset

    • paperswithcode.com
    Cite
    MICCAI'2015 Gland Segmentation Challenge Contest Dataset Dataset [Dataset]. https://paperswithcode.com/dataset/miccai-2015-gland-segmentation-challenge
    Explore at:
    Description

    MICCAI'2015 Gland Segmentation Challenge Contest Dataset. Welcome to the challenge on gland segmentation in histology images. This challenge was held in conjunction with MICCAI 2015, Munich, Germany.

    Objective of the Challenge We aim to bring together researchers who are interested in the gland segmentation problem, to validate the performance of their existing or newly invented algorithms on the same standard dataset. In this challenge, we will provide the participants with images of Haematoxylin and Eosin (H&E) stained slides, consisting of a wide range of histologic grades.

  14. BraTS Peds 2024 Dataset

    • paperswithcode.com
    Updated Dec 4, 2024
    + more versions
    Cite
    (2024). BraTs Peds 2024 Dataset [Dataset]. https://paperswithcode.com/dataset/brats-peds-2024
    Explore at:
    Dataset updated
    Dec 4, 2024
    Description


  15. MICCAI 2016 MS lesion segmentation challenge: supplementary results

    • zenodo.org
    • data.niaid.nih.gov
    bin, png, txt
    Updated Aug 2, 2024
    Cite
    Olivier Commowick; Audrey Istace; Michael Kain; Baptiste Laurent; Florent Leray; Mathieu Simon; Sorina Camarasu-Pop; Pascal Girard; Roxana Ameli; Jean-Christophe Ferré; Anne Kerbrat; Thomas Tourdias; Frédéric Cervenansky; Tristan Glatard; Jérémy Beaumont; Senan Doyle; Florence Forbes; Jesse Knight; April Khademi; Amirreza Mahbod; Chunliang Wang; Richard Mc Kinley; Franca Wagner; John Muschelli; Elizabeth Sweeney; Eloy Roura; Xavier Lladò; Michel M. Santos; Wellington P. Santos; Abel G. Silva-Filho; Xavier Tomas-Fernandez; Hélène Urien; Isabelle Bloch; Sergi Valverde; Mariano Cabezas; Francisco Javier Vera-Olmos; Norberto Malpica; Charles Guttmann; Sandra Vukusic; Gilles Edan; Michel Dojat; Martin Styner; Simon K. Warfield; François Cotton; Christian Barillot (2024). MICCAI 2016 MS lesion segmentation challenge: supplementary results [Dataset]. http://doi.org/10.5281/zenodo.1307653
    Explore at:
    Available download formats: png, bin, txt
    Dataset updated
    Aug 2, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Olivier Commowick; Audrey Istace; Michael Kain; Baptiste Laurent; Florent Leray; Mathieu Simon; Sorina Camarasu-Pop; Pascal Girard; Roxana Ameli; Jean-Christophe Ferré; Anne Kerbrat; Thomas Tourdias; Frédéric Cervenansky; Tristan Glatard; Jérémy Beaumont; Senan Doyle; Florence Forbes; Jesse Knight; April Khademi; Amirreza Mahbod; Chunliang Wang; Richard Mc Kinley; Franca Wagner; John Muschelli; Elizabeth Sweeney; Eloy Roura; Xavier Lladò; Michel M. Santos; Wellington P. Santos; Abel G. Silva-Filho; Xavier Tomas-Fernandez; Hélène Urien; Isabelle Bloch; Sergi Valverde; Mariano Cabezas; Francisco Javier Vera-Olmos; Norberto Malpica; Charles Guttmann; Sandra Vukusic; Gilles Edan; Michel Dojat; Martin Styner; Simon K. Warfield; François Cotton; Christian Barillot
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This package contains supplementary material for our article, prepared for publication and under revision. It contains results omitted from the article due to space limits, as well as detailed per-patient and per-team results for all metrics. Additional figures, redundant with those of the article, are also provided.

    The readme file Readme_SupplementalMaterial.txt provides details about each individual file content.

  16. miccai brain tumor

    • kaggle.com
    Updated Aug 6, 2023
    Cite
    Liam Nguyen (2023). miccai brain tumor [Dataset]. https://www.kaggle.com/datasets/namgalielei/miccai-brain-tumor/versions/1
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Aug 6, 2023
    Dataset provided by
    Kaggle
    Authors
    Liam Nguyen
    Description

    Dataset

    This dataset was created by Liam Nguyen

    Contents

  17. RSNA-N4BiasFieldCorrection

    • kaggle.com
    Updated Jul 22, 2021
    Cite
    Jose Pérez Cano (2021). RSNA-N4BiasFieldCorrection [Dataset]. https://www.kaggle.com/josepc/rsnan4biasfieldcorrection
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jul 22, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jose Pérez Cano
    Description

    RSNA MICCAI NPY

    Transformed PNG images to NPY format

    This dataset contains the "DICOM data" of the training dataset of the RSNA-MICCAI Brain Tumor Radiogenomic Classification challenge in NPY format. It is a bit bigger than the original.

    Notes

    • All images have been resized to 256 x 256 x 256 to make the dataset uniform.
    • Scans missing from the original dataset are substituted by zero-valued 3D images.
    • The dataset structure has changed from 500+ per-patient subfolders to four subfolders holding all the data, i.e. the data is now organized by view rather than by patient index.
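    The preprocessing steps above can be sketched as follows. This is a minimal NumPy illustration under assumptions (nearest-neighbour resampling, a hypothetical `to_uniform_volume` function name), not the code actually used to build the dataset:

    ```python
    import numpy as np

    TARGET_SHAPE = (256, 256, 256)

    def to_uniform_volume(volume=None, target_shape=TARGET_SHAPE):
        """Resize a 3D scan to a uniform shape; a missing scan becomes a zero volume."""
        if volume is None:
            # Scans absent from the original dataset are substituted
            # by a zero-valued 3D image of the target shape.
            return np.zeros(target_shape, dtype=np.float32)
        # Nearest-neighbour resampling: pick the closest source index along each axis.
        idx = [np.linspace(0, s - 1, t).round().astype(int)
               for s, t in zip(volume.shape, target_shape)]
        return volume.astype(np.float32)[np.ix_(*idx)]

    resized = to_uniform_volume(np.arange(27.0).reshape(3, 3, 3))
    print(resized.shape)  # (256, 256, 256)
    ```

    Zero-filling missing scans keeps the per-patient tensor layout consistent, at the cost of feeding uninformative volumes to a model; masking those cases downstream may be preferable.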

    References:

    U.Baid, et al., “The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification”, arXiv:2107.02314, 2021.

  18. Cross-Modality Domain Adaptation for Medical Image Segmentation and Classification

    • explore.openaire.eu
    Updated Mar 16, 2022
    Cite
    Reuben Dorent; Aaron Kujawa; Jonathan Shapey; Stefan Cornelissen; Patrick Langenhuizen; Samuel Joutard; Nicola Rieke; Spyridon Bakas; Ben Glocker; Tom Vercauteren (2022). Cross-Modality Domain Adaptation for Medical Image Segmentation and Classification [Dataset]. http://doi.org/10.5281/zenodo.6361885
    Explore at:
    Dataset updated
    Mar 16, 2022
    Authors
    Reuben Dorent; Aaron Kujawa; Jonathan Shapey; Stefan Cornelissen; Patrick Langenhuizen; Samuel Joutard; Nicola Rieke; Spyridon Bakas; Ben Glocker; Tom Vercauteren
    Description

    Domain Adaptation (DA) has recently raised strong interest in the medical imaging community. By encouraging algorithms to be robust to unseen situations or different input data domains, DA improves the applicability of machine learning approaches to various clinical settings. While a large variety of DA techniques has been proposed, most have been validated either on private datasets [4,5] or on small publicly available datasets [6,7,8,9]. Moreover, these datasets mostly address single-class problems. To tackle these limitations, the crossMoDA challenge introduced the first large, multi-class dataset for unsupervised cross-modality Domain Adaptation. Compared to the previous crossMoDA instance, which used single-institution data and featured a single segmentation task, the 2022 edition extends the segmentation task to multi-institutional data and introduces a new classification task.
    The goal of the segmentation task (Task 1) is to segment two key brain structures (tumour and cochlea) involved in the follow-up and treatment planning of vestibular schwannoma (VS). The segmentation of these two structures is required for radiosurgery, a common VS treatment, and tumour volume measurement has been shown to be the most accurate measurement for the evaluation of VS growth [3]. Currently, diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in non-contrast sequences such as high-resolution T2 (hrT2) imaging, as they mitigate the risks associated with gadolinium-containing contrast agents [1,2]. In addition to improving patient safety, hrT2 imaging is 10 times more cost-efficient than ceT1 imaging [17]. For this reason, we propose an unsupervised cross-modality segmentation benchmark (from ceT1 to hrT2) that aims to automatically perform VS and cochlea segmentation on hrT2 scans.
    The training source and target sets are, respectively, unpaired annotated ceT1 scans and non-annotated hrT2 scans from both pre-operative and post-operative time points. To validate the robustness of the proposed approaches on different hrT2 settings, multi-institutional scans from centres in London, UK and Tilburg, NL are used in this task.
    The goal of the classification task (Task 2) is to automatically classify hrT2 images with VS according to the Koos grade [14]. The Koos grading scale is a classification system for VS that characterises the tumour and its impact on adjacent brain structures (e.g., brain stem, cerebellum), and is commonly used to decide on the treatment plan (surveillance, radiosurgery, open surgery). Like VS segmentation, Koos grading is currently performed on ceT1 scans, but hrT2 could be used instead. For this reason, we propose an unsupervised cross-modality classification benchmark (from ceT1 to hrT2) that aims to automatically determine the Koos grade on hrT2 scans. Only pre-operative data is used for this task; again, multi-institutional scans from centres in London, UK and Tilburg, NL are used. Participants are free to focus on one or both tasks, and to use the data from one task for the other.
    References:
    [1] Shapey, J., et al.: An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI. Journal of Neurosurgery JNS (2019)
    [2] Wang, G., et al.: Automatic segmentation of vestibular schwannoma from T2-weighted MRI by deep spatial attention with hardness-weighted loss. MICCAI 2019 (2019)
    [3] Van de Langenberg, R., et al.: Follow-up assessment of vestibular schwannomas: volume quantification versus two-dimensional measurements. Neuroradiology 51, 517 (2009)
    [4] Kamnitsas, K., et al.: Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. IPMI (2017)
    [5] Yang, J., et al.: Unsupervised domain adaptation via disentangled representations: application to cross-modality liver segmentation. MICCAI 2019 (2019)
    [6] Chen, C., et al.: Synergistic image and feature adaptation: towards cross-modality domain adaptation for medical image segmentation. AAAI-19 (2019)
    [7] Dou, Q., et al.: PnP-AdaNet: plug-and-play adversarial domain adaptation network with a benchmark at cross-modality cardiac segmentation. arXiv (2018)
    [8] Orbes-Arteaga, et al.: Multi-domain adaptation in brain MRI through paired consistency and adversarial learning. DART - Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data (2019)
    [9] Xue, Y., et al.: Dual-task self-supervision for cross-modality domain adaptation. MICCAI 2020 (2020)
    [10] Maier-Hein, L., Eisenmann, M., et al.: "Is the winner really the best? A critical analysis of common research practice in biomedical image analysis com...

  19. MICCAI 2012

    • figshare.com
    zip
    Updated Feb 7, 2021
    Cite
    Emirhan Kurtuluş (2021). MICCAI 2012 [Dataset]. http://doi.org/10.6084/m9.figshare.13728088.v1
    Explore at:
    zip (available download formats)
    Dataset updated
    Feb 7, 2021
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Emirhan Kurtuluş
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    MICCAI 2012 dataset

  20. MICCAI 2019

    • kaggle.com
    Updated Nov 4, 2023
    Cite
    Bard2024 (2023). MICCAI 2019 [Dataset]. https://www.kaggle.com/datasets/bard2024/miccai-2019/versions/1
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Nov 4, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Bard2024
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Dataset

    This dataset was created by Bard2024

    Released under MIT

    Contents

Cite
Gwenolé Quellec; Gwenolé Quellec; Rachid Zeghlache; Rachid Zeghlache (2025). MARIO: Monitoring Age-related Macular Degeneration Progression In Optical Coherence Tomography [Dataset]. http://doi.org/10.5281/zenodo.15270469

MARIO: Monitoring Age-related Macular Degeneration Progression In Optical Coherence Tomography

Explore at:
3 scholarly articles cite this dataset
bin (available download formats)
Dataset updated
Apr 25, 2025
Dataset provided by
Zenodo (http://zenodo.org/)
Authors
Gwenolé Quellec; Gwenolé Quellec; Rachid Zeghlache; Rachid Zeghlache
License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically

Description
This dataset was created for the MARIO challenge, held as a satellite event of the MICCAI 2024 conference.

Context

Age-related Macular Degeneration (AMD) is a progressive degeneration of the macula, the central part of the retina, affecting nearly 196 million people worldwide. It can appear from the age of 50, and more frequently from 65 onwards, significantly weakening visual capacities without destroying them entirely. It is a complex and multifactorial pathology in which genetic and environmental risk factors are intertwined. Advanced stages of the disease (atrophy and neovascularization) affect nearly 20% of patients: they are the leading cause of severe visual impairment and blindness in developed countries. Since their introduction in 2007, anti-vascular endothelial growth factor (anti-VEGF) treatments have proven their ability to slow disease progression and even improve visual function in neovascular forms of AMD. Their effectiveness is optimized by ensuring a short delay between diagnosis and the start of treatment, and by performing regular checks and retreatment as soon as necessary. It is now widely accepted that the indication for anti-VEGF treatments is based on the presence of exudative signs (subretinal and intraretinal fluid, intraretinal hyperreflective spots, etc.) visible on optical coherence tomography (OCT), a 3-D imaging modality. The use of AI for AMD prediction has mainly focused on the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages; there is currently no work on predicting AMD progression under the close monitoring of patients on an anti-VEGF treatment plan. Therefore, being able to reliably detect an evolution in neovascular activity by monitoring exudative signs is crucial for the correct implementation of anti-VEGF treatment strategies, which are now individualized.

Objectives

The objective of the MARIO dataset, and of the associated challenge, is to evaluate existing and new algorithms to recognize the evolution of neovascular activity in OCT scans of patients with exudative AMD, for the purpose of improving the planning of anti-VEGF treatments.

Two tasks have been proposed:

  • The first task focuses on pairs of 2D slices (B-scans) from two consecutive OCT acquisitions. The goal is to classify the evolution between these two slices (before and after), which clinicians typically examine side by side on their screens.
  • The second task also operates at the level of 2D slices. The goal is to predict the evolution within the next 3 months for closely monitored patients enrolled in an anti-VEGF treatment plan.
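For Task 1, a model consumes a (before, after) pair of B-scans from two consecutive acquisitions. A minimal sketch of how such a pair might be assembled as classifier input; the function name and channel layout are illustrative assumptions, not part of the challenge specification:

```python
import numpy as np

def make_pair_input(bscan_before, bscan_after):
    """Stack two consecutive B-scans channel-wise and normalise them jointly."""
    assert bscan_before.shape == bscan_after.shape, "B-scans must share the same size"
    pair = np.stack([bscan_before, bscan_after], axis=0)  # shape (2, H, W)
    # Joint normalisation preserves the intensity relation between the two visits,
    # which carries the evolution signal the classifier must detect.
    pair = (pair - pair.mean()) / (pair.std() + 1e-8)
    return pair.astype(np.float32)

x = make_pair_input(np.zeros((496, 512)), np.ones((496, 512)))
print(x.shape)  # (2, 496, 512)
```

Stacking the two visits as input channels mirrors how clinicians examine the slices side by side; per-pair rather than per-image normalisation is one design choice among several.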

See details on the MARIO challenge webpage: https://youvenz.github.io/MARIO_challenge.github.io/
