Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Age-related Macular Degeneration (AMD) is a progressive degeneration of the macula, the central part of the retina, affecting nearly 196 million people worldwide. It can appear from the age of 50, and more frequently from the age of 65 onwards, causing a significant weakening of visual capacities without destroying them. It is a complex and multifactorial pathology in which genetic and environmental risk factors are intertwined. Advanced stages of the disease (atrophy and neovascularization) affect nearly 20% of patients and are the leading cause of severe visual impairment and blindness in developed countries. Since their introduction in 2007, anti-vascular endothelial growth factor (anti-VEGF) treatments have proven their ability to slow disease progression and even improve visual function in neovascular forms of AMD. Their effectiveness is optimized by keeping the time between diagnosis and the start of treatment short, and by performing regular checks and retreating as soon as necessary. It is now widely accepted that the indication for anti-VEGF treatment is based on the presence of exudative signs (subretinal and intraretinal fluid, intraretinal hyperreflective spots, etc.) visible on optical coherence tomography (OCT), a 3-D imaging modality. Work on AI for AMD prediction has mainly focused on the first onset of the early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stages; there is currently no work on predicting the evolution of AMD under the close monitoring required by anti-VEGF treatment plans. Being able to reliably detect a change in neovascular activity by monitoring exudative signs is therefore crucial for the correct implementation of anti-VEGF treatment strategies, which are now individualized.
The objective of the MARIO dataset, and of the associated challenge, is to evaluate existing and new algorithms to recognize the evolution of neovascular activity in OCT scans of patients with exudative AMD, for the purpose of improving the planning of anti-VEGF treatments.
Two tasks have been proposed:
See details on the MARIO challenge webpage: https://youvenz.github.io/MARIO_challenge.github.io/
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset includes brain MRI scans of adult brain glioma patients, comprising 4 structural modalities (i.e., T1, T1c, T2, T2-FLAIR) and associated manually generated ground truth labels for each tumor sub-region (enhancement, necrosis, edema), as well as their MGMT promoter methylation status. These scans are a collection of data from existing TCIA collections, as well as cases provided by individual institutions willing to share them under a CC BY license. The BraTS dataset describes a retrospective collection of brain tumor structural mpMRI scans of 2,040 patients (1,480 here), acquired from multiple institutions under standard clinical conditions, but with different equipment and imaging protocols, resulting in a vastly heterogeneous image quality reflecting diverse clinical practice across institutions. The 4 structural mpMRI scans included in the BraTS challenge describe a) native (T1), b) post-contrast T1-weighted (T1Gd (Gadolinium)), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, acquired with different protocols and various scanners from multiple institutions. Furthermore, data on the O[6]-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is provided as a binary label. Notably, MGMT is a DNA repair enzyme; methylation of its promoter in newly diagnosed glioblastoma has been identified as a favorable prognostic factor and a predictor of chemotherapy response. The dataset is curated for computational image analysis, namely tumor segmentation and prediction of the MGMT promoter methylation status.
No license specified (https://academictorrents.com/nolicensespecified)
Brain tumor image data used in this article were obtained from the MICCAI Challenge on Multimodal Brain Tumor Segmentation. The challenge database contains fully anonymized images from the Cancer Imaging Archive. Label convention: 1 for necrosis, 2 for edema, 3 for non-enhancing tumor, 4 for enhancing tumor, 0 for everything else. There are 3 requirements for the successful upload and validation of your segmentation: (1) use the MHA filetype to store your segmentations (not MHD); use short or ushort if you experience any upload problems; (2) keep the same labels as the provided truth.mha (see above); (3) name your segmentations according to the template VSD.your_description.###.mha, replacing the ### with the ID of the corresponding FLAIR MR image. This allows the system to relate your segmentation to the correct training truth. Download an example list for the training data and testing data.
### Publications
B. H. Menze et al., "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging, 2015.
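As a minimal sketch of the segmentation file requirements above (0-4 label convention, MHA filetype, VSD naming template), assuming SimpleITK is available; the label array, description string, and FLAIR scan ID are placeholders, not data from the challenge:

```python
import numpy as np
import SimpleITK as sitk

# Hypothetical predicted label map using the challenge convention:
# 0 = background, 1 = necrosis, 2 = edema, 3 = non-enhancing tumor, 4 = enhancing tumor.
my_labels = np.zeros((155, 240, 240), dtype=np.uint16)

flair_id = 123                 # placeholder: ID of the corresponding FLAIR MR image
description = "my_method"      # placeholder: short description of your method

seg = sitk.GetImageFromArray(my_labels)          # ushort image, as recommended for upload
out_name = f"VSD.{description}.{flair_id}.mha"   # required template: VSD.your_description.###.mha
sitk.WriteImage(seg, out_name)                   # MHA filetype (not MHD)
print("wrote", out_name)
```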
{"references": ["Zhang M, Yu X, Zhang H, et al. FDA: Feature Decomposition and Aggregation for Robust Airway Segmentation[M]//Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health. Springer, Cham, 2021: 25-34.", "Zheng H, Qin Y, Gu Y, et al. Alleviating class-wise gradient imbalance for pulmonary airway segmentation[J]. IEEE Transactions on Medical Imaging, 2021, 40(9): 2452-2462.", "Yu W, Zheng H, Zhang M, et al. BREAK: Bronchi Reconstruction by gEodesic transformation And sKeleton embedding[C]//2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI). IEEE, 2022: 1-5.", "Qin Y, Chen M, Zheng H, et al. Airwaynet: a voxel-connectivity aware approach for accurate airway segmentation using convolutional neural networks[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2019: 212-220."]} [Attention]: ATM_164_0000.nii.gz has misaligned label to CT image, please discard this case! Dataset for the MICCAI-2022-Challenge: Airway Tree Modeling (ATM'22) This is the TrainBatch2. If you use this dataset in your research, you must cite the papers in the References below !!!
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
MICCAI is a dataset for object detection tasks; it contains surgical instrument annotations for 670 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This is the challenge design document for the "MICCAI Brain Tumor Segmentation (BraTS) 2020 Benchmark: 'Prediction of Survival and Pseudoprogression' ", accepted for MICCAI 2020. BraTS 2020 utilizes multi-institutional MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Compared to BraTS'17-'19, this year BraTS includes both pre-operative and post-operative scans (i.e., including surgically imposed cavities) and attempts to quantify the uncertainty of the predicted segmentations. Furthermore, to pinpoint the clinical relevance of the segmentation task, BraTS’20 also focuses on 1) the prediction of patient overall survival from pre-operative scans (Task 2) and 2) the distinction between true tumor recurrence and treatment related effects on the post-operative scans (Task 3), via integrative analyses of quantitative imaging phenomic features and machine learning algorithms. Ground truth annotations are created and approved by expert neuroradiologists for every subject included in the training, validation, and testing datasets to quantitatively evaluate the predicted tumor segmentations (Task 1). Furthermore, the quantitative evaluation of the clinically-relevant tasks (i.e., overall survival (Task 2) and distinction between tumor recurrence and treatment related effects (Task 3)), is performed according to real clinical data. Participants are free to choose whether they want to focus only on one or multiple tasks.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Computational Precision Medicine (CPM) 2018 event was held on September 16 in Granada (Spain), in conjunction with MICCAI 2018. As part of the CPM program, a series of imaging Grand Challenges were offered, hosted by Kaggle. This competition, "18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer", was organized as a Medical Image Computing and Computer Assisted Intervention (MICCAI) Computational Precision Medicine (CPM) grand challenge. Contestants were tasked to predict, using primary tumor 18F-FDG PET-derived radiomics features +/- matched clinical data, whether a tumor arising from the oropharynx will be controlled by definitive radiation treatment (RT). The head and neck radiation oncology team from the University of Texas MD Anderson Cancer Center (MDACC) curated and harmonized a multi-institutional dataset of 248 oropharynx cancer (OPC) patients, using our in-house 'LAMBDA-RAD' data management platform. Scans came from six institutions: the US (MDACC), Canada [four clinical institutions in Québec: Hôpital Général Juif de Montréal (HGJ), Centre Hospitalier Universitaire de Sherbrooke (CHUS), Centre Hospitalier de l'Université de Montréal (CHUM), and Hôpital Maisonneuve-Rosemont de Montréal (HMR)], and Europe (MAASTRO Clinic, The Netherlands). The challenge was open from June 15, 2018, 11:59 p.m. to Aug. 30, 2018, midnight UT; this repository serves as a durable FAIR (re)use repository for the challenge data. Details on the 2018 CPM Challenges can be found at: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=37224869 . The "18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer" challenge website (archived) can be viewed at: https://web.archive.org/web/20190106050801/http://miccai.cloudapp.net/competitions/77 and at: https://web.archive.org/web/20210418000112/https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=37224869 . The Kaggle-in-class host page for the challenge and results can be found at: https://www.kaggle.com/c/pet-radiomics-challenges.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This challenge will be an open-ended challenge, and we welcome your submission. Please register your team via this form: https://forms.office.com/e/dPg47ktV7M. You can submit your algorithm for the TUS-REC2024 Challenge via this form: https://forms.office.com/e/QChhNkLYiu, and we will test your submitted docker on the test set.
We are organising TUS-REC2025 at MICCAI 2025. More information is available on the TUS-REC2025 challenge website and the baseline code repository.
This is the third part of the Challenge dataset. Link to first part; Link to second part. Link to validation dataset.
For detailed information please refer to the Challenge website. Baseline code is also provided, which can be found at this repo.
Dataset structure: the dataset contains 50 .h5 files. Each file corresponds to one subject and stores the coordinates of landmarks for the 24 scans of that subject. For each scan, the coordinates are stored in a numpy array of shape [20, 3]. The first column is the index of the frame; the second and third columns denote the coordinates of the landmark in the image coordinate system.
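As a minimal sketch of reading one subject's landmark file with h5py (the file name and the per-scan key names are assumptions; the actual keys can be listed with `f.keys()`):

```python
import h5py

# Hypothetical file name for one subject; each file stores landmarks for 24 scans.
with h5py.File("landmark_000.h5", "r") as f:
    for scan_name in f.keys():        # one dataset per scan (key names depend on the release)
        coords = f[scan_name][()]     # numpy array of shape [20, 3]
        frame_idx = coords[:, 0]      # first column: frame index within the scan
        points = coords[:, 1:]        # second/third columns: landmark coordinates (image space)
        print(scan_name, coords.shape, frame_idx.min(), frame_idx.max())
```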
Data Usage Policy:
No license specified (https://academictorrents.com/nolicensespecified)
A publicly available set of training data can be downloaded for algorithmic tweaking and tuning from the Virtual Skeleton Database. The training data consists of multi-contrast MR scans of 30 glioma patients (both low-grade and high-grade, and both with and without resection) along with expert annotations for "active tumor" and "edema". For each patient, T1, T2, FLAIR, and post-Gadolinium T1 MR images are available. All volumes were linearly co-registered to the T1 contrast image, skull stripped, and interpolated to 1 mm isotropic resolution. No attempt was made to put the individual patients in a common reference space. The MR scans, as well as the corresponding reference segmentations, are distributed in the ITK- and VTK-compatible MetaIO file format. Patients with high- and low-grade gliomas have file names "BRATS_HG" and "BRATS_LG", respectively. All images are stored as signed 16-bit integers, but only positive values are used. The manual segmentations…
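For illustration, a short sketch of loading one of the MetaIO volumes described above with SimpleITK (the file path follows the "BRATS_HG"/"BRATS_LG" naming convention but is a placeholder, not an actual path from the release):

```python
import SimpleITK as sitk

# Placeholder path following the BRATS_HG / BRATS_LG naming convention.
image = sitk.ReadImage("BRATS_HG0001/BRATS_HG0001_T1.mha")   # MetaIO volume
array = sitk.GetArrayFromImage(image)                        # numpy array in z-y-x order

print(array.dtype, array.shape)   # signed 16-bit integers on a 1 mm isotropic grid
print(array.min(), array.max())   # only positive values are used, per the description
```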
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
liver (131/70)
Under Institutional Review Board (IRB) supervision, 50 abdominal CT scans were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. The 50 scans were captured during the portal venous contrast phase with variable volume sizes (512 x 512 x 85 to 512 x 512 x 198) and fields of view (approx. 280 x 280 x 280 mm3 to 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm. The standard registration data were generated by NiftyReg.
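For context, a rough sketch of how a pairwise affine registration could be run with the NiftyReg command-line tool mentioned above, driven from Python. This is not the challenge's actual registration pipeline; the file names are placeholders, and the flags are from memory of the NiftyReg CLI and should be checked against `reg_aladin -h`:

```python
import subprocess

# Placeholder file names for a fixed/moving pair of abdominal CT scans.
cmd = [
    "reg_aladin",                     # NiftyReg affine registration tool (assumed to be on PATH)
    "-ref", "scan_01.nii.gz",         # reference (fixed) image
    "-flo", "scan_02.nii.gz",         # floating (moving) image
    "-res", "scan_02_to_01.nii.gz",   # resampled floating image
    "-aff", "affine_02_to_01.txt",    # output affine transform
]
subprocess.run(cmd, check=True)
```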
https://creativecommons.org/publicdomain/zero/1.0/
RSNA-MICCAI Brain Tumor Radiogenomic Classification Competition
Thanks to everyone!
MICCAI'2015 Gland Segmentation Challenge Contest Dataset
Welcome to the challenge on gland segmentation in histology images. This challenge was held in conjunction with MICCAI 2015, Munich, Germany.
Objective of the Challenge
We aim to bring together researchers who are interested in the gland segmentation problem, to validate the performance of their existing or newly invented algorithms on the same standard dataset. In this challenge, we will provide the participants with images of Haematoxylin and Eosin (H&E) stained slides covering a wide range of histologic grades.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This package contains supplementary material for our article, prepared for publication and currently under revision. It contains results omitted from the article due to space limits, as well as detailed per-patient and per-team results for all metrics. Additional figures redundant with those of the article are also provided.
The readme file Readme_SupplementalMaterial.txt provides details about each individual file content.
This dataset was created by Liam Nguyen
Transformed PNG images to NPY format
This dataset contains the "DICOM data" of the training dataset of the RSNA-MICCAI Brain Tumor Radiogenomic Classification challenge in NPY format. It is a bit bigger than the original.
U.Baid, et al., “The RSNA-ASNR-MICCAI BraTS 2021 Benchmark on Brain Tumor Segmentation and Radiogenomic Classification”, arXiv:2107.02314, 2021.
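As a rough sketch of the PNG-to-NPY conversion described above (the folder and file names are placeholders; the original conversion pipeline is not specified, so this only illustrates the format change, assuming Pillow and NumPy are available):

```python
from pathlib import Path

import numpy as np
from PIL import Image

src_dir = Path("train_png")   # placeholder: folder of PNG slices
dst_dir = Path("train_npy")   # placeholder: output folder for NPY arrays
dst_dir.mkdir(exist_ok=True)

for png_path in sorted(src_dir.glob("*.png")):
    arr = np.asarray(Image.open(png_path))             # load the PNG slice as a numpy array
    np.save(dst_dir / (png_path.stem + ".npy"), arr)   # .npy adds a header, hence the slightly larger size
```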
Domain Adaptation (DA) has recently raised strong interest in the medical imaging community. By encouraging algorithms to be robust to unseen situations or different input data domains, Domain Adaptation improves the applicability of machine learning approaches to various clinical settings. While a large variety of DA techniques has been proposed, most of these techniques have been validated either on private datasets [4,5] or on small publicly available datasets [6,7,8,9]. Moreover, these datasets mostly address single-class problems. To tackle these limitations, the crossMoDA challenge introduced the first large and multi-class dataset for unsupervised cross-modality Domain Adaptation. Compared to the previous crossMoDA instance, which made use of single-institution data and featured a single segmentation task, the 2022 edition extends the segmentation task by including multi-institutional data and introduces a new classification task.
The goal of the segmentation task (Task 1) is to segment two key brain structures (tumour and cochlea) involved in the follow-up and treatment planning of vestibular schwannoma (VS). The segmentation of these two structures is required for radiosurgery, a common VS treatment. Moreover, tumour volume measurement has also been shown to be the most accurate measurement for the evaluation of VS growth. Currently, the diagnosis and surveillance of patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging, as it mitigates the risks associated with gadolinium-containing contrast agents [1,2]. In addition to improving patient safety, hrT2 imaging is 10 times more cost-efficient than ceT1 imaging [17]. For this reason, we proposed an unsupervised cross-modality segmentation benchmark (from ceT1 to hrT2) that aims to automatically perform VS and cochlea segmentation on hrT2 scans. The training source and target sets are respectively unpaired annotated ceT1 and non-annotated hrT2 scans from both pre-operative and post-operative time points. To validate the robustness of the proposed approaches in different hrT2 settings, multi-institutional scans from centres in London, UK and Tilburg, NL are used in this task.
The goal of the classification task (Task 2) is to automatically classify hrT2 images with VS according to the Koos grade [14]. The Koos grading scale is a classification system for VS that characterises the tumour and its impact on adjacent brain structures (e.g., brain stem, cerebellum). The Koos classification is commonly determined to decide on the treatment plan (surveillance, radiosurgery, open surgery). Similarly to VS segmentation, Koos grading is currently performed on ceT1 scans, but hrT2 could be used instead. For this reason, we propose an unsupervised cross-modality classification benchmark (from ceT1 to hrT2) that aims to automatically determine the Koos grade on hrT2 scans. Only pre-operative data is used for this task. Again, multi-institutional scans from centres in London, UK and Tilburg, NL are used in this task. Participants are free to choose whether they want to focus only on one or both tasks and may use the data from one task for the other task.
References
[1] Shapey, J., et al.: An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI. Journal of Neurosurgery JNS (2019).
[2] Wang, G., et al.: Automatic segmentation of vestibular schwannoma from T2-weighted MRI by deep spatial attention with hardness-weighted loss. In: MICCAI 2019 (2019).
[3] Van de Langenberg, R., et al.: Follow-up assessment of vestibular schwannomas: volume quantification versus two-dimensional measurements. Neuroradiology 51, 517 (2009).
[4] Kamnitsas, K., et al.: Unsupervised domain adaptation in brain lesion segmentation with adversarial networks. In: IPMI (2017).
[5] Yang, J., et al.: Unsupervised Domain Adaptation via Disentangled Representations: Application to Cross-Modality Liver Segmentation. In: MICCAI 2019 (2019).
[6] Chen, C., et al.: Synergistic Image and Feature Adaptation: Towards Cross-Modality Domain Adaptation for Medical Image Segmentation. In: AAAI-19 (2019).
[7] Dou, Q., et al.: PnP-AdaNet: Plug-and-play adversarial domain adaptation network with a benchmark at cross-modality cardiac segmentation. arXiv (2018).
[8] Orbes-Arteaga, et al.: Multi-domain adaptation in brain MRI through paired consistency and adversarial learning. In: DART - Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data (2019).
[9] Xue, Y., et al.: Dual-task Self-supervision for Cross-Modality Domain Adaptation. In: MICCAI 2020 (2020).
[10] Maier-Hein, L., Eisenmann, M., et al.: Is the winner really the best? A critical analysis of common research practice in biomedical image analysis competitions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
MICCAI 2012 dataset
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset was created by Bard2024
Released under MIT