Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Info: This is version 2 of the TotalSegmentator dataset.
In 1228 CT images we segmented 117 anatomical structures, covering a majority of the relevant classes for most use cases. The CT images were randomly sampled from clinical routine and thus represent a real-world dataset that generalizes to clinical application. The dataset contains a wide range of pathologies, scanners, sequences and institutions.
Link to a copy of this dataset on Dropbox for much quicker download: Dropbox Link
Overview of differences to v1 of this dataset: here
A small subset of this dataset with only 102 subjects for quick download+exploration can be found here: here
You can find a segmentation model trained on this dataset here.
More details about the dataset can be found in the corresponding paper (the paper describes v1 of the dataset). Please cite this paper if you use the dataset.
This dataset was created by the department of Research and Analysis at University Hospital Basel.
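For quick exploration, the CT volumes and segmentation masks are plain NIfTI files and can be read with nibabel. The subject and structure file names below are only assumptions for illustration; check the layout documented with the download.

import nibabel as nib
import numpy as np

# Hypothetical paths for one subject; adjust to the layout documented with the dataset.
ct = nib.load("s0001/ct.nii.gz")
mask = nib.load("s0001/segmentations/liver.nii.gz")

ct_data = ct.get_fdata()                     # CT intensities
liver = mask.get_fdata() > 0                 # binary mask for one of the 117 structures
print(ct_data.shape, ct.header.get_zooms())  # volume size and voxel spacing
print("structure voxels:", int(np.count_nonzero(liver)))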
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
GrazPedWri Full Seg Corrected 2 is a dataset for instance segmentation tasks - it contains Fracture 2l7r annotations for 8,930 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
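As a sketch of the Roboflow workflow (the API key, workspace slug, project slug and version number below are placeholders, not values taken from this dataset page), the export can be fetched with the roboflow Python package:

from roboflow import Roboflow

# Placeholders: substitute your own API key and the workspace/project/version shown on the dataset page.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("grazpedwri-full-seg-corrected-2")
dataset = project.version(1).download("coco")  # downloads images and instance-segmentation annotations locally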
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
There are some dataset errors; please see the discussion: https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/436096
TotalSegmentator [1] was applied to the RSNA 2023 Abdominal Trauma Detection dataset [2]. The command used is based on a public notebook [3]:
!TotalSegmentator \
-i /kaggle/input/rsna-2023-abdominal-trauma-detection/train_images/10104/27573 \
-o /kaggle/temp/masks \
-ot 'nifti' \
-rs spleen kidney_left kidney_right liver esophagus colon duodenum small_bowel stomach
[1] https://github.com/wasserth/TotalSegmentator
[2] https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection
[3] https://www.kaggle.com/code/enriquezaf/totalsegmentator-offline
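With the -rs option, TotalSegmentator typically writes one NIfTI mask per requested structure into the output folder (e.g. liver.nii.gz). Assuming that naming, a rough sketch for merging the masks into a single multi-organ label map:

import nibabel as nib
import numpy as np

organs = ["spleen", "kidney_left", "kidney_right", "liver", "esophagus",
          "colon", "duodenum", "small_bowel", "stomach"]
label_map, affine = None, None
for i, organ in enumerate(organs, start=1):
    nii = nib.load(f"/kaggle/temp/masks/{organ}.nii.gz")  # assumed per-structure output naming
    mask = nii.get_fdata() > 0
    if label_map is None:
        label_map = np.zeros(mask.shape, dtype=np.uint8)
        affine = nii.affine
    label_map[mask] = i  # organ i gets label value i
nib.save(nib.Nifti1Image(label_map, affine), "multi_organ_labels.nii.gz")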
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
The dataset comprises 3,100 images from 775 individuals, featuring male alopecia cases captured from two angles (front and top views) with corresponding segmentation masks. It is designed for training algorithms to detect hair disorders, evaluate hair restoration techniques, and support early diagnosis of alopecia.
| Characteristic | Data |
|---|---|
| Description | Photos of men with varying degrees of hair loss for segmentation tasks |
| Data types | Image |
| Tasks | Classification, Machine Learning |
| Number of images | 3,100 |
| Number of files in a set | 4 images per person (image from the top + mask, image from front + mask) |
| Total number of people | 775 |
| Labeling | Metadata (gender, age, ethnicity) |
| Age | Min = 18, max = 80, mean = 45 |
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The RibFrac dataset is a benchmark for developing algorithms for rib fracture detection, segmentation and classification. We hope this large-scale dataset can facilitate both clinical research on automatic rib fracture detection and diagnosis, and engineering research on 3D detection, segmentation and classification.
Due to the size limit of zenodo.org, the whole RibFrac Training Set is split into 2 parts. This is Training Set Part 2 of the RibFrac dataset, including 120 CTs and the corresponding annotations. Files include:
ribfrac-train-images-2.zip: 120 chest-abdomen CTs in NII format (nii.gz).
ribfrac-train-labels-2.zip: 120 annotations in NII format (nii.gz).
ribfrac-train-info-2.csv: metadata for the labels in the annotation NIIs, with the following columns:
public_id: anonymous patient ID to match images and annotations.
label_id: discrete label value in the NII annotations.
label_code: one of 0, 1, 2, 3, 4, -1 (a loading sketch follows this list)
0: background
1: displaced rib fracture
2: non-displaced rib fracture
3: buckle rib fracture
4: segmental rib fracture
-1: a rib fracture whose type could not be determined due to ambiguity, diagnostic difficulty, etc.; ignore it in the classification task.
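A minimal loading sketch, assuming pandas/nibabel and that each annotation file is named after its public_id (the exact file naming inside ribfrac-train-labels-2.zip should be checked):

import nibabel as nib
import numpy as np
import pandas as pd

info = pd.read_csv("ribfrac-train-info-2.csv")

# Hypothetical example case; actual file names follow the public_id column.
public_id = "RibFrac421"
seg = nib.load(f"{public_id}-label.nii.gz").get_fdata().astype(int)

rows = info[info["public_id"] == public_id]
id_to_code = dict(zip(rows["label_id"], rows["label_code"]))

# Example: binary mask of all displaced rib fractures (label_code == 1) in this case.
displaced_ids = [lid for lid, code in id_to_code.items() if code == 1]
displaced_mask = np.isin(seg, displaced_ids)
print("displaced-fracture voxels:", int(displaced_mask.sum()))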
If you find this work useful in your research, please acknowledge the RibFrac project teams in the paper and cite this project as:
Liang Jin, Jiancheng Yang, Kaiming Kuang, Bingbing Ni, Yiyi Gao, Yingli Sun, Pan Gao, Weiling Ma, Mingyu Tan, Hui Kang, Jiajun Chen, Ming Li. Deep-Learning-Assisted Detection and Segmentation of Rib Fractures from CT Scans: Development and Validation of FracNet. EBioMedicine (2020). (DOI)
or using bibtex
@article{ribfrac2020,
  title={Deep-Learning-Assisted Detection and Segmentation of Rib Fractures from CT Scans: Development and Validation of FracNet},
  author={Jin, Liang and Yang, Jiancheng and Kuang, Kaiming and Ni, Bingbing and Gao, Yiyi and Sun, Yingli and Gao, Pan and Ma, Weiling and Tan, Mingyu and Kang, Hui and Chen, Jiajun and Li, Ming},
  journal={EBioMedicine},
  year={2020},
  publisher={Elsevier}
}
The RibFrac dataset represents thousands of hours of work by experienced radiologists, computer scientists and engineers. We kindly ask you to respect this effort through appropriate citation and by keeping to the data license.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
Humans in the Loop is excited to publish a new open access dataset for Teeth segmentation on dental radiology scans. The segmentation was done manually by 12 Humans in the Loop trainees in the Democratic Republic of Congo as part of their training, using the panoramic radiography database published by Lopez et al. The dataset consists of 598 images with a total of 15,318 polygons, where each tooth is segmented with a different class.
This Teeth segmentation dataset is dedicated to the public domain by Humans in the Loop under the CC0 1.0 license.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The PENGWIN segmentation challenge is designed to advance the development of automated pelvic fracture segmentation techniques in both 3D CT scans (Task 1) and 2D X-ray images (Task 2), aiming to enhance their accuracy and robustness. The full 3D dataset comprises CT scans from 150 patients scheduled for pelvic reduction surgery, collected from multiple institutions using a variety of scanning devices. This dataset represents a diverse range of patient cohorts and fracture types. Ground-truth segmentations for sacrum and hipbone fragments have been semi-automatically annotated and subsequently validated by medical experts, and are available here. From this 3D data, we have generated high-quality, realistic X-ray images and corresponding 2D labels from the CT data using DeepDRR, incorporating a range of virtual C-arm camera positions and surgical tools. This dataset contains the training set for fragment segmentation in synthetic X-ray images (Task 2).
The training set is derived from 100 CTs, with 500 images each, for a total of 50,000 training images and segmentations. The C-arm geometry is randomly sampled for each CT within reasonable parameters for a full-size C-arm. The virtual patient is assumed to be in a head-first supine position. Imaging centers are randomly sampled within 50 mm of a fragment, ensuring good visibility. Viewing directions are sampled uniformly on the sphere within 45 degrees of vertical. Half of the images (IDs XXX_0250 - XXX_0500) contain up to 10 simulated K-wires and/or orthopaedic screws oriented randomly in the field of view.
The input images are raw intensity images without any windowing or normalization applied. It is standard practice to first apply the negative log transformation and then window each image appropriately before feeding it into a model. See the included augmentation pipeline in pengwin_utils.py for one approach. For viewing raw images, the FIJI image viewer is a viable option, but it is recommended to use the included visualization functions in pengwin_utils.py to first apply CLAHE normalization and save to a universally readable PNG (see the example usage below).
Because X-ray images feature overlapping segmentation masks, the segmentations have been encoded as multi-label uint32 images, where each pixel should be treated as a binary vector with bits 1 - 10 for SA fragments, 11 - 20 for LI, and 21 - 30 for RI. Thus, the raw segmentation files are not viewable with standard image viewing software. pengwin_utils.py includes functions for converting to and from this format and for visualizing masks overlaid onto the original image (see below).
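To make the encoding concrete, here is a small NumPy illustration of the bit layout described above. It assumes that "bit 1" denotes the least-significant bit; pengwin_utils.py provides the authoritative conversion functions, so treat this only as a sketch:

import numpy as np

# seg: a uint32 label image as described above (placeholder array here; in practice it
# comes from the reader used inside pengwin_utils.load_masks).
seg = np.zeros((448, 448), dtype=np.uint32)

def fragment_mask(seg, bit):
    # Binary mask for one fragment, assuming bit 1 = least-significant bit.
    return (seg >> (bit - 1)) & 1

sa_fragments = [fragment_mask(seg, b) for b in range(1, 11)]   # bits 1-10: SA fragments
li_fragments = [fragment_mask(seg, b) for b in range(11, 21)]  # bits 11-20: LI fragments
ri_fragments = [fragment_mask(seg, b) for b in range(21, 31)]  # bits 21-30: RI fragments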
To use the utilities, first install dependencies with pip install -r requirement.txt. Then, to visualize an image with its segmentation, you can do the following (assuming the training set has been downloaded and unzipped in the same folder):
import pengwin_utils
from PIL import Image

image_path = "train/input/images/x-ray/001_0000.tif"
seg_path = "train/output/images/x-ray/001_0000.tif"

image = pengwin_utils.load_image(image_path)  # raw intensity image
masks, category_ids, fragment_ids = pengwin_utils.load_masks(seg_path)

vis_image = pengwin_utils.visualize_sample(image, masks, category_ids, fragment_ids)
vis_path = "vis_image.png"
Image.fromarray(vis_image).save(vis_path)
print(f"Wrote visualization to {vis_path}")

pred_masks, pred_category_ids, pred_fragment_ids = masks, category_ids, fragment_ids  # replace with your model

pred_seg = pengwin_utils.masks_to_seg(pred_masks, pred_category_ids, pred_fragment_ids)
pred_seg_path = "pred/train/output/images/x-ray/001_0000.tif"  # ensure the directory exists!
Image.fromarray(pred_seg).save(pred_seg_path)
print(f"Wrote segmentation to {pred_seg_path}")
The pengwin_utils.Dataset class is provided as an example of a PyTorch dataset, with strong domain randomization included to facilitate sim-to-real performance, but it is recommended to write your own as needed.
This dataset contains 2D image slices extracted from the publicly available Pancreas-CT-SEG dataset, which provides manually segmented pancreas annotations for contrast-enhanced 3D abdominal CT scans. The original dataset was curated by the National Institutes of Health Clinical Center (NIH) and was made available through the NCI Imaging Data Commons (IDC). The dataset consists of 82 CT scans from 53 male and 27 female subjects, converted into 2D slices for segmentation tasks.
Dataset Details:
Modality: Contrast-enhanced CT (portal-venous phase, ~70s post-injection)
Number of Subjects: 82
Age Range: 18 to 76 years (Mean: 46.8 ± 16.7 years)
Scan Resolution: 512 × 512 pixels per slice
Slice Thickness: Varies between 1.5 mm and 2.5 mm
Scanners Used: Philips and Siemens MDCT scanners (120 kVp tube voltage)
Segmentation: Manually performed by a medical student and verified by an expert radiologist
Data Format: Converted from 3D DICOM/NIfTI to 2D PNG/JPEG slices for segmentation tasks
Total Dataset Size: ~1.85 GB
Category: Non-cancerous healthy controls (No pancreatic cancer lesions or major abdominal pathologies)
Preprocessing and Conversion:
The original 3D CT scans and corresponding pancreas segmentation masks (available in NIfTI format) were converted into 2D slices to facilitate 2D medical image segmentation tasks. The conversion steps include (a minimal sketch follows the list):
Extracting axial slices from each 3D CT scan.
Normalizing pixel intensities for consistency.
Saving images in PNG/JPEG format for compatibility with deep learning frameworks.
Generating corresponding binary segmentation masks where the pancreas region is labeled.
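A minimal sketch of such a conversion, assuming nibabel/Pillow and hypothetical file names (the exact normalization used for this dataset may differ):

import os
import nibabel as nib
import numpy as np
from PIL import Image

os.makedirs("images", exist_ok=True)
os.makedirs("masks", exist_ok=True)

# Hypothetical input file names; the original volumes and pancreas labels are NIfTI files.
ct = nib.load("pancreas_ct.nii.gz").get_fdata()
label = nib.load("pancreas_label.nii.gz").get_fdata()

for z in range(ct.shape[2]):                              # iterate over axial slices
    sl = ct[:, :, z].astype(np.float32)
    sl = (sl - sl.min()) / (sl.max() - sl.min() + 1e-8)   # simple min-max intensity normalization
    Image.fromarray((sl * 255).astype(np.uint8)).save(f"images/slice_{z:04d}.png")
    mask = (label[:, :, z] > 0).astype(np.uint8) * 255    # binary pancreas mask
    Image.fromarray(mask).save(f"masks/slice_{z:04d}.png")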
Dataset Structure:
Applications
This dataset is ideal for medical image segmentation tasks such as:
Deep learning-based pancreas segmentation (e.g., using U-Net, DeepLabV3+)
Automated organ detection and localization
AI-assisted diagnosis and analysis of abdominal CT scans
Acknowledgments & References
This dataset is derived from:
National Cancer Institute Imaging Data Commons (IDC) [1]
The Cancer Imaging Archive (TCIA) [2]
Original dataset DOI: https://doi.org/10.7937/K9/TCIA.2016.tNB1kqBU
Citations: If you use this dataset, please cite the following:
Roth, H., Farag, A., Turkbey, E. B., Lu, L., Liu, J., & Summers, R. M. (2016). Data From Pancreas-CT (Version 2). The Cancer Imaging Archive. DOI: 10.7937/K9/TCIA.2016.tNB1kqBU
Fedorov, A., Longabaugh, W. J. R., Pot, D., Clunie, D. A., Pieper, S. D., Gibbs, D. L., et al. (2023). National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. Radiographics 43.
License: This dataset is provided under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license. Users must abide by the TCIA Data Usage Policy and Restrictions.
Additional Resources: Imaging Data Commons (IDC) Portal: https://portal.imaging.datacommons.cancer.gov/explore/
OHIF DICOM Viewer: https://viewer.ohif.org/
This dataset provides a high-quality, well-annotated resource for researchers and developers working on medical image analysis, segmentation, and AI-based pancreas detection.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This composite dataset of mid-sagittal lumbar spine views comprises lumbar spine images, duly labelled/annotated ground-truth images, and the corresponding spinal measurements. The purpose of creating this dataset was to establish a strong correlation between the images and clinically relevant spinal measurements. Presently, these measurements are taken either completely manually or with computer-assisted tools. The spinal measurements are clinically significant for a spinal surgeon before suggesting or shortlisting a suitable surgical intervention procedure. Traditionally, the spinal surgeon evaluates the condition of the patient before the surgical procedure in order to ascertain the usefulness of the adopted procedure; this also helps the surgeon establish the effectiveness of the adopted procedure. For example, in the case of a spinal fusion procedure, whether fusion will be able to restore spinal balance is a question answered by making the relevant spinal measurements, including the lumbar lordotic curve angle (both segmental and for the whole lumbar spine), the lumbosacral angle, spinal heights, dimensions of the vertebral bodies, etc.
The composite dataset was acquired in the following steps:
1. Exporting the mid-sagittal view from the MRI dataset. (Originally taken from Sudirman, Sud; Al Kafri, Ala; Natalia, Friska; Meidia, Hira; Afriliana, Nunik; Al-Rashdan, Wasfi; Bashtawi, Mohammad; Al-Jumaily, Mohammed (2019), "Label Image Ground Truth Data for Lumbar Spine MRI Dataset", Mendeley Data, V2, doi: 10.17632/zbf6b4pttk.2.) The original dataset comprises axial views with annotations; however, to determine the efficacy of spinal deformities and analyze spinal balance, sagittal views are used instead.
2. Manual labelling of the lumbar vertebral bodies from L1 to L5 and the first sacral bone. A total of 6 regions were labelled in consultation with expert radiologists, followed by validation by an expert spinal surgeon.
3. Performing fully automatic spinal measurements, including vertebral body identification and labelling, lumbar height, lumbosacral angle, lumbar lordotic angle, estimation of the spinal curve, intervertebral body dimensions, and vertebral body dimensions. All angular measurements are in degrees, whereas distance measurements are in millimeters.
A total of 514 images and annotations with spinal measurements can be downloaded; we request that you please cite our work in your research.
https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
The study included 96 consecutive treatment-naïve patients with intracranial meningiomas treated with surgical resection from 2010 to 2019. All patients had pre-operative T1, T1-CE, and T2-FLAIR MR images with subsequent subtotal or gross total resection of pathologically confirmed grade I or grade II meningiomas. A neuropathology team, including two subspecialty-trained neuropathologists and one neuropathology fellow, reviewed the histopathology. The meningioma grade was confirmed based on current classification guidelines, most recently described in the 2016 WHO Bluebook. Clinical information includes grade, subtype, type of surgery, tumor location, and atypical features. Meningioma labels on T1-CE and T2-FLAIR images will also be provided in DICOM format. The contrast-enhancing tumor on T1-CE and the hyperintense tumor on T2-FLAIR were manually contoured on each MRI and reviewed by a central nervous system radiation oncology specialist.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset with examples of Artefacts in Digital Pathology.
The dataset contains 22 whole-slide images, with H&E or IHC staining, showing various types and levels of defects on the slides. Annotations were made by a biomedical engineer based on examples given by an expert.
The dataset is split into different folders:
train
18 whole-slide images (extracted at 1.25x & 2.5x magnification)
All from the same Block (colorectal cancer tissue)
1/2 with H&E & 1/2 with anti-pan-cytokeratin IHC staining.
validation
3 whole-slide images (1.25x + 2.5x mag)
2 from the same Block as the training set (1 IHC, 1 H&E)
1 from another Block (anti-pan-cytokeratin IHC, gastroesophageal junction lesion)
validation_tiles
patches of varying sizes taken from the 3 validation whole-slide images @1.25x magnification.
7 patches from each slide.
test
1 whole-slide image (1.25x + 2.5x mag)
From another block: IHC staining (anti-NR2F2), mouth cancer
For the train, validation and test whole-slide images, each slide has:
- The RGB images @1.25x & 2.5x magnification
- The corresponding background/tissue masks
- The corresponding annotation masks containing examples of artefacts (note that a majority of artefacts are not annotated; in total, 918 artefacts are in the train set)
For the validation tiles, the following table gives the "patch-level" supervision:
| tile # | Artefact(s) |
|---|---|
| 00 | None/Few |
| 01 | Tear&Fold |
| 02 | Ink |
| 03 | None/Few |
| 04 | None/Few |
| 05 | Tear&Fold |
| 06 | Tear&Fold + Blur |
| 07 | Knife damage |
| 08 | Knife damage |
| 09 | Ink |
| 10 | None/Few |
| 11 | Tear&Fold |
| 12 | Tear&Fold |
| 13 | None/Few |
| 14 | None/Few |
| 15 | Knife damage |
| 16 | Tear&Fold |
| 17 | None/Few |
| 18 | None/Few |
| 19 | Blur |
| 20 | Knife damage |
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The dataset comprises 38 chemically stained whole-slide image samples along with their corresponding ground truth, annotated by histopathologists for 12 classes indicating skin layers (Epidermis, Reticular Dermis, Papillary Dermis, Dermis, Keratin), skin tissues (Inflammation, Hair Follicles, Glands), skin cancer (Basal Cell Carcinoma, Squamous Cell Carcinoma, Intraepidermal Carcinoma) and background (BKG).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
DSC, HD, and MSD performance evaluation of the total model for each patient.
CC0 1.0: https://creativecommons.org/publicdomain/zero/1.0/
BraTS has always focused on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2020 utilizes multi-institutional pre-operative MRI scans and primarily focuses on the segmentation (Task 1) of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS'20 also focuses on the prediction of patient overall survival (Task 2) and the distinction between pseudoprogression and true tumor recurrence (Task 3), via integrative analyses of radiomic features and machine learning algorithms. Finally, BraTS'20 intends to evaluate the algorithmic uncertainty in tumor segmentation (Task 4).
In this year's challenge, 4 reference standards are used for the 4 tasks of the challenge:
1. Manual segmentation labels of tumor sub-regions,
2. Clinical data of overall survival,
3. Clinical evaluation of progression status,
4. Uncertainty estimation for the predicted tumor sub-regions.
All BraTS multimodal scans are available as NIfTI files (.nii.gz) and describe a) native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, and were acquired with different clinical protocols and various scanners from multiple (n=19) institutions, mentioned as data contributors here.
All the imaging datasets have been segmented manually, by one to four raters, following the same annotation protocol, and their annotations were approved by experienced neuro-radiologists. Annotations comprise the GD-enhancing tumor (ET — label 4), the peritumoral edema (ED — label 2), and the necrotic and non-enhancing tumor core (NCR/NET — label 1), as described both in the BraTS 2012-2013 TMI paper and in the latest BraTS summarizing paper. The provided data are distributed after their pre-processing, i.e., co-registered to the same anatomical template, interpolated to the same resolution (1 mm^3) and skull-stripped.
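As an illustration of this label convention (the file name below is a placeholder, not part of the official distribution), per-label binary masks can be pulled out of a segmentation volume with nibabel:

import nibabel as nib

seg = nib.load("BraTS20_Training_001_seg.nii.gz").get_fdata().astype(int)  # placeholder file name
et = seg == 4            # GD-enhancing tumor (ET)
ed = seg == 2            # peritumoral edema (ED)
ncr_net = seg == 1       # necrotic and non-enhancing tumor core (NCR/NET)
whole = et | ed | ncr_net  # union of all annotated sub-regions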
Participants are allowed to use additional public and/or private data (from their own institutions) for data augmentation only if they also report results using only the BraTS'20 data and discuss any potential differences in their papers and results. This is intended to ensure a fair comparison among the participating methods.
**You are free to use and/or refer to the BraTS datasets in your own research, provided that you always cite the following three manuscripts:**
[1] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: 10.1109/TMI.2014.2377694
[2] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features", Nature Scientific Data, 4:170117 (2017) DOI: 10.1038/sdata.2017.117
[3] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint arXiv:1811.02629 (2018)
**In addition, if there are no restrictions imposed by the journal/conference you submit your paper to regarding "Data Citations", please be specific and also cite the following:**
[4] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q
[5] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF
Lipid nanoparticles (LNPs) were prepared as described (https://doi.org/10.1038/s42003-021-02441-2) using the lipids DLin-KC2-DMA, DSPC, cholesterol, and PEG-DMG2000 at mol ratios of 50:10:38.5:1.5. Four sample types were prepared: LNPs in the presence and absence of RNA, and with LNPs ejected into pH 4 and pH 7.4 buffer after microfluidic assembly.
To prepare samples for imaging, 3 µL of LNP formulation was applied to holey carbon grids (Quantifoil, R3.5/1, 200 mesh copper). Grids were then incubated for 30 s at 298 K and 100% humidity before blotting and plunge-freezing into liquid ethane using a Vitrobot Mark IV (Thermo Fisher Scientific). Grids were imaged at 200 kV using a Talos Arctica system equipped with a Falcon 3EC detector (Thermo Fisher Scientific). A nominal magnification of 45,000x was used, corresponding to images with a pixel count of 4096x4096 and a calibrated pixel spacing of 0.223 nm. Micrographs were collected as dose-fractionated "movies" at nominal defocus values between -1 and -3 µm, with 10 s total exposures consisting of 66 frames and a total electron dose of 12,000 electrons per square nanometer.
Movies were motion-corrected using MotionCor2 (https://doi.org/10.1038/nmeth.4193), resulting in flattened micrographs suitable for downstream particle segmentation. A total of 38 images were manually segmented into particle and non-particle regions. Segmentation masks and their corresponding images are deposited in this data set.
This record contains raw data related to the article "Deep learning and atlas-based models to streamline the segmentation workflow of Total Marrow and Lymphoid Irradiation".
Abstract:
Purpose: To improve the workflow of Total Marrow and Lymphoid Irradiation (TMLI) by enhancing the delineation of organs-at-risk (OARs) and clinical target volume (CTV) using deep learning (DL) and atlas-based (AB) segmentation models.
Materials and Methods: Ninety-five TMLI plans optimized in our institute were analyzed. Two commercial DL software packages were tested for segmenting 18 OARs. An AB model for lymph node CTV (CTV_LN) delineation was built using 20 TMLI patients. The AB model was evaluated on 20 independent patients, and a semi-automatic approach was tested by correcting the automatic contours. The generated OAR and CTV_LN contours were compared to manual contours in terms of topological agreement, dose statistics, and time workload. A clinical decision tree was developed to define a specific contouring strategy for each OAR.
Results: The two DL models achieved a median Dice Similarity Coefficient (DSC) of 0.84 [0.73;0.92] and 0.84 [0.77;0.93] across the OARs. The absolute median dose (Dmedian) difference between manual and the two DL models was 2% [1%;5%] and 1% [0.2%;1%]. The AB model achieved a median DSC of 0.70 [0.66;0.74] for CTV_LN delineation, increasing to 0.94 [0.94;0.95] after manual revision, with minimal Dmedian differences. Since September 2022, our institution has implemented DL and AB models for all TMLI patients, reducing the time required to complete the entire segmentation process from 5 to 2 hours.
Conclusion: DL models can streamline the TMLI contouring process of OARs. Manual revision is still necessary for lymph node delineation using AB models.
Statements & Declarations
Funding: This work was funded by the Italian Ministry of Health, grant AuToMI (GR-2019-12370739).
Competing Interests: The authors have no conflict of interests to disclose.
Author Contributions: All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by D.D., N.L., L.C., R.C.B., D.L., and P.M. The first draft of the manuscript was written by D.D. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.
Ethics approval: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of IRCCS Humanitas Research Hospital (ID 2928, 26 January 2021). ClinicalTrials.gov identifier: NCT04976205.
Consent to participate: Informed consent was obtained from all individual participants included in the study.
Anatomical segmentation of brain scans is highly relevant for diagnostics and neuroradiology research. Conventionally, segmentation is performed on T1-weighted MRI scans, due to the strong soft-tissue contrast. In this work, we report on a comparative study of automated, learning-based brain segmentation on various other contrasts of MRI and also computed tomography (CT) scans and investigate the anatomical soft-tissue information contained in these imaging modalities. A large database of in total 853 MRI/CT brain scans enables us to train convolutional neural networks (CNNs) for segmentation. We benchmark the CNN performance on four different imaging modalities and 27 anatomical substructures. For each modality we train a separate CNN based on a common architecture. We find average Dice scores of 86.7 ± 4.1% (T1-weighted MRI), 81.9 ± 6.7% (fluid-attenuated inversion recovery MRI), 80.8 ± 6.6% (diffusion-weighted MRI) and 80.7 ± 8.2% (CT), respectively. The performance is assessed relative to labels obtained using the widely-adopted FreeSurfer software package. The segmentation pipeline uses dropout sampling to identify corrupted input scans or low-quality segmentations. Full segmentation of 3D volumes with more than 2 million voxels requires <1s of processing time on a graphical processing unit.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of MRI sequence parameters for manual segmentation dataset (Dataset A).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides intermonthly mapping of land cover changes from the period 2017 to 2021 for the region of Graz, Austria, and the coastal region of Portorož, Izola, and Koper in Slovenia.
In the Graz region, images were procured within the WGS84 bounding box defined by the coordinates [15.390816°, 46.942176°, 15.515785°, 47.015961°], accounting for a total of 40 images. The region of Portorož, Izola, and Koper in Slovenia, contained within the WGS84 bounding box [13.590260°, 45.506948°, 13.744411°, 45.554449°], yielded a total of 41 images. All images obtained maintain minimal cloud coverage and have a spatial resolution of 10 meters.
This dataset comprises raw Sentinel-2 images in numpy format in conjunction with True Color (RGB) images in PNG format, each procured from Sentinel Hub. The ground truth label data is preserved in numpy format and has additionally been rendered as color-coded PNGs. The dataset also includes land cover maps predicted for the test set (2020-2021), as outlined in the research article available at https://doi.org/10.3390/s23146648. Each file adheres to a nomenclature denoting the year and the month (e.g., 2017_1 corresponds to an image/ground truth/prediction for January 2017).
Initial ground truth was obtained using the ESRI's UNet model, available at https://www.arcgis.com/home/item.html?id=afd124844ba84da69c2c533d4af10a58 (accessed on 25 July 2023). Subsequent manual corrections were administered to enhance the accuracy and integrity of the data. The Graz region contains 12 distinct classes, while the region of Portorož-Izola-Koper comprises 13 classes.
The dataset is structured as follows:
- 'classes.txt' contains a list of land cover classes,
- '/data' hosts the Sentinel-2 imagery,
-- '/data/numpy' retains Sentinel-2 images featuring 13 basic spectral layers (B01–B12) in numpy format,
-- '/data/true_color_png' stores True Color (RGB) images in PNG format,
- '/ground_truth' contains ground truth,
-- '/ground_truth/numpy' houses ground truth in numpy format with values ranging from 0 to 14 representing distinct classes,
-- '/ground_truth/color_labeled_png' contains color-labeled images in PNG format,
- '/predictions' contains predicted land cover maps for the test set from the associated research paper,
-- '/predictions/numpy' has predictions in numpy format with values ranging from 0 to 14 representing distinct classes,
-- '/predictions/color_labeled_png' contains color-labeled images in PNG format.
All these directories further include subdirectories '/graz' and '/portoroz_izola_koper' corresponding to the two regions covered in the datasets.
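A loading sketch, assuming the numpy files use the .npy extension and follow the year_month naming described above (array shapes depend on the region and should be checked against the actual files):

import numpy as np

# Hypothetical paths composed from the directory structure and naming convention above.
image = np.load("data/numpy/graz/2017_1.npy")           # Sentinel-2 image with 13 spectral layers
labels = np.load("ground_truth/numpy/graz/2017_1.npy")  # class indices in the range 0-14
print(image.shape, labels.shape, np.unique(labels))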
Acknowledgments: Should you find this dataset useful in your work, we kindly request that you acknowledge its origin by citing the following article: Kavran, D.; Mongus, D.; Žalik, B.; Lukač, N. Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery. Sensors 2023, 23, 6648. https://doi.org/10.3390/s23146648.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data acquisition: Magnetic resonance imaging (MRI) of the right shoulder was performed using a 3 Tesla MRI scanner (Siemens MAGNETOM Prisma Fit, Erlangen, Germany) with a dedicated 16-channel shoulder coil in the head-first supine position. The right shoulder was placed in a neutral position with the arm adducted and the hand supinated. The MR protocol consisted of a 3D coronal T1-weighted (T1w) and a sagittal DTI sequence from distal to proximal. The total scan time was approximately twelve minutes. The T1w sequence was acquired with the following parameters: repetition and echo time TR/TE = 492/20 ms, slice thickness = 0.7 mm, flip angle = 120°, field of view FOV = 180 x 180 mm2, matrix = 256 x 256. For DTI, a commercial Siemens 2D echo planar diffusion imaging sequence was acquired with the following parameters: repetition and echo time TR/TE = 6100/69 ms, slice thickness = 4 mm, flip angle = 90°, field of view FOV = 240 x 240 mm2, matrix = 122 x 122, 48 diffusion sampling directions with b = 400 s/mm2.
Muscle segmentation: Manual segmentation was based on common mDTI methods described previously. Segmentation was performed using Mimics Materialise (v.24.0, Leuven, Belgium). Two independent operators (SBA 1 and SBA 2) segmented each M. supraspinatus. Segmentation was based on the recorded T1w sequence of each subject. To compare individual differences in segmentation, each operator generated an individual segmentation routine for the whole data set. The first segmentation step was to generate a base mask by setting a threshold on the grey values of the images to separate muscle-tendon tissue from bony structures. Then, both operators split the basic muscle mask to separate the M. supraspinatus from the other surrounding tissues and proceeded with manual segmentation and correction. While SBA 1 preferred manual segmentation, operator two (SBA 2) focused on interpolation using the integrated multi-slice editing function (Figure 1). However, both operators used semi-automatic segmentation functions and differed in the time spent on each step. Finally, each surface model was smoothed by a factor of 0.5 and exported as an ROI for fiber tracking.
Figure 1. Workflow of methods. The workflow displays the different processing steps and each method's duration in minutes ('). Segmentation-based analysis by operator 1 (SBA 1) included four major segmentation steps; operator 2 (SBA 2) used three steps. Model-free analysis (MFA) did not include a segmentation and used the entire field of view as the seeding area for deterministic fiber tracking. Within MFA, the red cross symbolises the manual exclusion of tracts outside of the highlighted M. supraspinatus (blue color).
DTI data processing and fiber tracking: DSI Studio (version of 3 December 2021, http://dsi-studio.labsolver.org) was used for DTI processing, deterministic fiber tracking and tract calculations. To perform tractography for the M. supraspinatus, we registered and resampled the DTI images to the T1w images. The quality of the DTI and FA maps was first visually checked by two experts using DSI Studio. In addition, the DTI images were corrected for motion and eddy current distortion using DSI Studio's integrated FSL eddy current correction. To ensure plausible fiber tracking results, we used the following recommended stopping criteria: maximum angle between tract segments 15°, 20 mm ≤ tract length ≤ 130 mm, step size = 1.5 mm. These settings were oriented to FL results of cadaveric dissections and recommendations for deterministic muscle fiber tracking stopping criteria. Fiber tracking was then performed either within a model ROI (SBA methods) or for the entire DTI images without using a segmented model (MFA). After a reconstruction of ~10,000 tracts for the M. supraspinatus region, tractography was terminated and duplicates were deleted. Since MFA used the entire DTI image as a seeding area for tractography, we removed all tracts outside the M. supraspinatus. Next, clearly implausible tracts and tracts crossing the muscle boundary within the SBA and MFA were reviewed and removed by two experts. Finally, DTI tensor parameters (FA, AD, MD and RD) and muscle parameters (MV, FL and FV) were calculated based on a deterministic fiber tracking algorithm and specific tracking strategies using DSI Studio. Since MFA did not include a muscle segmentation step, it took approximately 30 minutes. In contrast, SBA 1 and SBA 2, including segmentation, took approximately 90 and 60 minutes respectively.
Abbreviations used in the dataset:
SBA 1: Segmentation-based analysis by operator 1
SBA 2: Segmentation-based analysis by operator 2
MFA: Model-free analysis
FL: Fascicle length (mm)
FV: Fiber volume (mm^3)
MV: Muscle model volume (mm^3)
FA (10^-3 mm/s): Fractional Anisotropy
MD: Mean Diffusivity
RD: Radial Diffusivity
AD: Axial Diffusivity