50 datasets found
  1. Dataset with segmentations of 117 important anatomical structures in 1228 CT...

    • zenodo.org
    zip
    Updated Oct 3, 2023
    Cite
    Jakob Wasserthal; Jakob Wasserthal (2023). Dataset with segmentations of 117 important anatomical structures in 1228 CT images [Dataset]. http://doi.org/10.5281/zenodo.8367088
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 3, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jakob Wasserthal; Jakob Wasserthal
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Info: This is version 2 of the TotalSegmentator dataset.

    In 1228 CT images we segmented 117 anatomical structures, covering a majority of the relevant classes for most use cases. The CT images were randomly sampled from clinical routine and thus represent a real-world dataset that generalizes to clinical application. The dataset contains a wide range of pathologies, scanners, sequences and institutions.

    Link to a copy of this dataset on Dropbox for much quicker download: Dropbox Link

    Overview of differences to v1 of this dataset: here

    A small subset of this dataset with only 102 subjects for quick download+exploration can be found here: here

    You can find a segmentation model trained on this dataset here.

    More details about the dataset can be found in the corresponding paper (the paper describes v1 of the dataset). Please cite this paper if you use the dataset.

    This dataset was created by the department of Research and Analysis at University Hospital Basel.

  2. Grazpedwri Full Seg Corrected 2 Dataset

    • universe.roboflow.com
    zip
    Updated Nov 3, 2025
    Cite
    Capstone Project (2025). Grazpedwri Full Seg Corrected 2 Dataset [Dataset]. https://universe.roboflow.com/capstone-project-qiyij/grazpedwri-full-seg-corrected-2-aavkz
    Explore at:
    Available download formats: zip
    Dataset updated
    Nov 3, 2025
    Dataset authored and provided by
    Capstone Project
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Fracture 2l7r Polygons
    Description

    GrazPedWri Full Seg Corrected 2

    ## Overview
    
    GrazPedWri Full Seg Corrected 2 is a dataset for instance segmentation tasks; it contains Fracture 2l7r annotations for 8,930 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
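    For programmatic download, the Roboflow Python package can be used. The sketch below is only an assumption-laden example: the workspace and project slugs are taken from the citation URL above, while the API key, version number, and export format are placeholders to verify against your own workspace.
    
    from roboflow import Roboflow
    
    # Sketch only: the API key is a placeholder, and the version/format are guesses.
    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("capstone-project-qiyij").project("grazpedwri-full-seg-corrected-2-aavkz")
    dataset = project.version(1).download("coco")  # exports annotations in COCO format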
    
    ## License
    
    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  3. total-segmentator-on-rsna-2023-abdominal-trauma

    • kaggle.com
    zip
    Updated Sep 1, 2023
    Cite
    hengck23 (2023). total-segmentator-on-rsna-2023-abdominal-trauma [Dataset]. https://www.kaggle.com/datasets/hengck23/total-segmentator-on-rsna-2023-abdominal-trauma
    Explore at:
    Available download formats: zip (22 bytes)
    Dataset updated
    Sep 1, 2023
    Authors
    hengck23
    Description

    There are some dataset errors; please see the discussion: https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection/discussion/436096

    TotalSegmentator [1] was applied to the RSNA 2023 abdominal trauma dataset [2]. The command used is based on a public notebook [3]:

    !TotalSegmentator \
    -i /kaggle/input/rsna-2023-abdominal-trauma-detection/train_images/10104/27573 \
    -o /kaggle/temp/masks \
    -ot 'nifti' \
    -rs spleen kidney_left kidney_right liver esophagus colon duodenum small_bowel stomach
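    TotalSegmentator also exposes a Python API. A rough equivalent of the command above might look as follows; the parameter names are assumptions based on the TotalSegmentator documentation, so verify them against the installed version:
    
    from totalsegmentator.python_api import totalsegmentator
    
    # Segment the same study through the Python API (paths as in the CLI example above).
    totalsegmentator(
        "/kaggle/input/rsna-2023-abdominal-trauma-detection/train_images/10104/27573",
        "/kaggle/temp/masks",
        output_type="nifti",
        roi_subset=["spleen", "kidney_left", "kidney_right", "liver", "esophagus",
                    "colon", "duodenum", "small_bowel", "stomach"],
    )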
    
    

    NOTE: there are probably errors (about 5%?) in the TotalSegmentator results. Please check the results before using this dataset!

    [1] https://github.com/wasserth/TotalSegmentator

    [2] https://www.kaggle.com/competitions/rsna-2023-abdominal-trauma-detection

    [3] https://www.kaggle.com/code/enriquezaf/totalsegmentator-offline

  4. Bald men segmentation dataset - 3k images

    • kaggle.com
    Updated Oct 5, 2025
    + more versions
    Cite
    simon graves (2025). Bald men segmentation dataset - 3k images [Dataset]. https://www.kaggle.com/datasets/simongraves/men-hair-loss-dataset
    Explore at:
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 5, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    simon graves
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Hair Loss Segmentation Dataset: 3,100 images

    The dataset comprises 3,100 images from 775 individuals, featuring male alopecia cases captured from two angles (front and top views) with corresponding segmentation masks. It is designed for training algorithms to detect hair disorders, evaluate hair restoration techniques, and support early diagnosis of alopecia.

    Dataset characteristics:

    • Description: Photos of men with varying degrees of hair loss for segmentation tasks
    • Data types: Image
    • Tasks: Classification, Machine Learning
    • Number of images: 3,100
    • Number of files in a set: 4 images per person (image from the top + mask, image from front + mask)
    • Total number of people: 775
    • Labeling: Metadata (gender, age, ethnicity)
    • Age: min = 18, max = 80, mean = 45

    Here's a sample dataset to check out. For full access, go here

    Dataset structure

    • 1 — images of first person
    • 2 — images of second person
    • 3 — images of third person
    • 4 — images of fourth person
    • 5 — images of fifth person
    • Hair Loss in Men Segmentation Dataset.csv — file containing metadata and labels for all individuals in the dataset.

    Similar Datasets:

    1. Hair Loss Male Ludwig Scale
    2. Female Hair Loss Dataset
    3. Hair Loss in Women Segmentation Dataset
  5. RibFrac Dataset: A Benchmark for Rib Fracture Detection, Segmentation and...

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    • +1 more
    Updated Dec 2, 2020
    + more versions
    Cite
    Jiancheng Yang; Liang Jin; Bingbing Ni; Ming Li (2020). RibFrac Dataset: A Benchmark for Rib Fracture Detection, Segmentation and Classification (Training Set Part 2) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3893497
    Explore at:
    Dataset updated
    Dec 2, 2020
    Dataset provided by
    Shanghai Jiao Tong University
    Huadong Hospital Affiliated to Fudan University
    Authors
    Jiancheng Yang; Liang Jin; Bingbing Ni; Ming Li
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The RibFrac dataset is a benchmark for developing algorithms for rib fracture detection, segmentation and classification. We hope this large-scale dataset can facilitate both clinical research on automatic rib fracture detection and diagnosis, and engineering research on 3D detection, segmentation and classification.

    Due to the size limit of zenodo.org, we split the whole RibFrac Training Set into two parts. This is Training Set Part 2 of the RibFrac dataset, including 120 CTs and the corresponding annotations. Files include:

    ribfrac-train-images-2.zip: 120 chest-abdomen CTs in NII format (nii.gz).

    ribfrac-train-labels-2.zip: 120 annotations in NII format (nii.gz).

    ribfrac-train-info-2.csv: labels in the annotation NIIs.

    public_id: anonymous patient ID to match images and annotations.

    label_id: discrete label value in the NII annotations.

    label_code: one of 0, 1, 2, 3, 4, -1:
    
    0: background
    
    1: displaced rib fracture
    
    2: non-displaced rib fracture
    
    3: buckle rib fracture
    
    4: segmental rib fracture
    
    -1: a rib fracture whose type could not be defined due to ambiguity, diagnosis difficulty, etc.; ignore it in the classification task.
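    As an example, the sketch below (pandas assumed; the column names are taken from the description above) keeps only the fractures with a defined type for the classification task:
    
    import pandas as pd
    
    # Per-fracture label table: public_id, label_id, label_code.
    info = pd.read_csv("ribfrac-train-info-2.csv")
    
    # Keep only fractures with a defined type (codes 1-4), dropping background (0)
    # and undefined fractures (-1), which should be ignored in classification.
    classified = info[info["label_code"] > 0]
    print(classified["label_code"].value_counts())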

    If you find this work useful in your research, please acknowledge the RibFrac project teams in the paper and cite this project as:

    Liang Jin, Jiancheng Yang, Kaiming Kuang, Bingbing Ni, Yiyi Gao, Yingli Sun, Pan Gao, Weiling Ma, Mingyu Tan, Hui Kang, Jiajun Chen, Ming Li. Deep-Learning-Assisted Detection and Segmentation of Rib Fractures from CT Scans: Development and Validation of FracNet. EBioMedicine (2020). (DOI)

    or using bibtex

    @article{ribfrac2020,
      title={Deep-Learning-Assisted Detection and Segmentation of Rib Fractures from CT Scans: Development and Validation of FracNet},
      author={Jin, Liang and Yang, Jiancheng and Kuang, Kaiming and Ni, Bingbing and Gao, Yiyi and Sun, Yingli and Gao, Pan and Ma, Weiling and Tan, Mingyu and Kang, Hui and Chen, Jiajun and Li, Ming},
      journal={EBioMedicine},
      year={2020},
      publisher={Elsevier}
    }

    The RibFrac dataset is a research effort of thousands of hours by experienced radiologists, computer scientists and engineers. We kindly ask you to respect this effort through appropriate citation and adherence to the data license.

    This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

  6. Teeth Segmentation on dental X-ray images

    • kaggle.com
    • datasetninja.com
    zip
    Updated Jun 9, 2023
    Cite
    Humans In The Loop (2023). Teeth Segmentation on dental X-ray images [Dataset]. https://www.kaggle.com/datasets/humansintheloop/teeth-segmentation-on-dental-x-ray-images
    Explore at:
    Available download formats: zip (4447156673 bytes)
    Dataset updated
    Jun 9, 2023
    Authors
    Humans In The Loop
    License

    Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Humans in the Loop is excited to publish a new open-access dataset for teeth segmentation on dental radiology scans. The segmentation was done manually by 12 Humans in the Loop trainees in the Democratic Republic of the Congo as part of their training, using the panoramic radiography database published by Lopez et al. The dataset consists of 598 images with a total of 15,318 polygons, where each tooth is segmented with a different class.

    This teeth segmentation dataset is dedicated to the public domain by Humans in the Loop under the CC0 1.0 license.

  7. PENGWIN Task 2: Pelvic Fragment Segmentation on Synthetic X-ray Images

    • data.niaid.nih.gov
    Updated Apr 4, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Killeen, Benjamin; Liu, Mingxu; Ku, Ping-Cheng; Yudi, Sang; Liu, Yanzhen; Yibulayimu, Sutuke; Zhu, Gang; Wu, Xinbao; Zhao, Chunpeng; Wang, Yu; Armand, Mehran; Unberath, Mathias (2024). PENGWIN Task 2: Pelvic Fragment Segmentation on Synthetic X-ray Images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10913195
    Explore at:
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    Beijing Jishuitan Hospital
    Johns Hopkins University
    Rossum Robot
    Beihang University
    Authors
    Killeen, Benjamin; Liu, Mingxu; Ku, Ping-Cheng; Yudi, Sang; Liu, Yanzhen; Yibulayimu, Sutuke; Zhu, Gang; Wu, Xinbao; Zhao, Chunpeng; Wang, Yu; Armand, Mehran; Unberath, Mathias
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The PENGWIN segmentation challenge is designed to advance the development of automated pelvic fracture segmentation techniques in both 3D CT scans (Task 1) and 2D X-ray images (Task 2), aiming to enhance their accuracy and robustness. The full 3D dataset comprises CT scans from 150 patients scheduled for pelvic reduction surgery, collected from multiple institutions using a variety of scanning devices. This dataset represents a diverse range of patient cohorts and fracture types. Ground-truth segmentations for sacrum and hipbone fragments have been semi-automatically annotated and subsequently validated by medical experts, and are available here. From this 3D data, we have generated high-quality, realistic X-ray images and corresponding 2D labels from the CT data using DeepDRR, incorporating a range of virtual C-arm camera positions and surgical tools. This dataset contains the training set for fragment segmentation in synthetic X-ray (Task 2).

    The training set is derived from 100 CTs, with 500 images each, for a total of 50,000 training images and segmentations. The C-arm geometry is randomly sampled for each CT within reasonable parameters for a full-size C-arm. The virtual patient is assumed to be in a head-first supine position. Imaging centers are randomly sampled within 50 mm of a fragment, ensuring good visibility. Viewing directions are sampled uniformly on the sphere within 45 degrees of vertical. Half of the images (IDs XXX_0250 - XXX_0500) contain up to 10 simulated K-wires and/or orthopaedic screws oriented randomly in the field of view.

    The input images are raw intensity images without any windowing or normalization applied. It is standard practice to first apply the negative log transformation and then window each image appropriately before feeding it into a model. See the included augmentation pipeline in pengwin_utils.py for one approach. For viewing raw images, the FIJI image viewer is a viable option, but it is recommended to use the included visualization functions in pengwin_utils.py to first apply CLAHE normalization and save to a universally readable PNG (see the example usage below).

    Because X-ray images feature overlapping segmentation masks, the segmentations have been encoded as multi-label uint32 images, where each pixel should be treated as a binary vector with bits 1 - 10 for SA fragments, 11 - 20 for LI, and 21 - 30 for RI. Thus, the raw segmentation files are not viewable with standard image viewing software. pengwin_utils.py includes functions for converting to and from this format and for visualizing masks overlaid onto the original image (see below).
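    As an illustration of this bit-vector encoding, here is a minimal decoding sketch. It assumes the layout described above, with "bit 1" taken as the least-significant bit; it is not the pengwin_utils implementation.
    
    import numpy as np
    from PIL import Image
    
    # Read the multi-label uint32 segmentation (standard viewers cannot display it).
    seg = np.array(Image.open("train/output/images/x-ray/001_0000.tif"), dtype=np.uint32)
    
    def fragment_mask(seg, bit):
        # Binary mask stored at 1-indexed bit position `bit` (LSB assumed to be bit 1).
        return (seg >> (bit - 1)) & 1
    
    sa_first = fragment_mask(seg, 1)   # first SA fragment (bits 1-10 are SA)
    li_first = fragment_mask(seg, 11)  # first LI fragment (bits 11-20 are LI)
    ri_first = fragment_mask(seg, 21)  # first RI fragment (bits 21-30 are RI)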

    To use the utilities, first install dependencies with pip install -r requirement.txt. Then, to visualize an image with its segmentation, you can do the following (assuming the training set has been downloaded and unzipped in the same folder):

    import pengwin_utils
    from PIL import Image
    
    image_path = "train/input/images/x-ray/001_0000.tif"
    seg_path = "train/output/images/x-ray/001_0000.tif"
    
    # Load image and masks.
    image = pengwin_utils.load_image(image_path)  # raw intensity image
    masks, category_ids, fragment_ids = pengwin_utils.load_masks(seg_path)
    
    # Save a visualization of the image and masks. This applies CLAHE normalization
    # to the raw intensity image before overlaying the segmentations.
    vis_image = pengwin_utils.visualize_sample(image, masks, category_ids, fragment_ids)
    vis_path = "vis_image.png"
    Image.fromarray(vis_image).save(vis_path)
    print(f"Wrote visualization to {vis_path}")
    
    # Obtain predicted masks, category_ids, and fragment_ids.
    # Category IDs correspond to SA, LI, and RI (see the encoding above); fragment IDs
    # are the integer labels from label_{category}.nii.gz, with 1 corresponding to the
    # main fragment.
    pred_masks, pred_category_ids, pred_fragment_ids = masks, category_ids, fragment_ids  # replace with your model
    
    # Save the predicted masks for upload to the challenge.
    # Note: cv2 does not work with uint32 images. It is recommended to use PIL or imageio.v3.
    pred_seg = pengwin_utils.masks_to_seg(pred_masks, pred_category_ids, pred_fragment_ids)
    pred_seg_path = "pred/train/output/images/x-ray/001_0000.tif"  # ensure the directory exists!
    Image.fromarray(pred_seg).save(pred_seg_path)
    print(f"Wrote segmentation to {pred_seg_path}")

    The pengwin_utils.Dataset class is provided as an example of a PyTorch dataset, with strong domain randomization included to facilitate sim-to-real performance, but it is recommended to write your own as needed.

  8. Pancrease CT Segmenatation

    • kaggle.com
    zip
    Updated Mar 18, 2025
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Nandeesh H U (2025). Pancrease CT Segmenatation [Dataset]. https://www.kaggle.com/datasets/nandeeshhu/pancrease-ct-segmenatation
    Explore at:
    Available download formats: zip (1796888062 bytes)
    Dataset updated
    Mar 18, 2025
    Authors
    Nandeesh H U
    Description

    This dataset contains 2D image slices extracted from the publicly available Pancreas-CT-SEG dataset, which provides manually segmented pancreas annotations for contrast-enhanced 3D abdominal CT scans. The original dataset was curated by the National Institutes of Health Clinical Center (NIH) and was made available through the NCI Imaging Data Commons (IDC). The dataset consists of 82 CT scans from 53 male and 27 female subjects, converted into 2D slices for segmentation tasks.

    Dataset Details:

    Modality: Contrast-enhanced CT (portal-venous phase, ~70s post-injection)

    Number of Subjects: 82

    Age Range: 18 to 76 years (Mean: 46.8 ± 16.7 years)

    Scan Resolution: 512 × 512 pixels per slice

    Slice Thickness: Varies between 1.5 mm and 2.5 mm

    Scanners Used: Philips and Siemens MDCT scanners (120 kVp tube voltage)

    Segmentation: Manually performed by a medical student and verified by an expert radiologist

    Data Format: Converted from 3D DICOM/NIfTI to 2D PNG/JPEG slices for segmentation tasks

    Total Dataset Size: ~1.85 GB

    Category: Non-cancerous healthy controls (No pancreatic cancer lesions or major abdominal pathologies)

    Preprocessing and Conversion:

    The original 3D CT scans and corresponding pancreas segmentation masks (available in NIfTI format) were converted into 2D slices to facilitate 2D medical image segmentation tasks. The conversion steps, sketched in code after this list, include:

    Extracting axial slices from each 3D CT scan.

    Normalizing pixel intensities for consistency.

    Saving images in PNG/JPEG format for compatibility with deep learning frameworks.

    Generating corresponding binary segmentation masks where the pancreas region is labeled.
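    A minimal sketch of such a conversion, assuming nibabel and PIL are available; the file names and output layout are illustrative, not the dataset's actual structure:
    
    import os
    import nibabel as nib
    import numpy as np
    from PIL import Image
    
    # Illustrative file names, not the dataset's actual layout.
    ct = nib.load("pancreas_ct.nii.gz").get_fdata()        # 3D volume, axial slices along axis 2
    mask = nib.load("pancreas_label.nii.gz").get_fdata()   # matching 3D segmentation
    
    os.makedirs("images", exist_ok=True)
    os.makedirs("masks", exist_ok=True)
    
    for i in range(ct.shape[2]):
        # Normalize each axial slice to 0-255 before saving as PNG.
        slc = ct[:, :, i]
        slc = (255 * (slc - slc.min()) / (np.ptp(slc) + 1e-8)).astype(np.uint8)
        Image.fromarray(slc).save(f"images/slice_{i:04d}.png")
        # Binary pancreas mask for the same slice.
        m = ((mask[:, :, i] > 0) * 255).astype(np.uint8)
        Image.fromarray(m).save(f"masks/slice_{i:04d}.png")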

    Dataset Structure:

    Applications

    This dataset is ideal for medical image segmentation tasks such as:

    Deep learning-based pancreas segmentation (e.g., using U-Net, DeepLabV3+)

    Automated organ detection and localization

    AI-assisted diagnosis and analysis of abdominal CT scans

    Acknowledgments & References

    This dataset is derived from:

    National Cancer Institute Imaging Data Commons (IDC) [1]

    The Cancer Imaging Archive (TCIA) [2]

    Original dataset DOI: https://doi.org/10.7937/K9/TCIA.2016.tNB1kqBU

    Citations: If you use this dataset, please cite the following:

    Roth, H., Farag, A., Turkbey, E. B., Lu, L., Liu, J., & Summers, R. M. (2016). Data From Pancreas-CT (Version 2). The Cancer Imaging Archive. DOI: 10.7937/K9/TCIA.2016.tNB1kqBU

    Fedorov, A., Longabaugh, W. J. R., Pot, D., Clunie, D. A., Pieper, S. D., Gibbs, D. L., et al. (2023). National Cancer Institute Imaging Data Commons: Toward Transparency, Reproducibility, and Scalability in Imaging Artificial Intelligence. Radiographics 43.

    License: This dataset is provided under the Creative Commons Attribution 4.0 International (CC-BY-4.0) license. Users must abide by the TCIA Data Usage Policy and Restrictions.

    Additional Resources: Imaging Data Commons (IDC) Portal: https://portal.imaging.datacommons.cancer.gov/explore/

    OHIF DICOM Viewer: https://viewer.ohif.org/

    This dataset provides a high-quality, well-annotated resource for researchers and developers working on medical image analysis, segmentation, and AI-based pancreas detection.

  9. Composite Dataset of Lumbar Spine Mid-Sagittal Images with Annotations and...

    • data.mendeley.com
    Updated Mar 2, 2021
    + more versions
    Cite
    Rao Farhat Masood (2021). Composite Dataset of Lumbar Spine Mid-Sagittal Images with Annotations and Clinically Relevant Spinal Measurements [Dataset]. http://doi.org/10.17632/k3b363f3vz.2
    Explore at:
    Dataset updated
    Mar 2, 2021
    Authors
    Rao Farhat Masood
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This composite dataset of mid-sagittal views of the lumbar spine is composed of lumbar spine images with duly labelled/annotated ground-truth images as well as the spinal measurements. The purpose of creating this dataset was to establish a strong correlation between the images and the clinically relevant spinal measurements. Presently, these measurements are taken either completely manually or with computer-assisted tools. The spinal measurements are clinically significant for a spinal surgeon before suggesting or shortlisting a suitable surgical intervention procedure. Traditionally, the spinal surgeon evaluates the condition of the patient before a surgical procedure in order to ascertain the usefulness of the adopted procedure. It also helps the surgeon establish a relation regarding the effectiveness of the adopted procedure. For example, in the case of a spinal fusion procedure, whether the fusion will be able to restore spinal balance is a question answered by making the relevant spinal measurements, including the lumbar lordotic curve angle (both segmental and for the whole lumbar spine), the lumbosacral angle, spinal heights, dimensions of vertebral bodies, etc.

    The composite dataset was acquired in the following steps:
    
    1. Exporting the mid-sagittal view from the MRI dataset. (Originally taken from Sudirman, Sud; Al Kafri, Ala; Natalia, Friska; Meidia, Hira; Afriliana, Nunik; Al-Rashdan, Wasfi; Bashtawi, Mohammad; Al-Jumaily, Mohammed (2019), "Label Image Ground Truth Data for Lumbar Spine MRI Dataset", Mendeley Data, V2, doi: 10.17632/zbf6b4pttk.2.) The original dataset comprises axial views with annotations; however, to determine the efficacy of spinal deformities and analyze spinal balance, sagittal views are used instead.
    2. Manual labelling of the lumbar vertebral bodies from L1 to L5 and the first sacrum bone. In total, 6 regions were labelled in consultation with expert radiologists, followed by validation by an expert spinal surgeon.
    3. Performing fully automatic spinal measurements, including vertebral body identification and labelling, lumbar height, lumbosacral angle, lumbar lordotic angle, estimation of the spinal curve, intervertebral body dimensions, and vertebral body dimensions. All angular measurements are in degrees, whereas the distance measurements are in millimeters.

    A total of 514 images and annotations with spinal measurements can be downloaded; we request that you please cite our work in your research.

  10. Segmentation and Classification of Grade I and II Meningiomas from Magnetic...

    • cancerimagingarchive.net
    • stage.cancerimagingarchive.net
    dicom, n/a, xlsx
    Updated Feb 13, 2023
    Cite
    The Cancer Imaging Archive (2023). Segmentation and Classification of Grade I and II Meningiomas from Magnetic Resonance Imaging: An Open Annotated Dataset [Dataset]. http://doi.org/10.7937/0TKV-1A36
    Explore at:
    Available download formats: n/a, dicom, xlsx
    Dataset updated
    Feb 13, 2023
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Feb 13, 2023
    Dataset funded by
    National Cancer Institute (http://www.cancer.gov/)
    Description

    The study included 96 consecutive treatment-naïve patients with intracranial meningiomas treated with surgical resection from 2010 to 2019. All patients had pre-operative T1, T1-CE, and T2-FLAIR MR images with subsequent subtotal or gross total resection of pathologically confirmed grade I or grade II meningiomas. A neuropathology team reviewed histopathology, including two subspecialty-trained neuropathologists and one neuropathology fellow. The meningioma grade was confirmed based on current classification guidelines, most recently described in the 2016 WHO Bluebook. Clinical information includes grade, subtype, type of surgery, tumor location, and atypical features. Meningioma labels on T1-CE and T2-FLAIR images will also be provided in DICOM format. The hyperintense T1-contrast-enhancing tumor and the hyperintense T2-FLAIR regions were manually contoured on each MRI and reviewed by a central nervous system specialist radiation oncologist.

  11. Artefact segmentation in digital pathology whole-slide images

    • data.niaid.nih.gov
    Updated Dec 9, 2020
    Cite
    Foucart, Adrien (2020). Artefact segmentation in digital pathology whole-slide images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3773096
    Explore at:
    Dataset updated
    Dec 9, 2020
    Dataset provided by
    LISA, Université Libre de Bruxelles
    Authors
    Foucart, Adrien
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset with examples of Artefacts in Digital Pathology.

    The dataset contains 22 whole-slide images, with H&E or IHC staining, showing various types and levels of defects in the slides. Annotations were made by a biomedical engineer based on examples given by an expert.

    The dataset is split in different folders:

    train

    18 whole-slide images (extracted at 1.25x & 2.5x magnification)

    All from the same Block (colorectal cancer tissue)

    Half with H&E and half with anti-pan-cytokeratin IHC staining.

    validation

    3 whole-slide images (1.25x + 2.5x mag)

    2 from the same Block as the training set (1 IHC, 1 H&E)

    1 from another block (anti-pan-cytokeratin IHC, gastroesophageal junction lesion)

    validation_tiles

    patches of varying sizes taken from the 3 validation whole-slide images @1.25x magnification.

    7 patches from each slide.

    test

    1 whole-slide image (1.25x + 2.5x mag)

    From another block: IHC staining (anti-NR2F2), mouth cancer

    For the train, validation and test whole-slide images, each slide has:
    
    - The RGB images @1.25x & 2.5x mag
    - The corresponding background/tissue masks
    - The corresponding annotation masks containing examples of artefacts (note that a majority of artefacts are not annotated; in total, 918 artefacts are in the train set)

    For the validation tiles, the following table gives the "patch-level" supervision:

    tile# Artefact(s)
    00 None/Few
    01 Tear&Fold
    02 Ink
    03 None/Few
    04 None/Few
    05 Tear&Fold
    06 Tear&Fold + Blur
    07 Knife damage
    08 Knife damage
    09 Ink
    10 None/Few
    11 Tear&Fold
    12 Tear&Fold
    13 None/Few
    14 None/Few
    15 Knife damage
    16 Tear&Fold
    17 None/Few
    18 None/Few
    19 Blur
    20 Knife damage

  12. Histo-Seg: H&E Whole Slide Image Segmentation Dataset

    • data.mendeley.com
    Updated Aug 10, 2025
    Cite
    Anum Abdul Salam (2025). Histo-Seg: H&E Whole Slide Image Segmentation Dataset [Dataset]. http://doi.org/10.17632/vccj8mp2cg.2
    Explore at:
    Dataset updated
    Aug 10, 2025
    Authors
    Anum Abdul Salam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset comprises 38 chemically stained whole-slide image samples along with their corresponding ground truth, annotated by histopathologists for 12 classes indicating skin layers (Epidermis, Reticular dermis, Papillary dermis, Dermis, Keratin), skin tissues (Inflammation, Hair follicles, Glands), skin cancer (Basal cell carcinoma, Squamous cell carcinoma, Intraepidermal carcinoma) and background (BKG).

  13. DSC, HD, and MSD performance evaluation of total model in each patient.

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jul 31, 2024
    + more versions
    Cite
    Sangwoon Jeong; Wonjoong Cheon; Sungjin Kim; Won Park; Youngyih Han (2024). DSC, HD, and MSD performance evaluation of total model in each patient. [Dataset]. http://doi.org/10.1371/journal.pone.0308181.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 31, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Sangwoon Jeong; Wonjoong Cheon; Sungjin Kim; Won Park; Youngyih Han
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    DSC, HD, and MSD performance evaluation of total model in each patient.

  14. BraTS2020 Dataset (Training + Validation)

    • kaggle.com
    zip
    Updated Jul 2, 2020
    + more versions
    Cite
    Awsaf (2020). BraTS2020 Dataset (Training + Validation) [Dataset]. https://www.kaggle.com/awsaf49/brats20-dataset-training-validation
    Explore at:
    Available download formats: zip (4468570941 bytes)
    Dataset updated
    Jul 2, 2020
    Authors
    Awsaf
    License

    Public Domain Dedication (CC0 1.0): https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    BraTS has always focused on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. BraTS 2020 utilizes multi-institutional pre-operative MRI scans and primarily focuses on the segmentation (Task 1) of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. Furthermore, to pinpoint the clinical relevance of this segmentation task, BraTS'20 also focuses on the prediction of patient overall survival (Task 2), and the distinction between pseudoprogression and true tumor recurrence (Task 3), via integrative analyses of radiomic features and machine learning algorithms. Finally, BraTS'20 intends to evaluate the algorithmic uncertainty in tumor segmentation (Task 4).

    Tasks' Description and Evaluation Framework

    In this year's challenge, 4 reference standards are used for the 4 tasks of the challenge: 1. Manual segmentation labels of tumor sub-regions, 2. Clinical data of overall survival, 3. Clinical evaluation of progression status, 4. Uncertainty estimation for the predicted tumor sub-regions.

    Imaging Data Description

    All BraTS multimodal scans are available as NIfTI files (.nii.gz) and describe a) native (T1) and b) post-contrast T1-weighted (T1Gd), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, and were acquired with different clinical protocols and various scanners from multiple (n=19) institutions, mentioned as data contributors here.

    All the imaging datasets have been segmented manually, by one to four raters, following the same annotation protocol, and their annotations were approved by experienced neuro-radiologists. Annotations comprise the GD-enhancing tumor (ET — label 4), the peritumoral edema (ED — label 2), and the necrotic and non-enhancing tumor core (NCR/NET — label 1), as described both in the BraTS 2012-2013 TMI paper and in the latest BraTS summarizing paper. The provided data are distributed after their pre-processing, i.e., co-registered to the same anatomical template, interpolated to the same resolution (1 mm^3) and skull-stripped.
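    These three labels are commonly grouped into the standard BraTS evaluation sub-regions (whole tumor, tumor core, enhancing tumor). A minimal nibabel sketch, assuming the usual BraTS'20 file naming:
    
    import nibabel as nib
    import numpy as np
    
    # File name follows the usual BraTS 2020 naming convention and is an assumption.
    seg = nib.load("BraTS20_Training_001_seg.nii.gz").get_fdata().astype(np.uint8)
    
    et = seg == 4                               # enhancing tumor (label 4)
    tc = (seg == 1) | (seg == 4)                # tumor core: NCR/NET + ET
    wt = (seg == 1) | (seg == 2) | (seg == 4)   # whole tumor: NCR/NET + ED + ET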

    Use of Data Beyond BraTS

    Participants are allowed to use additional public and/or private data (from their own institutions) for data augmentation, only if they also report results using only the BraTS'20 data and discuss any potential difference in their papers and results. This is due to our intentions to provide a fair comparison among the participating methods.

    Data Usage Agreement / Citations:

    You are free to use and/or refer to the BraTS datasets in your own research, provided that you always cite the following three manuscripts:

    [1] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al. "The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)", IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: 10.1109/TMI.2014.2377694

    [2] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features", Nature Scientific Data, 4:170117 (2017) DOI: 10.1038/sdata.2017.117

    [3] S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge", arXiv preprint arXiv:1811.02629 (2018)

    In addition, if there are no restrictions imposed by the journal/conference where you submit your paper regarding citing "Data Citations", please be specific and also cite the following:

    [4] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q

    [5] S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. Kirby, et al., "Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection", The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF

  15. Segmentation of lipid nanoparticles from cryogenic electron microscopy...

    • catalog.data.gov
    • data.nist.gov
    • +1 more
    Updated Feb 23, 2023
    + more versions
    Cite
    National Institute of Standards and Technology (2023). Segmentation of lipid nanoparticles from cryogenic electron microscopy images [Dataset]. https://catalog.data.gov/dataset/segmentation-of-lipid-nanoparticles-from-cryogenic-electron-microscopy-images
    Explore at:
    Dataset updated
    Feb 23, 2023
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    Lipid nanoparticles (LNPs) were prepared as described (https://doi.org/10.1038/s42003-021-02441-2) using the lipids DLin-KC2-DMA, DSPC, cholesterol, and PEG-DMG2000 at mol ratios of 50:10:38.5:1.5. Four sample types were prepared: LNPs in the presence and absence of RNA, and with LNPs ejected into pH 4 and pH 7.4 buffer after microfluidic assembly. To prepare samples for imaging, 3 µL of LNP formulation was applied to holey carbon grids (Quantifoil, R3.5/1, 200 mesh copper). Grids were then incubated for 30 s at 298 K and 100% humidity before blotting and plunge-freezing into liquid ethane using a Vitrobot Mark IV (Thermo Fisher Scientific). Grids were imaged at 200 kV using a Talos Arctica system equipped with a Falcon 3EC detector (Thermo Fisher Scientific). A nominal magnification of 45,000x was used, corresponding to images with a pixel count of 4096x4096 and a calibrated pixel spacing of 0.223 nm. Micrographs were collected as dose-fractionated "movies" at nominal defocus values between -1 and -3 µm, with 10 s total exposures consisting of 66 frames with a total electron dose of 12,000 electrons per square nanometer. Movies were motion-corrected using MotionCor2 (https://doi.org/10.1038/nmeth.4193), resulting in flattened micrographs suitable for downstream particle segmentation. A total of 38 images were manually segmented into particle and non-particle regions. Segmentation masks and their corresponding images are deposited in this data set.

  16. Dataset related to article "Deep learning and atlas-based models to...

    • data.niaid.nih.gov
    Updated Oct 6, 2023
    Cite
    Damiano Dei; Nicola Lambri; Leonardo Crespi; Ricardo Coimbra Brioso; Daniele Loiacono; Elena Clerici; Luisa Bellu; Chiara De Philippis; Pierina Navarria; Stefania Bramanti; Carmelo Carlo-Stella; Roberto Rusconi; Giacomo Reggiori; Stefano Tomatis; Marta Scorsetti; Pietro Mancosu (2023). Dataset related to article "Deep learning and atlas-based models to streamline the segmentation workflow of Total Marrow and Lymphoid Irradiation" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8411233
    Explore at:
    Dataset updated
    Oct 6, 2023
    Dataset provided by
    IRCCS Humanitas Research Hospital, via Manzoni 56,20089 Rozzano (Mi) - Italy AND Humanitas University, Department of Biomedical Sciences, Via Rita Levi Montalcini 4, 20072 Pieve Emanuele – Milan, Italy
    Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
    Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy AND Health Data Science Centre, Human Technopole, Milan, Italy
    IRCCS Humanitas Research Hospital, via Manzoni 56, 20072 Rozzano (Mi) - Italy
    Authors
    Damiano Dei; Nicola Lambri; Leonardo Crespi; Ricardo Coimbra Brioso; Daniele Loiacono; Elena Clerici; Luisa Bellu; Chiara De Philippis; Pierina Navarria; Stefania Bramanti; Carmelo Carlo-Stella; Roberto Rusconi; Giacomo Reggiori; Stefano Tomatis; Marta Scorsetti; Pietro Mancosu
    Description

    This record contains raw data related to article “Deep learning and atlas-based models to streamline the segmentation workflow of Total Marrow and Lymphoid Irradiation"

    Abstract:

    Purpose: To improve the workflow of Total Marrow and Lymphoid Irradiation (TMLI) by enhancing the delineation of organs-at-risk (OARs) and clinical target volume (CTV) using deep learning (DL) and atlas-based (AB) segmentation models.

    Materials and Methods: Ninety-five TMLI plans optimized in our institute were analyzed. Two commercial DL software were tested for segmenting 18 OARs. An AB model for lymph node CTV (CTV_LN) delineation was built using 20 TMLI patients. The AB model was evaluated on 20 independent patients and a semi-automatic approach was tested by correcting the automatic contours. The generated OARs and CTV_LN contours were compared to manual contours in terms of topological agreement, dose statistics, and time workload. A clinical decision tree was developed to define a specific contouring strategy for each OAR.

    Results: The two DL models achieved a median Dice Similarity Coefficient (DSC) of 0.84 [0.73;0.92] and 0.84 [0.77;0.93] across the OARs. The absolute median dose (Dmedian) difference between manual and the two DL models was 2% [1%;5%] and 1% [0.2%;1%]. The AB model achieved a median DSC of 0.70 [0.66;0.74] for CTV_LN delineation, increasing to 0.94 [0.94;0.95] after manual revision, with minimal Dmedian differences. Since September 2022, our institution has implemented DL and AB models for all TMLI patients, reducing from 5 to 2 hours the time required to complete the entire segmentation process.

    Conclusion: DL models can streamline the TMLI contouring process of OARs. Manual revision is still necessary for lymph node delineation using AB models.

    Statements & Declarations

    Funding: This work was funded by the Italian Ministry of Health, grant AuToMI (GR-2019-12370739).

    Competing Interests: The authors have no conflict of interests to disclose.

    Author Contributions: All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by D.D., N.L., L.C., R.C.B., D.L., and P.M. The first draft of the manuscript was written by D.D. and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

    Ethics approval: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of IRCCS Humanitas Research Hospital (ID 2928, 26 January 2021). ClinicalTrials.gov identifier: NCT04976205.

    Consent to participate: Informed consent was obtained from all individual participants included in the study.

  17. Table_1_Multi-Modal Segmentation of 3D Brain Scans Using Neural Networks.pdf...

    • datasetcatalog.nlm.nih.gov
    Updated Jul 14, 2021
    Cite
    Platscher, Moritz; Paganucci, Silvio; Zopes, Jonathan; Federau, Christian (2021). Table_1_Multi-Modal Segmentation of 3D Brain Scans Using Neural Networks.pdf [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000839023
    Explore at:
    Dataset updated
    Jul 14, 2021
    Authors
    Platscher, Moritz; Paganucci, Silvio; Zopes, Jonathan; Federau, Christian
    Description

    Anatomical segmentation of brain scans is highly relevant for diagnostics and neuroradiology research. Conventionally, segmentation is performed on T1-weighted MRI scans, due to the strong soft-tissue contrast. In this work, we report on a comparative study of automated, learning-based brain segmentation on various other contrasts of MRI and also computed tomography (CT) scans and investigate the anatomical soft-tissue information contained in these imaging modalities. A large database of in total 853 MRI/CT brain scans enables us to train convolutional neural networks (CNNs) for segmentation. We benchmark the CNN performance on four different imaging modalities and 27 anatomical substructures. For each modality we train a separate CNN based on a common architecture. We find average Dice scores of 86.7 ± 4.1% (T1-weighted MRI), 81.9 ± 6.7% (fluid-attenuated inversion recovery MRI), 80.8 ± 6.6% (diffusion-weighted MRI) and 80.7 ± 8.2% (CT), respectively. The performance is assessed relative to labels obtained using the widely-adopted FreeSurfer software package. The segmentation pipeline uses dropout sampling to identify corrupted input scans or low-quality segmentations. Full segmentation of 3D volumes with more than 2 million voxels requires <1s of processing time on a graphical processing unit.

  18. Summary of MRI sequence parameters for manual segmentation dataset (Dataset...

    • plos.figshare.com
    xls
    Updated Jun 9, 2023
    Cite
    Sibaji Gaj; Daniel Ontaneda; Kunio Nakamura (2023). Summary of MRI sequence parameters for manual segmentation dataset (Dataset A). [Dataset]. http://doi.org/10.1371/journal.pone.0255939.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 9, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Sibaji Gaj; Daniel Ontaneda; Kunio Nakamura
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of MRI sequence parameters for manual segmentation dataset (Dataset A).

  19. Semantic Segmentation-Based Intermonthly Land Cover Mapping for Graz,...

    • data.mendeley.com
    Updated Jul 26, 2023
    Cite
    Domen Kavran (2023). Semantic Segmentation-Based Intermonthly Land Cover Mapping for Graz, Austria and Portorož-Izola-Koper Region, Slovenia (2017-2021) [Dataset]. http://doi.org/10.17632/jdd7rf8bmn.1
    Explore at:
    Dataset updated
    Jul 26, 2023
    Authors
    Domen Kavran
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Koper, Graz, Portorož, Slovenia, Austria
    Description

    This dataset provides intermonthly mapping of land cover changes from the period 2017 to 2021 for the region of Graz, Austria, and the coastal region of Portorož, Izola, and Koper in Slovenia.

    In the Graz region, images were procured within the WGS84 bounding box defined by the coordinates [15.390816°, 46.942176°, 15.515785°, 47.015961°], accounting for a total of 40 images. The region of Portorož, Izola, and Koper in Slovenia, contained within the WGS84 bounding box [13.590260°, 45.506948°, 13.744411°, 45.554449°], yielded a total of 41 images. All images obtained maintain minimal cloud coverage and have a spatial resolution of 10 meters.

    This dataset comprises raw Sentinel-2 images in numpy format together with True Color (RGB) images in PNG format, each procured from Sentinel Hub. The ground truth label data is preserved in numpy format and has been additionally rendered as color-coded PNGs. The dataset also includes land cover maps predicted for the test set (2020-2021), as outlined in the research article, available at https://doi.org/10.3390/s23146648. Each file adheres to a nomenclature denoting the year and the month (e.g., 2017_1 corresponds to an image/ground truth/prediction for January 2017).

    Initial ground truth was obtained using ESRI's UNet model, available at https://www.arcgis.com/home/item.html?id=afd124844ba84da69c2c533d4af10a58 (accessed on 25 July 2023). Subsequent manual corrections were applied to enhance the accuracy and integrity of the data. The Graz region contains 12 distinct classes, while the region of Portorož-Izola-Koper comprises 13 classes.

    The dataset is structured as follows:
    
    - 'classes.txt' contains a list of land cover classes,
    - '/data' hosts the Sentinel-2 imagery,
      - '/data/numpy' retains Sentinel-2 images featuring 13 basic spectral layers (B01–B12) in numpy format,
      - '/data/true_color_png' stores True Color (RGB) images in PNG format,
    - '/ground_truth' contains the ground truth,
      - '/ground_truth/numpy' houses ground truth in numpy format with values ranging from 0 to 14 representing distinct classes,
      - '/ground_truth/color_labeled_png' contains color-labeled images in PNG format,
    - '/predictions' contains predicted land cover maps for the test set from the associated research paper,
      - '/predictions/numpy' has predictions in numpy format with values ranging from 0 to 14 representing distinct classes,
      - '/predictions/color_labeled_png' contains color-labeled images in PNG format.

    All these directories further include subdirectories '/graz' and '/portoroz_izola_koper' corresponding to the two regions covered in the datasets.
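    For example, to pair an image with its ground truth in numpy format (a sketch; the exact file names are assumptions based on the naming scheme above):
    
    import numpy as np
    
    # File names are assumptions following the year_month naming scheme described above.
    image = np.load("data/numpy/graz/2017_1.npy")           # 13 spectral layers (B01-B12)
    labels = np.load("ground_truth/numpy/graz/2017_1.npy")  # class indices 0-14
    
    print(image.shape, np.unique(labels))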

    Acknowledgments: Should you find this dataset useful in your work, we kindly request that you acknowledge its origin by citing the following article: Kavran, D.; Mongus, D.; Žalik, B.; Lukač, N. Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery. Sensors 2023, 23, 6648. https://doi.org/10.3390/s23146648.

  20. Data set_reliability_analysis_segmentation_DTI

    • figshare.com
    txt
    Updated Jul 27, 2023
    Cite
    Sebastian Vetter (2023). Data set_reliability_analysis_segmentation_DTI [Dataset]. http://doi.org/10.6084/m9.figshare.23790120.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    Jul 27, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Sebastian Vetter
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data acquisition: Magnetic resonance imaging (MRI) of the right shoulder was performed using a 3 Tesla MRI scanner (Siemens MAGNETOM Prisma Fit, Erlangen, Germany) with a dedicated 16-channel shoulder coil in the head-first supine position. The right shoulder was placed in a neutral position with the arm adducted and the hand supinated. The MR protocol consisted of a 3D coronal T1-weighted (T1w) and a sagittal DTI sequence from distal to proximal. The total scan time was approximately twelve minutes. The T1w sequence was acquired with the following parameters: repetition and echo time TR/TE = 492/20 ms, slice thickness = 0.7 mm, flip angle = 120°, field of view FOV = 180 x 180 mm2, matrix = 256 x 256. For DTI, a commercial Siemens 2D echo planar diffusion imaging sequence was acquired with the following parameters: repetition and echo time TR/TE = 6100/69 ms, slice thickness = 4 mm, flip angle = 90°, field of view FOV = 240 x 240 mm2, matrix = 122 x 122, 48 diffusion sampling directions with b = 400 s/mm2.
    
    Muscle segmentation: Manual segmentation was based on common mDTI methods described previously. Segmentation was performed using Mimics Materialise (v.24.0, Leuven, Belgium). Two independent operators (SBA 1 and SBA 2) segmented each M. supraspinatus. Segmentation was based on the recorded T1w sequence of each subject. To compare individual differences in segmentation, each operator generated an individual segmentation routine for the whole data set. The first segmentation step was to generate a base mask by setting a threshold on the grey values of the images to separate muscle-tendons from bony structures. Then, both operators split the basic muscle mask to separate the M. supraspinatus from the other surrounding tissues and to proceed with manual segmentation and correction. While SBA 1 preferred manual segmentation, operator 2 (SBA 2) focused on interpolation using the integrated multi-slice editing function (Figure 1). However, both operators used semi-automatic segmentation functions and differed in time spent on each step. Finally, each surface model was smoothed by a factor of 0.5 and exported as a ROI for fiber tracking.

    Figure 1. Workflow of methods. The workflow displays the different processing steps and each method's duration in minutes ('). Segmentation-based analysis by operator 1 (SBA 1) included four major segmentation steps; operator 2 (SBA 2) displayed three steps. Model-free analysis (MFA) did not include a segmentation and used the entire field of view as the seeding area for deterministic fiber tracking. Within MFA, the red cross symbolises the manual exclusion of tracts outside of the highlighted M. supraspinatus (blue color).

    DTI data processing and fiber tracking: DSI Studio (version of 3 December 2021, http://dsi-studio.labsolver.org) was used for DTI processing, deterministic fiber tracking and tract calculations. To perform tractography for the M. supraspinatus, we registered and resampled the DTI images to the T1w images. The quality of the DTI and FA maps was first visually checked by two experts using DSI Studio. In addition, the DTI images were corrected for motion and eddy current distortion using DSI Studio's integrated FSL eddy current correction. To ensure plausible fiber tracking results, we used the following recommended stopping criteria: maximum angle between tract segments 15°, 20 mm ≤ tract length ≤ 130 mm; step size = 1.5 mm. These settings were oriented to FL results of cadaveric dissections and recommendations for deterministic muscle fiber tracking stopping criteria. Fiber tracking was then performed either within a model ROI (SBA methods) or for the entire DTI images without using a segmented model (MFA). After a reconstruction of ~10,000 tracts for the M. supraspinatus region, tractography was terminated and duplicates were deleted. Since MFA used the entire DTI image as a seeding area for tractography, we removed all tracts outside the M. supraspinatus. Next, clearly implausible tracts and tracts crossing the muscle boundary within the SBA and MFA were reviewed and removed by two experts. Finally, DTI tensor parameters (FA, AD, MD and RD) and muscle parameters (MV, FL and FV) were calculated based on a deterministic fiber tracking algorithm and specific tracking strategies using DSI Studio. Since MFA did not include a muscle segmentation step, it took approximately 30 minutes. In contrast, SBA 1 and SBA 2, including segmentation, took approximately 90 and 60 minutes respectively.

    Abbreviations used in the dataset:
    
    SBA 1: segmentation-based analysis by operator 1
    SBA 2: segmentation-based analysis by operator 2
    MFA: model-free analysis
    FL: fascicle length (mm)
    FV: fiber volume (mm^3)
    MV: muscle model volume (mm^3)
    FA: fractional anisotropy
    MD: mean diffusivity (10^-3 mm^2/s)
    RD: radial diffusivity
    AD: axial diffusivity
