57 datasets found
  1. UW Madison GI Tract Image Segmentation Dataset

    • kaggle.com
    zip
    Updated Aug 1, 2023
    Cite
    Yin Li (2023). UW Madison GI Tract Image Segmentation Dataset [Dataset]. https://www.kaggle.com/datasets/happyharrycn/uw-madison-gi-tract-image-segmentation-dataset
    Explore at:
    zip (4107976772 bytes)
    Available download formats
    Dataset updated
    Aug 1, 2023
    Authors
    Yin Li
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)
    https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Area covered
    Madison
    Description

    Integrated MRI and linear accelerator systems (MR-Linacs) provide superior soft tissue contrast and the capability of adapting radiotherapy plans to changes in daily anatomy. In this dataset, serial MRIs of the abdomen of patients undergoing radiotherapy were collected, and the luminal gastro-intestinal tract was segmented to develop a deep learning algorithm for automatic segmentation. This dataset was used in the UW-Madison GI Tract Image Segmentation challenge hosted on Kaggle. This release includes both the training and test sets from the Kaggle challenge. We anticipate that the data may be utilized by radiation oncologists, medical physicists, and data scientists to further improve MRI segmentation algorithms.

    If you find our dataset useful, please consider citing our paper.

    Lee, S. L., Yadav, P., Li, Y., Meudt, J. J., Strang, J., Hebel, D., ... & Bassetti, M. F. (2024). Dataset for gastrointestinal tract segmentation on serial MRIs for abdominal tumor radiotherapy. Data in Brief, 57, 111159.

  2. MS lesion segmentation challenge 2008

    • rrid.site
    • neuinfo.org
    • +2more
    Updated Oct 25, 2025
    Cite
    (2025). MS lesion segmentation challenge 2008 [Dataset]. http://identifiers.org/RRID:SCR_002425
    Explore at:
    Dataset updated
    Oct 25, 2025
    Description

    Training material for the MS lesion segmentation challenge 2008, which compared different algorithms for segmenting MS lesions from brain MRI scans. The data used for the workshop comprises 54 brain MRI images representing a range of patients and pathology, acquired from Children's Hospital Boston and the University of North Carolina. The data was initially randomized into three groups: 20 training MRI images, 24 testing images for the qualifying round, and 8 for the onsite contest at the 2008 workshop. The downloadable online database now consists of the training images (including reference segmentations) and all 32 combined testing images (without segmentations). The naming has not been changed from the workshop competition, to allow easy comparison between the workshop papers and results on the online database. One dataset has been removed (UNC_test1_Case02) due to considerable motion present only in its T2 image (without motion artifacts in T1 and FLAIR); such a dataset unfairly penalizes methods that use T2 images versus methods that do not. Currently, all cases have been segmented by expert raters at each institution, with significant inter-site variability in the segmentations. The MS lesion MRI data for this competition was acquired separately by Children's Hospital Boston and the University of North Carolina. The UNC cases were acquired on a Siemens 3T Allegra MRI scanner with a slice thickness of 1 mm and an in-plane resolution of 0.5 mm. To ease the segmentation process, all data has been rigidly registered to a common reference frame and resliced to isotropic voxel spacing using b-spline based interpolation. Pre-processed data is stored in NRRD format, consisting of an ASCII-readable header and a separate uncompressed raw image data file; this format is ITK compatible.
If you want to join the competition, you can download the data from the links here and submit your segmentation results at http://www.ia.unc.edu/MSseg after registering your team. Registration requires a team name, password, and email address for future contact. Once your experiment is complete, you can submit the segmentation data as a zip file. Please refer to the submission page for the upload data format.

  3. MSCMRSeg

    • kaggle.com
    zip
    Updated Apr 24, 2024
    Cite
    An Hoang Vo (2024). MSCMRSeg [Dataset]. https://www.kaggle.com/datasets/anhoangvo/mscmrseg/code
    Explore at:
    zip (143591416 bytes)
    Available download formats
    Dataset updated
    Apr 24, 2024
    Authors
    An Hoang Vo
    License

    MIT License
    https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This preprocessed dataset, derived from the Multi-sequence Cardiac MR Segmentation Challenge (MSCMRSeg) 2019, is tailored for cardiac image segmentation tasks, specifically targeting left ventricle (LV), right ventricle (RV), and myocardium (MYO) segmentation. With multi-sequence cardiac magnetic resonance (MR) images and corresponding segmentation labels, researchers can delve into the intricacies of cardiac anatomy and pathology.

    Moreover, this dataset is enriched with scribble annotations, serving as a valuable resource for scribble-supervised learning, a form of weakly supervised learning. The scribble annotations are provided by the paper: CycleMix: A Holistic Strategy for Medical Image Segmentation.

    For access to the original challenge and detailed information, please visit the MSCMRSeg 2019 website: MSCMRSeg 2019.

    The preprocessed dataset is constructed using code from this GitHub repository.

    Researchers and practitioners can leverage this preprocessed dataset to advance segmentation algorithms, contribute to medical image analysis, and ultimately improve patient care in cardiovascular medicine.

  4. SNEMI3D: 3D Segmentation of neurites in EM images

    • datasetcatalog.nlm.nih.gov
    • zenodo.org
    Updated Jan 15, 2013
    Cite
    Seung, H. Sebastian; Arganda-Carreras, Ignacio; Berger, Daniel R.; Vishwanathan, Ashwin (2013). SNEMI3D: 3D Segmentation of neurites in EM images [Dataset]. http://doi.org/10.5281/zenodo.7142003
    Explore at:
    Dataset updated
    Jan 15, 2013
    Authors
    Seung, H. Sebastian; Arganda-Carreras, Ignacio; Berger, Daniel R.; Vishwanathan, Ashwin
    Description

    In this challenge, a full stack of electron microscopy (EM) slices will be used to train machine-learning algorithms for the purpose of automatic segmentation of neurites in 3D. This imaging technique visualizes the resulting volumes in a highly anisotropic way, i.e., the x- and y-directions have a high resolution, whereas the z-direction has a low resolution, primarily dependent on the precision of serial cutting. EM produces the images as a projection of the whole section, so some of the neural membranes that are not orthogonal to a cutting plane can appear very blurred. None of these problems led to major difficulties in the manual labeling of each neurite in the image stack by an expert human neuro-anatomist. In order to gauge the current state-of-the-art in automated neurite segmentation on EM and compare between different methods, we are organizing a 3D Segmentation of neurites in EM images (SNEMI3D) challenge in conjunction with the ISBI 2013 conference. For this purpose, we are making available a large training dataset of mouse cortex in which the neurites have been manually delineated. In addition, we also provide a test dataset where the 3D labels are not available. The aim of the challenge is to compare and rank the different competing methods based on their object classification accuracy in three dimensions. The image data used in the challenge was produced by Lichtman Lab at Harvard University (Daniel R. Berger, Richard Schalek, Narayanan "Bobby" Kasthuri, Juan-Carlos Tapia, Kenneth Hayworth, Jeff W. Lichtman) and manually annotated by Daniel R. Berger. Their corresponding biological findings were published in Cell (2015).
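As the description notes, serial-section EM volumes are highly anisotropic: the in-plane (x/y) resolution is much finer than the section (z) spacing. A common, crude way to work with such a stack is to repeat slices along z until the voxels are roughly isotropic. The NumPy sketch below uses illustrative resolution numbers that are assumptions, not values from this dataset:

```python
import numpy as np

def upsample_z_nearest(vol, z_factor):
    """Repeat each z-slice z_factor times so an anisotropic EM stack
    (shape (z, y, x)) becomes closer to isotropic.

    Nearest-neighbor only; real pipelines typically use higher-order
    interpolation or operate on the anisotropic grid directly.
    """
    return np.repeat(vol, z_factor, axis=0)

# hypothetical example: 6 nm in-plane vs 30 nm section thickness -> factor 5
stack = np.zeros((10, 256, 256), dtype=np.uint8)
iso = upsample_z_nearest(stack, 5)  # shape becomes (50, 256, 256)
```

Slice order is preserved, so labels and raw data can be upsampled with the same call.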

  5. NeurIPS 2022 Cell Segmentation Competition Dataset

    • zenodo.org
    • datasetcatalog.nlm.nih.gov
    bin, zip
    Updated Dec 3, 2024
    Cite
    Jun Ma; Ronald Xie; Shamini Ayyadhury; Cheng Ge; Anubha Gupta; Ritu Gupta; Song Gu; Yao Zhang; Gihun Lee; Joonkee Kim; Wei Lou; Haofeng Li; Eric Upschulte; Timo Dickscheid; José Guilherme de Almeida; Yixin Wang; Lin Han; Xin Yang; Marco Labagnara; Vojislav Gligorovski; Maxime Scheder; Sahand Jamal Rahi; Carly Kempster; Alice Pollitt; Leon Espinosa; Tam Mignot; Jan Moritz Middeke; Jan-Niklas Eckardt; Wangkai Li; Zhaoyang Li; Xiaochen Cai; Bizhe Bai; Noah F. Greenwald; David Van Valen; Erin Weisbart; Beth A Cimini; Trevor Cheung; Oscar Brück; Gary D. Bader; Bo Wang (2024). NeurIPS 2022 Cell Segmentation Competition Dataset [Dataset]. http://doi.org/10.5281/zenodo.10719375
    Explore at:
    bin, zip
    Available download formats
    Dataset updated
    Dec 3, 2024
    Dataset provided by
    Zenodo
    http://zenodo.org/
    Authors
    Jun Ma; Ronald Xie; Shamini Ayyadhury; Cheng Ge; Anubha Gupta; Ritu Gupta; Song Gu; Yao Zhang; Gihun Lee; Joonkee Kim; Wei Lou; Haofeng Li; Eric Upschulte; Timo Dickscheid; José Guilherme de Almeida; Yixin Wang; Lin Han; Xin Yang; Marco Labagnara; Vojislav Gligorovski; Maxime Scheder; Sahand Jamal Rahi; Carly Kempster; Alice Pollitt; Leon Espinosa; Tam Mignot; Jan Moritz Middeke; Jan-Niklas Eckardt; Wangkai Li; Zhaoyang Li; Xiaochen Cai; Bizhe Bai; Noah F. Greenwald; David Van Valen; Erin Weisbart; Beth A Cimini; Trevor Cheung; Oscar Brück; Gary D. Bader; Bo Wang
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)
    https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The official data set for the NeurIPS 2022 competition: cell segmentation in multi-modality microscopy images.

    https://neurips22-cellseg.grand-challenge.org/

    Please cite the following paper if this dataset is used in your research.

    @article{NeurIPS-CellSeg,
      title   = {The Multi-modality Cell Segmentation Challenge: Towards Universal Solutions},
      author  = {Jun Ma and Ronald Xie and Shamini Ayyadhury and Cheng Ge and Anubha Gupta and Ritu Gupta and Song Gu and Yao Zhang and Gihun Lee and Joonkee Kim and Wei Lou and Haofeng Li and Eric Upschulte and Timo Dickscheid and José Guilherme de Almeida and Yixin Wang and Lin Han and Xin Yang and Marco Labagnara and Vojislav Gligorovski and Maxime Scheder and Sahand Jamal Rahi and Carly Kempster and Alice Pollitt and Leon Espinosa and Tâm Mignot and Jan Moritz Middeke and Jan-Niklas Eckardt and Wangkai Li and Zhaoyang Li and Xiaochen Cai and Bizhe Bai and Noah F. Greenwald and David Van Valen and Erin Weisbart and Beth A. Cimini and Trevor Cheung and Oscar Brück and Gary D. Bader and Bo Wang},
      journal = {Nature Methods},
      volume  = {21},
      pages   = {1103--1113},
      year    = {2024},
      doi     = {https://doi.org/10.1038/s41592-024-02233-6}
    }

    This is an instance segmentation task in which each cell has an individual label under the same category (cells). The training set contains both labeled and unlabeled images. You may use only the labeled images to develop your model, but we encourage participants to explore the unlabeled images through weakly supervised, semi-supervised, and self-supervised learning.

    The images are provided in their original formats, including tiff, tif, png, jpg, and bmp, among others. The original formats retain the most information for competitors, and you are free to choose among different normalization methods. The ground truth is standardized to TIFF format.
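Since each cell carries its own positive integer label under a single "cell" category, counting the instances in a ground-truth mask reduces to counting distinct non-zero label values. A small NumPy sketch follows; reading the actual TIFF files (e.g. with tifffile.imread) is assumed and not shown:

```python
import numpy as np

def count_instances(label_mask):
    """Count cell instances in an instance-segmentation mask where
    0 is background and each positive integer labels one cell."""
    ids = np.unique(label_mask)
    return int((ids > 0).sum())

# tiny hypothetical mask with three cells (labels 1, 2, 3)
mask = np.array([[0, 1, 1],
                 [0, 2, 0],
                 [3, 3, 0]])
print(count_instances(mask))  # -> 3
```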

    We aim to maintain this challenge as a sustainable benchmark platform. If you find that the top algorithms (https://neurips22-cellseg.grand-challenge.org/awards/) do not perform well on your images, you are welcome to send us your dataset (neurips.cellseg@gmail.com)! We will include it in the new testing set and credit your contribution on the challenge website!

    Dataset License: CC-BY-NC-ND

  6. Liver Tumor Segmentation

    • kaggle.com
    zip
    Updated Jul 11, 2020
    + more versions
    Cite
    Larxel (2020). Liver Tumor Segmentation [Dataset]. https://www.kaggle.com/andrewmvd/liver-tumor-segmentation
    Explore at:
    zip (5193236906 bytes)
    Available download formats
    Dataset updated
    Jul 11, 2020
    Authors
    Larxel
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)
    https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Abstract

    130 CT scans for segmentation of the liver as well as tumor lesions.

    About this dataset

    Liver cancer is the fifth most commonly occurring cancer in men and the ninth most commonly occurring cancer in women. There were over 840,000 new cases in 2018.

    The liver is a common site of primary or secondary tumor development. Due to their heterogeneous and diffuse appearance, automatic segmentation of tumor lesions is very challenging.

    In light of that, we encourage the development of automatic segmentation algorithms to segment liver lesions in contrast-enhanced abdominal CT scans. The data and segmentations are provided by various clinical sites around the world. This dataset was extracted from LiTS – Liver Tumor Segmentation Challenge (LiTS17) organised in conjunction with ISBI 2017 and MICCAI 2017.

    Acknowledgements

    If you use this dataset in your research, please credit the authors.

    Splash banner

    Image by ©yodiyim

    Splash icon

    Icon made by Freepik available on www.flaticon.com.

    License

    CC BY NC ND 4.0

    BibTeX

    @misc{bilic2019liver,
      title         = {The Liver Tumor Segmentation Benchmark (LiTS)},
      author        = {Patrick Bilic and Patrick Ferdinand Christ and Eugene Vorontsov and Grzegorz Chlebus and Hao Chen and Qi Dou and Chi-Wing Fu and Xiao Han and Pheng-Ann Heng and Jürgen Hesser and Samuel Kadoury and Tomasz Konopczynski and Miao Le and Chunming Li and Xiaomeng Li and Jana Lipkovà and John Lowengrub and Hans Meine and Jan Hendrik Moltz and Chris Pal and Marie Piraud and Xiaojuan Qi and Jin Qi and Markus Rempfler and Karsten Roth and Andrea Schenk and Anjany Sekuboyina and Eugene Vorontsov and Ping Zhou and Christian Hülsemeyer and Marcel Beetz and Florian Ettlinger and Felix Gruen and Georgios Kaissis and Fabian Lohöfer and Rickmer Braren and Julian Holch and Felix Hofmann and Wieland Sommer and Volker Heinemann and Colin Jacobs and Gabriel Efrain Humpire Mamani and Bram van Ginneken and Gabriel Chartrand and An Tang and Michal Drozdzal and Avi Ben-Cohen and Eyal Klang and Marianne M. Amitai and Eli Konen and Hayit Greenspan and Johan Moreau and Alexandre Hostettler and Luc Soler and Refael Vivanti and Adi Szeskin and Naama Lev-Cohain and Jacob Sosna and Leo Joskowicz and Bjoern H. Menze},
      year          = {2019},
      eprint        = {1901.04056},
      archivePrefix = {arXiv},
      primaryClass  = {cs.CV}
    }

  7. Manual organelle segmentations (crop67) in near-isotropic, reconstructed...

    • janelia.figshare.com
    bin
    Updated Dec 18, 2024
    + more versions
    Cite
    CellMap Project Team; Rebecca Arruda; Davis Bennett; Nora Forknall; Woohyun Park; Alyson Petruncio; Jacquelyn Price; Diana Ramirez; Thomson Rymer; Alia Suleiman; Rebecca Vorimo; Aubrey Weigel; Yurii Zubov (2024). Manual organelle segmentations (crop67) in near-isotropic, reconstructed volume electron microscopy (FIB-SEM) of immortalized T-cells (jrc_jurkat-1) [Dataset]. http://doi.org/10.25378/janelia.24239218.v1
    Explore at:
    bin
    Available download formats
    Dataset updated
    Dec 18, 2024
    Dataset provided by
    Janelia Research Campus
    Authors
    CellMap Project Team; Rebecca Arruda; Davis Bennett; Nora Forknall; Woohyun Park; Alyson Petruncio; Jacquelyn Price; Diana Ramirez; Thomson Rymer; Alia Suleiman; Rebecca Vorimo; Aubrey Weigel; Yurii Zubov
    License

    Attribution 4.0 (CC BY 4.0)
    https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This acquisition is part of the CellMap 2024 Segmentation Challenge.
    Challenge DOI: https://doi.org/10.25378/janelia.c.7456966
    Challenge Website: https://cellmapchallenge.janelia.org/
    Annotation description: Dense segmentations of extracellular space in jrc_jurkat-1 using Amira 3D 2021.1, Classic Segmentation Workroom and the 'Using Amira to manually segment organelles in vEM for machine learning V.3' annotation protocol.
    Annotation ID: crop67
    Primary Annotator: COSEM Project Team
    Annotation protocol: Using Amira to manually segment organelles in vEM for machine learning V.3 (http://dx.doi.org/10.17504/protocols.io.bp2l61rb5vqe/v3)
    Software: Amira 3D 2021.1, Classic Segmentation Workroom
    Annotated voxel size (nm): 2 x 2 x 2 (x, y, z)
    Annotated data dimensions (µm): 0.8 x 0.8 x 0.8 (x, y, z)
    Annotated data offset (nm): 34099 x 10623 x 15899 (x, y, z)
    Classes annotated: extracellular space
    Dataset URL: s3://janelia-cosem-datasets/jrc_jurkat-1/jrc_jurkat-1.zarr/recon-1/labels/groundtruth/crop67
    Source (EM) dataset ID: jrc_jurkat-1
    Source (EM) voxel size (nm): 4 x 4 x 3.44 (x, y, z)
    Source (EM) data dimensions (µm): 40 x 12 x 29.45 (x, y, z)
    Source (EM) DOI: https://doi.org/10.25378/janelia.13114259
    Visualization website: https://openorganelle.janelia.org/datasets/jrc_jurkat-1
    Publication: CellMap Segmentation Challenge, 2024.
    The CellMap Project Team during this time consisted of: David Ackerman, Davis Bennett, Marley Bryant, Hannah Nguyen, Grace Park, Alyson Petruncio, Alannah Post, Jacquelyn Price, Diana Ramirez, Jeff Rhoades, Rebecca Vorimo, Aubrey Weigel, Marwan Zouinkhi, Yurii Zubov.
    The CellMap Project Team Steering Committee during this time consisted of: Misha Ahrens, Christopher Beck, Teng-Leong Chew, Daniel Feliciano, Jan Funke, Harald Hess, Wyatt Korff, Jennifer Lippincott-Schwartz, Zhe J. Liu, Kayvon Pedram, Stephan Preibisch, Stephan Saalfeld, Ronald Vale, and Aubrey Weigel.
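The metadata above relates the annotated crop to the source EM volume through physical units: the data offset is given in nanometers, while the source volume is indexed in voxels of the stated size. Converting the offset into source-grid voxel indices is simple arithmetic. The sketch below assumes the offset is measured from the source volume's origin along matching axes, which is an assumption of this example, not something the listing states:

```python
def nm_offset_to_voxel_index(offset_nm, voxel_size_nm):
    """Convert a physical offset in nanometers into voxel indices on a
    grid with the given per-axis voxel size (nm), rounding to the
    nearest voxel."""
    return tuple(round(o / v) for o, v in zip(offset_nm, voxel_size_nm))

# crop67: offset 34099 x 10623 x 15899 nm; source voxel size 4 x 4 x 3.44 nm
idx = nm_offset_to_voxel_index((34099, 10623, 15899), (4, 4, 3.44))
```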

  8. Data from the training set of the 2019 Kidney and Kidney Tumor Segmentation...

    • cancerimagingarchive.net
    csv, dicom, n/a
    Updated Jun 18, 2020
    Cite
    The Cancer Imaging Archive (2020). Data from the training set of the 2019 Kidney and Kidney Tumor Segmentation Challenge [Dataset]. http://doi.org/10.7937/TCIA.2019.IX49E8NX
    Explore at:
    n/a, dicom, csv
    Available download formats
    Dataset updated
    Jun 18, 2020
    Dataset authored and provided by
    The Cancer Imaging Archive
    License

    https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/

    Time period covered
    Jun 18, 2020
    Dataset funded by
    National Cancer Institute
    http://www.cancer.gov/
    Description

    This collection contains CT scans and segmentations from subjects from the training set of the 2019 Kidney and Kidney Tumor Segmentation Challenge (KiTS19). The challenge aimed to accelerate progress in automatic 3D semantic segmentation by releasing a dataset of CT scans for 210 patients with manual semantic segmentations of the kidneys and tumors in the corticomedullary phase.

    The imaging was collected during routine care of patients who were treated by either partial or radical nephrectomy at the University of Minnesota Medical Center. Many of the CT scans were acquired at referring institutions and are therefore heterogeneous in terms of scanner manufacturers and acquisition protocols. Semantic segmentations were performed by students under the supervision of an experienced urologic cancer surgeon.

    Protocol

    Please refer to the data descriptor manuscript for a comprehensive account of the data collection and annotation process (arXiv:1904.00445). The Clinical Trial Time Point is calculated from the Day of Surgery.

  9. Manual organelle segmentations (crop6) in near-isotropic, reconstructed...

    • datasetcatalog.nlm.nih.gov
    • janelia.figshare.com
    • +1more
    Updated Dec 13, 2024
    + more versions
    Cite
    Arruda, Rebecca; Forknall, Nora; Petruncio, Alyson; Park, Woohyun; Weigel, Aubrey; Rymer, Thomson; Team, CellMap Project; Price, Jacquelyn; Bennett, Davis; Vorimo, Rebecca; Ramirez, Diana; Suleiman, Alia; Zubov, Yurii (2024). Manual organelle segmentations (crop6) in near-isotropic, reconstructed volume electron microscopy (FIB-SEM) of interphase HeLa cell (jrc_hela-2) [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001400673
    Explore at:
    Dataset updated
    Dec 13, 2024
    Authors
    Arruda, Rebecca; Forknall, Nora; Petruncio, Alyson; Park, Woohyun; Weigel, Aubrey; Rymer, Thomson; Team, CellMap Project; Price, Jacquelyn; Bennett, Davis; Vorimo, Rebecca; Ramirez, Diana; Suleiman, Alia; Zubov, Yurii
    Description

    This acquisition is part of the CellMap 2024 Segmentation Challenge.
    Challenge DOI: https://doi.org/10.25378/janelia.c.7456966
    Challenge Website: https://cellmapchallenge.janelia.org/
    Annotation description: Dense segmentations of mitochondrial membrane, mitochondrial lumen, vesicle membrane, endosome membrane, vesicle lumen, endoplasmic reticulum exit site lumen, endoplasmic reticulum exit site membrane, endoplasmic reticulum lumen, endoplasmic reticulum membrane, endosome lumen, microtubule out, microtubule in, cytosol, endosome, vesicle, endoplasmic reticulum exit site, mitochondria, endoplasmic reticulum, microtubule, cell, endoplasmic reticulum membrane collective in jrc_hela-2 using Amira 3D 2021.1, Classic Segmentation Workroom and the 'Using Amira to manually segment organelles in vEM for machine learning V.3' annotation protocol.
    Annotation ID: crop6
    Primary Annotator: COSEM Project Team
    Annotation protocol: Using Amira to manually segment organelles in vEM for machine learning V.3 (http://dx.doi.org/10.17504/protocols.io.bp2l61rb5vqe/v3)
    Software: Amira 3D 2021.1, Classic Segmentation Workroom
    Annotated voxel size (nm): 2 x 2 x 2 (x, y, z)
    Annotated data dimensions (µm): 1 x 1 x 1 (x, y, z)
    Annotated data offset (nm): 12079 x 1599 x 11995 (x, y, z)
    Classes annotated: mitochondrial membrane, mitochondrial lumen, vesicle membrane, endosome membrane, vesicle lumen, endoplasmic reticulum exit site lumen, endoplasmic reticulum exit site membrane, endoplasmic reticulum lumen, endoplasmic reticulum membrane, endosome lumen, microtubule out, microtubule in, cytosol, endosome, vesicle, endoplasmic reticulum exit site, mitochondria, endoplasmic reticulum, microtubule, cell, endoplasmic reticulum membrane collective
    Dataset URL: s3://janelia-cosem-datasets/jrc_hela-2/jrc_hela-2.zarr/recon-1/labels/groundtruth/crop6
    Source (EM) dataset ID: jrc_hela-2
    Source (EM) voxel size (nm): 4 x 4 x 5.2 (x, y, z)
    Source (EM) data dimensions (µm): 48 x 6.4 x 33.11 (x, y, z)
    Source (EM) DOI: https://doi.org/10.25378/janelia.13114211
    Visualization website: https://openorganelle.janelia.org/datasets/jrc_hela-2
    Publication: CellMap Segmentation Challenge, 2024.
    The CellMap Project Team during this time consisted of: David Ackerman, Davis Bennett, Marley Bryant, Hannah Nguyen, Grace Park, Alyson Petruncio, Alannah Post, Jacquelyn Price, Diana Ramirez, Jeff Rhoades, Rebecca Vorimo, Aubrey Weigel, Marwan Zouinkhi, Yurii Zubov.
    The CellMap Project Team Steering Committee during this time consisted of: Misha Ahrens, Christopher Beck, Teng-Leong Chew, Daniel Feliciano, Jan Funke, Harald Hess, Wyatt Korff, Jennifer Lippincott-Schwartz, Zhe J. Liu, Kayvon Pedram, Stephan Preibisch, Stephan Saalfeld, Ronald Vale, and Aubrey Weigel.

  10. Manual organelle segmentations (crop131) in near-isotropic, reconstructed...

    • datasetcatalog.nlm.nih.gov
    • janelia.figshare.com
    Updated Dec 13, 2024
    + more versions
    Cite
    Forknall, Nora; Park, Woohyun; Rymer, Thomson; Ramirez, Diana; Petruncio, Alyson; Weigel, Aubrey; Bennett, Davis; Zubov, Yurii; Vorimo, Rebecca; Suleiman, Alia; Price, Jacquelyn; Arruda, Rebecca; Team, CellMap Project (2024). Manual organelle segmentations (crop131) in near-isotropic, reconstructed volume electron microscopy (FIB-SEM) of mouse liver (jrc_mus-liver) [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001406706
    Explore at:
    Dataset updated
    Dec 13, 2024
    Authors
    Forknall, Nora; Park, Woohyun; Rymer, Thomson; Ramirez, Diana; Petruncio, Alyson; Weigel, Aubrey; Bennett, Davis; Zubov, Yurii; Vorimo, Rebecca; Suleiman, Alia; Price, Jacquelyn; Arruda, Rebecca; Team, CellMap Project
    Description

    This acquisition is part of the CellMap 2024 Segmentation Challenge.
    Challenge DOI: https://doi.org/10.25378/janelia.c.7456966
    Challenge Website: https://cellmapchallenge.janelia.org/
    Annotation description: Dense segmentations of extracellular space, plasma membrane, mitochondrial membrane, mitochondrial lumen, vesicle membrane, endosome membrane, vesicle lumen, mitochondrial ribosome, endoplasmic reticulum exit site lumen, endoplasmic reticulum exit site membrane, endoplasmic reticulum lumen, endoplasmic reticulum membrane, endosome lumen, cytosol, endosome, vesicle, peroxisome membrane, endoplasmic reticulum exit site, peroxisome, peroxisome lumen, mitochondria, endoplasmic reticulum, cell, endoplasmic reticulum membrane collective in jrc_mus-liver using Amira 3D 2021.1, Classic Segmentation Workroom and the 'Using Amira to manually segment organelles in vEM for machine learning V.3' annotation protocol.
    Annotation ID: crop131
    Primary Annotator: Woohyun Park
    Annotation protocol: Using Amira to manually segment organelles in vEM for machine learning V.3 (http://dx.doi.org/10.17504/protocols.io.bp2l61rb5vqe/v3)
    Software: Amira 3D 2021.1, Classic Segmentation Workroom
    Annotated voxel size (nm): 4 x 4 x 4 (x, y, z)
    Annotated data dimensions (µm): 1.6 x 1.6 x 0.4 (x, y, z)
    Annotated data offset (nm): 23718 x 23302 x 18158 (x, y, z)
    Classes annotated: extracellular space, plasma membrane, mitochondrial membrane, mitochondrial lumen, vesicle membrane, endosome membrane, vesicle lumen, mitochondrial ribosome, endoplasmic reticulum exit site lumen, endoplasmic reticulum exit site membrane, endoplasmic reticulum lumen, endoplasmic reticulum membrane, endosome lumen, cytosol, endosome, vesicle, peroxisome membrane, endoplasmic reticulum exit site, peroxisome, peroxisome lumen, mitochondria, endoplasmic reticulum, cell, endoplasmic reticulum membrane collective
    Dataset URL: s3://janelia-cosem-datasets/jrc_mus-liver/jrc_mus-liver.zarr/recon-1/labels/groundtruth/crop131
    Source (EM) dataset ID: jrc_mus-liver
    Source (EM) voxel size (nm): 8 x 8 x 8 (x, y, z)
    Source (EM) data dimensions (µm): 101.98 x 101.82 x 71.46 (x, y, z)
    Source (EM) DOI: https://doi.org/10.25378/janelia.16913047
    Visualization website: https://openorganelle.janelia.org/datasets/jrc_mus-liver
    Publication: CellMap Segmentation Challenge, 2024.
    The CellMap Project Team during this time consisted of: David Ackerman, Davis Bennett, Marley Bryant, Hannah Nguyen, Grace Park, Alyson Petruncio, Alannah Post, Jacquelyn Price, Diana Ramirez, Jeff Rhoades, Rebecca Vorimo, Aubrey Weigel, Marwan Zouinkhi, Yurii Zubov.
    The CellMap Project Team Steering Committee during this time consisted of: Misha Ahrens, Christopher Beck, Teng-Leong Chew, Daniel Feliciano, Jan Funke, Harald Hess, Wyatt Korff, Jennifer Lippincott-Schwartz, Zhe J. Liu, Kayvon Pedram, Stephan Preibisch, Stephan Saalfeld, Ronald Vale, and Aubrey Weigel.

  11. Heart MRI Image DataSet : Left Atrial Segmentation

    • kaggle.com
    zip
    Updated Jun 20, 2021
    Cite
    KA-KA-shi (2021). Heart MRI Image DataSet : Left Atrial Segmentation [Dataset]. https://www.kaggle.com/datasets/adarshsng/heart-mri-image-dataset-left-atrial-segmentation/code
    Explore at:
    zip (480931868 bytes)
    Available download formats
    Dataset updated
    Jun 20, 2021
    Authors
    KA-KA-shi
    License

    https://ec.europa.eu/info/legal-notice_en

    Description

    Left Atrial Segmentation Challenge

    Authors: Catalina Tobon-Gomez (catactg@gmail.com) and Arjan Geers (ajgeers@gmail.com)

    About

    This repository is associated with the Left Atrial Segmentation Challenge 2013 (LASC'13). LASC'13 was part of the STACOM'13 workshop, held in conjunction with MICCAI'13. Seven international research groups, comprising 11 algorithms, participated in the challenge.

    For a detailed report, please refer to:

    Tobon-Gomez C, Geers AJ, Peters J, Weese J, Pinto K, Karim R, Ammar M, Daoudi A, Margeta J, Sandoval Z, Stender B, Zheng Y, Zuluaga MA, Betancur J, Ayache N, Chikh MA, Dillenseger J-L, Kelm BM, Mahmoudi S, Ourselin S, Schlaefer A, Schaeffter T, Razavi R, Rhode KS. Benchmark for Algorithms Segmenting the Left Atrium From 3D CT and MRI Datasets. IEEE Transactions on Medical Imaging, 34(7):1460–1473, 2015.

    The challenge is also featured on Cardiac Atlas Project.

    The Python scripts in this repository take as input a segmentation and output the two evaluation metrics described in the paper.
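As a rough illustration of the kind of quantity such an evaluation script computes, here is the standard Dice overlap for binary masks. This is a generic sketch: the two official metrics are defined by the challenge's own scripts and the paper above, not by this code.

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice overlap between two binary masks (1 = structure, 0 = background)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    denom = seg.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, gt).sum() / denom

# hypothetical 2x3 masks with 2 overlapping foreground voxels
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```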

    The data and code of the challenge have been made publicly available to serve as a benchmark for left atrial segmentation algorithms.

    Feel free to contact us with any questions.

    Abbreviations

    CT: Computed tomography
    GT: Ground truth
    MRI: Magnetic resonance imaging
    LA: Left atrium
    LASC'13: Left Atrial Segmentation Challenge 2013
    PV: Pulmonary vein
    

    Data

    The benchmark consists of 30 CT and 30 MRI datasets. Per modality, 10 datasets are for training of segmentation algorithms and 20 datasets are for testing.

    The MRI datasets are publicly available on Figshare:

    Training
    Testing
    Results
    

    The data agreement for the CT datasets expired in September 2018; therefore, we can no longer share these datasets.

    Sample data from an arbitrary modality/institute/case were included in this repository to be able to run the scripts.

    Abstract

    The knowledge of left atrial (LA) anatomy is important for atrial fibrillation ablation guidance. More recently, LA anatomical models have been used for cardiac biophysical modelling. Segmentation of the LA from Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) images is a complex problem. We aimed at evaluating current algorithms that address this problem by creating a unified benchmarking framework through the mechanism of a challenge, the Left Atrial Segmentation Challenge 2013 (LASC’13). Thirty MRI and thirty CT datasets were provided to participants for segmentation. Ten data sets for each modality were provided with expert manual segmentations for algorithm training. The other 20 data sets per modality were used for evaluation. The datasets were provided by King’s College London and Philips Technologie GmbH. Each participant segmented the LA including a short part of the LA appendage trunk plus the proximal parts of the pulmonary veins. Details on the evaluation framework and the results obtained in this challenge are presented in this manuscript. The results showed that methodologies combining statistical models with region growing approaches were the most appropriate to handle the proposed task.

  12. Data for - Tracking one-in-a-million: Large-scale benchmark for microbial...

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Dec 10, 2024
    Johannes Seiffarth; Luisa Blöbaum; Katharina Löffler; Tim Scherr; Alexander Grünberger; Hanno Scharr; Ralf Mikut; Katharina Nöh (2024). Data for - Tracking one-in-a-million: Large-scale benchmark for microbial single-cell tracking with experiment-aware robustness metrics [Dataset]. http://doi.org/10.5281/zenodo.7260137
    Explore at:
    zip
    Available download formats
    Dataset updated
    Dec 10, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Johannes Seiffarth; Luisa Blöbaum; Katharina Löffler; Tim Scherr; Alexander Grünberger; Hanno Scharr; Ralf Mikut; Katharina Nöh
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Large-scale Corynebacterium glutamicum data set with Segmentation and Tracking Annotation

    We provide five time-lapse sequences of growing C. glutamicum cultivations with manually corrected segmentation and tracking annotations. The dataset contains more than 1.4 million cell observations in 29k cell tracks and 14k cell divisions. We provide videos of the annotations (videos.zip) and the dataset in Cell Tracking Challenge format (ctc_format.zip). In the videos, cell contours are rendered in yellow, cell links between frames are colored red, and cell divisions and their links are colored blue.
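In the Cell Tracking Challenge format, lineage is stored in a plain-text track file with four integers per track: label, first frame, last frame, and parent label (0 for no parent). A minimal parser sketch (the sample data below is made up for illustration):

```python
def parse_ctc_tracks(text):
    """Parse a CTC man_track.txt / res_track.txt: 'L B E P' per line."""
    tracks = {}
    for line in text.strip().splitlines():
        label, begin, end, parent = map(int, line.split())
        tracks[label] = {"begin": begin, "end": end, "parent": parent}
    return tracks

sample = """\
1 0 10 0
2 11 20 1
3 11 18 1
"""
tracks = parse_ctc_tracks(sample)
# Cell 1 ends at frame 10 and divides into daughters 2 and 3
daughters = [label for label, t in tracks.items() if t["parent"] == 1]
print(daughters)  # [2, 3]
```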

    Data Acquisition

    Corynebacterium glutamicum ATCC 13032 was cultivated in BHI medium at 30°C in this study. From an overnight preculture, the main culture was inoculated the next day with a starting OD600 of 0.05 and grown at 120 rpm to an OD600 of 0.25. A chip was fabricated according to (Täuber et al., 2020) and fixed to the microscope's holder. The main-culture cells were transferred to monolayer growth chambers (height = 720 nm) on the microfluidic chip. Flow through the microfluidic device was mediated by pressure-driven pumps with a pressure of 100 mbar on the medium reservoir.

    The time-lapse phase contrast images of five monolayer growth chambers were taken every minute using an inverted microscope (Nikon Eclipse Ti2) with a 100x oil immersion objective and a DS-QI2 camera (Nikon) at 15% relative DIA-illumination intensity and 100 ms exposure time. The spatial image resolution is 0.072 μm/px.

  13. HaN-Seg: The head and neck organ-at-risk CT & MR segmentation dataset

    • nde-dev.biothings.io
    • data.niaid.nih.gov
    Updated Feb 7, 2023
    Bulat Ibragimov (2023). HaN-Seg: The head and neck organ-at-risk CT & MR segmentation dataset [Dataset]. https://nde-dev.biothings.io/resources?id=zenodo_7442913
    Explore at:
    Dataset updated
    Feb 7, 2023
    Dataset provided by
    Bulat Ibragimov
    Tomaž Vrtovec
    Gašper Podobnik
    Primož Peterlin
    Primož Strojan
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    The HaN-Seg: Head and Neck Organ-at-Risk CT & MR Segmentation Dataset is a publicly available dataset of anonymized head and neck (HaN) images of 42 patients that underwent both CT and T1-weighted MR imaging for the purpose of image-guided radiotherapy planning. In addition, the dataset also contains reference segmentations of 30 organs-at-risk (OARs) for CT images in the form of binary segmentation masks, which were obtained by curating manual pixel-wise expert image annotations. A full description of the HaN-Seg dataset can be found in:

    G. Podobnik, P. Strojan, P. Peterlin, B. Ibragimov, T. Vrtovec, "HaN-Seg: The head and neck organ-at-risk CT & MR segmentation dataset", Medical Physics, 2023. https://doi.org/10.1002/mp.16197,

    and any research originating from its usage is required to cite this paper.

    In parallel with the release of the dataset, the HaN-Seg: The Head and Neck Organ-at-Risk CT & MR Segmentation Challenge was launched to promote the development of new, and the application of existing, state-of-the-art fully automated techniques for OAR segmentation in the HaN region from CT images that exploit information from multiple imaging modalities, in this case CT and MR images. The task of the challenge is to automatically segment up to 30 OARs in the HaN region from the CT images of a devised test set, consisting of 14 CT and MR images of the same patients, given the training set, i.e. the herein publicly available HaN-Seg dataset, consisting of 42 CT and MR images of the same patients with reference 3D OAR binary segmentation masks for the CT images.
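Since the reference annotations are binary masks on the CT grid, a derived quantity such as OAR volume reduces to counting foreground voxels and multiplying by the voxel volume. A minimal sketch with a synthetic mask and illustrative voxel spacing (not values from this dataset):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary mask in millilitres, given voxel spacing in mm."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

# Synthetic 10x10x10 mask with a 6x6x6 foreground cube (216 voxels)
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:8, 2:8, 2:8] = 1
print(mask_volume_ml(mask, (1.0, 1.0, 3.0)))  # 216 * 3 mm^3 = 0.648 mL
```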

  14. KITS 19 - Kidney Tumor Segmentation

    • kaggle.com
    zip
    Updated Apr 11, 2025
    Orvile (2025). KITS 19 - Kidney Tumor Segmentation [Dataset]. https://www.kaggle.com/datasets/orvile/kits19-png-zipped/code
    Explore at:
    zip(6701910250 bytes)
    Available download formats
    Dataset updated
    Apr 11, 2025
    Authors
    Orvile
    Description

    There are more than 400,000 new cases of kidney cancer each year, and surgery is its most common treatment. Due to the wide variety in kidney and kidney tumor morphology, there is currently great interest in how tumor morphology relates to surgical outcomes, as well as in developing advanced surgical planning techniques. Automatic semantic segmentation is a promising tool for these efforts, but morphological heterogeneity makes it a difficult problem.

    The goal of this challenge is to accelerate the development of reliable kidney and kidney tumor semantic segmentation methodologies. We have produced ground truth semantic segmentations for arterial phase abdominal CT scans of 300 unique kidney cancer patients who underwent partial or radical nephrectomy at our institution. 210 of these have been released for model training and validation, and the remaining 90 will be held out for objective model evaluation (see the detailed data description).
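In the official kits19 data repository, cases are stored as zero-padded folders (a naming convention assumed here from that repository, e.g. case_00000); a sketch of enumerating the 210/90 split described above:

```python
def kits19_case_ids(n_train=210, n_total=300):
    """Zero-padded KiTS19 case folder names, split into released and held-out."""
    train = [f"case_{i:05d}" for i in range(n_train)]
    test = [f"case_{i:05d}" for i in range(n_train, n_total)]
    return train, test

train, test = kits19_case_ids()
print(train[0], train[-1], len(test))  # case_00000 case_00209 90
```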

    https://kits19.grand-challenge.org/

    References

    1. “Kidney Cancer Statistics.” World Cancer Research Fund, 12 Sept. 2018, www.wcrf.org/dietandcancer/cancer-trends/kidney-cancer-statistics.

    2. “Cancer Diagnosis and Treatment Statistics.” Stages | Mesothelioma | Cancer Research UK, 26 Oct. 2017, www.cancerresearchuk.org/health-professional/cancer-statistics/diagnosis-and-treatment.

    3. Kutikov, Alexander, and Robert G. Uzzo. "The RENAL nephrometry score: a comprehensive standardized system for quantitating renal tumor size, location and depth." The Journal of urology 182.3 (2009): 844-853.

    4. Ficarra, Vincenzo, et al. "Preoperative aspects and dimensions used for an anatomical (PADUA) classification of renal tumours in patients who are candidates for nephron-sparing surgery." European urology 56.5 (2009): 786-793.

    5. Taha, Ahmed, et al. "Kid-Net: Convolution Networks for Kidney Vessels Segmentation from CT-Volumes." arXiv preprint arXiv:1806.06769 (2018).

  15. MICCAI 2021 FLARE Challenge Dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 26, 2022
    Jun Ma; Jun Ma (2022). MICCAI 2021 FLARE Challenge Dataset [Dataset]. http://doi.org/10.1109/tpami.2021.3100536
    Explore at:
    zip
    Available download formats
    Dataset updated
    Jan 26, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Jun Ma; Jun Ma
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abdominal organ segmentation plays an important role in clinical practice, and to some extent, it seems to be a solved problem because the state-of-the-art methods have achieved inter-observer performance in several benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance can be generalized on more diverse datasets. Moreover, many SOTA methods use model ensembles to boost performance, but these solutions usually have a large model size and cost extensive computational resources, which are impractical to be deployed in clinical practice.

    To address these limitations, we organize the Fast and Low GPU Memory Abdominal Organ Segmentation challenge, which has two main features: (1) the dataset is large and diverse, including 511 cases from 11 medical centers; (2) we focus not only on segmentation accuracy but also on segmentation efficiency, in concordance with real clinical practice and requirements.

    Challenge Homepage: https://flare.grand-challenge.org/

  16. Data from: A Comprehensive Analysis of Weakly-Supervised Semantic...

    • data.niaid.nih.gov
    Updated Jun 21, 2020
    Chan, Lyndon; Hosseini, Mahdi S.; Plataniotis, Konstantinos N. (2020). A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3902505
    Explore at:
    Dataset updated
    Jun 21, 2020
    Dataset provided by
    University of Toronto
    Authors
    Chan, Lyndon; Hosseini, Mahdi S.; Plataniotis, Konstantinos N.
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Content

    This repository contains pre-trained computer vision models, data labels, and images used in the pre-print publication "A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains":

    ADPdevkit: a folder containing the 50-image validation ("tuning") set and the 50-image evaluation ("segtest") set from the Atlas of Digital Pathology database, formatted in the VOC2012 style; the full database of 17,668 images is available for download from the original website

    VOCdevkit: a folder containing the relevant files for the PASCAL VOC2012 Segmentation dataset, with both the trainaug and test sets

    DGdevkit: a folder containing the 803 test images of the DeepGlobe Land Cover challenge dataset formatted in the VOC2012 style

    cues: a folder containing the pre-generated weak cues for ADP, VOC2012, and DeepGlobe datasets, as required for the SEC and DSRG methods

    models_cnn: a folder containing the pre-trained CNN models

    models_wsss: a folder containing the pre-trained SEC, DSRG, and IRNet models, along with dense CRF settings

    More information

    For more information, please refer to the following article. Please cite this article when using the data set.

    @misc{chan2019comprehensive,
      title={A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains},
      author={Lyndon Chan and Mahdi S. Hosseini and Konstantinos N. Plataniotis},
      year={2019},
      eprint={1912.11186},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

    For the full code released on GitHub, please visit the repository at: https://github.com/lyndonchan/wsss-analysis

    Contact

    For questions, please contact: Lyndon Chan lyndon.chan@mail.utoronto.ca http://orcid.org/0000-0002-1185-7961

  17. Liver Tumor Segmentation in TFRecords Part 1

    • kaggle.com
    zip
    Updated May 2, 2021
    LangeB (2021). Liver Tumor Segmentation in TFRecords Part 1 [Dataset]. https://www.kaggle.com/langeb/liver-tumor-segmenation-challenge-in-tfrecords
    Explore at:
    zip(4918167351 bytes)
    Available download formats
    Dataset updated
    May 2, 2021
    Authors
    LangeB
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Abstract

    Part 1 of 3 of the Liver Tumor Segmentation Challenge Data.

    To participate in the challenge look at LiTS competition website

    For info on TPU setup please look at the documentation

    TFRecord content

    Please look at the starter notebook for the steps below.

    The bucket path:

    ```python
    import tensorflow as tf
    from kaggle_datasets import KaggleDatasets

    GCS_PATH = KaggleDatasets().get_gcs_path('liver-tumor-segmenation-challenge-in-tfrecords')
    paths = tf.io.gfile.glob(f"{GCS_PATH}/*")
    ```

    The TFRecord reader:

    ```python
    def read_tfrecord(serialized_example):
        """Reads a serialized tf.Example message from a Google storage bucket."""
        feature_description = {
            'example_id': tf.io.FixedLenFeature([1], tf.int64),
            'shape': tf.io.FixedLenFeature([3], tf.int64),
            'volume': tf.io.FixedLenFeature([], tf.string),
            'segmentation': tf.io.FixedLenFeature([], tf.string),
        }

        example = tf.io.parse_single_example(serialized_example, feature_description)
        volume = tf.io.parse_tensor(example['volume'], tf.int16)
        volume.set_shape((None, None, None))
        segmentation = tf.io.parse_tensor(example['segmentation'], tf.uint8)
        segmentation.set_shape((None, None, None))

        return example['example_id'], example['shape'], volume, segmentation
    ```

    Initialize the dataset:

    ```python
    raw_train_dataset = tf.data.TFRecordDataset(filenames=paths, compression_type='GZIP')
    raw_train_dataset = raw_train_dataset.map(read_tfrecord)
    ```

    Organizer and Data Contributors

    Technical University of Munich: Patrick Christ, Florian Ettlinger, Felix Gruen, Sebastian Schlecht, Jana Lipkova, Georgios Kassis, Sebastian Ziegelmayer, Fabian Lohöfer, Rickmer Braren & Bjoern Menze

    Ludwig Maximilian University of Munich: Julian Holch, Felix Hofmann, Wieland Sommer & Volker Heinemann

    Radboudumc: Colin Jacobs, Gabriel Efrain Humpire-Mamani & Bram van Ginneken

    Polytechnique Montréal & CHUM Research Center: Gabriel Chartrand, Eugene Vorontsov, An Tang, Michal Drozdzal & Samuel Kadoury

    Tel Aviv University & Sheba Medical Center: Avi Ben-Cohen, Eyal Klang, Marianne M. Amitai, Eli Konen & Hayit Greenspan

    IRCAD: Johan Moreau, Alexandre Hostettler & Luc Soler

    The Hebrew University of Jerusalem & Hadassah University Medical Center: Refael Vivanti, Adi Szeskin, Naama Lev-Cohain, Jacob Sosna & Leo Joskowicz

    Special thanks to the CodaLab Team for helping us: Eric Carmichael & Flavio Alexander

    Attribution

    @misc{bilic2019liver,
      title={The Liver Tumor Segmentation Benchmark (LiTS)},
      author={Patrick Bilic and Patrick Ferdinand Christ and Eugene Vorontsov and Grzegorz Chlebus and Hao Chen and Qi Dou and Chi-Wing Fu and Xiao Han and Pheng-Ann Heng and Jürgen Hesser and Samuel Kadoury and Tomasz Konopczynski and Miao Le and Chunming Li and Xiaomeng Li and Jana Lipkovà and John Lowengrub and Hans Meine and Jan Hendrik Moltz and Chris Pal and Marie Piraud and Xiaojuan Qi and Jin Qi and Markus Rempfler and Karsten Roth and Andrea Schenk and Anjany Sekuboyina and Eugene Vorontsov and Ping Zhou and Christian Hülsemeyer and Marcel Beetz and Florian Ettlinger and Felix Gruen and Georgios Kaissis and Fabian Lohöfer and Rickmer Braren and Julian Holch and Felix Hofmann and Wieland Sommer and Volker Heinemann and Colin Jacobs and Gabriel Efrain Humpire Mamani and Bram van Ginneken and Gabriel Chartrand and An Tang and Michal Drozdzal and Avi Ben-Cohen and Eyal Klang and Marianne M. Amitai and Eli Konen and Hayit Greenspan and Johan Moreau and Alexandre Hostettler and Luc Soler and Refael Vivanti and Adi Szeskin and Naama Lev-Cohain and Jacob Sosna and Leo Joskowicz and Bjoern H. Menze},
      year={2019},
      eprint={1901.04056},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
    }

  18. CURVAS-PDACVI dataset

    • zenodo.org
    zip
    Updated May 15, 2025
    Meritxell Riera-Marín; SIKHA O K; MARIA MONTSERRAT DUH; Anton Aubanell; de Figueiredo Cardoso Ruben; Egger-Hackenschmidt Saskia; Júlia Rodríguez-Comas; Miguel Ángel González Ballester; Javier Garcia López (2025). CURVAS-PDACVI dataset [Dataset]. http://doi.org/10.5281/zenodo.15401568
    Explore at:
    zip
    Available download formats
    Dataset updated
    May 15, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Meritxell Riera-Marín; SIKHA O K; MARIA MONTSERRAT DUH; Anton Aubanell; de Figueiredo Cardoso Ruben; Egger-Hackenschmidt Saskia; Júlia Rodríguez-Comas; Miguel Ángel González Ballester; Javier Garcia López
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This challenge will soon be hosted on Grand Challenge; it is currently under construction.

    Clinical Problem

    In medical imaging, DL models are often tasked with delineating structures or abnormalities within complex anatomical structures, such as tumors, blood vessels, or organs. Uncertainty arises from the inherent complexity and variability of these structures, leading to challenges in precisely defining their boundaries. This uncertainty is further compounded by interrater variability, as different medical experts may have varying opinions on where the true boundaries lie. DL models must grapple with these discrepancies, leading to inconsistencies in segmentation results across different annotators and potentially impacting diagnosis and treatment decisions. Addressing interrater variability in DL for medical segmentation involves the development of robust algorithms capable of capturing and quantifying uncertainty, as well as standardizing annotation practices and promoting collaboration among medical experts to reduce variability and improve the reliability of DL-based medical image analysis. Interrater variability poses significant challenges in the field of DL for medical image segmentation.

    This challenge is designed to promote awareness of the impact uncertainty has on clinical applications of medical image analysis. In last year's edition, we proposed a competition based on modeling the uncertainty of segmenting three abdominal organs, namely the kidney, liver and pancreas, focusing on organ volume as a clinical quantity of interest. This year, we go one step further and propose to segment pancreatic pathological structures, namely Pancreatic Ductal Adenocarcinoma (PDAC), with the clinical goal of understanding vascular involvement, a key measure of tumor resectability. In this context, uncertainty quantification is a much more challenging task, given the wildly varying contours that different PDAC instances show.

    This year, we will provide a richer dataset: we start from an already existing dataset of clinically verified contrast-enhanced abdominal CT scans with a single set of manual annotations (provided by the PANORAMA organization) and construct four extra manual annotations per PDAC case. In this way, we will assemble a unique dataset that creates a notable opportunity to analyze the impact of multi-rater annotations along several dimensions, e.g. different annotation protocols or different levels of annotator experience.

    CURVAS Challenge Goal

    This challenge aims to advance deep learning methods for medical image segmentation by focusing on the critical issue of interrater variability, particularly in the context of pancreatic cancer. Building on last year's focus on organ segmentation uncertainty, this edition shifts to the more complex task of segmenting Pancreatic Ductal Adenocarcinoma (PDAC) to assess vascular involvement—a key indicator of tumor resectability. By providing a unique, richly annotated dataset with multiple expert annotations per case, the challenge encourages participants to develop robust models that can quantify and manage uncertainty arising from differing expert opinions, ultimately improving the clinical reliability of AI-based image analysis.

    For more information about the challenge, visit our website to join CURVAS-PDACVI (Calibration and Uncertainty for multiRater Volume Assessment in multistructure Segmentation - Pancreatic Ductal AdenoCarcinoma Vascular Invasion). This challenge will be held in MICCAI 2025.

    Dataset Cohort

    The challenge cohort comprises 125 upper-abdominal axial, portal-venous CECT scans selected from a subset of the PANORAMA challenge dataset. The selection process prioritized CT scans with manually generated labels, excluding those with automatically derived annotations. Additionally, only cases with a conclusive diagnostic test (e.g., pathology, cytology, histopathology) are included, while patients with radiology-based diagnoses have been excluded.

    To ensure the subset is representative of common real-world scenarios, lesion sizes have been analyzed and a diverse range of cases has been selected. Furthermore, patient demographics, including sex and age, have been considered to enhance the cohort's representativeness.

    Finally, a preliminary visual analysis has been conducted before sending the images to radiologists for segmentation. This verifies the tumor's location, size, and relevance, helping maintain the dataset's representativeness for the challenge.

    The previously indicated cohort of 125 CT scans is split in the following way:

    • Training Phase cohort:

    40 CT scans with their respective annotations are given. Participants are encouraged to leverage publicly available external data annotated by multiple raters. Providing a small training set while allowing public data for training makes the challenge more inclusive, as a method can be developed using data that is within anyone's reach. Furthermore, training on this data and evaluating on other data makes methods more robust to shifts and other sources of variability between datasets.

    • Validation Phase cohort:

    5 CT scans will be used for this phase.

    • Test Phase cohort:

    85 CT scans will be used for evaluation.

    Neither the validation nor the test CT scan cohort will be published until the end of the challenge. Furthermore, which group each CT scan belongs to will not be revealed until after the challenge.

    Each folder containing a study is named with a unique ID (CURVASPDAC_XXXX), so it cannot be directly related to the PANORAMA ID, and has the following structure:

    • annotation_X.nii.gz: contains the Pancreatic Ductal Adenocarcinoma (PDAC) segmentations (X=1 being the PANORAMA segmentation, X=2,...,5 being the other experts' segmentations)
    • image.nii.gz: CT volume
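With up to five masks per case (annotation_1.nii.gz through annotation_5.nii.gz), per-voxel inter-rater agreement can be summarized as the fraction of raters marking each voxel, and thresholding that fraction at 0.5 gives a majority-vote consensus. A minimal sketch using small synthetic arrays in place of the NIfTI files (loading, e.g. with nibabel, is omitted):

```python
import numpy as np

def rater_vote_map(masks):
    """Per-voxel fraction of raters marking the voxel as foreground."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks], axis=0)
    return stack.mean(axis=0)

# Three synthetic raters standing in for annotation_1..annotation_3
r1 = np.array([[1, 1, 0], [0, 0, 0]])
r2 = np.array([[1, 0, 0], [0, 1, 0]])
r3 = np.array([[1, 1, 0], [0, 1, 0]])
votes = rater_vote_map([r1, r2, r3])
consensus = votes > 0.5  # majority-vote consensus mask
print(votes[0, 0], consensus[1, 1])  # 1.0 True
```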

    The four additional annotations were made by radiologists at Universitätsklinikum Erlangen, Hospital de Sant Pau, and Hospital de Mataró; hence, four new annotations plus the PANORAMA annotation are provided. Another clinician modified the annotations of the vascular structures from the PANORAMA dataset, separating veins and arteries into individual structure segmentations. These structures are the ones considered highly relevant for the study of Vascular Invasion (VI): Porta, Superior Mesenteric Vein (SMV), Superior Mesenteric Artery (SMA), Hepatic Artery and Celiac Trunk. The vascular annotations will be made public later in the challenge, so that participants can try out the evaluation code.

    A balancing step to ensure representativeness within the subsets has been performed as well. Factors such as devices, sex, and patient age have been considered to improve the cohort's representativeness, and efforts have been made to balance these variables as evenly as possible. For age distribution, the target percentages are as follows: below 50 years (5%), 50–59 years (15%), 60–69 years (20%), 70–79 years (30%), and 80–89 years (30%) [1,2,3,4]. While these percentages are approximate and have been rounded for simplicity, the balance aims to be as close to these proportions as feasible. For sex, the targets are 40–50% female and 50–60% male [5]. For the location of the PDAC: 60–70% head, 15–25% body, and 10–15% tail [6]. The size of the lesions has been analyzed and a subset selected; these values will be published in the future with the entire dataset.

    Data from PANORAMA Batch 1 (https://zenodo.org/records/13715870), Batch 2 (https://zenodo.org/records/13742336), and Batch 3 (https://zenodo.org/records/11034011)), are not allowed for training the models. Batch 4 (https://zenodo.org/records/10999754) can be used.

    For more technical information about the dataset visit the platform: https://panorama.grand-challenge.org/datasets-imaging-labels/

    Ethical Approval and Data Usage Agreement

    No other information that is not already public about the patient will be released since the CT images and their corresponding information are already publicly available.

    References

    [1] Lee, K.S.; Sekhar, A.; Rofsky, N.M.; Pedrosa, I. Prevalence of Incidental Pancreatic Cysts in the Adult Population on MR Imaging. Am J Gastroenterol 2010, 105, 2079–2084, doi:10.1038/ajg.2010.122.

    [2] Canakis, A.; Lee, L.S. State-of-the-Art Update of Pancreatic Cysts. Dig Dis Sci 2021.

    [3] De Oliveira, P.B.; Puchnick, A.; Szejnfeld, J.; Goldman, S.M. Prevalence of Incidental Pancreatic Cysts on 3 Tesla Magnetic Resonance. PLoS One 2015, 10, doi:10.1371/JOURNAL.PONE.0121317.

    [4] Kimura, W.; Nagai, H.; Kuroda, A.; Muto, T.; Esaki, Y. Analysis of Small Cystic Lesions of the Pancreas. Int J Pancreatol 1995, 18, 197–206, doi:10.1007/BF02784942.

    [5] Natalie Moshayedi et al. Race, sex, age, and geographic disparities in pancreatic cancer incidence. JCO 40, 520-520(2022). DOI:10.1200/JCO.2022.40.4_suppl.520

    [6] Avo Artinyan, Perry A. Soriano, Christina Prendergast, Tracey Low, Joshua D.I. Ellenhorn, Joseph Kim, The anatomic location of pancreatic cancer is a prognostic

  19. Challenges implementing micro-segmentation into a company's Zero Trust...

    • statista.com
    Updated Sep 15, 2021
    Statista (2021). Challenges implementing micro-segmentation into a company's Zero Trust strategy 2021 [Dataset]. https://www.statista.com/statistics/1299029/implementing-micro-segmentation-into-zero-trust/
    Explore at:
    Dataset updated
    Sep 15, 2021
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Aug 2021 - Sep 2021
    Area covered
    Worldwide
    Description

    One of the main challenges when implementing micro-segmentation into a company's Zero Trust strategy was that 43 percent of organizations lacked qualified business outcomes. At the same time, another 42 percent of companies did not believe in micro-segmentation.

    Micro-segmentation is a method to logically create network segments and completely control traffic within and between the segments. Specifically, it controls workloads in data centers, as well as in multi-cloud environments, while also restricting the spread of lateral threats in the data centers.

  20. Manual organelle segmentations (crop206) in near-isotropic, reconstructed...

    • janelia.figshare.com
    bin
    Updated Dec 17, 2024
    CellMap Project Team; Rebecca Arruda; Davis Bennett; Nora Forknall; Woohyun Park; Alyson Petruncio; Jacquelyn Price; Diana Ramirez; Thomson Rymer; Alia Suleiman; Rebecca Vorimo; Aubrey Weigel; Yurii Zubov (2024). Manual organelle segmentations (crop206) in near-isotropic, reconstructed volume electron microscopy (FIB-SEM) of (jrc_sum159-4) [Dataset]. http://doi.org/10.25378/janelia.24256003.v1
    Explore at:
    bin
    Available download formats
    Dataset updated
    Dec 17, 2024
    Dataset provided by
    Janelia Research Campus
    Authors
    CellMap Project Team; Rebecca Arruda; Davis Bennett; Nora Forknall; Woohyun Park; Alyson Petruncio; Jacquelyn Price; Diana Ramirez; Thomson Rymer; Alia Suleiman; Rebecca Vorimo; Aubrey Weigel; Yurii Zubov
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This acquisition is part of the CellMap 2024 Segmentation Challenge.
Challenge DOI: https://doi.org/10.25378/janelia.c.7456966
Challenge Website: https://cellmapchallenge.janelia.org/
Annotation description: Dense segmentations of extracellular space, plasma membrane, vesicle membrane, endosome membrane, vesicle lumen, endoplasmic reticulum lumen, endoplasmic reticulum membrane, lysosome lumen, lysosome membrane, endosome lumen, microtubule out, microtubule in, cytosol, lysosome, endosome, vesicle, endoplasmic reticulum, microtubule, cell, endoplasmic reticulum membrane collective in jrc_sum159-4 using Amira 3D 2021.1, Classic Segmentation Workroom and the 'Using Amira to manually segment organelles in vEM for machine learning V.3' annotation protocol.
Annotation ID: crop206
Primary Annotator: Woohyun Park
Annotation protocol: Using Amira to manually segment organelles in vEM for machine learning V.3 (http://dx.doi.org/10.17504/protocols.io.bp2l61rb5vqe/v3)
Software: Amira 3D 2021.1, Classic Segmentation Workroom
Annotated voxel size (nm): 4 x 4 x 4 (x, y, z)
Annotated data dimensions (µm): 1.6 x 1.6 x 1.6 (x, y, z)
Annotated data offset (nm): 39038 x 5566 x 37518 (x, y, z)
Classes annotated: extracellular space, plasma membrane, vesicle membrane, endosome membrane, vesicle lumen, endoplasmic reticulum lumen, endoplasmic reticulum membrane, lysosome lumen, lysosome membrane, endosome lumen, microtubule out, microtubule in, cytosol, lysosome, endosome, vesicle, endoplasmic reticulum, microtubule, cell, endoplasmic reticulum membrane collective
Dataset URL: s3://janelia-cosem-datasets/jrc_sum159-4/jrc_sum159-4.zarr/recon-1/labels/groundtruth/crop206
Source (EM) dataset ID: jrc_sum159-4
Source (EM) voxel size (nm): 8 x 8 x 8 (x, y, z)
Source (EM) data dimensions (µm): 95 x 8.5 x 47.8 (x, y, z)
Source (EM) DOI: https://doi.org/10.25378/janelia.20134103
Visualization website: https://openorganelle.janelia.org/datasets/jrc_sum159-4
Publication: CellMap Segmentation Challenge, 2024.
The CellMap Project Team during this time consisted of: David Ackerman, Davis Bennett, Marley Bryant, Hannah Nguyen, Grace Park, Alyson Petruncio, Alannah Post, Jacquelyn Price, Diana Ramirez, Jeff Rhoades, Rebecca Vorimo, Aubrey Weigel, Marwan Zouinkhi, and Yurii Zubov.
The CellMap Project Team Steering Committee during this time consisted of: Misha Ahrens, Christopher Beck, Teng-Leong Chew, Daniel Feliciano, Jan Funke, Harald Hess, Wyatt Korff, Jennifer Lippincott-Schwartz, Zhe J. Liu, Kayvon Pedram, Stephan Preibisch, Stephan Saalfeld, Ronald Vale, and Aubrey Weigel.
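The voxel size, physical dimensions, and offset listed above are related by simple unit arithmetic (dimensions in µm = voxel count × voxel size in nm / 1000). The sketch below is an illustration only, not part of the dataset release or any official CellMap tooling; the function names are hypothetical. It converts the listed physical extents of crop206 into voxel coordinates on the 4 nm annotation grid:

```python
# Sketch: derive voxel-space extents of crop206 from the metadata above.
# The constants are copied from the listing; the helper names are
# illustrative and not part of any official CellMap tooling.

ANNOTATED_VOXEL_NM = (4.0, 4.0, 4.0)     # annotated voxel size (x, y, z), nm
DIMENSIONS_UM = (1.6, 1.6, 1.6)          # annotated data dimensions, µm
OFFSET_NM = (39038.0, 5566.0, 37518.0)   # annotated data offset, nm

def crop_shape_voxels(dims_um, voxel_nm):
    """Convert physical dimensions (µm) to voxel counts on the annotation grid."""
    return tuple(round(d * 1000.0 / v) for d, v in zip(dims_um, voxel_nm))

def crop_offset_voxels(offset_nm, voxel_nm):
    """Convert a physical offset (nm) to (possibly fractional) voxel coordinates."""
    return tuple(o / v for o, v in zip(offset_nm, voxel_nm))

shape = crop_shape_voxels(DIMENSIONS_UM, ANNOTATED_VOXEL_NM)
offset = crop_offset_voxels(OFFSET_NM, ANNOTATED_VOXEL_NM)
path = ("s3://janelia-cosem-datasets/jrc_sum159-4/jrc_sum159-4.zarr"
        "/recon-1/labels/groundtruth/crop206")
print(shape)   # (400, 400, 400): a 1.6 µm cube sampled at 4 nm
```

The groundtruth array itself is stored at the Dataset URL above in Zarr format and can be read with any Zarr-compatible client capable of anonymous S3 access.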

