17 datasets found
  1. Data from: Simcol3D - 3D Reconstruction during Colonoscopy Challenge Dataset...

    • rdr.ucl.ac.uk
    bin
    Updated Sep 7, 2023
    Cite
    Anita Rau; Sophia Bano; Yueming Jin; Danail Stoyanov (2023). Simcol3D - 3D Reconstruction during Colonoscopy Challenge Dataset [Dataset]. http://doi.org/10.5522/04/24077763.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Sep 7, 2023
    Dataset provided by
    University College London
    Authors
    Anita Rau; Sophia Bano; Yueming Jin; Danail Stoyanov
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Colorectal cancer is one of the most common cancers in the world. By establishing a benchmark, SimCol3D aimed to facilitate data-driven navigation during colonoscopy. More details about the challenge and corresponding data can be found in the challenge paper on arXiv.

    The challenge consisted of simulated colonoscopy data and images from real patients. This data release encompasses the synthetic portion of the challenge. The synthetic data includes three different anatomies derived from real human CT scans. Each anatomy provides several randomly generated trajectories with RGB renderings, camera intrinsics, ground truth depths, and ground truth poses. In total, this dataset includes more than 37,000 labelled images.

    The real colonoscopy data used in the SimCol3D challenge consists of images extracted from the EndoMapper dataset. The real data is available on the EndoMapper Synapse page.

    The synthetic colonoscopy data is made available in this repository.
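    Since each trajectory pairs ground-truth depth maps with camera intrinsics, a common first step with this kind of data is backprojecting a depth map into a camera-frame point cloud. A minimal numpy sketch; the function name and the example intrinsics are our own, not the dataset's actual file layout:

```python
import numpy as np

def backproject(depth, K):
    """Backproject a depth map (H, W) to camera-frame 3D points (H*W, 3)
    using pinhole intrinsics K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]."""
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # pixel grid: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

    The ground-truth poses shipped with each trajectory can then transform these camera-frame points into a common world frame.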

  2. XDCycleGAN Depth Dataset

    • zenodo.org
    zip
    Updated Sep 23, 2021
    Cite
    Shawn Mathew; Saad Nadeem; Arie Kaufman (2021). XDCycleGAN Depth Dataset [Dataset]. http://doi.org/10.5281/zenodo.5520029
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 23, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Shawn Mathew; Saad Nadeem; Arie Kaufman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the public dataset used for training XDCycleGAN, a deep learning model for unpaired image-to-image translation. TrainA contains the optical colonoscopy (OC) images. TrainB contains the depth images. The model trained on this dataset is also included here.

    Abstract:

    Colorectal cancer screening modalities, such as optical colonoscopy (OC) and virtual colonoscopy (VC), are critical for diagnosing and ultimately removing polyps (precursors for colon cancer). The non-invasive VC is normally used to inspect a 3D reconstructed colon (from computed tomography scans) for polyps and if found, the OC procedure is performed to physically traverse the colon via endoscope and remove these polyps. In this paper, we present a deep learning framework, Extended and Directional CycleGAN, for lossy unpaired image-to-image translation between OC and VC to augment OC video sequences with scale-consistent depth information from VC and VC with patient-specific textures, color and specular highlights from OC (e.g. for realistic polyp synthesis). Both OC and VC contain structural information, but it is obscured in OC by additional patient-specific texture and specular highlights, hence making the translation from OC to VC lossy. The existing CycleGAN approaches do not handle lossy transformations. To address this shortcoming, we introduce an extended cycle consistency loss, which compares the geometric structures from OC in the VC domain. This loss removes the need for the CycleGAN to embed OC information in the VC domain. To handle a stronger removal of the textures and lighting, a Directional Discriminator is introduced to differentiate the direction of translation (by creating paired information for the discriminator), as opposed to the standard CycleGAN which is direction-agnostic. Combining the extended cycle consistency loss and the Directional Discriminator, we show state-of-the-art results on scale-consistent depth inference for phantom, textured VC and for real polyp and normal colon video sequences. We also present results for realistic pedunculated and flat polyp synthesis from bumps introduced in 3D VC models.
    You can find the code and additional details via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP
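    The extended cycle-consistency idea can be sketched abstractly. This is one plausible reading of the abstract, not the paper's exact formulation: rather than forcing the VC-to-OC reconstruction to match the input pixel-wise (impossible when OC-to-VC is lossy), the comparison happens after mapping back into the VC domain, where the unrecoverable texture detail has already been removed. The generator callables are placeholders:

```python
import numpy as np

def l1(a, b):
    # mean absolute error between two image tensors
    return float(np.mean(np.abs(a - b)))

def extended_cycle_loss(x_oc, g_oc2vc, f_vc2oc):
    """One plausible form of the extended cycle-consistency loss:
    compare G(F(G(x))) against G(x) in the VC domain instead of
    comparing F(G(x)) against x in the lossy OC domain."""
    vc = g_oc2vc(x_oc)       # OC -> VC (drops texture / specular detail)
    oc_rec = f_vc2oc(vc)     # VC -> OC (hallucinates appearance)
    return l1(g_oc2vc(oc_rec), vc)
```

    With identity "generators" the loss is exactly zero, which makes the invariant easy to unit-test before plugging in real networks.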

    Please cite the following papers when using this dataset.

    The OC data came from the HyperKvasir dataset:

    Borgli, Hanna, et al. "HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy." Scientific data 7.1 (2020): 1-14.

    The depth data came from the following TCIA dataset:

    Smith K, Clark K, Bennett W, Nolan T, Kirby J, Wolfsberger M, Moulton J, Vendt B, Freymann J. (2015). Data From CT_COLONOGRAPHY. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.NWTESAY1

  3. Simcol3D - 3D Reconstruction during Colonoscopy Challenge Dataset - Dataset...

    • b2find.eudat.eu
    Updated Apr 7, 2024
    Cite
    (2024). Simcol3D - 3D Reconstruction during Colonoscopy Challenge Dataset - Dataset - B2FIND [Dataset]. https://b2find.eudat.eu/dataset/b9ec9aee-035b-590d-8897-5026ca49c094
    Explore at:
    Dataset updated
    Apr 7, 2024
    Description

    Colorectal cancer is one of the most common cancers in the world. By establishing a benchmark, SimCol3D aimed to facilitate data-driven navigation during colonoscopy. More details about the challenge and corresponding data can be found in the challenge paper on arXiv. The challenge consisted of simulated colonoscopy data and images from real patients. This data release encompasses the synthetic portion of the challenge. The synthetic data includes three different anatomies derived from real human CT scans. Each anatomy provides several randomly generated trajectories with RGB renderings, camera intrinsics, ground truth depths, and ground truth poses. In total, this dataset includes more than 37,000 labelled images. The real colonoscopy data used in the SimCol3D challenge consists of images extracted from the EndoMapper dataset. The real data is available on the EndoMapper Synapse page. The synthetic colonoscopy data is made available in this repository.

  4. Data from: Procedurally Generated Colonoscopy and Laparoscopy Data For...

    • rdr.ucl.ac.uk
    zip
    Updated Aug 4, 2023
    Cite
    Thomas Dowrick; Matt Clarkson; Joao Ramalhinho; Long Chen; Juana Gonzalez Bueno Puyal (2023). Procedurally Generated Colonoscopy and Laparoscopy Data For Improved Model Training Performance [Dataset]. http://doi.org/10.5522/04/23843904.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 4, 2023
    Dataset provided by
    University College London
    Authors
    Thomas Dowrick; Matt Clarkson; Joao Ramalhinho; Long Chen; Juana Gonzalez Bueno Puyal
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the training data to support the work in 'Procedurally Generated Colonoscopy and Laparoscopy Data For Improved Model Training Performance', published at the 2023 Data Engineering in Medical Imaging Workshop at MICCAI 2023.

    Contents:

    1. blender.zip - Blender files used to generate data.
    2. examples.zip - Example videos showing Shader Graphs, Geometry Nodes and data generation.
    3. liver.zip - The full generated laparoscopy dataset.
    4. colon.zip - The full generated colonoscopy dataset.
  5. Labeled Images for Ulcerative Colitis (LIMUC) Dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 28, 2025
    Cite
    Gorkem Polat; Haluk Tarik Kani; Ilkay Ergenc; Yesim Ozen Alahdab; Alptekin Temizel; Ozlen Atug (2025). Labeled Images for Ulcerative Colitis (LIMUC) Dataset [Dataset]. http://doi.org/10.5281/zenodo.5827695
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 28, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Gorkem Polat; Haluk Tarik Kani; Ilkay Ergenc; Yesim Ozen Alahdab; Alptekin Temizel; Ozlen Atug
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset Details

    The LIMUC dataset comprises 11,276 images from 564 patients and 1,043 colonoscopy procedures; the patients underwent colonoscopy for ulcerative colitis between December 2011 and July 2019 at the Department of Gastroenterology, Marmara University School of Medicine. Two experienced gastroenterologists blindly reviewed and classified all images according to the Mayo endoscopic score (MES). Images labeled differently by the two reviewers were also labeled independently by a third experienced reviewer, without access to the previous labels. The final MES for such images was determined by majority voting.


    Mayo 0: 6105 (54.14%)
    Mayo 1: 3052 (27.70%)
    Mayo 2: 1254 (11.12%)
    Mayo 3: 865 (7.67%)

    patient_based_classified_images: Images of each patient are separated according to Mayo classes. If a train-val-test splitting is to be made according to the ratios desired by the user, this folder should be used.

    train_and_validation_sets: Train and validation sets used in the research paper. Using the scripts in the dataset's GitHub repository, the same 10 folds can be generated to replicate the results.

    test_set: Test set used for performance measurement in the research paper. For fair performance comparison, this set should be used when reporting results.
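    If you build your own splits from the patient_based_classified_images folder, the split must happen at the patient level so that frames from one procedure never leak across train/val/test. A stdlib sketch; the split ratios and the index-to-patient mapping are illustrative assumptions, not part of the dataset:

```python
import random
from collections import defaultdict

def patient_split(image_patient_ids, ratios=(0.7, 0.15, 0.15), seed=0):
    """Split image indices into (train, val, test) by *patient*.
    `image_patient_ids` maps image index -> patient id."""
    by_patient = defaultdict(list)
    for idx, pid in image_patient_ids.items():
        by_patient[pid].append(idx)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    groups = (patients[:n_train],
              patients[n_train:n_train + n_val],
              patients[n_train + n_val:])
    return [sorted(i for p in g for i in by_patient[p]) for g in groups]
```

    Shuffling patient IDs (not image indices) with a fixed seed keeps the split reproducible while guaranteeing patient disjointness.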

    Suggested Metrics

    Since the classes (Mayo-0, Mayo-1, Mayo-2, Mayo-3) are both imbalanced and ordinal, quadratic weighted kappa (QWK) can be used as the main performance metric. QWK is a commonly used statistic for assessing agreement on an ordinal scale and is one of the best single performance metrics for this problem given the class imbalance. Mean absolute error (MAE), macro F1 score, or macro accuracy can be used as alternative performance metrics.
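    A self-contained QWK implementation for Mayo 0-3 labels might look like the following (equivalent in intent to Cohen's kappa with quadratic weights; the helper name is ours):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=4):
    """Quadratic weighted kappa for ordinal labels 0..n_classes-1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # observed confusion matrix
    o = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        o[t, p] += 1
    # expected matrix under independence of rater marginals
    e = np.outer(np.bincount(y_true, minlength=n_classes),
                 np.bincount(y_pred, minlength=n_classes)) / len(y_true)
    # quadratic disagreement weights
    i, j = np.indices((n_classes, n_classes))
    w = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (w * o).sum() / (w * e).sum()
```

    Perfect agreement yields 1.0, and distant misclassifications (e.g. Mayo 0 predicted as Mayo 3) are penalized quadratically more than adjacent ones, which is exactly why QWK suits this ordinal, imbalanced task.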

    LIMUC Code Repository

    Many scripts for preprocessing, splitting, training, and validating the dataset are provided in this GitHub repository.

    Terms and Conditions

    The LIMUC dataset is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. This license permits unrestricted use, distribution, and reproduction in any medium, provided that proper attribution is given to the original creators. This ensures that the dataset can be used for both research and commercial applications while maintaining transparency and acknowledgment of the contributors.

    For more details about the license, please refer to: Creative Commons Attribution 4.0 International License.

    For questions, please contact polatgorkem@gmail.com.

  6. Data_Sheet_1_Development and Validation of a Deep Neural Network for...

    • frontiersin.figshare.com
    pdf
    Updated Jun 2, 2023
    Cite
    Guangcong Ruan; Jing Qi; Yi Cheng; Rongbei Liu; Bingqiang Zhang; Min Zhi; Junrong Chen; Fang Xiao; Xiaochun Shen; Ling Fan; Qin Li; Ning Li; Zhujing Qiu; Zhifeng Xiao; Fenghua Xu; Linling Lv; Minjia Chen; Senhong Ying; Lu Chen; Yuting Tian; Guanhu Li; Zhou Zhang; Mi He; Liang Qiao; Zhu Zhang; Dongfeng Chen; Qian Cao; Yongjian Nian; Yanling Wei (2023). Data_Sheet_1_Development and Validation of a Deep Neural Network for Accurate Identification of Endoscopic Images From Patients With Ulcerative Colitis and Crohn's Disease.pdf [Dataset]. http://doi.org/10.3389/fmed.2022.854677.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    Frontiers
    Authors
    Guangcong Ruan; Jing Qi; Yi Cheng; Rongbei Liu; Bingqiang Zhang; Min Zhi; Junrong Chen; Fang Xiao; Xiaochun Shen; Ling Fan; Qin Li; Ning Li; Zhujing Qiu; Zhifeng Xiao; Fenghua Xu; Linling Lv; Minjia Chen; Senhong Ying; Lu Chen; Yuting Tian; Guanhu Li; Zhou Zhang; Mi He; Liang Qiao; Zhu Zhang; Dongfeng Chen; Qian Cao; Yongjian Nian; Yanling Wei
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background and Aim: The identification of ulcerative colitis (UC) and Crohn's disease (CD) is a key element interfering with therapeutic response, but it is often difficult for less experienced endoscopists to identify UC and CD. Therefore, we aimed to develop and validate a deep learning diagnostic system trained on a large number of colonoscopy images to distinguish UC and CD.

    Methods: This multicenter diagnostic study was performed in 5 hospitals in China. Normal individuals and active patients with inflammatory bowel disease (IBD) were enrolled. A dataset of 1,772 participants with 49,154 colonoscopy images was obtained between January 2018 and November 2020. We developed a deep learning model based on a deep convolutional neural network (CNN). To generalize the applicability of the deep learning model in clinical practice, we compared the deep model with 10 endoscopists and applied it in 3 hospitals across China.

    Results: The identification accuracy obtained by the deep model was superior to that of experienced endoscopists per patient (deep model vs. trainee endoscopist, 99.1% vs. 78.0%; deep model vs. competent endoscopist, 99.1% vs. 92.2%, P < 0.001) and per lesion (deep model vs. trainee endoscopist, 90.4% vs. 59.7%; deep model vs. competent endoscopist, 90.4% vs. 69.9%, P < 0.001). In addition, the mean reading time was reduced by the deep model (deep model vs. endoscopists, 6.20 s vs. 2,425.00 s, P < 0.001).

    Conclusion: We developed a deep model to assist with the clinical diagnosis of IBD. This provides a diagnostic device for medical education and clinicians to improve the efficiency of diagnosis and treatment.

  7. Table_1_Optical diagnosis in still images of colorectal polyps: comparison...

    • frontiersin.figshare.com
    docx
    Updated May 23, 2024
    Cite
    Pedro Davila-Piñón; Alba Nogueira-Rodríguez; Astrid Irene Díez-Martín; Laura Codesido; Jesús Herrero; Manuel Puga; Laura Rivas; Eloy Sánchez; Florentino Fdez-Riverola; Daniel Glez-Peña; Miguel Reboiro-Jato; Hugo López-Fernández; Joaquín Cubiella (2024). Table_1_Optical diagnosis in still images of colorectal polyps: comparison between expert endoscopists and PolyDeep, a Computer-Aided Diagnosis system.docx [Dataset]. http://doi.org/10.3389/fonc.2024.1393815.s004
    Explore at:
    Available download formats: docx
    Dataset updated
    May 23, 2024
    Dataset provided by
    Frontiers
    Authors
    Pedro Davila-Piñón; Alba Nogueira-Rodríguez; Astrid Irene Díez-Martín; Laura Codesido; Jesús Herrero; Manuel Puga; Laura Rivas; Eloy Sánchez; Florentino Fdez-Riverola; Daniel Glez-Peña; Miguel Reboiro-Jato; Hugo López-Fernández; Joaquín Cubiella
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: PolyDeep is a computer-aided detection and classification (CADe/x) system trained to detect and classify polyps. During colonoscopy, CADe/x systems help endoscopists to predict the histology of colonic lesions.

    Objective: To compare the diagnostic performance of PolyDeep and expert endoscopists for the optical diagnosis of colorectal polyps on still images.

    Methods: PolyDeep Image Classification (PIC) is an in vitro diagnostic test study. The PIC database contains NBI images of 491 colorectal polyps with histological diagnosis. We evaluated the diagnostic performance of PolyDeep and four expert endoscopists for neoplasia (adenoma, sessile serrated lesion, traditional serrated adenoma) and adenoma characterization and compared them with the McNemar test. Receiver operating characteristic curves were constructed to assess the overall discriminatory ability, comparing the area under the curve of endoscopists and PolyDeep with the chi-square homogeneity areas test.

    Results: The diagnostic performance of the endoscopists and PolyDeep in the characterization of neoplasia is similar in terms of sensitivity (PolyDeep: 89.05%; E1: 91.23%, p=0.5; E2: 96.11%, p
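    The McNemar comparison between PolyDeep and each endoscopist operates on paired per-lesion outcomes. This is a generic sketch of the uncorrected chi-square form of the statistic, not the paper's exact computation:

```python
def mcnemar_statistic(model_correct, expert_correct):
    """McNemar chi-square statistic (no continuity correction) from
    paired outcomes on the same lesions:
    b = model right / expert wrong, c = model wrong / expert right."""
    b = sum(1 for m, e in zip(model_correct, expert_correct) if m and not e)
    c = sum(1 for m, e in zip(model_correct, expert_correct) if not m and e)
    if b + c == 0:
        return 0.0
    return (b - c) ** 2 / (b + c)
```

    Only the discordant pairs (b and c) carry information; lesions where both raters agree drop out of the statistic entirely.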

  8. Replication Data for: CAPTIV8 : A comprehensive large scale CAPsule...

    • search.dataone.org
    • dataverse.no
    Updated Sep 25, 2024
    Cite
    Vats, Anuja; Ahmed, Bilal; Floor, Pål Anders; Mohammed, Ahmed; Pedersen, Marius; Hovde, Øistein (2024). Replication Data for: CAPTIV8 : A comprehensive large scale CAPsule endoscopy dataset for Integrated diagnosis [Dataset]. http://doi.org/10.18710/BSXNA1
    Explore at:
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    DataverseNO
    Authors
    Vats, Anuja; Ahmed, Bilal; Floor, Pål Anders; Mohammed, Ahmed; Pedersen, Marius; Hovde, Øistein
    Description

    General description and ethics approvals: The dataset contains images and videos of wireless capsule endoscopy examinations of 10 patients, focused on the large colon and conducted using the PillCAM Colon 2 capsule manufactured by Medtronic. In addition to images and videos, it includes alphanumeric metadata comprising diagnostic summaries from capsule endoscopy, colonoscopy, and histopathology reports. The dataset includes 8 different types of pathologies in addition to symptoms of ulcerative colitis. The examinations were conducted in 2021 at the Innlandet Hospital Trust, Gjøvik (Norway) on patients with confirmed ulcerative colitis. All patients gave written informed consent, and ethical approval to publish the anonymized image, video, and text data was obtained from the director of medicine and health at Innlandet Hospital Trust in 2021. Patient information was not linked to the study to preserve anonymity; pseudo IDs were assigned instead.

    Data acquisition procedure: The patients underwent capsule endoscopy examination on the first day, followed by a colonoscopy the next day. Tissue samples were retrieved during colonoscopy from different bowel segments and sent for histopathology. The histopathology report corresponds to 5 sections of the colon numbered 1 to 5, interpreted as: 1: cecum/ascending, 2: transverse, 3: descending, 4: sigmoid, 5: rectum.

    Annotation procedures: The annotations were performed by an experienced gastroenterologist in the Rapid Reader software. Clean and representative normal and abnormal frames were selected in the video, and a text description corresponding to the images was written. A short video segment of approximately 150 frames was extracted around each of these normal/abnormal frames; these segments are available in the dataset. Each video segment can be assumed to carry the same weak label as its frame. Certain video fragments were intentionally cut shorter than 150 frames to prevent accidental identification before or after capsule ingestion.
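    The ~150-frame extraction described above reduces to computing a frame window around each annotated frame, clipped to the video bounds. A small sketch; the helper and its defaults are our own, and real clips in the dataset may be deliberately shorter, as noted:

```python
def clip_window(center, total_frames, length=150):
    """Frame window of `length` frames centered on an annotated frame,
    clipped to [0, total_frames). Returns (start, end), end exclusive."""
    half = length // 2
    start = max(0, center - half)
    end = min(total_frames, start + length)
    start = max(0, end - length)  # re-anchor if clipped at the tail
    return start, end
```

    The window can then be fed to any frame reader (e.g. seeking to `start` and reading `end - start` frames).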

  9. AI Colon Polyp Morphology Predictor Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jul 5, 2025
    Cite
    Growth Market Reports (2025). AI Colon Polyp Morphology Predictor Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/ai-colon-polyp-morphology-predictor-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Colon Polyp Morphology Predictor Market Outlook

    As per our latest research, the AI Colon Polyp Morphology Predictor Market size is valued at USD 312.5 million in 2024, with robust momentum expected over the coming years. The market is forecasted to reach USD 1,480.2 million by 2033, expanding at a compelling CAGR of 18.7% from 2025 to 2033. This significant growth is driven by increasing adoption of artificial intelligence in medical diagnostics, rising colorectal cancer screening rates, and technological advancements in imaging and data analytics.

    One of the primary growth factors propelling the AI Colon Polyp Morphology Predictor Market is the escalating prevalence of colorectal cancer globally. With colorectal cancer ranking as the third most diagnosed cancer worldwide, early detection and accurate characterization of polyps are critical for improving patient outcomes. AI-powered morphology predictors are revolutionizing diagnostic workflows by offering rapid, reproducible, and highly accurate assessments of polyp characteristics, thereby aiding clinicians in differentiating between benign and malignant lesions. This not only enhances diagnostic confidence but also reduces the rate of unnecessary biopsies and interventions, contributing to improved patient management and cost efficiency in healthcare systems.

    Another significant driver is the rapid advancement and integration of machine learning algorithms and deep learning technologies into endoscopic imaging platforms. The continuous evolution of AI models, trained on vast datasets of colonoscopy images, has led to remarkable improvements in the sensitivity and specificity of polyp detection and morphological classification. These AI systems are increasingly being incorporated into routine clinical practice, supported by robust clinical validation studies and regulatory approvals. The growing collaboration between medical device manufacturers, software developers, and healthcare institutions further accelerates the pace of innovation, making AI-based morphology prediction tools more accessible and reliable for end-users.

    Moreover, the increasing emphasis on personalized medicine and value-based healthcare is fostering the adoption of AI colon polyp morphology predictors. Healthcare providers are under mounting pressure to deliver precise, efficient, and patient-centric care, which is driving investments in advanced diagnostic technologies. AI-enabled solutions not only streamline the workflow for gastroenterologists but also provide standardized assessments that minimize inter-observer variability. This aligns with broader healthcare initiatives focused on improving screening rates, reducing diagnostic errors, and optimizing treatment pathways for colorectal cancer, thereby fueling market growth.

    From a regional perspective, North America dominates the AI Colon Polyp Morphology Predictor Market, owing to its advanced healthcare infrastructure, high adoption rates of AI technologies, and strong presence of leading industry players. Europe follows closely, benefiting from favorable government initiatives and increasing awareness about colorectal cancer screening. The Asia Pacific region is poised for the fastest growth, driven by rising healthcare investments, expanding access to diagnostic services, and a growing burden of colorectal diseases. Latin America and the Middle East & Africa are also witnessing gradual uptake, supported by improving healthcare facilities and strategic collaborations with global technology providers.

    Component Analysis

    The AI Colon Polyp Morphology Predictor Market is segmented by component into software, hardware, and services, each playing a pivotal role in the overall ecosystem. The software segment constitutes the largest share, primarily due to the growing demand for advanced AI algorithms capable of real-time image analysis and decision support during colonoscopy procedures. These software solutions are designed to seamlessly integrate with existing endoscopic equipment, providing intuiti

  10. FoldIt Public Dataset

    • zenodo.org
    zip
    Updated Sep 23, 2021
    Cite
    Shawn Mathew; Saad Nadeem; Arie Kaufman (2021). FoldIt Public Dataset [Dataset]. http://doi.org/10.5281/zenodo.5519974
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 23, 2021
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Shawn Mathew; Saad Nadeem; Arie Kaufman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the public dataset used for training FoldIt, a deep learning model for haustral fold detection and segmentation. TrainA contains the optical colonoscopy (OC) images. TrainB contains the haustral fold annotations overlaid on the virtual colonoscopy (VC) images. Lastly, TrainC contains the VC images. The model trained on this dataset is also included here.

    Abstract:

    Haustral folds are colon wall protrusions implicated in the high polyp miss rate during optical colonoscopy procedures. If segmented accurately, haustral folds can allow for better estimation of missed surface and can also serve as valuable landmarks for registering pre-treatment virtual (CT) and optical colonoscopies, to guide navigation towards the anomalies found in pre-treatment scans. We present a novel generative adversarial network, FoldIt, for feature-consistent image translation of optical colonoscopy videos to virtual colonoscopy renderings with haustral fold overlays. A new transitive loss is introduced in order to leverage ground truth information between haustral fold annotations and virtual colonoscopy renderings. We demonstrate the effectiveness of our model on real challenging optical colonoscopy videos as well as on textured virtual colonoscopy videos with clinician-verified haustral fold annotations. In essence, the FoldIt model is a method for translating between domains when a shared common domain is available. We use the FoldIt model to learn a translation from optical colonoscopy to haustral fold annotation via a common virtual colonoscopy domain. You can find the code and additional details about FoldIt via our Computational Endoscopy Platform at https://github.com/nadeemlab/CEP
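    The transitive idea (translate A to C via a shared common domain B) can be sketched abstractly. This is one plausible form consistent with the description above, not the paper's exact loss; all three generator callables are placeholders:

```python
import numpy as np

def transitive_loss(x_oc, g_oc2vc, g_vc2fold, h_oc2fold):
    """One plausible transitive consistency term: the direct
    OC -> fold-annotation mapping should agree with the composition
    OC -> VC -> fold-annotation through the common VC domain."""
    direct = h_oc2fold(x_oc)
    via_common = g_vc2fold(g_oc2vc(x_oc))
    return float(np.mean(np.abs(direct - via_common)))
```

    As with any consistency loss, identity generators give exactly zero, which is a convenient sanity check before training real networks.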

    Please cite the following papers when using this dataset.

    The OC data came from the HyperKvasir dataset:

    Borgli, Hanna, et al. "HyperKvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy." Scientific data 7.1 (2020): 1-14.

    The VC and fold annotation data came from the following TCIA dataset:

    Smith K, Clark K, Bennett W, Nolan T, Kirby J, Wolfsberger M, Moulton J, Vendt B, Freymann J. (2015). Data From CT_COLONOGRAPHY. The Cancer Imaging Archive. https://doi.org/10.7937/K9/TCIA.2015.NWTESAY1

  11. Data from: Spatial characterization and stratification of colorectal...

    • data.niaid.nih.gov
    xml
    Updated Aug 4, 2024
    Cite
    Mario Oroshi; Matthias Mann (2024). Spatial characterization and stratification of colorectal adenomas by Deep Visual Proteomics [Dataset]. https://data.niaid.nih.gov/resources?id=pxd046999
    Explore at:
    Available download formats: xml
    Dataset updated
    Aug 4, 2024
    Dataset provided by
    Department of Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany. & NNF Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
    Authors
    Mario Oroshi; Matthias Mann
    Variables measured
    Proteomics
    Description

    Background and Aims: Colorectal adenomas (CRAs) are precursor lesions that can progress to adenocarcinomas. Current clinical guidelines categorize patients into risk groups based on adenoma characteristics observed during index colonoscopy, but this may lead to overtreatment. Our aim was to establish a molecular feature-based risk allocation framework towards improved patient stratification. Methods: Deep Visual Proteomics (DVP) is a novel approach that combines image-based artificial intelligence with automated microdissection and ultra-high sensitive mass spectrometry. Here we used DVP on formalin-fixed, paraffin-embedded (FFPE) CRA tissues from nine patients. Immunohistological staining for Caudal-type homeobox 2 (CDX2), a gene implicated in colorectal cancer, enabled the characterization of cellular heterogeneity within distinct tissue regions and across patients. Results: DVP seamlessly integrated with current pathology workflows and equipment, identifying deleted in malignant brain tumors 1 (DMBT1), myristoylated alanine rich protein kinase C (MARCKS), and cluster of differentiation 99 (CD99) correlated with disease recurrence history, making them potential markers of risk stratification. The spatial and cell type specific capabilities of DVP uncovered a metabolic switch towards anaerobic glycolysis in areas of high dysplasia, which was specific for the cells with high CDX2 expression. Conclusion: The application of spatially resolved proteomics to CRA revealed three new potential markers for early-stage tumor development, and provided novel insights into metabolic reprogramming. Our findings underscore the potential of this technology to refine early-stage detection and contribute to personalized patient management strategies.

  12. Analysis of New Endoscopic Features and Variable Stiffness in Colonoscopy:...

    • data.niaid.nih.gov
    xml
    Updated Oct 15, 2016
    Cite
    (2016). Analysis of New Endoscopic Features and Variable Stiffness in Colonoscopy: Prospective Randomised Trial [Dataset]. https://data.niaid.nih.gov/resources?id=2250639
    Explore at:
    Available download formats: xml
    Dataset updated
    Oct 15, 2016
    Area covered
    Hungary
    Variables measured
    Clinical
    Description

    The aim of the present study is to develop and evaluate computer-based methods for automated and improved detection and classification of different colorectal lesions, especially polyps. For this purpose, pit pattern and vascularization features of up to 1000 polyps with a size of 10 mm or smaller will first be detected and stored in our web-based picture database, acquired with zoom BLI colonoscopy. These polyps are imaged and subsequently removed for histological analysis. The polyp images are analyzed by a newly developed deep learning computer algorithm. The results of the deep learning automatic classification (sensitivity, specificity, negative predictive value, positive predictive value, and accuracy) are compared to those of human observers, who were blinded to the histological gold standard. In a second approach, we plan to use LCI of the colon rather than the usual white light. Here, we will determine whether this technique could improve the detection of flat neoplastic lesions, laterally spreading tumors, small pedunculated adenomas, and serrated polyps. Serrated polyps are so called because of their appearance under the microscope after removal. They tend to be located high in the colon, far from the rectum. They have definitely been shown to be a type of precancerous polyp, and it is possible that using LCI will make it easier to see them, as they can be quite difficult to see with standard white light.
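    The endpoints listed above (sensitivity, specificity, negative and positive predictive value, accuracy) all derive from a 2x2 confusion table against the histological gold standard. A purely illustrative helper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from a 2x2 confusion table
    (true/false positives and negatives vs. the histological
    gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of neoplasia in the evaluated polyp set.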

  13.

    Data_Sheet_1_Development and validation of a three-dimensional deep...

    • frontiersin.figshare.com
    docx
    Updated Dec 18, 2023
    Cite
    Lina Feng; Jiaxin Xu; Xuantao Ji; Liping Chen; Shuai Xing; Bo Liu; Jian Han; Kai Zhao; Junqi Li; Suhong Xia; Jialun Guan; Chenyu Yan; Qiaoyun Tong; Hui Long; Juanli Zhang; Ruihong Chen; Dean Tian; Xiaoping Luo; Fang Xiao; Jiazhi Liao (2023). Data_Sheet_1_Development and validation of a three-dimensional deep learning-based system for assessing bowel preparation on colonoscopy video.docx [Dataset]. http://doi.org/10.3389/fmed.2023.1296249.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Dec 18, 2023
    Dataset provided by
    Frontiers
    Authors
    Lina Feng; Jiaxin Xu; Xuantao Ji; Liping Chen; Shuai Xing; Bo Liu; Jian Han; Kai Zhao; Junqi Li; Suhong Xia; Jialun Guan; Chenyu Yan; Qiaoyun Tong; Hui Long; Juanli Zhang; Ruihong Chen; Dean Tian; Xiaoping Luo; Fang Xiao; Jiazhi Liao
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: The performance of existing image-based models for evaluating bowel preparation on colonoscopy videos has been relatively low, and only a few models have used external data to demonstrate generalization. This study therefore attempted to develop a more precise and stable AI system for assessing bowel preparation on colonoscopy video. Methods: We propose a system named ViENDO, comprising two CNNs, to assess bowel preparation quality. First, Information-Net identifies and filters out colonoscopy video frames unsuitable for Boston Bowel Preparation Scale (BBPS) scoring. Second, BBPS-Net, a three-dimensional (3D) convolutional neural network (CNN), was trained and tested on 5,566 suitable short video clips to detect BBPS-based insufficient bowel preparation. ViENDO was then applied to complete withdrawal colonoscopy videos from multiple centers to predict BBPS segment scores in clinical settings. We also conducted a human-machine contest to compare its performance with that of endoscopists. Results: On video clips, BBPS-Net achieved an area under the curve of up to 0.98 and an accuracy of 95.2% for determining inadequate bowel preparation. Applied to full-length withdrawal colonoscopy videos, ViENDO assessed bowel cleanliness with an accuracy of 93.8% on the internal test set and 91.7% on the external dataset. The human-machine contest showed ViENDO to be slightly more accurate than most endoscopists, though the difference was not statistically significant. Conclusion: The 3D-CNN-based AI model showed good performance in evaluating bowel preparation on full-length colonoscopy video and has potential as a substitute for endoscopists in providing BBPS-based assessments in daily clinical practice.
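For orientation, a minimal 3D-CNN clip classifier of the general kind described might be sketched in PyTorch as follows. The layer sizes, clip dimensions, and two-class head (adequate vs. inadequate preparation) are illustrative assumptions, not the published BBPS-Net architecture:

```python
import torch
import torch.nn as nn

class ClipClassifier(nn.Module):
    """Illustrative 3D-CNN for scoring short colonoscopy clips.
    All hyperparameters here are assumptions for the sketch."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # input: (B, C, T, H, W)
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pool over T, H, W
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

clip = torch.randn(1, 3, 16, 64, 64)  # one 16-frame RGB clip, 64x64 pixels
logits = ClipClassifier()(clip)
print(logits.shape)  # torch.Size([1, 2])
```

The key difference from a 2D image classifier is the extra temporal axis: `Conv3d` convolves across frames as well as pixels, which is what lets a clip-level model see motion and coverage rather than a single view.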

  14.

    CP-CHILD, a three-class colon polyp dataset

    • figshare.com
    zip
    Updated Oct 29, 2020
    Cite
    wang wei; jinge tian; yanhong luo; Jieyu You; Wang Xin (2020). CP-CHILD, a three-class colon polyp dataset [Dataset]. http://doi.org/10.6084/m9.figshare.13159811.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 29, 2020
    Dataset provided by
    figshare
    Authors
    wang wei; jinge tian; yanhong luo; Jieyu You; Wang Xin
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CP-CHILD records colonoscopy data from children. The dataset is in a folder named “CP-CHILD” with two subfolders, “Train” and “Test”. The “Train” folder contains 4,600 images and the “Test” folder 1,100 images, all taken with an Olympus PCF-H290DI. “Train” contains three subfolders, “Normal”, “Polyp”, and “Others”, corresponding to normal colon, colon polyp, and other colon diseases: “Normal” holds 3,000 normal colon images, “Polyp” 800 polyp images, and “Others” 800 images of other colon diseases. Similarly, the “Test” folder contains the same three subfolders, with 700 normal, 200 polyp, and 200 other colon disease images. CP-CHILD-DA is the dataset obtained by data augmentation of the CP-CHILD data. The “CP-CHILD-DA” folder has two subfolders, “Train-DA” and “Test-DA”. “Train-DA” contains 17,000 images in three subfolders: “Normal-DA” (11,000 normal colon images), “Polyp-DA” (3,000 polyp images), and “Others-DA” (3,000 images of other colon diseases). Similarly, “Test-DA” contains 5,800 images: 3,800 normal, 1,000 polyp, and 1,000 other colon disease images.
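Given the fixed folder-per-class layout described above, a split can be indexed with a few lines of stdlib Python. This is a sketch: the folder names are taken from the description, and the demo builds a tiny synthetic copy of the layout rather than using the real images:

```python
import tempfile
from pathlib import Path

CLASSES = ["Normal", "Polyp", "Others"]  # subfolders described for CP-CHILD

def index_split(root, split="Train"):
    """Return (image_path, class_name) pairs for one split of the
    CP-CHILD layout. File extensions are not filtered here."""
    pairs = []
    for cls in CLASSES:
        for p in sorted((Path(root) / split / cls).glob("*")):
            pairs.append((p, cls))
    return pairs

# Demo on a synthetic miniature of the layout (real images not required):
root = Path(tempfile.mkdtemp()) / "CP-CHILD"
for split, counts in {"Train": (3, 1, 1), "Test": (2, 1, 1)}.items():
    for cls, n in zip(CLASSES, counts):
        d = root / split / cls
        d.mkdir(parents=True)
        for i in range(n):
            (d / f"img_{i}.jpg").touch()

print(len(index_split(root, "Train")))  # 5
```

The same layout is what `torchvision.datasets.ImageFolder` expects, so each split can also be loaded directly with that class for training.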

  15.

    DataSheet_1_A bibliometric and visual analysis of publications on artificial...

    • frontiersin.figshare.com
    docx
    Updated Jun 21, 2023
    Cite
    Pan Huang; Zongfeng Feng; Xufeng Shu; Ahao Wu; Zhonghao Wang; Tengcheng Hu; Yi Cao; Yi Tu; Zhengrong Li (2023). DataSheet_1_A bibliometric and visual analysis of publications on artificial intelligence in colorectal cancer (2002-2022).docx [Dataset]. http://doi.org/10.3389/fonc.2023.1077539.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    Frontiers
    Authors
    Pan Huang; Zongfeng Feng; Xufeng Shu; Ahao Wu; Zhonghao Wang; Tengcheng Hu; Yi Cao; Yi Tu; Zhengrong Li
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Colorectal cancer (CRC) has the third-highest incidence and second-highest mortality rate of all cancers worldwide. Early diagnosis and screening of CRC have been the focus of research in this field. With the continuous development of artificial intelligence (AI) technology, AI offers advantages in many aspects of CRC, such as adenoma screening, genetic testing, and prediction of tumor metastasis. Objective: This study uses bibliometrics to analyze research on AI in CRC, summarize the history and current status of research in the field, and predict future research directions. Method: We searched the SCIE database for all literature on CRC and AI spanning 2002-2022 and analyzed the papers' authors, countries, institutions, and references. Co-authorship, co-citation, and co-occurrence analysis were the main methods of analysis; CiteSpace, VOSviewer, and SCImago Graphica were used to visualize the results. Result: This study selected 1,531 articles on AI in CRC. China published the most articles in the field (580), while the U.S. had the highest-quality publications, with an average of 46.13 citations per article. Mori Y and Ding K were the two most prolific authors. Scientific Reports, Cancers, and Frontiers in Oncology are the field's most widely published journals, and institutions from China occupy the top nine positions among the most published institutions. We found that research on AI in this field mainly focuses on colonoscopy-assisted diagnosis, imaging histology, and pathology examination. Conclusion: AI in CRC is currently in a development stage with good prospects and is widely used in colonoscopy, imageomics, and pathology. However, the scope of AI applications is still limited, and there is a lack of inter-institutional collaboration. Wider adoption of AI technology is the main direction of future development in this field.
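The co-occurrence analysis mentioned above starts from a simple pairwise count over per-record keyword lists, which tools like VOSviewer then turn into network maps. A sketch with hypothetical records:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each pair of keywords appears on the same paper.
    `records` is a list of keyword lists, one per paper."""
    pairs = Counter()
    for kws in records:
        # sort so each unordered pair has one canonical key
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical paper keyword lists for illustration:
papers = [
    ["colonoscopy", "deep learning", "polyp detection"],
    ["deep learning", "polyp detection"],
    ["colonoscopy", "pathology"],
]
counts = cooccurrence(papers)
print(counts[("deep learning", "polyp detection")])  # 2
```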

  16.

    Comparison of models on test set GIANA2017-T.

    • plos.figshare.com
    xls
    Updated Jul 12, 2023
    Cite
    Haitao Bian; Min Jiang; Jingjing Qian (2023). Comparison of models on test set GIANA2017-T. [Dataset]. http://doi.org/10.1371/journal.pone.0288376.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Jul 12, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Haitao Bian; Min Jiang; Jingjing Qian
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Colorectal cancer (CRC) is one of the most significant threats to public health and to sustainable healthcare systems during urbanization. As the primary screening method, colonoscopy can effectively detect polyps before they evolve into cancerous growths; however, visual inspection by endoscopists alone does not provide consistently reliable polyp detection in colonoscopy videos and images for CRC screening. Artificial intelligence (AI)-based object detection is considered a potent way to overcome the limitations of visual inspection and mitigate human error in colonoscopy. This study implemented a YOLOv5 object detection model to investigate the performance of mainstream one-stage approaches in colorectal polyp detection, employing a variety of training datasets and model structure configurations to identify the determinative factors in practical applications. The designed experiments show that the model yields acceptable results when assisted by transfer learning, and highlight that the primary constraint on deep learning polyp detection is the scarcity of training data: model performance improved by 15.6% in average precision (AP) when the original training dataset was expanded. The experimental results were also analysed from a clinical perspective to identify potential causes of false positives, and a quality management framework is proposed for future dataset preparation and model development in AI-driven polyp detection tasks for smart healthcare solutions.
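Before AP can be computed, predicted boxes are matched to ground truth by intersection-over-union (IoU). A minimal sketch with hypothetical (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A predicted polyp box vs. ground truth (hypothetical coordinates):
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333
```

A detection counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), so the threshold choice directly shapes reported AP.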

  17.

    Gastrointestinal AI-assisted Diagnosis Solution Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 3, 2025
    Cite
    Data Insights Market (2025). Gastrointestinal AI-assisted Diagnosis Solution Report [Dataset]. https://www.datainsightsmarket.com/reports/gastrointestinal-ai-assisted-diagnosis-solution-527824
    Explore at:
    Available download formats: pdf, doc, ppt
    Dataset updated
    Jun 3, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policyhttps://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The global market for Gastrointestinal (GI) AI-assisted diagnosis solutions is experiencing robust growth, driven by the increasing prevalence of gastrointestinal diseases, advancements in AI and machine learning technologies, and the rising demand for improved diagnostic accuracy and efficiency. The market, estimated at $500 million in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of 15% from 2025 to 2033, reaching approximately $1.8 billion by 2033. Key drivers include the ability of AI to analyze medical images (endoscopy, colonoscopy) significantly faster and more accurately than humans, leading to earlier and more precise diagnoses. This translates to improved patient outcomes, reduced healthcare costs associated with delayed or misdiagnosis, and enhanced workflow efficiency for medical professionals. Furthermore, the integration of AI into existing endoscopy and imaging systems is streamlining the diagnostic process, increasing accessibility to advanced diagnostics, particularly in underserved regions. However, challenges remain, including regulatory hurdles for AI-based medical devices, concerns about data privacy and security, and the need for extensive clinical validation and adoption by healthcare professionals. The market is segmented by technology (image analysis, pathology analysis), application (colon cancer screening, polyp detection, inflammatory bowel disease diagnosis), and end-user (hospitals, clinics, diagnostic centers). Leading companies such as Medtronic, Vision, and SenseTime are actively shaping the market landscape through continuous innovation and strategic partnerships. The competitive landscape is characterized by a mix of established medical device companies and emerging AI technology firms. Companies are focusing on developing sophisticated algorithms and user-friendly interfaces to optimize the integration of AI into clinical workflows. 
Future growth will be heavily influenced by advancements in deep learning, the development of more comprehensive datasets for algorithm training, and the growing adoption of cloud-based AI solutions. Addressing regulatory concerns and building trust among healthcare providers are crucial factors that will influence the market's trajectory in the coming years. Successful strategies will involve collaborative efforts among technology developers, healthcare providers, and regulatory bodies to ensure the safe and effective implementation of GI AI-assisted diagnostic solutions. The market’s geographic distribution is likely to mirror existing healthcare infrastructure and technological advancements, with North America and Europe leading in adoption initially, followed by a gradual expansion into Asia-Pacific and other regions.
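The projection above follows the standard compound-growth relation FV = PV · (1 + r)^n. As a quick sanity check (a sketch only; the report's own base-year and compounding conventions may differ), compounding the stated $500 million base at a 15% CAGR over the eight years from 2025 to 2033 gives roughly $1.53 billion, so the ~$1.8 billion figure presumably reflects a slightly different base or rate assumption:

```python
def project(present_value, cagr, years):
    """Compound a starting market size forward at a constant annual growth rate."""
    return present_value * (1 + cagr) ** years

# $500M base in 2025 at 15% CAGR, compounded through 2033 (8 periods):
print(round(project(500, 0.15, 8)))  # 1530 ($ millions)
```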
