100+ datasets found
  1. DeepLesion (10,594 CT scans with lesions)

    • academictorrents.com
    bittorrent
    Updated Jan 26, 2019
    Cite
    Ke Yan (National Institutes of Health Clinical Center) (2019). DeepLesion (10,594 CT scans with lesions) [Dataset]. https://academictorrents.com/details/de50f4d4aa3d028944647a56199c07f5fa6030ff
    Explore at:
    Available download formats: bittorrent (243,037,288,033 bytes)
    Dataset updated
    Jan 26, 2019
    Dataset authored and provided by
    Ke Yan (National Institutes of Health Clinical Center)
    License

    No license specified (https://academictorrents.com/nolicensespecified)

    Description

    Introduction: The DeepLesion dataset contains 32,120 axial computed tomography (CT) slices from 10,594 CT scans (studies) of 4,427 unique patients. There are 1–3 lesions in each image with accompanying bounding boxes and size measurements, adding up to 32,735 lesions altogether. The lesion annotations were mined from NIH’s picture archiving and communication system (PACS). Some meta-data are also provided. The contents include:

    - Folder “Images_png”: png image files. We named each slice with the format “patient index_study index_series index_slice index.png”, with the last underscore being / or \ to indicate sub-folders. The images are stored as unsigned 16-bit; one should subtract 32768 from the pixel intensity to obtain the original Hounsfield unit (HU) values. We provide not only the key CT slice that contains the lesion annotation, but also its 3D context (30 mm of extra slices above and below the key slice). Due to the large size of the data and the file size limit o
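
    Given the 16-bit PNG convention described above, recovering HU values is a single offset subtraction. A minimal sketch (not part of the dataset distribution) using Pillow and NumPy; the file path is illustrative only:

    ```python
    # Hedged sketch: convert a DeepLesion 16-bit PNG slice to Hounsfield units.
    # The path below is a placeholder; real files follow
    # "patient index_study index_series index_slice index.png".
    import numpy as np
    from PIL import Image

    def load_hu_slice(path):
        """Read an unsigned 16-bit PNG and subtract the 32768 offset to get HU."""
        raw = np.asarray(Image.open(path), dtype=np.int32)  # int32 so negatives survive
        return raw - 32768                                  # per the dataset description

    hu = load_hu_slice("Images_png/000001_01_01/103.png")   # hypothetical file
    print(hu.min(), hu.max())                               # typical CT range, e.g. -1024 to ~3000
    ```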

  2. CT_DeepLesion-MedSAM2

    • huggingface.co
    Updated Jun 22, 2025
    Cite
    WangLab UofT (2025). CT_DeepLesion-MedSAM2 [Dataset]. https://huggingface.co/datasets/wanglab/CT_DeepLesion-MedSAM2
    Explore at:
    Dataset updated
    Jun 22, 2025
    Dataset authored and provided by
    WangLab UofT
    Description

    CT_DeepLesion-MedSAM2 Dataset

      Authors
    

    Jun Ma* 1,2, Zongxin Yang* 3, Sumin Kim2,4,5, Bihui Chen2,4,5, Mohammed Baharoon2,3,5, Adibvafa Fallahpour2,4,5, Reza Asakereh4,7, Hongwei Lyu4, Bo Wang† 1,2,4,5,6
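
    Since this is hosted as a Hugging Face dataset repository, one way to fetch the files locally is a snapshot download. A hedged sketch; the repo id is taken from the citation URL above, and the layout of the downloaded files is not described here:

    ```python
    # Hedged sketch: download the CT_DeepLesion-MedSAM2 dataset repository.
    # Requires `pip install huggingface_hub`.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="wanglab/CT_DeepLesion-MedSAM2",  # from the citation URL above
        repo_type="dataset",                      # dataset repo, not a model repo
    )
    print("Files downloaded to:", local_dir)
    ```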

  3. Data set of multi-source dermatoscopic images of skin hair for skin lesions

    • figshare.com
    • data.4tu.nl
    txt
    Updated Jun 1, 2023
    Cite
    Alessio Gallucci (2023). Data set of multi-source dermatoscopic images of skin hair for skin lesions [Dataset]. http://doi.org/10.4121/uuid:9ed94e25-8b74-4807-b84a-2c54ec9d96f0
    Explore at:
    Available download formats: txt
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    4TU.ResearchData
    Authors
    Alessio Gallucci
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    Skin hair annotations for 75 images taken randomly from: P. Tschandl, C. Rosendahl, and H. Kittler, “The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,” Sci. data, vol. 5, p. 180161, 2018.

  4. COVID-19 & Normal CT Segmentation Dataset

    • data.mendeley.com
    Updated Nov 27, 2023
    Cite
    Arvin Arian (2023). COVID-19 & Normal CT Segmentation Dataset [Dataset]. http://doi.org/10.17632/pfmgfpwnmm.2
    Explore at:
    Dataset updated
    Nov 27, 2023
    Authors
    Arvin Arian
    License

    Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
    License information was derived automatically

    Description

    This dataset includes CT data and segmentation masks from patients diagnosed with COVID-19, as well as data from subjects without the infection.

    This study is approved under the ethical approval codes of IR.TUMS.IKHC.REC.1399.255 and IR.TUMS.VCR.REC.1399.488 at Tehran University of Medical Sciences.

    The code for loading the dataset and running an AI model is available on: https://github.com/SamanSotoudeh/COVID19-segmentation
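
    For orientation only, a hedged sketch of how such CT volumes and masks are commonly loaded; the actual file formats and names are defined by the dataset and the linked GitHub repository, so the NIfTI assumption and paths below are illustrative, not documented properties of this dataset:

    ```python
    # Hedged sketch: load a CT volume and its lesion mask and report the
    # fraction of voxels labelled as lesion. NIfTI files and names are assumed.
    import nibabel as nib
    import numpy as np

    ct = nib.load("ct_volume.nii.gz").get_fdata()      # hypothetical file name
    mask = nib.load("lesion_mask.nii.gz").get_fdata()  # hypothetical file name

    lesion_fraction = np.count_nonzero(mask) / mask.size
    print(f"Lesion voxels: {lesion_fraction:.2%} of volume, CT shape {ct.shape}")
    ```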

    Please use the following citations:

    1- Arian, Arvin; Mehrabinejad, Mohammad-Mehdi; Zoorpaikar, Mostafa; Hasanzadeh, Navid; Sotoudeh-Paima, Saman; Kolahi, Shahriar; Gity, Masoumeh; Soltanian-Zadeh, "Accuracy of Artificial Intelligence CT Quantification in Predicting COVID-19 Subjects’ Prognosis" PLoS ONE (2023).

    2- Sotoudeh-Paima, Saman, et al. "A Multi-centric Evaluation of Deep Learning Models for Segmentation of COVID-19 Lung Lesions on Chest CT Scans." Iranian Journal of Radiology 19.4 (2022).

    3- Hasanzadeh, Navid, et al. "Segmentation of COVID-19 Infections on CT: Comparison of four UNet-based networks." 2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME). IEEE, 2020.

  5. iToBoS 2024 - Skin Lesion Detection with 3D-TBP

    • figshare.com
    • api.isic-archive.com
    png
    Updated May 12, 2025
    Cite
    Anup Saha (2025). iToBoS 2024 - Skin Lesion Detection with 3D-TBP [Dataset]. http://doi.org/10.6084/m9.figshare.28452545.v6
    Explore at:
    Available download formats: png
    Dataset updated
    May 12, 2025
    Dataset provided by
    figshare
    Authors
    Anup Saha
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The iToBoS dataset: skin region images extracted from 3D total body photographs for lesion detection.

    The early detection of skin cancer is critical for improving patient outcomes. Traditionally, dermatologists rely on dermoscopy to examine pigmented skin lesions. While this non-invasive technique enhances diagnostic accuracy, its effectiveness is highly dependent on the clinician’s expertise. Additionally, capturing dermoscopic images for every suspicious lesion is a labor-intensive process. Given these challenges, there is an increasing need for computer-aided diagnosis (CAD) systems that utilize conventional cameras. Such systems can support general physicians and other non-specialist practitioners in identifying potentially malignant lesions, improving early detection and intervention. Moreover, they facilitate longitudinal tracking of lesions, aiding researchers in studying disease progression and treatment efficacy. This dataset provides high-resolution skin patch images extracted from 3D total body photographs to support the development of advanced machine learning models for lesion detection. It serves as a valuable resource for researchers working on automated skin lesion analysis, particularly in the context of total body photography (TBP).

    Dataset description: The iToBoS dataset consists of 16,954 high-resolution images of skin regions obtained from anonymized 3D avatars of patients. These avatars were generated using the Canfield VECTRA WB360 system, a cutting-edge imaging technology that captures comprehensive, full-body skin images using 92 fixed cameras arranged in 46 stereo pairs with xenon flash lighting. The images were collected from patients at two clinical sites: the Clinical Hospital of Barcelona (Spain) and the University of Queensland (Australia). The dataset covers diverse anatomical locations, including the torso, arms, and legs, with each image having an average resolution of 1012x827 pixels and a 45-pixel overlap between adjacent images. The images are extracted from 3D avatars while ensuring compliance with GDPR regulations by automatically removing patient facial features. Each image is accompanied by metadata, including patient age range, body location, and sun damage score, allowing for in-depth analysis and stratification.

    Significance of the dataset:

    - Facilitates automated skin lesion detection: the dataset supports the development of AI-based lesion detection models that can improve early diagnosis of skin cancer, particularly in regions with limited access to dermatological expertise.

    - Supports total body photography research: leveraging 3D TBP for lesion detection is an emerging field, and this dataset provides a benchmark for further exploration.

    - Enhances machine learning applications: the dataset serves as a benchmark for developing state-of-the-art computer vision and deep learning models for detection of skin lesions.
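
    Because each image ships with metadata (age range, body location, sun damage score), a common first step is to index the patches by those fields. A hedged sketch, assuming the metadata is distributed as a CSV; the file name and column names below are illustrative, not taken from the dataset documentation:

    ```python
    # Hedged sketch: group iToBoS patches by body location and age range.
    # "metadata.csv" and its column names are assumptions for illustration.
    import pandas as pd

    meta = pd.read_csv("metadata.csv")                    # hypothetical file
    counts = meta.groupby(["body_location", "age_range"]).size()
    print(counts.sort_values(ascending=False).head(10))   # most common strata
    ```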

  6. Automatic Breast Lesion Detection in Ultrafast DCE-MRI Using Deep Learning -...

    • service.tib.eu
    Updated Dec 2, 2024
    Cite
    (2024). Automatic Breast Lesion Detection in Ultrafast DCE-MRI Using Deep Learning - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/automatic-breast-lesion-detection-in-ultrafast-dce-mri-using-deep-learning
    Explore at:
    Dataset updated
    Dec 2, 2024
    Description

    A deep learning-based computer-aided detection (CADe) method to detect breast lesions in ultrafast DCE-MRI sequences.

  7. Comparative experimental evaluation metrics for ISIC 2017.

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    • +1more
    xls
    Updated Mar 21, 2024
    Cite
    You Xue; Xinya Chen; Pei Liu; Xiaoyi Lv (2024). Comparative experimental evaluation metrics for ISIC 2017. [Dataset]. http://doi.org/10.1371/journal.pone.0299392.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    You Xue; Xinya Chen; Pei Liu; Xiaoyi Lv
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Comparative experimental evaluation metrics for ISIC 2017.

  8. Data from: Improving Automatic Melanoma Diagnosis using Deep Learning-based...

    • data.niaid.nih.gov
    Updated Jan 22, 2023
    Cite
    Anand K Nambisan; Akanksha Maurya; Norsang lama; Thanh Phan; Gehana Patel; Keith Miller; Binita Lama; Jason Hagerty; Ronald J Stanley; William V Stoecker (2023). Improving Automatic Melanoma Diagnosis using Deep Learning-based Segmentation of Irregular Networks [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_7557346
    Explore at:
    Dataset updated
    Jan 22, 2023
    Dataset provided by
    S&A Technologies
    Missouri University of Science and Technology
    University of Missouri-Columbia
    Authors
    Anand K Nambisan; Akanksha Maurya; Norsang lama; Thanh Phan; Gehana Patel; Keith Miller; Binita Lama; Jason Hagerty; Ronald J Stanley; William V Stoecker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Irregular masks dataset created on a subset of the ISIC19 training dataset. All annotations are for melanoma lesions. The filename indicates the ISIC19 image id along with a suffix indicating the annotator and/or verifier. This dataset was used in the publication "Improving Automatic Melanoma Diagnosis using Deep Learning-based Segmentation of Irregular Networks", to be submitted to the Cancers Journal.
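
    Since the file names encode the ISIC19 image id plus an annotator/verifier suffix, pairing a mask with its source image amounts to stripping that suffix. A hedged sketch; the exact suffix convention shown is an assumption for illustration:

    ```python
    # Hedged sketch: recover the ISIC19 image id from a mask file name.
    # The "_<annotator>" suffix convention is assumed, not documented here.
    import re

    def isic_id(mask_name: str) -> str:
        """'ISIC_0012345_annotatorA.png' -> 'ISIC_0012345' (hypothetical example)."""
        m = re.match(r"(ISIC_\d+)", mask_name)
        if m is None:
            raise ValueError(f"Unexpected mask name: {mask_name}")
        return m.group(1)

    print(isic_id("ISIC_0012345_annotatorA.png"))  # ISIC_0012345
    ```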

    Please cite the corresponding article (to be published) if data is used in your work.

    The references for the ISIC19 dataset that this is built on are given below.

    BCN_20000 Dataset: (c) Department of Dermatology, Hospital Clínic de Barcelona

    HAM10000 Dataset: (c) by ViDIR Group, Department of Dermatology, Medical University of Vienna; https://doi.org/10.1038/sdata.2018.161

    MSK Dataset: (c) Anonymous; https://arxiv.org/abs/1710.05006; https://arxiv.org/abs/1902.03368

  9. A sample of images for review from the following dataset: "An Annotated...

    • zenodo.org
    Updated Mar 4, 2025
    Cite
    Noran Ayman; Noran Ayman (2025). A sample of images for review from the following dataset: "An Annotated Clinical Image Dataset for Deep Learning-Based Classification of Oral Lesions" [Dataset]. http://doi.org/10.5281/zenodo.14954544
    Explore at:
    Dataset updated
    Mar 4, 2025
    Dataset provided by
    Noran Ayman
    Authors
    Noran Ayman; Noran Ayman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 2025
    Description

    Please be advised that the complete dataset exists as Version 1 but will be available only upon publication. In the interim, a sample of the dataset has been provided in JSON format for review, designated as Version 2. The full dataset (Version 1) will be made available upon request following its publication in a journal.

    This sample dataset includes:

    300 images from the normal category

    300 images from the low risk of malignant transformation category

    300 images from the high risk of malignant transformation category

    For further inquiries or access to the complete dataset, kindly reach out after its official publication.

  10. AI Prostate MRI Lesion Detection Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Oct 1, 2025
    Cite
    Dataintelo (2025). AI Prostate MRI Lesion Detection Market Research Report 2033 [Dataset]. https://dataintelo.com/report/ai-prostate-mri-lesion-detection-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Oct 1, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Prostate MRI Lesion Detection Market Outlook



    According to our latest research, the global AI Prostate MRI Lesion Detection market size reached USD 402.5 million in 2024, and is set to grow at a CAGR of 22.1% from 2025 to 2033, resulting in a forecasted market size of USD 2,828.7 million by 2033. This rapid growth is primarily driven by the increasing prevalence of prostate cancer, advancements in artificial intelligence algorithms, and the demand for more accurate, non-invasive diagnostic technologies. The integration of AI into prostate MRI lesion detection is revolutionizing clinical workflows, improving diagnostic accuracy, and enabling earlier and more personalized treatment interventions.




    One of the key growth factors propelling the AI Prostate MRI Lesion Detection market is the rising incidence of prostate cancer worldwide. Prostate cancer remains one of the most common cancers among men, and early detection is crucial for effective management and improved patient outcomes. Traditional diagnostic methods, such as biopsies and manual MRI interpretation, often suffer from limitations including subjectivity and missed lesions. The adoption of AI-driven solutions significantly enhances lesion detection sensitivity and specificity, reducing false negatives and enabling radiologists to identify clinically significant tumors with greater confidence. This technological leap is fostering trust among healthcare providers, further accelerating market adoption.




    Another significant driver for the AI Prostate MRI Lesion Detection market is the ongoing evolution of deep learning and machine learning algorithms tailored for medical imaging applications. The proliferation of high-quality annotated MRI datasets, coupled with increased computational power, has enabled the development of robust AI models capable of real-time lesion detection and characterization. These advanced solutions are not only streamlining the radiology workflow but also reducing the time required for image interpretation and report generation. As a result, healthcare institutions are experiencing improved operational efficiency and cost-effectiveness, which is incentivizing further investments in AI-powered diagnostic platforms.




    The growing emphasis on precision medicine and personalized healthcare is also fueling the expansion of the AI Prostate MRI Lesion Detection market. AI-driven lesion detection tools facilitate risk stratification, treatment planning, and monitoring, allowing clinicians to tailor interventions based on individual patient profiles. Furthermore, regulatory bodies across major markets are increasingly approving AI-based diagnostic solutions, providing a favorable environment for innovation and commercialization. Strategic collaborations between healthcare providers, technology vendors, and research institutions are accelerating the integration of AI into routine clinical practice, thus broadening the market’s reach.




    From a regional perspective, North America currently dominates the AI Prostate MRI Lesion Detection market due to its advanced healthcare infrastructure, high awareness levels, and early adoption of emerging technologies. Europe follows closely, benefiting from robust research initiatives and favorable reimbursement scenarios. The Asia Pacific region is witnessing the fastest growth, driven by rising healthcare investments, increasing cancer prevalence, and expanding digital health initiatives. Meanwhile, Latin America and the Middle East & Africa are experiencing gradual adoption, supported by improving access to diagnostic imaging and growing government focus on cancer screening programs. These regional dynamics collectively contribute to the global market’s robust outlook.



    Component Analysis



    The AI Prostate MRI Lesion Detection market is segmented by component into software, hardware, and services, each playing a pivotal role in the overall ecosystem. The software segment accounts for the largest share, driven by the proliferation of advanced AI algorithms designed for image analysis, lesion segmentation, and risk assessment. These software solutions integrate seamlessly with existing MRI systems and PACS, enabling radiologists to leverage AI capabilities without significant workflow disruptions. Continuous updates and improvements in AI models, such as the incorporation of deep learning and neural networks, are further enhancing diagnostic accuracy and reliability. The rise of cloud-based software platforms is also c

  11. Demographic information of the 769 patients.

    • plos.figshare.com
    xls
    Updated Jun 29, 2023
    Cite
    Sunghun Kim; Eunjee Lee (2023). Demographic information of the 769 patients. [Dataset]. http://doi.org/10.1371/journal.pone.0287301.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 29, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Sunghun Kim; Eunjee Lee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Recent advancements in computer vision and neural networks have facilitated the medical imaging survival analysis for various medical applications. However, challenges arise when patients have multiple images from multiple lesions, as current deep learning methods provide multiple survival predictions for each patient, complicating result interpretation. To address this issue, we developed a deep learning survival model that can provide accurate predictions at the patient level. We propose a deep attention long short-term memory embedded aggregation network (DALAN) for histopathology images, designed to simultaneously perform feature extraction and aggregation of lesion images. This design enables the model to efficiently learn imaging features from lesions and aggregate lesion-level information to the patient level. DALAN comprises a weight-shared CNN, attention layers, and LSTM layers. The attention layer calculates the significance of each lesion image, while the LSTM layer combines the weighted information to produce an all-encompassing representation of the patient’s lesion data. Our proposed method performed better on both simulated and real data than other competing methods in terms of prediction accuracy. We evaluated DALAN against several naive aggregation methods on simulated and real datasets. Our results showed that DALAN outperformed the competing methods in terms of c-index on the MNIST and Cancer dataset simulations. On the real TCGA dataset, DALAN also achieved a higher c-index of 0.803±0.006 compared to the naive methods and the competing models. Our DALAN effectively aggregates multiple histopathology images, demonstrating a comprehensive survival model using attention and LSTM mechanisms.
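
    As a reading aid only (not the authors' code), a hedged sketch of the aggregation idea described above: per-lesion CNN features are weighted by an attention layer, passed through an LSTM, and reduced to one patient-level risk score. The feature and hidden dimensions are illustrative assumptions.

    ```python
    # Hedged sketch of attention + LSTM aggregation of per-lesion features
    # into a single patient-level prediction, loosely following the DALAN
    # description above. Layer sizes are illustrative, not the paper's.
    import torch
    import torch.nn as nn

    class LesionAggregator(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=128):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(feat_dim, 64), nn.Tanh(), nn.Linear(64, 1))
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.risk = nn.Linear(hidden_dim, 1)            # scalar risk score per patient

        def forward(self, lesion_feats):                    # (batch, n_lesions, feat_dim)
            weights = torch.softmax(self.attn(lesion_feats), dim=1)  # per-lesion importance
            weighted = lesion_feats * weights                        # weight each lesion
            _, (h_n, _) = self.lstm(weighted)                        # aggregate the sequence
            return self.risk(h_n[-1])                                # patient-level output

    feats = torch.randn(4, 3, 512)            # 4 patients, 3 lesion images each (CNN features)
    print(LesionAggregator()(feats).shape)    # torch.Size([4, 1])
    ```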

  12. AI Breast Ultrasound Lesion Detection Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). AI Breast Ultrasound Lesion Detection Market Research Report 2033 [Dataset]. https://dataintelo.com/report/ai-breast-ultrasound-lesion-detection-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Breast Ultrasound Lesion Detection Market Outlook



    According to our latest research, the global AI Breast Ultrasound Lesion Detection market size reached USD 298 million in 2024, driven by the rising prevalence of breast cancer and the urgent need for advanced diagnostic solutions. The market is expected to expand at a CAGR of 14.8% from 2025 to 2033, with the forecasted market size projected to hit USD 984 million by 2033. The growth of this market is primarily attributed to the continuous advancements in artificial intelligence technologies, increasing adoption of AI-powered imaging tools in clinical settings, and the growing emphasis on early breast cancer detection to improve patient outcomes.




    One of the primary growth factors propelling the AI Breast Ultrasound Lesion Detection market is the surge in breast cancer cases globally. According to the World Health Organization, breast cancer remains the most common cancer among women worldwide, with millions of new cases diagnosed annually. This alarming prevalence has intensified the demand for early and accurate diagnostic solutions, making AI-powered breast ultrasound lesion detection systems critical in healthcare settings. These advanced systems leverage deep learning algorithms to analyze ultrasound images with high precision, reducing human error and enabling radiologists to detect subtle lesions that may be missed by conventional methods. Additionally, AI-driven solutions facilitate faster workflows, allowing healthcare providers to manage increasing patient loads without compromising diagnostic quality.




    Another significant driver is the technological evolution within the AI and medical imaging sectors. The integration of advanced machine learning models and neural networks has revolutionized the capabilities of breast ultrasound imaging. Modern AI systems are now capable of distinguishing between benign and malignant lesions with remarkable accuracy, supporting clinicians in making informed decisions. Furthermore, these technologies are continuously being refined through large datasets and real-world clinical feedback, resulting in improved diagnostic performance over time. The rapid adoption of cloud-based platforms further enhances the scalability and accessibility of AI breast ultrasound lesion detection tools, enabling healthcare facilities of all sizes to benefit from these innovations.




    The supportive regulatory environment and growing investment in digital health infrastructure are also fueling market expansion. Governments and private organizations are increasingly recognizing the value of AI in improving cancer diagnostics and are providing funding for research and development initiatives. Regulatory bodies such as the FDA and EMA are streamlining approval processes for AI-driven medical devices, encouraging manufacturers to innovate and bring new products to market more efficiently. Additionally, collaborations between technology companies, healthcare providers, and academic institutions are accelerating the development and deployment of AI-powered breast ultrasound solutions, further solidifying the market's upward trajectory.




    From a regional perspective, North America currently dominates the AI Breast Ultrasound Lesion Detection market, owing to its advanced healthcare infrastructure, high adoption rate of innovative technologies, and significant investments in research and development. Europe follows closely, driven by supportive government policies and the presence of leading healthcare technology firms. The Asia Pacific region is expected to witness the fastest growth during the forecast period, fueled by increasing healthcare expenditure, rising awareness about early breast cancer detection, and the rapid digital transformation of healthcare systems in countries such as China, India, and Japan.



    Component Analysis



    The AI Breast Ultrasound Lesion Detection market is segmented by component into software, hardware, and services, each playing a pivotal role in the overall ecosystem. The software segment, which includes AI algorithms, image analysis platforms, and diagnostic decision support tools, constitutes the largest share of the market. This dominance is attributed to the continuous advancements in deep learning, natural language processing, and image recognition technologies that power the core functionality of AI-based lesion detection systems. Software solutions are increasingly cloud-based, allowing for seamless integration with existing ho

  13. Tooth Lesion Model Market - In-Deep Analysis Focusing on Market Share

    • imrmarketreports.com
    Updated Jun 2023
    Cite
    Swati Kalagate; Akshay Patil; Vishal Kumbhar (2023). Tooth Lesion Model Market - In-Deep Analysis Focusing on Market Share [Dataset]. https://www.imrmarketreports.com/reports/tooth-lesion-model-market
    Explore at:
    Dataset updated
    Jun 2023
    Dataset provided by
    IMR Market Reports
    Authors
    Swati Kalagate; Akshay Patil; Vishal Kumbhar
    License

    https://www.imrmarketreports.com/privacy-policy/

    Description

    The Global Tooth Lesion Model Market Report 2022 provides an extensive industry analysis of development components, patterns, flows, and sizes. The report also calculates present and past market values to forecast potential market management through the forecast period of 2022-2028, and it covers the geographic areas that shape the competitive landscape and industry perspective of the market.

  14. Supplementary Material for: Deep learning-based algorithm for staging...

    • karger.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated Oct 29, 2024
    Cite
    vanNistelrooij N.; Chaves E.T.; Cenci M.S.; Cao L.; Loomans B.A.C.; Xi T.; El-Ghoul K.; Romero V.H.D.; Lima G.S.; Flügge T.; vanGinneken B.; Huysmans M.-C.; Vinayahalingam S.; Mendes F.M. (2024). Supplementary Material for: Deep learning-based algorithm for staging secondary caries in bitewings [Dataset]. http://doi.org/10.6084/m9.figshare.27323880.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    Oct 29, 2024
    Dataset provided by
    Karger Publishers
    Authors
    vanNistelrooij N.; Chaves E.T.; Cenci M.S.; Cao L.; Loomans B.A.C.; Xi T.; El-Ghoul K.; Romero V.H.D.; Lima G.S.; Flügge T.; vanGinneken B.; Huysmans M.-C.; Vinayahalingam S.; Mendes F.M.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: Despite the notable progress in developing artificial intelligence (AI)-based tools for caries detection in bitewings, limited research has addressed the detection and staging of secondary caries. Therefore, we aimed to develop a convolutional neural network (CNN)-based algorithm for these purposes using a novel approach for determining lesion severity. Methods: We used a dataset from a Dutch dental practice-based research network containing 2,612 restored teeth in 413 bitewings from 383 patients aged 15 to 88 years and trained the Mask R-CNN architecture with a Swin Transformer backbone. Two-stage training fine-tuned caries detection accuracy and severity assessment. Annotations of caries around restorations were made by two evaluators and checked by two other experts. Aggregated accuracy metrics (mean ± standard deviation, SD) in detecting teeth with secondary caries were calculated considering two thresholds: detecting all lesions and dentine lesions. The correlation between the lesion severity scores obtained with the algorithm and the annotators’ consensus was determined using the Pearson correlation coefficient and Bland-Altman plots. Results: Our refined algorithm showed high specificity in detecting all lesions (0.966 ± 0.025) and dentine lesions (0.964 ± 0.019). Sensitivity values were lower: 0.737 ± 0.079 for all lesions and 0.808 ± 0.083 for dentine lesions. The areas under ROC curves (SD) were 0.940 (0.025) for all lesions and 0.946 (0.023) for dentine lesions. The correlation coefficient for severity scores was 0.802. Conclusion: We developed an improved algorithm to support clinicians in detecting and staging secondary caries in bitewings, incorporating an innovative approach to annotation that treats lesion severity as a continuous outcome.
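
    The aggregate metrics quoted above (sensitivity, specificity, ROC AUC, Pearson correlation) can be computed from per-tooth predictions with standard tooling. A hedged, generic sketch using scikit-learn and SciPy, with made-up arrays standing in for the study's data:

    ```python
    # Hedged sketch: sensitivity, specificity, ROC AUC and Pearson correlation
    # for severity scores. All arrays below are illustrative placeholders.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import confusion_matrix, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                  # ground-truth caries labels
    y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.2, 0.6, 0.3])  # model probabilities
    y_pred = (y_prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("sensitivity:", tp / (tp + fn))
    print("specificity:", tn / (tn + fp))
    print("ROC AUC:", roc_auc_score(y_true, y_prob))

    sev_model = np.array([1.0, 2.2, 3.1, 0.5, 1.8])   # illustrative severity scores
    sev_consensus = np.array([1.2, 2.0, 3.4, 0.4, 1.9])
    print("Pearson r:", pearsonr(sev_model, sev_consensus)[0])
    ```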

  15. AI Brain MRI Lesion Detection Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). AI Brain MRI Lesion Detection Market Research Report 2033 [Dataset]. https://dataintelo.com/report/ai-brain-mri-lesion-detection-market
    Explore at:
    Available download formats: pdf, csv, pptx
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    AI Brain MRI Lesion Detection Market Outlook



    According to our latest research, the global AI Brain MRI Lesion Detection market size reached USD 684.5 million in 2024, with robust growth driven by increasing adoption of artificial intelligence in neuroimaging. The market is expected to expand at a remarkable CAGR of 23.8% from 2025 to 2033, projected to achieve a value of USD 6.2 billion by 2033. This impressive growth is underpinned by the escalating demand for advanced diagnostic tools, rising prevalence of neurological disorders, and the continuous integration of AI-powered solutions in clinical practice, as per our latest research findings.




    The primary growth factor propelling the AI Brain MRI Lesion Detection market is the increasing incidence of neurological diseases such as brain tumors, multiple sclerosis, and stroke worldwide. As the global population ages and lifestyle-related risk factors proliferate, the burden of these conditions continues to rise. Conventional MRI interpretation is time-consuming and susceptible to human error, fueling the need for automated, accurate, and rapid lesion detection tools. AI-driven solutions are revolutionizing this landscape by providing radiologists with enhanced image analysis capabilities, improving diagnostic confidence, and enabling earlier detection of subtle lesions that may otherwise go unnoticed. This technological leap is not only enhancing clinical outcomes but also reducing the workload on healthcare professionals, making AI-powered MRI lesion detection an indispensable asset in modern neuroimaging.




    Another significant driver for market expansion is the rapid advancement in machine learning algorithms and deep learning architectures, which have substantially improved the sensitivity and specificity of lesion detection. The growing availability of large, annotated neuroimaging datasets has enabled the training of robust AI models capable of distinguishing between various lesion types with high precision. Integration of these algorithms into commercial software platforms is increasingly common, with regulatory agencies such as the FDA and EMA accelerating approvals for AI-based diagnostic tools. Furthermore, collaborations between technology firms, academic research institutions, and healthcare providers are fostering innovation and accelerating the translation of AI research into clinically validated products, further stimulating market growth.




    The proliferation of cloud-based deployment models and the increasing accessibility of high-performance computing infrastructure are also catalyzing the adoption of AI in brain MRI lesion detection. Cloud-based solutions offer scalable, cost-effective, and secure platforms for processing large volumes of imaging data, making advanced AI tools available to a broader range of healthcare facilities, including those in resource-limited settings. Additionally, the integration of AI with hospital information systems and picture archiving and communication systems (PACS) is streamlining clinical workflows and facilitating seamless data exchange. These developments are democratizing access to cutting-edge diagnostic technologies, driving market penetration across both developed and emerging regions.




    From a regional perspective, North America currently dominates the AI Brain MRI Lesion Detection market, accounting for the largest share in 2024. This leadership is attributed to the presence of major AI technology vendors, advanced healthcare infrastructure, and a high prevalence of neurological disorders. Europe follows closely, benefiting from strong research collaborations and supportive regulatory frameworks. Meanwhile, the Asia Pacific region is poised for the fastest growth, driven by rising healthcare investments, increasing awareness of AI in diagnostic imaging, and a rapidly expanding patient pool. Latin America and the Middle East & Africa are also witnessing gradual adoption, though at a relatively slower pace due to infrastructural and economic constraints.



    Component Analysis



    The AI Brain MRI Lesion Detection market by component is segmented into software, hardware, and services, each playing a pivotal role in the deployment and utilization of AI-powered neuroimaging solutions. Software dominates the market, capturing the largest revenue share in 2024, as AI algorithms and specialized imaging platforms are the backbone of lesion detection workflows. These software solutions leverage advanced deep learning and pattern recogni

  16. Panoramic radiographs with periapical lesions Dataset

    • data.mendeley.com
    Updated Apr 10, 2024
    Cite
    viet do (2024). Panoramic radiographs with periapical lesions Dataset [Dataset]. http://doi.org/10.17632/kx52tk2ddj.3
    Explore at:
    Dataset updated
    Apr 10, 2024
    Authors
    viet do
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Apical periodontitis is a common dental disease, and periapical lesions in radiographs are an important sign for diagnosing it. The application of machine learning to the detection of periapical lesions is a hot topic in dentistry. This is a dataset of panoramic radiographs with periapical lesions from real-world patients, which can be used to train a deep learning model for apical lesion detection. The dataset provides 3926 radiographs. To enhance the dataset, the original images were scaled, rotated, flipped, and had noise added, expanding to. This dataset contains three main folders: Original JPG Images, Augmentation JPG Images, Image Annotations. The folder size of the dataset is
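
    The augmentation steps listed above (scaling, rotation, flipping, added noise) can be reproduced with generic image tooling. A hedged sketch using Pillow and NumPy; the file name, angles, scale factor, and noise level are chosen only for illustration:

    ```python
    # Hedged sketch: the kinds of augmentations described for this dataset,
    # applied to one radiograph. All parameters are illustrative.
    import numpy as np
    from PIL import Image, ImageOps

    img = Image.open("radiograph.jpg").convert("L")                 # hypothetical file
    scaled = img.resize((int(img.width * 1.1), int(img.height * 1.1)))
    rotated = img.rotate(10, expand=True)                           # 10-degree rotation
    flipped = ImageOps.mirror(img)                                  # horizontal flip

    arr = np.asarray(img, dtype=np.float32)
    noisy = np.clip(arr + np.random.normal(0, 10, arr.shape), 0, 255).astype(np.uint8)
    noisy_img = Image.fromarray(noisy)                              # additive Gaussian noise
    print(scaled.size, rotated.size, flipped.size, noisy_img.size)
    ```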

  17. Table_1_Core Needle Biopsy Targeting the Viable Area of Deep-Sited Dominant...

    • frontiersin.figshare.com
    • datasetcatalog.nlm.nih.gov
    docx
    Updated May 30, 2023
    Cite
    Jian Li; Jing Han; Yu Wang; Yunxian Mo; Jibin Li; Jin Xiang; Zhiming Li; Jianhua Zhou; Siyu Wang (2023). Table_1_Core Needle Biopsy Targeting the Viable Area of Deep-Sited Dominant Lesion Verified by Color Doppler and/or Contrast-Enhanced Ultrasound Contribute to the Actionable Diagnosis of the Patients Suspicious of Lymphoma.DOCX [Dataset]. http://doi.org/10.3389/fonc.2020.500153.s002
    Explore at:
    Available download formats: docx
    Dataset updated
    May 30, 2023
    Dataset provided by
    Frontiers
    Authors
    Jian Li; Jing Han; Yu Wang; Yunxian Mo; Jibin Li; Jin Xiang; Zhiming Li; Jianhua Zhou; Siyu Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Inadequate accuracy of ultrasound-guided core needle biopsy (US-CNB) calls for further improvement in the diagnosis and management of lymphoma, given practitioners’ increased reliance on this minimally invasive approach.

    Methods: Data related to US-CNB of deep-sited dominant lesions suspicious of lymphoma, detected by computed tomography or positron-emission tomography/computed tomography for eligibility assessment in three prospective clinical trials, were collected in advance. A retrospective analysis of this prospective data collection was performed, comparing Viable-targeting US-CNB, in which color Doppler flow imaging (CDFI) and/or contrast-enhanced ultrasound (CEUS) were employed to select a viable area as the biopsy target, with Routine US-CNB, which used the routine procedure of evaluation and guidance with gray-scale ultrasound and CDFI, in terms of the yield of clinically actionable diagnosis and safety; determinants of a successful US-CNB that established an actionable diagnosis were also explored. The final diagnosis was established based on surgical pathology or medical response to therapy with follow-up of at least 6 months.

    Results: A total of 245 patients underwent Routine US-CNB (N = 120) or Viable-targeting US-CNB (N = 125), of which 91 (91/120, 75.8%) and 112 (112/125, 89.6%) yielded actionable diagnoses, respectively (p = 0.004, OR 0.846, 95% CI: 0.753–0.952); 239 patients had final diagnoses established. Diagnostic yields of actionable diagnosis according to the final diagnoses were 78.4% (91/116) and 91.1% (112/123) overall (p = 0.006, OR 0.554, 95% CI: 0.333–0.920), 82.6% (90/109) and 92.5% (111/120) for malignancy, 84.0% (84/100) and 91.8% (101/110) for lymphoma, 85.1% (80/94) and 92.3% (96/104) for non-Hodgkin lymphoma, and 66.7% (4/6) and 83.3% (5/6) for Hodgkin lymphoma in the Routine and Viable-targeting CNB groups, respectively. No major complications were observed. Dominant lesions with an actionable diagnosis on US-CNB had higher FDG-avid standardized uptake values. Binomial logistic regression revealed that an actionable diagnosis on US-CNB was correlated with group and ancillary studies.

    Conclusion: Viable-targeting US-CNB was superior to Routine US-CNB in terms of the yield of actionable diagnosis for deep-sited dominant lesions suspicious of lymphoma, demonstrating its potential as the initial approach in this setting.

  18. Lu-177 DOTATATE Anonymized Patient Datasets: Lesion and Organ Volumes of...

    • deepblue.lib.umich.edu
    Updated May 19, 2021
    Cite
    Dewaraja, Yuni K; Van, Benjamin J (2021). Lu-177 DOTATATE Anonymized Patient Datasets: Lesion and Organ Volumes of Interest [Dataset]. http://doi.org/10.7302/vhrh-qg23
    Explore at:
    Dataset updated
    May 19, 2021
    Dataset provided by
    Deep Blue Data
    Authors
    Dewaraja, Yuni K; Van, Benjamin J
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2018
    Description

    This publication contains anonymized DICOM structure files that were outlined on SPECT/CT scans for two patients. Organ contours include the liver, right and left kidneys, and spleen when present. Two to four lesions were outlined for each patient.
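
    Since the files are DICOM structure sets (RTSTRUCT), the contour names (liver, kidneys, spleen, lesions) can be listed with pydicom. A hedged sketch with an illustrative file name:

    ```python
    # Hedged sketch: list the ROI names stored in a DICOM RTSTRUCT file.
    # "structures.dcm" is a placeholder; use one of the downloaded structure files.
    import pydicom

    ds = pydicom.dcmread("structures.dcm")
    for roi in ds.StructureSetROISequence:       # standard RTSTRUCT sequence
        print(roi.ROINumber, roi.ROIName)        # e.g. 1 Liver, 2 Kidney_R, ...
    ```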

  19. Data from: Basal cell carcinoma diagnosis with fusion of deep learning and...

    • zenodo.org
    jpeg, tiff, zip
    Updated Jul 12, 2024
    Cite
    Akanksha Maurya; Akanksha Maurya; Ronald J Stanley; Hemanth Y Aradhyula; Norsang Lama; Anand K Nambisan; Gehana Patel; Daniyal Saeed; Samantha Swinfard; Colin Smith; Sadhika Jagannathan; Jason Hagerty; William V Stoecker; Ronald J Stanley; Hemanth Y Aradhyula; Norsang Lama; Anand K Nambisan; Gehana Patel; Daniyal Saeed; Samantha Swinfard; Colin Smith; Sadhika Jagannathan; Jason Hagerty; William V Stoecker (2024). Basal cell carcinoma diagnosis with fusion of deep learning and telangiectasia features [Dataset]. http://doi.org/10.5281/zenodo.7709824
    Explore at:
    Available download formats: zip, tiff, jpeg
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Akanksha Maurya; Akanksha Maurya; Ronald J Stanley; Hemanth Y Aradhyula; Norsang Lama; Anand K Nambisan; Gehana Patel; Daniyal Saeed; Samantha Swinfard; Colin Smith; Sadhika Jagannathan; Jason Hagerty; William V Stoecker; Ronald J Stanley; Hemanth Y Aradhyula; Norsang Lama; Anand K Nambisan; Gehana Patel; Daniyal Saeed; Samantha Swinfard; Colin Smith; Sadhika Jagannathan; Jason Hagerty; William V Stoecker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Telangiectasia masks dataset created on a subset of the ISIC18, ISIC19 training datasets and the NIH study dataset R43 CA153927-01 and CA101639-02A2. All annotations are for Basal Cell Carcinoma lesions. This is an expanded dataset that was initially used in “A Deep Learning Approach to Detect Blood Vessels in Basal Cell Carcinoma”.

    A sample lesion image and the corresponding mask is provided for preview. Lesion images and masks have been uploaded as separate zipped folders that can be downloaded.
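
    One way to pair the two downloads after unzipping is to match files by name stem. A hedged sketch; the archive names, folder layout, and file extensions are placeholders, not documented properties of this record:

    ```python
    # Hedged sketch: extract the two archives and pair lesion images with masks
    # by file name stem. Archive names and the matching rule are assumptions.
    import zipfile
    from pathlib import Path

    for archive in ("lesion_images.zip", "masks.zip"):      # hypothetical names
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(Path(archive).stem)

    masks = {p.stem: p for p in Path("masks").rglob("*.png")}
    pairs = [(img, masks[img.stem]) for img in Path("lesion_images").rglob("*.jpg")
             if img.stem in masks]
    print(f"{len(pairs)} image/mask pairs found")
    ```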

  20. Parameters between different network architectures.

    • plos.figshare.com
    xls
    Updated Mar 21, 2024
    Cite
    You Xue; Xinya Chen; Pei Liu; Xiaoyi Lv (2024). Parameters between different network architectures. [Dataset]. http://doi.org/10.1371/journal.pone.0299392.t006
    Explore at:
    Available download formats: xls
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    PLOS ONE
    Authors
    You Xue; Xinya Chen; Pei Liu; Xiaoyi Lv
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Parameters between different network architectures.
