Boost your AI projects with our 40,000-image, high-quality ultrasound dataset in DICOM, ideal for healthcare computer vision.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Breast cancer is among the leading causes of death in women worldwide, and early detection helps reduce premature deaths. The data consist of medical ultrasound scans showing signs of breast cancer. The Breast Ultrasound Dataset includes three classes of images: normal, benign, and malignant. Applying machine learning to breast ultrasound images improves the detection, classification, and segmentation of breast cancer. See the full description on the dataset page: https://huggingface.co/datasets/gymprathap/Breast-Cancer-Ultrasound-Images-Dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains a curated benchmark collection of 1,062 labelled lung ultrasound (LUS) images collected from patients at Mulago National Referral Hospital and Kiruddu Referral Hospital in Kampala, Uganda. The images were acquired and annotated by senior radiologists to support the development and evaluation of artificial intelligence (AI) models for pulmonary disease diagnosis. Each image is categorized into one of three classes: Probably COVID-19 (COVID-19), Diseased Lung but Probably Not COVID-19 (Other Lung Disease), and Healthy Lung.
The dataset addresses key challenges in LUS interpretation, including inter-operator variability, low signal-to-noise ratios, and reliance on expert sonographers. It is particularly suitable for training and testing convolutional neural network (CNN)-based models for medical image classification tasks in low-resource settings. The images are provided in standard formats such as PNG or JPEG, with corresponding labels stored in structured files such as CSV or JSON to facilitate use in machine learning workflows.
In this second version of the dataset, we have extended the resource by including a folder containing the original unprocessed raw data, as well as the scripts used to process, clean, and sort the data into the final labelled set. These additions promote transparency and reproducibility, allowing researchers to understand the full data pipeline and adapt it for their own applications. This resource is intended to advance research in deep learning for lung ultrasound analysis and to contribute toward building more accessible and reliable diagnostic tools in global health.
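Loading such a labels file takes only the standard library. A minimal sketch, assuming a hypothetical CSV with `filename` and `label` columns (the actual column names in the released files may differ):

```python
import csv
from collections import Counter

# The three classes named in the dataset description.
CLASSES = {"COVID-19", "Other Lung Disease", "Healthy Lung"}

def load_labels(csv_path):
    """Map image filename -> class label, validating against the three classes.

    Assumes columns named "filename" and "label"; adjust to the actual header.
    """
    labels = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["label"] not in CLASSES:
                raise ValueError(f"unexpected class: {row['label']}")
            labels[row["filename"]] = row["label"]
    return labels

def class_balance(labels):
    """Count images per class, e.g. to check for class imbalance before training."""
    return Counter(labels.values())
```

Checking the class balance up front matters here, since "Probably COVID-19" cases are unlikely to be as numerous as healthy scans.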
Data usage policies and restrictions: https://www.cancerimagingarchive.net/data-usage-policies-and-restrictions/
This dataset was derived from tracked biopsy sessions using the Artemis biopsy system, many of which included image fusion with MRI targets. Patients received a 3D transrectal ultrasound scan, after which nonrigid registration (e.g. “fusion”) was performed between real-time ultrasound and preoperative MRI, enabling biopsy cores to be sampled from MR regions of interest. Most cases also included sampling of systematic biopsy cores using a 12-core digital template. The Artemis system tracked targeted and systematic core locations using encoder kinematics of a mechanical arm, and recorded locations relative to the ultrasound scan. MRI biopsy coordinates were also recorded for most cases. STL files and biopsy overlays are available and can be visualized in 3D Slicer with the SlicerHeart extension. Spreadsheets summarizing biopsy and MR target data are also available. See the Detailed Description tab below for more information.
MRI targets were defined using multiparametric MRI, e.g. T2-weighted, diffusion-weighted, and perfusion-weighted sequences, and scored on a Likert-like scale with close correspondence to PI-RADS version 2. T2-weighted MRI was used to trace ROI contours, and is the only sequence provided in this dataset. MR imaging was performed on a 3 Tesla Trio, Verio or Skyra scanner (Siemens, Erlangen, Germany). A transabdominal phased array was used in all cases, and an endorectal coil was used in a subset of cases. The majority of pulse sequences are 3D T2:SPC, with TR/TE 2200/203, Matrix/FOV 256 × 205/14 × 14 cm, and 1.5 mm slice spacing. Some cases were instead 3D T2:TSE with TR/TE 3800–5040/101, and a small minority were imported from other institutions (various T2 protocols).
Ultrasound scans were performed with Hitachi Hi-Vision 5500 7.5 MHz or the Noblus C41V 2-10 MHz end-fire probe. 3D scans were acquired by rotation of the end-fire probe 200 degrees about its axis, and interpolating to resample the volume with isotropic resolution.
Patients with suspicion of prostate cancer due to elevated PSA and/or suspicious imaging findings were consecutively accrued. Any consented patient who underwent or had planned to receive a routine, standard-of-care prostate biopsy at the UCLA Clark Urology Center was included.
Note: Some Private Tags in this collection are critical to properly displaying the STL surface and the Prostate anatomy. Private Tag (1129,"Eigen, Inc",1016) DS VoxelSize is especially important for multi-frame US cases.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises micro-ultrasound scans and human prostate annotations of 75 patients who underwent micro-ultrasound guided prostate biopsy at the University of Florida. All images and segmentations have been fully de-identified in the NIFTI format.
Under the "train" folder, you'll find three subfolders:
"micro_ultrasound_scans" contains micro-ultrasound images from 55 patients for training.
"expert_annotations" contains ground truth prostate segmentations annotated by our expert urologist.
"non_expert_annotations" contains prostate segmentations annotated by a graduate student.
In the "test" folder, there are five subfolders:
"micro_ultrasound_scans" contains micro-ultrasound images from 20 patients for testing.
"expert_annotations" contains ground truth prostate segmentations by the expert urologist.
"master_student_annotations" contains segmentations by a master's student.
"medical_student_annotations" contains segmentations by a medical student.
"clinician_annotations" contains segmentations by a urologist with limited experience in reading micro-ultrasound images.
If you use this dataset, please cite our paper: Jiang, Hongxu, et al. "MicroSegNet: A deep learning approach for prostate segmentation on micro-ultrasound images." Computerized Medical Imaging and Graphics (2024): 102326. DOI: https://doi.org/10.1016/j.compmedimag.2024.102326.
For any dataset-related queries, please reach out to Dr. Wei Shao: weishao@ufl.edu.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Ultrasound is a primary diagnostic tool commonly used to evaluate internal body structures, including organs, blood vessels, the musculoskeletal system, and fetal development. Challenges such as operator dependence, noise, a limited field of view, difficulty imaging through bone and air, and variability across systems make diagnosing abnormalities in ultrasound images particularly difficult for less experienced clinicians. Artificial intelligence could assist in the diagnosis of ultrasound images. However, many databases are created using a single device type and collection site, limiting the generalizability of machine learning classification models. We have therefore collected a large, publicly accessible ultrasound challenge database intended to significantly enhance the performance of traditional ultrasound image classification. The dataset is derived from publicly available data on the Internet and comprises 1,833 distinct ultrasound images. It covers 13 different ultrasound image anomalies, and all data have been anonymized. Our data-sharing program aims to support benchmark testing of ultrasound image disease diagnosis and classification accuracy in multicenter environments.
EchoNet-Dynamic is a dataset of over 10,000 echocardiogram (cardiac ultrasound) videos from unique patients at Stanford University Medical Center. Each apical-4-chamber video is accompanied by an estimated ejection fraction, end-systolic volume, end-diastolic volume, and tracings of the left ventricle performed by an advanced cardiac sonographer and reviewed by an imaging cardiologist.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A large dataset of routinely acquired maternal-fetal screening ultrasound images collected from two different hospitals by several operators and ultrasound machines. All images were manually labeled by an expert maternal-fetal clinician. Images are divided into 6 classes: four of the most widely used fetal anatomical planes (Abdomen, Brain, Femur and Thorax), the mother's cervix (widely used for prematurity screening), and a general category for any other less common image plane. Fetal brain images are further categorized into the 3 most common fetal brain planes (Trans-thalamic, Trans-cerebellum, Trans-ventricular) to judge fine-grained categorization performance. Meta information (patient number, US machine, operator) is also provided, as well as the training-test split used in the Nature Sci Rep paper.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by Ly Tran Hoang Hieu
Released under CC0: Public Domain
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The area of the gallbladder extracted from image segmentation.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The item published is a dataset providing the raw data and original code used to generate Figure 4 in the research paper "Correlative non-destructive techniques to investigate ageing and orientation effects in automotive Li-ion pouch cells" (https://doi.org/10.5522/04/c.6868027), of which I am first author. The measurements and subsequent data analysis took place between January 2022 and November 2022.
The figure illustrates the ultrasonic mapping measurements of pouch cells that have been extracted from electric vehicles and have been aged in real-world conditions. The degradation of the cells was measured using four different complementary characterisation measurement techniques, one of which was ultrasonic mapping.
The ultrasonic mapping measurements were performed using an Olympus Focus PX phased-array instrument (Olympus Corp., Japan) with a 5 MHz 1D linear phased-array probe consisting of 64 transducers. The transducer had an active aperture of 64 mm with an element pitch (centre-to-centre distance between elements) of 1 mm. The cell was covered with ultrasonic couplant (Fannin UK Ltd.) prior to every scan to ensure good acoustic transmission. The transducer was moved along the length of each cell at a fixed pressure using an Olympus GLIDER 2-axis encoded scanner with a step size of 1 mm, giving a resolution of ca. 1 mm². Because of the large size of the cells, the active aperture of the probe covered only one third of the width, so three measurements were taken for each cell and the data were combined to form the colour maps.
Data from the ultrasonic signals were analysed using FocusPC software. The waveforms recorded by the transducer were exported and plotted using custom Python code to compare how the signal changes at different points in the cell. For consistency, a specific ToF range was selected for all cells, chosen because it is where the part of the waveform known as the ‘echo-peak’ is located. The echo-peak is useful to monitor because by that point the waveform has travelled the whole way through the cell and reflected from the back surface, so it characterises the entire cell. The maximum amplitude of the ultrasonic signal within this ToF range at each point is used to produce a colour map. The signal amplitude is expressed as a percentage, where 100 is the maximum intensity of the signal (meaning the signal has been attenuated the least as it travels through the cell) and 0 is the minimum intensity. The intensity is absolute and not normalised across scans, so amplitude values on different cells can be directly compared. The Pristine cell is a second-generation Nissan Leaf pouch, different from the first-generation aged cells of varying orientation. The authors were not able to acquire an identical first-generation pristine Nissan Leaf cell. Nonetheless, the Pristine cell was expected to have a uniform internal structure regardless of the specific chemistry, which would appear in an ultrasound map as a single colour (or a narrow colour range).
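The colour-map construction described above can be sketched in a few lines of NumPy; the ToF window indices and full-scale value are placeholders for the scan-specific settings, not values from the paper:

```python
import numpy as np

def amplitude_map(waveforms, tof_window, full_scale):
    """Build a percentage amplitude map from per-point A-scans.

    waveforms:  (rows, cols, samples) array of recorded signal amplitudes.
    tof_window: (start, stop) sample indices bracketing the echo-peak.
    full_scale: amplitude corresponding to 100% intensity.
    """
    lo, hi = tof_window
    peak = np.abs(waveforms[..., lo:hi]).max(axis=-1)  # echo-peak amplitude per scan point
    return 100.0 * peak / full_scale                   # percent of full scale
```

Because the scale is absolute rather than per-scan normalised, maps produced this way remain directly comparable across cells, as the description notes.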
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Musculoskeletal disorders present significant health and economic challenges on a global scale. Current intraoperative imaging techniques, including computed tomography (CT) and radiography, involve high radiation exposure and limited soft tissue visualization. Ultrasound (US) offers a non-invasive, real-time alternative but is highly observer-dependent and underutilized intraoperatively. US enhanced by artificial intelligence shows high potential for observer-independent pattern recognition and robot-assisted applications in orthopedics. Given the limited availability of in-vivo imaging data, we introduce a comprehensive dataset from a comparative collection of handheld US (HUS) and robot-assisted ultrasound (RUS) lumbar spine imaging in 63 healthy volunteers. This dataset includes demographic data, paired CT, HUS, RUS imaging, synchronized tracking data for HUS and RUS, and 3D-CT-segmentations. It establishes a robust baseline for machine learning algorithms by focusing on healthy individuals, circumventing the limitations of simulations and pathological anatomy. To our knowledge, this extensive collection is the first healthy anatomy dataset for the lumbar spine that includes paired CT, HUS, and RUS imaging, supporting advancements in computer- and robotic-assisted diagnostic and intraoperative techniques for musculoskeletal disorders.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The liver is regarded as one of the essential organs in the body, playing a crucial role in digestion, nutrient absorption, and food processing. This indispensable organ is tasked with cleansing the blood that flows from the digestive tract. Additionally, the liver detoxifies harmful substances and metabolizes various medications. A key function of this organ involves intricate metabolic processes that transform food into energy. However, the liver's heightened sensitivity makes it susceptible to a variety of common ailments, underscoring the need for careful attention to its health.
Among liver diseases, fatty liver stands out as the most prevalent, characterized by the buildup of fat within liver cells. This condition is particularly common among individuals who are overweight or have abdominal obesity. Specifically, non-alcoholic fatty liver disease (NAFLD) refers to excessive fat accumulation in liver cells, a condition known as steatosis. Various diagnostic techniques have been developed for this disease, each offering distinct advantages and drawbacks. Ultrasound imaging has gained popularity due to its accessibility, non-invasive nature, and affordability. One approach in this diagnostic process is the use of deep learning models. Sporadic studies have leveraged the available data to diagnose the degree of non-alcoholic fatty liver disease, and given the significance of the issue, this body of research is continually expanding. Robust model training and comprehensive analysis require access to a diverse and comprehensive data repository for training and validating the proposed models. To this end, we present the BEHSOF dataset, which comprises samples gathered with varying degrees of non-alcoholic fatty liver disease.
Specifically, this data bank consists of ultrasound images from a population of 113 individuals under study, along with the corresponding labels for the levels of steatosis and fibrosis. In addition to the ultrasound images, the data bank provides clinical information, blood test results, and FibroScan outcomes for the participants, which serve as reference data for fibrosis assessment. Finally, we showcase the results of two deep learning models as examples for training and testing the introduced dataset. The ultrasound images have been categorized into two distinct groups based on the diagnostic findings and labeling conventions. Each file within these categories is further distinguished by the location of the medical facility (Taleghani Hospital (TAL) or Behbood Clinic (BEH)), the row number, and the order of grading (steatosis score by expert, steatosis score by CAP score, fibrosis score by the rate of elasticity) (TALXXXXX, BEHXXXXX).
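A file-name convention like this is easy to parse programmatically. A hypothetical sketch: the three-letter site code is taken from the description, but the exact digit count is an assumption, since the description only shows the pattern as TALXXXXX / BEHXXXXX:

```python
import re

# Site codes named in the dataset description.
SITES = {"TAL": "Taleghani Hospital", "BEH": "Behbood Clinic"}

def parse_case_id(name):
    """Split a case id like TAL00042 into site and row number.

    The digit count after the site code is an assumption about the
    naming convention, not stated in the dataset description.
    """
    m = re.fullmatch(r"(TAL|BEH)(\d+)", name)
    if m is None:
        raise ValueError(f"unrecognised case id: {name}")
    return {"site": SITES[m.group(1)], "row": int(m.group(2))}
```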
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Ultrasound imaging data (RF and scan-converted formats) of custom-made elastography phantoms. The data are robotically acquired image sequences with controlled, gradually increasing phantom indentation. The phantom consists of a background medium traversed by a cylindrical inclusion. The data (images and RF signals) are contained in the 6 folders Acqui1 → Acqui6, corresponding to the 6 acquired image sequences on the phantom. Each Acqui_i folder contains two subfolders: an RF folder containing RF signals in .mat file format (readable in Matlab), and a US_Image folder containing the images of the sequence in PNG format. Both images and RF signals are numbered, with each index corresponding to an indentation level and a force measured by the force sensor, as outlined in the Excel file (Tabulation 1). Each RFi.mat file comprises 3152 rows representing the signal along the temporal axis and 256 columns corresponding to the number of A-lines in the image. The Excel file has 5 tabs:
- The first tab contains, for each of the 6 acquired image sequences, the frame number in the sequence, the corresponding indentation of the probe (mm), the recorded voltage on the force sensor (V), and the corresponding calculated force (N). Thus, each image in the 6 sequences is identified by a frame number, an indentation, and a force value.
- The second tab provides the acquisition parameters of the ultrasound images (frequency, depth, gain, etc.), performed using the SonixTablet ultrasound system from Ultrasonix (now BK Medical).
- The third tab contains stress-strain curves and the mean and standard deviation of the Young's modulus for both the inclusion and the background of the phantom. The Young's modulus was obtained through compression tests conducted with an electromechanical testing machine (Bose ElectroForce 3200) on 10 small cylindrical samples taken from the background and the inclusion.
- The fourth tab contains the geometry and dimensions of the phantom.
- The fifth tab contains the recipe used to make the gelatin phantom.
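Each RFi.mat file can be read in Python with SciPy. A minimal sketch; the variable name "RF" inside the .mat file is an assumption, so inspect the keys returned by `loadmat` to find the actual name:

```python
import numpy as np
from scipy.io import loadmat

def rf_to_bmode(mat_path, var="RF", dyn_range_db=50):
    """Load one RF frame and form a crude log-compressed B-mode image.

    For the real files the array should be 3152 samples x 256 A-lines.
    A proper envelope would use a Hilbert transform; np.abs is a rough stand-in.
    """
    rf = loadmat(mat_path)[var]
    env = np.abs(rf).astype(float)
    env /= env.max() or 1.0                          # normalise to [0, 1]
    floor = 10.0 ** (-dyn_range_db / 20.0)           # clip at the dynamic range
    bmode = 20.0 * np.log10(np.maximum(env, floor))  # dB scale, 0 dB at peak
    return rf.shape, bmode
```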
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets provided for Open Platform for Ultrasound Localization Microscopy: Performance Assessment of Localization Algorithms.
Abstract:
Ultrasound Localization Microscopy (ULM) is an ultrasound imaging technique that relies on the acoustic response of sub-wavelength ultrasound scatterers to map the microcirculation with an order of magnitude increase in resolution. Initially demonstrated in vitro, this technique has matured and sees implementation in vivo for vascular imaging of organs, and tumors in both animal models and humans. The performance of the localization algorithm greatly defines the quality of vascular mapping. We compiled and implemented a collection of ultrasound localization algorithms and devised three datasets in silico and in vivo to compare their performance through 18 metrics. We also present two novel algorithms designed to increase speed and performance. By openly providing a complete package to perform ULM with the algorithms, the datasets used, and the metrics, we aim to give researchers a tool to identify the optimal localization algorithm for their usage, benchmark their software and enhance the overall image quality in the field while uncovering its limits.
This article provides all materials and post-processing scripts and functions.
Methods:
200,000 ultrasound images were acquired in vivo on a rat brain (skull removed) at 1000 Hz with a 15 MHz linear probe.
This dataset contains raw radiofrequency data (RF) and beamformed images (IQ) of the brain vascularization with flowing microbubbles (ultrasound contrast agent).
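To give a flavour of what the benchmarked algorithms do, here is a generic sub-pixel localization scheme in NumPy: find the brightest pixel, then refine it with an intensity-weighted centroid over a 3×3 neighbourhood. This is a textbook sketch, not one of the PALA implementations evaluated in the paper:

```python
import numpy as np

def localize_brightest(img):
    """Estimate the sub-pixel (row, col) position of the brightest spot.

    Uses an intensity-weighted centroid over the 3x3 neighbourhood of the
    brightest pixel; a single-scatterer toy version of ULM localization.
    """
    r, c = np.unravel_index(np.argmax(img), img.shape)
    r0, r1 = max(r - 1, 0), min(r + 2, img.shape[0])
    c0, c1 = max(c - 1, 0), min(c + 2, img.shape[1])
    patch = img[r0:r1, c0:c1].astype(float)
    rows, cols = np.mgrid[r0:r1, c0:c1]
    w = patch.sum()
    return (rows * patch).sum() / w, (cols * patch).sum() / w
```

Weighted-centroid schemes like this are fast but biased toward the pixel grid; that trade-off against slower fitting-based localizers is exactly what the paper's 18 metrics quantify.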
Article to be cited: Heiles, Chavignon, Hingot, Lopez, Teston and Couture.
Performance benchmarking of microbubble-localization algorithms for ultrasound localization microscopy, Nature Biomedical Engineering, 2022, (doi.org/10.1038/s41551-021-00824-8).
Related processing scripts and codes: github.com/AChavignon/PALA
Related datasets: doi.org/10.5281/zenodo.4343435
Acknowledgments:
We thank Cyrille Orset (INSERM UMR-S U1237, Physiopathology and Imaging of Neurological Disorders, GIP Cyceron, BB@C, Caen, France) for animals’ preparation and perfusion of contrast agent and the biomedical imaging platform CYCERON (UMS 3408 Unicaen/CNRS, Caen, France).
Privacy policy: https://www.futuremarketinsights.com/privacy-policy
The ultrasound systems market is estimated to reach USD 11,260.1 million in 2025, with revenue projected to grow at a CAGR of 5.5% between 2025 and 2035 to reach USD 19,233.9 million by 2035.
| Attributes | Key Insights |
|---|---|
| Historical Size, 2024 | USD 10,673.1 million |
| Estimated Size, 2025 | USD 11,260.1 million |
| Projected Size, 2035 | USD 19,233.9 million |
| Value-based CAGR (2025 to 2035) | 5.5% |
Semi-Annual Industry Outlook
| Particular | Value CAGR |
|---|---|
| H1 (2024 to 2034) | 6.3% |
| H2 (2024 to 2034) | 6.0% |
| H1 (2025 to 2035) | 5.5% |
| H2 (2025 to 2035) | 5.0% |
Country-wise Insights
| Countries | Value CAGR (2025 to 2035) |
|---|---|
| United States | 3.4% |
| Canada | 4.4% |
| Germany | 4.5% |
| France | 3.8% |
| Italy | 4.7% |
| UK | 6.6% |
| Spain | 4.5% |
| China | 5.5% |
Category-wise Insights
| Segment | Leading Category | Value Share (2025) |
|---|---|---|
| Modality | Cart/Trolley Based Ultrasound Systems | 66.4% |
| Application | Radiology | 41.6% |
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Breast Ultrasound Images Dataset includes 780 high-quality PNG images collected in 2018 from 600 female patients aged 25 to 75. Each image is paired with a ground truth mask and categorized into normal, benign, or malignant classes, supporting supervised learning and diagnostic research for breast cancer detection.
This dataset was created by Baizid MD Ashadzzaman
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets provided for Open Platform for Ultrasound Localization Microscopy: Performance Assessment of Localization Algorithms.
Abstract:
Ultrasound Localization Microscopy (ULM) is an ultrasound imaging technique that relies on the acoustic response of sub-wavelength ultrasound scatterers to map the microcirculation with an order of magnitude increase in resolution. Initially demonstrated in vitro, this technique has matured and sees implementation in vivo for vascular imaging of organs, and tumors in both animal models and humans. The performance of the localization algorithm greatly defines the quality of vascular mapping. We compiled and implemented a collection of ultrasound localization algorithms and devised three datasets in silico and in vivo to compare their performance through 18 metrics. We also present two novel algorithms designed to increase speed and performance. By openly providing a complete package to perform ULM with the algorithms, the datasets used, and the metrics, we aim to give researchers a tool to identify the optimal localization algorithm for their usage, benchmark their software and enhance the overall image quality in the field while uncovering its limits.
This article provides all materials and post-processing scripts and functions.
Article to be cited: Heiles, Chavignon, Hingot, Lopez, Teston and Couture.
Performance benchmarking of microbubble-localization algorithms for ultrasound localization microscopy, Nature Biomedical Engineering, 2022, (doi.org/10.1038/s41551-021-00824-8).
Related processing scripts and codes: github.com/AChavignon/PALA
Request on data: arthur.chavignon.pro(at)gmail.com
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
IRE Ultrasound is a dataset for object detection tasks - it contains Area annotations for 218 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).